Learn about the basic concepts and components of the vSphere Virtual Volumes functionality.

Virtual Volume Objects

Virtual volumes are encapsulations of virtual machine files, virtual disks, and their derivatives.

Virtual volumes are stored natively inside a storage system that is connected to your ESXi hosts through Ethernet or SAN. They are exported as objects by the storage system and are managed entirely by hardware on the storage side. Typically, a unique GUID identifies a virtual volume. Virtual volumes are not preprovisioned, but are created automatically when you perform virtual machine management operations. These operations include VM creation, cloning, and snapshotting. ESXi and vCenter Server associate one or more virtual volumes with a virtual machine.

Types of Virtual Volumes

The system creates the following types of virtual volumes for the core elements that make up the virtual machine:
Data-vVol
A data virtual volume that corresponds directly to each virtual disk .vmdk file. Like virtual disk files on traditional datastores, data virtual volumes are presented to virtual machines as SCSI or NVMe disks. A data-vVol can be either thick- or thin-provisioned.
Config-vVol
A configuration virtual volume, or a home directory, represents a small directory that contains metadata files for a virtual machine. The files include a .vmx file, descriptor files for virtual disks, log files, and so forth. The configuration virtual volume is formatted with a file system. When ESXi uses the SCSI or NVMe protocol to connect to storage, configuration virtual volumes are formatted with VMFS. With the NFS protocol, configuration virtual volumes are presented as an NFS directory. Typically, the config-vVol is thin-provisioned.
Starting with vSphere 7.0 Update 2, partners can increase the size of the config-vVol beyond 4 GB. Work with your Virtual Volumes partner to implement this if it is supported by your partner and applicable to your environment.
vSphere 8.0 Update 2 supports space reclamation for config-vVols that reside on Virtual Volumes datastores accessed through the SCSI or NVMe protocols. For more information, see Reclaim Space on the vSphere Virtual Volumes Datastores.
Swap-vVol
A virtual volume created when a VM is first powered on. It holds copies of VM memory pages that cannot be retained in memory. Its size is determined by the VM's memory size. The swap-vVol is thick-provisioned by default.
Snapshot-vVol
A virtual volume that holds the contents of virtual machine memory captured for a snapshot. It is thick-provisioned.
For more information, see vSphere Virtual Volumes Snapshots.
Other
A virtual volume for specific features. For example, a digest virtual volume is created for Content-Based Read Cache (CBRC).

Typically, a VM creates a minimum of three virtual volumes: a data-vVol, a config-vVol, and a swap-vVol. The maximum depends on how many virtual disks and snapshots reside on the VM.
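
To make the mapping concrete, the following minimal pyvmomi sketch lists a VM's virtual disks together with the IDs of the data-vVols that back them. The vCenter address, credentials, and VM name are placeholder assumptions, and a recent pyvmomi release is assumed; on a Virtual Volumes datastore, the backingObjectId attribute of the disk backing reports the vVol ID.

  # Minimal sketch: list the data-vVol IDs behind a VM's virtual disks.
  # Connection details and the VM name are placeholders.
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  si = SmartConnect(host='vcenter.example.com',
                    user='administrator@vsphere.local',
                    pwd='secret',
                    disableSslCertValidation=True)
  content = si.RetrieveContent()
  view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.VirtualMachine], True)
  vm = next(v for v in view.view if v.name == 'demo-vm')  # hypothetical VM name

  for dev in vm.config.hardware.device:
      if isinstance(dev, vim.vm.device.VirtualDisk):
          # On a vVols datastore, backingObjectId carries the data-vVol UUID.
          print(dev.deviceInfo.label,
                dev.backing.fileName,
                getattr(dev.backing, 'backingObjectId', None))
  Disconnect(si)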

By using different virtual volumes for different VM components, you can apply and manipulate storage policies at the finest level of granularity. For example, a virtual volume that contains a virtual disk can have a richer set of services than the virtual volume for the VM boot disk.
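
As a hedged sketch of this per-disk granularity, the following fragment reassigns a storage policy to a single virtual disk, reusing the si and vm objects from the previous example. The profile ID is a hypothetical placeholder for a policy created in the VM Storage Policies interface.

  # Sketch: attach a storage policy to one virtual disk only (reuses `vm`).
  disk = next(d for d in vm.config.hardware.device
              if isinstance(d, vim.vm.device.VirtualDisk))
  dev_spec = vim.vm.device.VirtualDeviceSpec(
      operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
      device=disk,
      # Hypothetical profile ID taken from your VM Storage Policies setup.
      profile=[vim.vm.DefinedProfileSpec(
          profileId='01234567-89ab-cdef-0123-456789abcdef')])
  task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))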

Disk Provisioning

The Virtual Volumes functionality supports the concepts of thin- and thick-provisioned virtual disks. However, from the I/O perspective, the implementation and management of thin or thick provisioning by the arrays are transparent to the ESXi host. ESXi offloads to the storage arrays any functions related to thin provisioning.

You select the thin or thick type for your virtual disk at VM creation time. If your disk is thin and resides on a Virtual Volumes datastore, you cannot change its type later by inflating the disk.
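
The following sketch shows where that choice lives in the vSphere API: the thinProvisioned flag of the disk backing, set when the add-disk device spec is built. The controller key and unit number are assumptions for illustration.

  # Sketch: the provisioning type is fixed in the disk backing at creation.
  from pyVmomi import vim

  def make_disk_spec(size_gb: int, thin: bool = True) -> vim.vm.device.VirtualDeviceSpec:
      """Build an add-disk spec; thin=False requests a thick-provisioned vVol."""
      disk = vim.vm.device.VirtualDisk(
          key=-1,
          controllerKey=1000,   # assumed existing SCSI controller
          unitNumber=0,
          capacityInKB=size_gb * 1024 * 1024,
          backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
              diskMode='persistent',
              thinProvisioned=thin))
      return vim.vm.device.VirtualDeviceSpec(
          operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
          fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
          device=disk)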

Shared Disks

You can place a shared disk on Virtual Volumes storage that supports SCSI Persistent Reservations for Virtual Volumes. You can use this disk as a quorum disk and eliminate RDMs in MSCS clusters. For more information, see the vSphere Resource Management documentation.
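
As an illustration only, a clustered VM typically attaches such a shared disk through a SCSI controller configured for physical bus sharing, as in the following pyvmomi fragment. Consult the vSphere Resource Management documentation for the supported configuration; the array enforces the SCSI Persistent Reservations on the vVol.

  # Sketch: a SCSI controller with physical bus sharing for a clustered disk.
  from pyVmomi import vim

  shared_ctrl = vim.vm.device.ParaVirtualSCSIController(
      key=-2,
      busNumber=1,
      sharedBus=vim.vm.device.VirtualSCSIController.Sharing.physicalSharing)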

Virtual Volumes Storage Providers

A Virtual Volumes storage provider, also called a VASA provider, is a software component that acts as a storage awareness service for vSphere. The provider mediates out-of-band communication between vCenter Server and ESXi hosts on one side and a storage system on the other.

The storage provider is implemented through VMware APIs for Storage Awareness (VASA) and is used to manage all aspects of Virtual Volumes storage.

The storage provider delivers information from the underlying storage container. The storage container capabilities appear in vCenter Server and the vSphere Client. In turn, the storage provider communicates virtual machine storage requirements, which you can define in the form of a storage policy, to the storage layer. This integration process ensures that a virtual volume created in the storage layer meets the requirements outlined in the policy.

Typically, vendors are responsible for supplying storage providers that can integrate with vSphere and support Virtual Volumes. Every storage provider must be certified by VMware and properly deployed. For information about deploying and upgrading the Virtual Volumes storage provider to a version compatible with the current ESXi release, contact your storage vendor.

After you deploy the storage provider, you must register it in vCenter Server. See Register Storage Providers for Virtual Volumes. To upgrade your storage providers or for other operations that you can perform, see Manage Storage Providers for vSphere Virtual Volumes.
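
For illustration, the following sketch lists the registered providers through the vCenter SMS (Storage Monitoring Service) endpoint. It assumes the SMS bindings that ship with recent pyVmomi releases; the address and credentials are placeholders.

  # Sketch: list registered VASA providers through the vCenter SMS service.
  import ssl
  from pyVim.connect import SmartConnect
  from pyVmomi import SoapStubAdapter, sms

  si = SmartConnect(host='vcenter.example.com',
                    user='administrator@vsphere.local',
                    pwd='secret',
                    disableSslCertValidation=True)
  # Reuse the vCenter session cookie to authenticate against SMS.
  session_cookie = si._stub.cookie.split('"')[1]

  ctx = ssl._create_unverified_context()
  sms_stub = SoapStubAdapter(host='vcenter.example.com', path='/sms/sdk',
                             version='sms.version.version14', sslContext=ctx)
  sms_stub.requestContext = {'vcSessionCookie': session_cookie}
  sms_si = sms.ServiceInstance('ServiceInstance', sms_stub)

  for provider in sms_si.QueryStorageManager().QueryProvider():
      info = provider.QueryProviderInfo()
      print(info.name, info.uid, info.url, info.status)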

Virtual Volumes Storage Containers

Unlike traditional block or file-based storage, the Virtual Volumes functionality does not require preconfigured storage on the storage side. Instead, Virtual Volumes uses a storage container. It is a pool of raw storage capacity or an aggregation of storage capabilities that a storage system can provide to virtual volumes.

A storage container is a part of the logical storage fabric and is a logical unit of the underlying hardware. The storage container logically groups virtual volumes based on management and administrative needs. For example, the storage container can contain all virtual volumes created for a tenant in a multitenant deployment, or a department in an enterprise deployment. Each storage container serves as a virtual volume store and virtual volumes are allocated out of the storage container capacity.

Typically, a storage administrator on the storage side defines storage containers. The number of storage containers, their capacity, and their size depend on a vendor-specific implementation. At least one container for each storage system is required.

Note: A single storage container cannot span different physical arrays.

After you register a storage provider associated with the storage system, vCenter Server discovers all configured storage containers along with their storage capability profiles, protocol endpoints, and other attributes. A single storage container can export multiple capability profiles. As a result, virtual machines with diverse needs and different storage policy settings can be a part of the same storage container.

Initially, the discovered storage containers are not connected to any specific host, and you cannot see them in the vSphere Client. To mount a storage container, you must map it to a Virtual Volumes datastore.
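
A hedged sketch of that mapping step follows, reusing the content object from the earlier connection example. The container UUID is a placeholder for a value reported by the storage provider, and the CreateVvolDatastore call is shown against a single host's datastore system.

  # Sketch: map a discovered storage container to a vVols datastore on a host.
  host_view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.HostSystem], True)
  host = host_view.view[0]

  spec = vim.host.DatastoreSystem.VvolDatastoreSpec(
      name='vvol-ds-01',                                 # hypothetical name
      scId='vvol:c0fd123456789abc-d7e8f90123456789')     # hypothetical container UUID
  ds = host.configManager.datastoreSystem.CreateVvolDatastore(spec=spec)
  print(ds.summary.type)  # expected to report 'VVOL'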

Static Protocol Endpoints

With SCSI or NFS transports, ESXi hosts use a logical I/O proxy, called the protocol endpoint, to communicate with virtual volumes. ESXi uses protocol endpoints to establish a data path on demand from virtual machines to their respective virtual volumes.

Note: Information in this section applies only to static protocol endpoints that use the SCSI or NFS transports. For specifics about the NVMe protocol endpoints, see NVMe and Virtual Volumes in vSphere.

Each virtual volume is bound to a specific protocol endpoint. When a virtual machine on the host performs an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual volume. Typically, a storage system requires just a few protocol endpoints. A single protocol endpoint can connect to hundreds or thousands of virtual volumes.

On the storage side, a storage administrator configures protocol endpoints, one or several per storage container. The protocol endpoints are a part of the physical storage fabric. The storage system exports the protocol endpoints with associated storage containers through the storage provider. After you map the storage container to a Virtual Volumes datastore, the ESXi host discovers the protocol endpoints and they become visible in the vSphere Client. The protocol endpoints can also be discovered during a storage rescan. Multiple hosts can discover and mount the protocol endpoints.

In the vSphere Client, the list of available protocol endpoints looks similar to the host storage devices list. Different storage transports can be used to expose the protocol endpoints to ESXi. When the SCSI-based transport is used, the protocol endpoint represents a proxy LUN defined by a T10-based LUN WWN. For the NFS protocol, the protocol endpoint is a mount point, such as an IP address and a share name. You can configure multipathing on the SCSI-based protocol endpoint, but not on the NFS-based protocol endpoint. No matter which protocol you use, the storage array can provide multiple protocol endpoints for availability purposes.
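
For illustration, SCSI protocol endpoints can be picked out of the host's SCSI device list by their protocolEndpoint attribute, as in the following sketch, which reuses the host object from the earlier example. The attribute is assumed to be populated for vVols-capable arrays.

  # Sketch: find SCSI devices that the host flags as protocol endpoints.
  for lun in host.config.storageDevice.scsiLun:
      if isinstance(lun, vim.host.ScsiDisk) and getattr(lun, 'protocolEndpoint', False):
          print(lun.canonicalName, lun.displayName)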

Protocol endpoints are managed per array. ESXi and vCenter Server assume that all protocol endpoints reported for an array are associated with all containers on that array. For example, if an array has two containers and three protocol endpoints, ESXi assumes that virtual volumes on both containers can be bound to all three protocol endpoints.

For information about viewing the static protocol endpoints in the vSphere Client, see Review and Manage Static Protocol Endpoints.

Binding and Unbinding Virtual Volumes

At the time of creation, a virtual volume is a passive entity and is not immediately ready for I/O. To access the virtual volume, ESXi or vCenter Server sends a bind request.

The storage system replies with a protocol endpoint ID that becomes an access point to the virtual volume. The protocol endpoint accepts all I/O requests to the virtual volume. This binding exists until ESXi sends an unbind request for the virtual volume.

For later bind requests on the same virtual volume, the storage system can return different protocol endpoint IDs.

When you use the NVMe protocol, the bind virtual volume response provides the NVMe subsystem NQN and the namespace ID (nsid) of the namespace virtual volume object. The ESXi host uses this information to resolve the ANA group within the subsystem. If a virtual protocol endpoint (vPE) corresponding to this ANA group does not exist, it is created. The vPE is used to direct all I/O requests to Virtual Volumes.

When receiving concurrent bind requests to a virtual volume from multiple ESXi hosts, the storage system can return the same or different endpoint bindings to each requesting ESXi host. In other words, the storage system can bind different concurrent hosts to the same virtual volume through different endpoints.

The unbind operation removes the I/O access point for the virtual volume. The storage system might unbind the virtual volume from its protocol endpoint immediately, or after a delay, or take some other action. A bound virtual volume cannot be deleted until it is unbound.

Virtual Volumes Datastores

A Virtual Volumes datastore represents a storage container in vCenter Server and the vSphere Client.

After vCenter Server discovers storage containers exported by storage systems, you must mount them as Virtual Volumes datastores. The Virtual Volumes datastores are not formatted in the traditional way that, for example, VMFS datastores are. However, you must still create them because all vSphere functionalities, including FT, HA, DRS, and so on, require the datastore construct to function properly.

You use the datastore creation wizard in the vSphere Client to map a storage container to a Virtual Volumes datastore. The Virtual Volumes datastore that you create corresponds directly to the specific storage container.

From a vSphere administrator's perspective, the Virtual Volumes datastore is similar to any other datastore and is used to hold virtual machines. Like other datastores, the Virtual Volumes datastore can be browsed and lists virtual volumes by virtual machine name. Like traditional datastores, the Virtual Volumes datastore supports unmounting and mounting. However, operations such as upgrade and resize are not applicable to the Virtual Volumes datastore. The Virtual Volumes datastore capacity is configurable by the storage administrator outside of vSphere.
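
In the API, such datastores are distinguished only by their summary type, as this short sketch shows, reusing the content object from the earlier connection example.

  # Sketch: vVols datastores report a summary type of 'VVOL'.
  ds_view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.Datastore], True)
  for ds in ds_view.view:
      if ds.summary.type == 'VVOL':
          print(ds.name, ds.summary.capacity, ds.summary.freeSpace)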

You can use the Virtual Volumes datastores with traditional VMFS and NFS datastores and with vSAN.
Note: The size of a virtual volume must be a multiple of 1 MB, with a minimum size of 1 MB. As a result, all virtual disks that you provision on a Virtual Volumes datastore must be an even multiple of 1 MB. If the virtual disk you migrate to the Virtual Volumes datastore is not an even multiple of 1 MB, extend the disk to the nearest even multiple of 1 MB.
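
A minimal helper for that adjustment, rounding a capacity in KB up to the nearest 1 MB multiple:

  # Sketch: round a disk size in KB up to the nearest 1 MB multiple before
  # migrating the disk to a vVols datastore.
  def round_up_to_mb(capacity_in_kb: int) -> int:
      kb_per_mb = 1024
      return ((capacity_in_kb + kb_per_mb - 1) // kb_per_mb) * kb_per_mb

  print(round_up_to_mb(10485761))  # 10485761 KB -> 10486784 KB (10241 MB)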

To create a Virtual Volumes datastore, see Create a Virtual Volumes Datastore in vSphere Environment.

Virtual Volumes and VM Storage Policies

A virtual machine that runs on a Virtual Volumes datastore requires a VM storage policy.

A VM storage policy is a set of rules that contains placement and quality-of-service requirements for a virtual machine. The policy enforces appropriate placement of the virtual machine within Virtual Volumes storage and guarantees that storage can satisfy virtual machine requirements.

You use the VM Storage Policies interface to create a Virtual Volumes storage policy. When you assign the new policy to the virtual machine, the policy ensures that the Virtual Volumes storage meets the requirements.
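
For illustration, storage policies can also be enumerated programmatically through the vCenter Storage Policy (PBM) endpoint. The sketch below assumes the PBM bindings that ship with pyVmomi and reuses the session cookie pattern from the VASA provider example.

  # Sketch: list requirement-category storage policies via the PBM endpoint.
  import ssl
  from pyVmomi import SoapStubAdapter, pbm

  ctx = ssl._create_unverified_context()
  pbm_stub = SoapStubAdapter(host='vcenter.example.com', path='/pbm/sdk',
                             version='pbm.version.version1', sslContext=ctx)
  pbm_stub.requestContext = {'vcSessionCookie': session_cookie}
  pbm_si = pbm.ServiceInstance('ServiceInstance', pbm_stub)
  pm = pbm_si.PbmRetrieveServiceContent().profileManager

  profile_ids = pm.PbmQueryProfile(
      resourceType=pbm.profile.ResourceType(resourceType='STORAGE'),
      profileCategory='REQUIREMENT')
  for profile in pm.PbmRetrieveContent(profileIds=profile_ids):
      print(profile.name, profile.profileId.uniqueId)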

Virtual Volumes Default Storage Policy

For Virtual Volumes, VMware provides a default storage policy that contains no rules or storage requirements, called Virtual Volumes No Requirements Policy. This policy is applied to the VM objects when you do not specify another policy for the virtual machine on the Virtual Volumes datastore. With the No Requirements policy, storage arrays can determine the optimum placement for the VM objects.

The default No Requirements policy that VMware provides has the following characteristics:

  • You cannot delete, edit, or clone this policy.
  • The policy is compatible only with the Virtual Volumes datastores.
  • You can create a VM storage policy for Virtual Volumes and designate it as the default.

Virtual Volumes and Storage Protocols

A Virtual Volumes storage system provides protocol endpoints that are discoverable on the physical storage fabric. ESXi hosts use the protocol endpoints to connect to virtual volumes on the storage. Operation of the protocol endpoints depends on storage protocols that expose the endpoints to ESXi hosts.

Virtual Volumes supports NFS versions 3 and 4.1, iSCSI, Fibre Channel, FCoE, NVMe over Fibre Channel, and NVMe over TCP.

Irrespective of which storage protocol is used, protocol endpoints provide uniform access to both SAN and NAS storage. A virtual volume, like a file on a traditional datastore, is presented to a virtual machine as a SCSI or an NVMe disk.

Note:

A storage container is dedicated to SAN (SCSI or NVMe) or NAS and cannot be shared across those protocol types. An array can present one storage container with SCSI protocol endpoints and a different container with NFS protocol endpoints. The container cannot use a combination of SCSI, NVMe, and NFS storage access protocols.

Virtual Volumes and SCSI-Based Transports

On disk arrays, Virtual Volumes supports Fibre Channel, FCoE, and iSCSI protocols.

When the SCSI-based protocol is used, the protocol endpoint represents a proxy LUN defined by a T10-based LUN WWN.

As with any block-based LUNs, protocol endpoints are discovered using standard LUN discovery commands. The ESXi host periodically rescans for new devices and asynchronously discovers block-based protocol endpoints. A protocol endpoint can be accessible by multiple paths. Traffic on these paths follows well-known path selection policies, as is typical for LUNs.

At VM creation time on SCSI-based disk arrays, ESXi creates a virtual volume and formats it with VMFS. This small virtual volume stores all VM metadata files and is called the config-vVol. The config-vVol functions as a VM storage locator for vSphere.

Virtual Volumes on disk arrays supports the same set of SCSI commands as VMFS and uses ATS as a locking mechanism.

CHAP Support for iSCSI Endpoints

Virtual Volumes supports the Challenge Handshake Authentication Protocol (CHAP) with iSCSI targets. This support allows ESXi hosts to share CHAP initiator credentials with Virtual Volumes storage providers, also called VASA providers. It also allows the storage providers to raise system events notifying vCenter Server of changes to CHAP target credentials on the storage array.

Each ESXi host can have multiple HBAs, and each HBA can have properties configured on it. One of these properties is the authentication method that the HBA must use. Authentication is optional, but if implemented, it must be supported by both the initiator and the target. CHAP is an authentication method that can be used in both directions between initiator and target.

For more information about different CHAP authentication methods, see Selecting CHAP Authentication Method. To configure CHAP on your ESXi host, see Configuring CHAP Parameters for iSCSI or iSER Storage Adapters on ESXi Host.
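
As a hedged sketch of the host-side configuration, the following fragment enables required CHAP on one iSCSI adapter, reusing the host object from the earlier examples. The adapter device name, CHAP name, and secret are placeholders.

  # Sketch: enable required CHAP on an iSCSI adapter of the host.
  storage_system = host.configManager.storageSystem
  auth = vim.host.InternetScsiHba.AuthenticationProperties(
      chapAuthEnabled=True,
      chapAuthenticationType='chapRequired',
      chapName='iqn.1998-01.com.vmware:esxi-01',  # placeholder initiator name
      chapSecret='chap-secret')                   # placeholder secret
  storage_system.UpdateInternetScsiAuthenticationProperties(
      iScsiHbaDevice='vmhba64',                   # placeholder adapter
      authenticationProperties=auth)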

Virtual Volumes and NFS Transports

With NAS storage, a protocol endpoint is an NFS share that the ESXi host mounts using an IP address or DNS name and a share name. Virtual Volumes supports NFS versions 3 and 4.1 to access NAS storage. Both IPv4 and IPv6 formats are supported.

No matter which version you use, a storage array can provide multiple protocol endpoints for availability purposes.

In addition, NFS version 4.1 introduces trunking mechanisms that enable load balancing and multipathing.

Virtual Volumes on NAS devices supports the same NFS Remote Procedure Calls (RPCs) that ESXi hosts use when connecting to NFS mount points.

On NAS devices, a config-vVol is a directory subtree that corresponds to a config-vVolID. The config-vVol must support directories and other operations that are necessary for NFS.

Virtual Volumes and NVMe

Virtual Volumes supports NVMe protocols, including NVMe over Fibre Channel and NVMe over TCP. A virtual volume object is mapped to a namespace within an NVMe subsystem. ANA groups within the NVMe subsystem are viewed as virtual protocol endpoints on the ESXi host.

Virtual protocol endpoints are used for path state management as the ANA group state changes. The ESXi host discovers the ANA groups dynamically, as needed. A virtual protocol endpoint is created only when the ESXi host needs I/O access to a namespace virtual volume within the NVMe subsystem. Config-vVols on NVMe are similar to those on SCSI and are formatted with VMFS. They are also used to store the VM metadata files.

To configure NVMe with Virtual Volumes on your ESXi host, see NVMe and Virtual Volumes in vSphere.

Virtual Volumes Architecture

An architectural diagram provides an overview of how all components of the Virtual Volumes functionality interact with each other.
