When you use NVMe technology with VMware, follow specific guidelines and requirements.

Requirements for NVMe over PCIe

Your ESXi storage environment must include the following components:
  • Local NVMe storage devices.
  • Compatible ESXi host.
  • Hardware NVMe over PCIe adapter. After you install the adapter, your ESXi host detects it and displays it in the vSphere Client as a storage adapter (vmhba) with the protocol indicated as PCIe. You do not need to configure the adapter.
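If you have shell access to the host, you can confirm that the adapter was detected from the command line. The following is a sketch assuming ESXi 7.0 or later; the vmhba names vary per host.

```shell
# List all storage adapters on the host; a detected NVMe over PCIe
# adapter appears as a vmhba entry with its driver and link info.
esxcli storage adapter list

# List NVMe adapters only.
esxcli nvme adapter list
```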

Requirements for NVMe over RDMA (RoCE v2)

Requirements for NVMe over Fibre Channel

  • Fibre Channel storage array that supports NVMe. For information, see Using ESXi with Fibre Channel SAN.
  • Compatible ESXi host.
  • Hardware NVMe adapter. Typically, it is a Fibre Channel HBA that supports NVMe. When you install the adapter, your ESXi host detects it and displays it in the vSphere Client as a standard Fibre Channel adapter (vmhba) with the storage protocol indicated as NVMe. You do not need to configure the hardware NVMe adapter to use it.
  • NVMe controller. You do not need to configure the controller. After you install the required hardware NVMe adapter, it automatically connects to all targets and controllers that are reachable at the moment. You can later disconnect the controllers or connect other controllers that were not available during the host boot. See Add Controller for the NVMe over RDMA (RoCE v2) or FC-NVMe Adapter.
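To inspect the controllers and namespaces that the host connected to automatically, you can use the esxcli nvme namespace. This is a sketch assuming shell access to an ESXi 7.0 or later host.

```shell
# List NVMe controllers currently connected to the host, including
# those discovered automatically at boot.
esxcli nvme controller list

# List the namespaces exposed through the connected controllers.
esxcli nvme namespace list
```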

VMware NVMe over Fabrics Shared Storage Support

In the ESXi environment, the NVMe storage devices appear similar to SCSI storage devices, and can be used as shared storage. Follow these rules when using NVMe-oF storage.
  • Do not mix transport types to access the same namespace.
  • Make sure that active paths are presented to the host. Namespaces cannot be registered until an active path is discovered.
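Because NVMe-oF devices are claimed by the High Performance Plug-in (HPP), you can check for active paths through the HPP listings. A sketch, assuming an ESXi 7.0 or later host with shell access:

```shell
# List paths claimed by HPP; confirm that each NVMe-oF namespace
# has at least one active path before expecting it to register.
esxcli storage hpp path list

# List the HPP-claimed devices and their current state.
esxcli storage hpp device list
```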
The following table compares shared storage support for SCSI over Fabric and NVMe over Fabric:

Shared Storage Functionality                SCSI over Fabric Storage   NVMe over Fabric Storage
RDM                                         Supported                  Not supported
Core dump                                   Supported                  Not supported
SCSI-2 reservations                         Supported                  Not supported
Clustered VMDK                              Supported                  Not supported
Shared VMDK with multi-writer flag          Supported                  Supported (in vSphere 7.0 Update 1 and later; for more information, see the Knowledge Base article)
vVols                                       Supported                  Not supported
Hardware acceleration with VAAI plug-ins    Supported                  Not supported
Default MPP                                 NMP                        HPP (NVMe-oF targets cannot be claimed by NMP)
Limits                                      LUNs=1024, Paths=4096      Namespaces=32, Paths=128 (maximum 4 paths per namespace in a host)