When you use NVMe technology with VMware vSphere, follow the guidelines and requirements in this section.
Requirements for NVMe over PCIe
Your ESXi storage environment must include the following components:
- Local NVMe storage devices.
- Compatible ESXi host.
- Hardware NVMe over PCIe adapter. After you install the adapter, the ESXi host detects it and displays it in the vSphere Client as a storage adapter (vmhba) with the protocol indicated as PCIe. You do not need to configure the adapter.
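To double-check that the host detected the adapter, you can also enumerate its storage adapters programmatically. The following is a minimal pyVmomi sketch, assuming a reachable vCenter Server or ESXi host; the host name and credentials are placeholders, and the listing simply mirrors what the vSphere Client shows.

```python
# Minimal pyVmomi sketch: list each host's storage adapters (vmhba) so you can
# verify that an installed NVMe adapter was detected. The connection target and
# credentials below are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",   # placeholder vCenter Server or ESXi host
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for hba in host.config.storageDevice.hostBusAdapter:
            # Each adapter reports its vmhba name, driver, model, and status.
            print(f"  {hba.device}  driver={hba.driver}  model={hba.model}  status={hba.status}")
finally:
    Disconnect(si)
```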
Requirements for NVMe over RDMA (RoCE v2)
- NVMe storage array with NVMe over RDMA (RoCE v2) transport support.
- Compatible ESXi host.
- Ethernet switches supporting a lossless network.
- Network adapter that supports RDMA over Converged Ethernet (RoCE v2). To configure the adapter, see View RDMA Network Adapters.
- Software NVMe over RDMA adapter. This software component must be enabled on your ESXi host and connected to an appropriate network RDMA adapter. For information, see Enable NVMe over RDMA or NVMe over TCP Software Adapters.
- NVMe controller. You must add a controller after you configure a software NVMe over RDMA adapter. See Add Controller for NVMe over Fabrics.
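The enable-and-connect workflow can also be driven through the vSphere API. The sketch below is an illustration only: it assumes the HostStorageSystem methods CreateNvmeOverRdmaAdapter and ConnectNvmeController introduced with vSphere 7.0, reuses the host object from the previous sketch, and uses placeholder device names, adapter names, addresses, and NQN values; type and property names follow the vSphere API reference and may differ slightly in your SDK version.

```python
# Sketch only: enable a software NVMe over RDMA adapter and add a controller
# through the vSphere API (vSphere 7.0 and later). All names, addresses, and
# the subsystem NQN are placeholders.
from pyVmomi import vim

ss = host.configManager.storageSystem             # 'host' from the adapter-listing sketch

# Bind the software adapter to an RDMA-capable network device (placeholder name).
ss.CreateNvmeOverRdmaAdapter("vmrdma0")

# Connect (add) a controller on the new software adapter.
spec = vim.host.NvmeConnectSpec()
spec.hbaName = "vmhba65"                            # placeholder: name of the new software adapter
spec.subnqn = "nqn.2016-06.io.example:subsystem1"   # placeholder target subsystem NQN
rdma = vim.host.NvmeOverRdmaParameters()
rdma.address = "192.168.100.10"                     # placeholder RoCE v2 portal address
rdma.portNumber = 4420                              # default NVMe-oF port
spec.transportParameters = rdma
ss.ConnectNvmeController(spec)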
Requirements for NVMe over Fibre Channel
- Fibre Channel storage array that supports NVMe. For information, see Using ESXi with Fibre Channel SAN.
- Compatible ESXi host.
- Hardware NVMe adapter. Typically, it is a Fibre Channel HBA that supports NVMe. When you install the adapter, the ESXi host detects it and displays it in the vSphere Client as a standard Fibre Channel adapter (vmhba) with the storage protocol indicated as NVMe. You do not need to configure the hardware NVMe adapter to use it.
- NVMe controller. You do not need to configure the controller. After you install the required hardware NVMe adapter, it automatically connects to all targets and controllers that are reachable at that time. You can later disconnect these controllers or connect controllers that were not available during host boot. See Add Controller for NVMe over Fabrics.
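If you prefer to verify the protocol programmatically, the following sketch reuses the host object from the first sketch and assumes the storageProtocol adapter property introduced with vSphere 7.0; it prints which Fibre Channel HBAs report NVMe as their storage protocol.

```python
# Sketch: show which Fibre Channel HBAs report the NVMe storage protocol.
# Reuses the 'host' object from the first sketch; storageProtocol is assumed
# to be present (vSphere 7.0 and later) and defaults to SCSI when unset.
from pyVmomi import vim

for hba in host.config.storageDevice.hostBusAdapter:
    if isinstance(hba, vim.host.FibreChannelHba):
        protocol = getattr(hba, "storageProtocol", None) or "scsi"
        print(f"{hba.device}: model={hba.model}, protocol={protocol}")
```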
Requirements for NVMe over TCP
- NVMe storage array with NVMe over TCP transport support.
- Compatible ESXi host.
- Ethernet adapter.
- Software NVMe over TCP adapter. This software component must be enabled on your ESXi host and connected to an appropriate network adapter. For more information, see Enable NVMe over RDMA or NVMe over TCP Software Adapters.
- NVMe controller. You must add a controller after you configure a software NVMe over TCP adapter. See Add Controller for NVMe over Fabrics.
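Adding the controller over TCP mirrors the RDMA example. The sketch below assumes the vSphere 7.0 Update 3 or later API, where an NVMe over TCP transport parameters type is available; the adapter name, NQN, and portal address are placeholders, and type names follow the vSphere API reference.

```python
# Sketch: connect (add) a controller on a software NVMe over TCP adapter,
# analogous to the RDMA example above. Assumes vSphere 7.0 Update 3 or later;
# all values are placeholders.
from pyVmomi import vim

ss = host.configManager.storageSystem               # 'host' from the adapter-listing sketch
spec = vim.host.NvmeConnectSpec()
spec.hbaName = "vmhba66"                            # placeholder: the software NVMe over TCP adapter
spec.subnqn = "nqn.2016-06.io.example:subsystem2"   # placeholder target subsystem NQN
tcp = vim.host.NvmeOverTcpParameters()
tcp.address = "192.168.200.10"                      # placeholder target portal IP address
tcp.portNumber = 4420                               # NVMe/TCP I/O port (8009 is the discovery port)
spec.transportParameters = tcp
ss.ConnectNvmeController(spec)
```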
VMware NVMe over Fabrics Shared Storage Support
In the ESXi environment, NVMe storage devices appear similar to SCSI storage devices and can be used as shared storage. Follow these rules when using NVMe-oF storage.
- Do not mix transport types to access the same namespace.
- Make sure that active paths are presented to the host. Namespaces cannot be registered until an active path is discovered. For one way to check path states programmatically, see the sketch after the table below.
| Shared Storage Functionality | SCSI over Fabric Storage | NVMe over Fabric Storage |
|---|---|---|
| RDM | Supported | Not supported |
| Core dump | Supported | Not supported |
| SCSI-2 reservations | Supported | Not supported |
| Clustered VMDK | Supported | Not supported |
| Shared VMDK with multi-writer flag | Supported | Supported in vSphere 7.0 Update 1 and later. For more information, see the Knowledge Base article. |
| Virtual Volumes | Supported | Not supported |
| Hardware acceleration with VAAI plug-ins | Supported | Not supported |
| Default MPP | NMP | HPP (NVMe-oF targets cannot be claimed by NMP) |
| Limits | LUNs=1024, Paths=4096 | Namespaces=32, Paths=128 (maximum 4 paths per namespace in a host) |
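To check the path-related rules above, you can inspect the host's multipath information. The sketch below reuses the host object from the first sketch; because NVMe-oF devices are claimed by HPP rather than NMP, the level of detail reported for them can vary by release.

```python
# Sketch: print path counts and states for each storage device so you can check
# that every NVMe-oF namespace has an active path and stays within the
# 4-paths-per-namespace limit. Reuses the 'host' object from the first sketch.
for lun in host.config.storageDevice.multipathInfo.lun:
    states = [p.pathState for p in lun.path]
    print(f"{lun.id}: {len(lun.path)} path(s), {states.count('active')} active")
```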