Learn how to prepare your ESXi environment for NVMe storage. Configuration requirements might change depending on the type of NVMe transport you use. If you use NVMe over RDMA (RoCE v2), in addition to the general requirements, you must also configure a lossless Ethernet network.
Requirements for NVMe over PCIe
- Local NVMe storage devices.
- Compatible ESXi host.
- Hardware NVMe over PCIe adapter. After you install the adapter, your ESXi host detects it and displays it in the vSphere Client as a storage adapter (vmhba) with the protocol indicated as PCIe. You do not need to configure the adapter. You can verify the detection from the command line as shown in the example below.
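For example, assuming the adapter is already installed, the following commands list the storage adapters and, on recent ESXi releases, the NVMe adapters with their transport type. This is an illustrative check only; the exact output columns vary by release.
#esxcli storage core adapter list
#esxcli nvme adapter list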
Requirements for NVMe over RDMA (RoCE v2)
- NVMe storage array with NVMe over RDMA (RoCE v2) transport support.
- Compatible ESXi host.
- Ethernet switches supporting a lossless network.
- Network adapter that supports RDMA over Converged Ethernet (RoCE v2). To configure the adapter, see Configuring NVMe over RDMA (RoCE v2) on ESXi.
- Software NVMe over RDMA adapter. This software component must be enabled on your ESXi host and connected to an appropriate network RDMA adapter. For information, see Add Software NVMe over RDMA or NVMe over TCP Adapters.
- NVMe controller. You must add a controller after you configure the software NVMe over RDMA adapter. See Add Controllers for NVMe over Fabrics.
- Lossless Ethernet. See Configuring Lossless Ethernet for NVMe over RDMA.
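Before and after you add the software adapter, you can confirm what the host sees from the command line. As a hedged example (output fields vary by ESXi release), the first command lists the RDMA devices (vmrdma) and the uplinks they are paired with, and the second lists the NVMe adapters, including the software NVMe over RDMA adapter once it is enabled.
#esxcli rdma device list
#esxcli nvme adapter list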
Requirements for NVMe over Fibre Channel
- Fibre Channel storage array that supports NVMe. For information, see Using ESXi with Fibre Channel SAN.
- Compatible ESXi host.
- Hardware NVMe adapter. Typically, it is a Fibre Channel HBA that supports NVMe. When you install the adapter, your ESXi host detects it and displays it in the vSphere Client as a standard Fibre Channel adapter (vmhba) with the storage protocol indicated as NVMe. You do not need to configure the hardware NVMe adapter to use it.
- NVMe controller. You do not need to configure the controller. After you install the required hardware NVMe adapter, it automatically connects to all targets and controllers that are reachable at that time. You can later disconnect the controllers or connect other controllers that were not available during host boot. See Add Controllers for NVMe over Fabrics. You can review the connected controllers as shown in the example below.
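For example, after the host boots with the NVMe-capable Fibre Channel HBA installed, the following commands show the connected controllers and the namespaces they expose. This is an illustrative check; output fields differ between ESXi releases.
#esxcli nvme controller list
#esxcli nvme namespace list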
Requirements for NVMe over TCP
- NVMe storage array with NVMe over TCP transport support.
- Compatible ESXi host.
- An Ethernet adapter. To configure the adapter, see Configuring NVMe over TCP on ESXi.
- Software NVMe over TCP adapter. This software component must be enabled on your ESXi host and connected to an appropriate network adapter. For more information, see Add Software NVMe over RDMA or NVMe over TCP Adapters.
- NVMe controller. You must add a controller after you configure the software NVMe over TCP adapter. See Add Controllers for NVMe over Fabrics.
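As an illustration of the discovery and connection workflow referenced in the last item, the following hedged sketch uses the esxcli nvme fabrics commands. The adapter name vmhba65, the IP address 192.168.100.20, and the subsystem NQN are placeholders, and the option names might differ on your release, so confirm them with esxcli nvme fabrics discover --help before use.
#esxcli nvme fabrics discover -a vmhba65 -i 192.168.100.20
#esxcli nvme fabrics connect -a vmhba65 -i 192.168.100.20 -s nqn.2016-01.com.example:subsystem1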
VMware NVMe over Fabrics Shared Storage Support
- Do not mix transport types to access the same namespace.
- Make sure that active paths are presented to the host. Namespaces cannot be registered until an active path is discovered.
| Shared Storage Functionality | SCSI over Fabric Storage | NVMe over Fabric Storage |
|---|---|---|
| RDM | Supported | Not supported |
| Core dump | Supported | Supported |
| SCSI-2 reservations | Supported | Not supported |
| Clustered VMDK | Supported | Supported |
| Shared VMDK with multi-writer flag | Supported | Supported in vSphere 7.0 Update 1 and later. For more information, see the Knowledge Base article. |
| Virtual Volumes | Supported | Supported in vSphere 8.0 and later. For more information, see NVMe and Virtual Volumes in vSphere. |
| Hardware acceleration with VAAI plug-ins | Supported | Not supported |
| Default MPP | NMP | HPP (NVMe-oF targets cannot be claimed by NMP) |
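To confirm which multipathing plug-in claims your devices, you can list the devices owned by each plug-in. As a hedged check, NVMe-oF namespaces are expected to appear in the HPP list, while SCSI devices claimed by NMP appear in the NMP list.
#esxcli storage hpp device list
#esxcli storage nmp device list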
Configuring Lossless Ethernet for NVMe over RDMA
NVMe over RDMA in ESXi requires a lossless Ethernet network.
To establish a lossless network, select one of the available QoS configurations.
Enable Global Pause Flow Control
In this network configuration, ensure that global pause flow control is enabled on the network switch ports, and that the RDMA-capable NICs in the host auto-negotiate to the correct flow control settings.
To check flow control, run the following command:
#esxcli network nic get -n vmnicX
   Pause RX: true
   Pause TX: true
If Pause RX and Pause TX are not set to true, run the following command.
#esxcli network nic pauseParams set -r true -t true -n vmnicX
Enable Priority Flow Control
- Automatic configuration. The DCB PFC configuration is applied automatically on the host RNIC if the RNIC driver supports DCB and DCBx.
You can verify the current DCB settings by running the following command:
#esxcli network nic dcb status get -n vmnicX
- Manual configuration. In some cases, the RNIC drivers provide a method to manually configure DCB PFC using driver-specific parameters. To use this method, see the vendor-specific driver documentation. For example, with the Mellanox ConnectX-4/5 driver, you can set the PFC priority value to 3 by running the following command and then rebooting the host.
#esxcli system module parameters set -m nmlx5_core -p "pfctx=0x08 pfcrx=0x08"
Enable DSCP-based PFC
- Enable PFC and DSCP trust mode.
#esxcli system module parameters set -m nmlx5_core -p "pfctx=0x08 pfcrx=0x08 trust_state=2"
- Set the DSCP value to 26.
#esxcli system module parameters set -m nmlx5_rdma -p "dscp_force=26"
- Verify that the parameters are set to the correct values.
#esxcli system module parameters list -m nmlx5_core | grep 'trust_state\|pfcrx\|pfctx'
- Reboot the host.
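After the reboot, you can also confirm the DSCP value on the RDMA module. This check mirrors the verification step above and assumes the nmlx5_rdma parameter set earlier.
#esxcli system module parameters list -m nmlx5_rdma | grep dscp_force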