Learn how to prepare your ESXi environment for NVMe storage. Configuration requirements differ depending on the type of NVMe transport you use. If you use NVMe over RDMA (RoCE v2), in addition to the general requirements, you must also configure a lossless Ethernet network.

Requirements for NVMe over PCIe

Your ESXi storage environment must include the following components:
  • Local NVMe storage devices.
  • Compatible ESXi host.
  • Hardware NVMe over PCIe adapter. After you install the adapter, your ESXi host detects it and displays it in the vSphere Client as a storage adapter (vmhba) with the protocol indicated as PCIe. You do not need to configure the adapter. You can confirm that the host detects it as shown in the example after this list.
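
To confirm that the host detects the adapter, you can run commands similar to the following and look for the new vmhba entry with its transport or protocol reported as PCIe.

#esxcli storage core adapter list
#esxcli nvme adapter list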

Requirements for NVMe over RDMA (RoCE v2)

Requirements for NVMe over Fibre Channel

  • Fibre Channel storage array that supports NVMe. For information, see Using ESXi with Fibre Channel SAN.
  • Compatible ESXi host.
  • Hardware NVMe adapter. Typically, it is a Fibre Channel HBA that supports NVMe. When you install the adapter, your ESXi host detects it and displays it in the vSphere Client as a standard Fibre Channel adapter (vmhba) with the storage protocol indicated as NVMe. You do not need to configure the hardware NVMe adapter to use it.
  • NVMe controller. You do not need to configure the controller. After you install the required hardware NVMe adapter, it automatically connects to all targets and controllers that are reachable at that time. You can later disconnect the controllers or connect other controllers that were not available during the host boot, and you can list them as shown in the example after this list. See Add Controllers for NVMe over Fabrics.
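
To review the controllers and namespaces that the host is currently connected to, you can run commands similar to the following. The output layout can vary between ESXi releases.

#esxcli nvme controller list
#esxcli nvme namespace list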

Requirements for NVMe over TCP

VMware NVMe over Fabrics Shared Storage Support

In the ESXi environment, the NVMe storage devices appear similar to SCSI storage devices and can be used as shared storage. Follow these rules when you use NVMe-oF storage.
  • Do not mix transport types to access the same namespace.
  • Make sure that active paths are presented to the host. The namespaces cannot be registered until the active path is discovered. You can review the discovered paths as shown in the example after this list.
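
To review the paths that the host has discovered, you can run a command similar to the following and verify that each NVMe device reports at least one active path.

#esxcli storage core path list
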
Shared Storage Functionality                 SCSI over Fabric Storage    NVMe over Fabric Storage
RDM                                          Supported                   Not supported
Core dump                                    Supported                   Supported
SCSI-2 reservations                          Supported                   Not supported
Clustered VMDK                               Supported                   Supported
Shared VMDK with multi-writer flag           Supported                   Supported (in vSphere 7.0 Update 1 and later; for more information, see the Knowledge Base article)
Virtual Volumes                              Supported                   Supported (in vSphere 8.0 and later; for more information, see NVMe and Virtual Volumes in vSphere)
Hardware acceleration with VAAI plug-ins     Supported                   Not supported
Default MPP                                  NMP                         HPP (NVMe-oF targets cannot be claimed by NMP)
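
Because NVMe-oF targets are claimed by the High Performance Plug-in (HPP), you can confirm which plug-in claims your NVMe devices with a command similar to the following.

#esxcli storage hpp device list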

Configuring Lossless Ethernet for NVMe over RDMA

NVMe over RDMA in ESXi requires a lossless Ethernet network.

To establish a lossless network, you can use one of the following QoS settings.

Enable Global Pause Flow Control

In this network configuration, ensure that global pause flow control is enabled on the network switch ports. Also, ensure that the RDMA-capable NICs in the host auto-negotiate to the correct flow control settings.

To check flow control, run the following command:

#esxcli network nic get -n vmnicX
   Pause RX: true
   Pause TX: true

If these values are not set to true, run the following command:

#esxcli network nic pauseParams set -r true -t true -n vmnicX
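
After you change the pause parameters, run the esxcli network nic get -n vmnicX command again to confirm that Pause RX and Pause TX are reported as true. If you are not sure which vmnic backs your RDMA adapter, a command similar to the following lists the RDMA devices and their paired uplinks.

#esxcli rdma device list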

Enable Priority Flow Control

For RoCE traffic to be lossless, you must set the PFC priority value to 3 on the physical switch and on the hosts. You can configure PFC on the ESXi host in two ways:
  • Automatic configuration. The DCB PFC configuration is applied automatically on the host RNIC if the RNIC driver supports DCB and DCBx.

    You can verify the current DCB settings by running the following command:

    #esxcli network nic dcb status get -n vmnicX
  • Manual configuration. In some cases, the RNIC drivers provide a method to manually configure DCB PFC using driver-specific parameters. To use this method, see the vendor-specific driver documentation. For example, with the Mellanox ConnectX-4/5 driver, you can set the PFC priority value to 3 by running the following command and then rebooting the host.
    #esxcli system module parameters set -m nmlx5_core -p "pfctx=0x08 pfcrx=0x08"
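
    After the host reboots, you can confirm that the module parameters are applied and recheck the DCB status with commands similar to the following.

    #esxcli system module parameters list -m nmlx5_core | grep 'pfctx\|pfcrx'
    #esxcli network nic dcb status get -n vmnicX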

Enable DSCP-based PFC

DSCP-based PFC is another way to configure a lossless network. On the physical switches and hosts, you must set the DSCP value to 26. To use this option, see the vendor-specific driver documentation. For example, with the Mellanox ConnectX-4/5 driver, you can set the DSCP tag value to 26 by running the following commands.
  1. Enable PFC and DSCP trust mode.
    #esxcli system module parameters set -m nmlx5_core -p "pfctx=0x08 pfcrx=0x08 trust_state=2"
    
  2. Set DSCP value to 26.
    #esxcli system module parameters set -m nmlx5_rdma -p "dscp_force=26"
  3. Verify the parameters to confirm that the settings are applied correctly.
    #esxcli system module parameters list -m nmlx5_core | grep 'trust_state\|pfcrx\|pfctx'
  4. Reboot the host.
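
Because the dscp_force parameter is set on the nmlx5_rdma module, you can also verify it after the reboot with a command similar to the following.

#esxcli system module parameters list -m nmlx5_rdma | grep 'dscp_force'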