The adapter configuration process on the ESXi host involves setting up VMkernel binding for an RDMA network adapter, and then adding a software NVMe over RDMA adapter. You can then add an NVMe controller.

The entire configuration process includes these actions.
View RDMA Network Adapters
On your ESXi host, install a network adapter that supports RDMA (RoCE v2), for example, Mellanox Technologies MT27700 Family ConnectX-4. After you install the network adapter, use the vSphere Client to review the RDMA adapter and its paired physical network adapter.

Configure VMkernel Binding for the RDMA Adapter
Port binding for NVMe over RDMA involves creating a switch and connecting the physical network adapter and the VMkernel adapter to the switch. Through this connection, the RDMA adapter becomes bound to the VMkernel adapter. In the configuration, you can use a vSphere standard switch or a vSphere distributed switch.

Add the Software NVMe over RDMA Adapter
Use the vSphere Client to activate the software storage adapter for NVMe over RDMA.

Add Controllers for NVMe over Fabrics
Use the vSphere Client to add an NVMe controller. After you add the controller, the NVMe namespaces associated with the controller become available to your ESXi host. The NVMe storage devices that represent the namespaces in the ESXi environment appear on the storage devices list.

View RDMA Network Adapters

After you install a network adapter that supports RDMA (RoCE v2) on your ESXi host, use the vSphere Client to review the RDMA adapter and a physical network adapter.

Procedure

  1. On your ESXi host, install an adapter that supports RDMA (RoCE v2), for example, Mellanox Technologies MT27700 Family ConnectX-4.
    The host discovers the adapter and the vSphere Client displays its two components, an RDMA adapter and a physical network adapter.
  2. In the vSphere Client, verify that the RDMA adapter is discovered by your host.
    1. Navigate to the host.
    2. Click the Configure tab.
    3. Under Networking, click RDMA adapters.
      In this example, the RDMA adapter appears on the list as vmrdma0. The Paired Uplink column displays the network component as the vmnic1 physical network adapter.

    4. To verify the description of the adapter, select the RDMA adapter from the list, and click the Properties tab.
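
If you prefer the command line, you can confirm the same pairing from the ESXi Shell. This is a minimal sketch; adapter names such as vmrdma0 and vmnic1 differ from host to host.

  # List the RDMA devices and their paired uplinks
  esxcli rdma device list
  # List the physical network adapters
  esxcli network nic list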

Configure VMkernel Binding for the RDMA Adapter

Port binding for NVMe over RDMA involves creating a switch and connecting the physical network adapter and the VMkernel adapter to the switch. Through this connection, the RDMA adapter becomes bound to the VMkernel adapter. In the configuration, you can use a vSphere standard switch or a vSphere distributed switch.

The following diagram displays the port binding for the NVMe over RDMA adapter.

For more information about creating switches, see Create a vSphere Standard Switch or Create a vSphere Distributed Switch in the vSphere Networking documentation.

Example of Network Topology with NVMe over RDMA

In this example, two vSphere standard switches and two uplinks (RDMA capable NICs) provide high availability. They connect to two controller pairs in two subnets.

HA with Multiple vSwitches and Multiple Uplinks (RNICs)

Configure VMkernel Binding with a vSphere Standard Switch

You can configure VMkernel port binding for the RDMA adapter using a vSphere standard switch and one uplink per switch. Configuring the network connection involves creating a virtual VMkernel adapter for each physical network adapter. You use 1:1 mapping between each virtual and physical network adapter.

Procedure

  1. Create a vSphere standard switch with a VMkernel adapter and the network component.
    1. In the vSphere Client, select your host and click the Networks tab.
    2. Click Actions > Add Networking.
    3. Select VMkernel Network Adapter and click NEXT.
    4. Select New standard switch and click NEXT.
    5. Under Assigned adapters, click +.
      The list of available physical adapters is displayed.
    6. Select the required physical adapter vmnic, and click OK.
      Note: Make sure that you select the physical network adapter that corresponds to the RDMA adapter. To see the association between the RDMA adapter vmrdma and the physical network adapter vmnic, see View RDMA Network Adapters.
    7. Under VMkernel port settings, enter the required values.
      If you are using VLAN for the storage path, enter the VLAN ID.
    8. In the IP settings list, enter the VMkernel IPv4 settings.
    9. Under Available services, select NVMe over RDMA.
  2. Verify that your switch is correctly configured.
    1. On the Configure tab, select Virtual switches under Networking.
    2. Expand the switch and verify its configuration.

      The physical network adapter and the VMkernel adapter are now connected to the vSphere standard switch. Through this connection, the RDMA adapter is bound to the VMkernel adapter.

  3. Verify the configuration of the VMkernel binding for the RDMA adapter.
    1. In the Networking list, click RDMA adapters, and select the RDMA adapter from the list.
    2. Click the VMkernel adapters binding tab and verify that the associated VMkernel adapter appears on the page.
      In this example, the vmrdma0 RDMA adapter is paired with the vmnic1 network adapter and is connected to the vmk1 VMkernel adapter.
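
The equivalent VMkernel binding can also be sketched with ESXCLI. The switch, port group, VMkernel adapter, and IP values below (vSwitch1, NVMe-RDMA-PG, vmk1, 192.168.100.11) are placeholders, and the NVMeRDMA service tag name is an assumption; verify the supported tags on your host with esxcli network ip interface tag add --help before using it.

  # Create a standard switch and attach the RDMA-capable uplink (placeholder names)
  esxcli network vswitch standard add --vswitch-name=vSwitch1
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
  # Create a port group and a VMkernel adapter on that port group
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=NVMe-RDMA-PG
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NVMe-RDMA-PG
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.11 --netmask=255.255.255.0 --type=static
  # Tag the VMkernel adapter for the NVMe over RDMA service (tag name assumed)
  esxcli network ip interface tag add --interface-name=vmk1 --tagname=NVMeRDMA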

Configure VMkernel Binding with a vSphere Standard Switch and NIC Teaming

You can configure VMkernel port binding for the RDMA adapter using a vSphere standard switch with the NIC teaming configuration. You can use NIC teaming to achieve network redundancy. You can configure two or more network adapters (NICs) as a team for high availability and load balancing.

Procedure

  1. Create a vSphere standard switch with a VMkernel adapter and the network component with the NIC teaming configuration.
    1. In the vSphere Client, select your host and click the Networks tab.
    2. Click Actions > Add Networking.
    3. Select VMkernel Network Adapter and click NEXT.
    4. Select New standard switch and click NEXT.
    5. Under Assigned adapters, click +.
      A list of available physical adapters is displayed.
    6. Select the required physical adapter vmnic, and add it under Active adapters.
    7. Select another physical adapter vmnic, and add it under Unused adapters.
    8. Under VMkernel port settings, enter the required values.
      If you are using VLAN for the storage path, enter the VLAN ID.
    9. In the IP settings list, specify VMkernel IPv4 settings.
    10. Under Available services, select NVMe over RDMA.
    Repeat step 1 if you are configuring an existing standard switch.
  2. Configure your switch for NIC teaming configuration.
    1. Click the Configure tab, and select Virtual switches under Networking.
    2. Select the appropriate VMkernel adapter.
    3. From the right-click menu, click Edit Settings.
    4. Select Teaming and Failover.
    5. Under Active adapters, move the required physical adapter vmnic.
    6. Under Standby adapters > Failover order, move the other physical adapters.
    7. Set appropriate load balancing and other properties.
    8. Repeat the steps to configure additional VMkernel adapters.
  3. Repeat steps 1 and 2 to add and configure an additional set of teamed RNICs. To verify that the adapters are configured, click the Configure tab and select VMkernel adapters.
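
As a command-line alternative, the active and standby order for a port group can also be set with ESXCLI. This is a sketch only; vSwitch1, the port group name NVMe-RDMA-PG, vmnic1, and vmnic2 are placeholders.

  # Set vmnic1 active and vmnic2 standby for the port group that backs the VMkernel adapter
  esxcli network vswitch standard portgroup policy failover set --portgroup-name=NVMe-RDMA-PG --active-uplinks=vmnic1 --standby-uplinks=vmnic2
  # Review the resulting teaming and failover policy
  esxcli network vswitch standard portgroup policy failover get --portgroup-name=NVMe-RDMA-PG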

Configure VMkernel Binding with a vSphere Distributed Switch

You can configure VMkernel port binding for the RDMA adapter using a vSphere distributed switch and one uplink per switch. Configuring the network connection involves creating a virtual VMkernel adapter for each physical network adapter. You use 1:1 mapping between each virtual and physical network adapter.

Procedure

  1. Create a vSphere distributed switch with a VMkernel adapter and the network component.
    1. In the vSphere Client, select Datacenter, and click the Networks tab.
    2. Click Actions, and select Distributed Switch > New Distributed Switch.
    3. Enter a name for the switch.
      Make sure that the location shown is the data center that contains your host, and click Next.
    4. Select a compatible ESXi version, and click Next.
    5. Enter the required number of uplinks, and click Finish.
  2. Add one or more hosts to your distributed virtual switch.
    1. In the vSphere Client, select Datacenter, and click Distributed Switches.
      A list of available DSwitches appears.
    2. Right-click the DSwitch, and select Add and Manage Hosts from the menu.
    3. Select Add hosts, and click Next.
    4. Select your host, and click Next.
    5. Select Assign uplink.
    6. Select the uplink to which you want to assign the vmnic.
    7. Assign a VMkernel adapter, and click Next.
    8. In the vSphere Client, select the DSwitch, and click the Ports tab.
      You can view the uplinks created for your switch here.
  3. Create distributed port groups for the NVMe over RDMA storage path.
    1. In the vSphere Client, select the required DSwitch.
    2. Click Actions and select Distributed Port Group > New Distributed Port Group.
    3. Under Configure Settings, enter the general properties of the port group.
      If you have configured a specific VLAN, enter it in the VLAN ID field.
      Note: Network connectivity issues might occur if you do not configure the VLAN properly.
  4. Configure the VMkernel adapters.
    1. In the vSphere Client, expand the DSwitch list, and select the distributed port group.
    2. Click Actions > Add VMkernel Adapters.
    3. In the Select Member Hosts dialog box, select your host and click OK.
    4. In the Configure VMkernel Adapter dialog box, ensure that the MTU matches the switch MTU.
    5. Under Available services, select NVMe over RDMA for appropriate tagging.
    6. Click Finish.
    7. Repeat steps 2 and 3 to add multiple RDMA-capable NICs.
  5. Set NIC teaming policies for the distributed port groups.
    1. In the Distributed Port Group, click Actions > Edit Settings.
    2. Click Teaming and Failover, and verify the active uplinks.
    3. Assign one uplink as Active for the port group, and the other uplink as Unused.
      Repeat step 3 for each of the port groups created.

What to do next

After you complete the configuration, click the Configure tab, and verify that the physical adapters view on your host lists the DSwitch for the selected NICs.
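
Distributed switches are created and managed through vCenter Server, so there is no full ESXCLI equivalent for this procedure. From the host, you can still verify the outcome with a short sketch such as the following.

  # List the distributed switches visible to this host and their uplinks
  esxcli network vswitch dvs vmware list
  # Confirm the VMkernel adapters and the port groups they use
  esxcli network ip interface list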

Add Software NVMe over RDMA or NVMe over TCP Adapters

ESXi supports NVMe over RDMA and NVMe over TCP software adapters. Use the vSphere Client to add the software storage adapters for NVMe over RDMA or NVMe over TCP.

Prerequisites

Procedure

  1. In the vSphere Client, navigate to the ESXi host.
  2. Click the Configure tab.
  3. Under Storage, click Storage Adapters, and click the Add Software Adapter icon.
  4. Select the adapter type as required.
    • NVMe over RDMA adapter
    • NVMe over TCP adapter
  5. Depending on your selection in Step 4, select an appropriate RDMA adapter or TCP network adapter (vmnic) from the drop-down menu.
    Note: If you get an error message that prevents you from creating the software adapter, make sure that the VMkernel binding for the adapter is configured correctly. For more information, see Configure VMkernel Binding for the RDMA Adapter and Configure VMkernel Binding for the NVMe over TCP Adapter.

Results

The software NVMe over RDMA and NVMe over TCP adapters appear in the list as vmhba storage adapters. You can remove the adapters if you need to free the underlying RDMA and TCP network adapter for other purposes. See Remove Software NVMe Adapters from the ESXi Host.
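
You can also enable the software adapters from the ESXi Shell. The following sketch assumes a release that supports the esxcli nvme fabrics namespace (vSphere 7.0 Update 3 or later); vmrdma0 and vmnic1 are placeholder device names, and you can confirm the exact options with esxcli nvme fabrics enable --help.

  # Create a software NVMe over RDMA adapter on top of an RDMA device
  esxcli nvme fabrics enable --protocol RDMA --device vmrdma0
  # Or create a software NVMe over TCP adapter on top of a network adapter
  esxcli nvme fabrics enable --protocol TCP --device vmnic1
  # Verify that the new vmhba adapters appear
  esxcli nvme adapter list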

Add Controllers for NVMe over Fabrics

Use the vSphere Client to add an NVMe controller. After you add the controller, the NVMe namespaces associated with the controller become available to your ESXi host. The NVMe storage devices that represent the namespaces in the ESXi environment appear on the storage devices list.

Prerequisites

Note: With NVMe over Fibre Channel, after you install the required adapter, it automatically connects to all targets that are reachable at the moment. You can later reconfigure the adapter and disconnect its controllers or connect other controllers that were not available during the host boot.

Procedure

  1. In the vSphere Client, navigate to the ESXi host.
  2. Click the Configure tab.
  3. Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
  4. Click the Controllers tab, and click Add Controller.
  5. In the Add Controller dialog box, select one of the following discovery methods.
    Automatically
    This option indicates that your host can discover controllers automatically and accept a connection to any available controller.
    1. Specify the following parameters to discover controllers.
      • For NVMe over RDMA (RoCE v2), the IP address and transport port number.
      • For NVMe over TCP, the IP address, transport port number, and the digest parameter.
    2. Click Discover Controllers.
    3. From the list of controllers, select the controller to use.
    Manually
    With this method, you manually enter controller details. The host requests a connection to a specific controller using the parameters that you specify:
    • Subsystem NQN
    • Target port identification.
      • For NVMe over RDMA (RoCE v2), the IP address and transport port number (optional).
      • For NVMe over TCP, the IP address, transport port number (optional), and the digest parameter (optional).
      • For NVMe over Fibre Channel, the WorldWideNodeName and the WorldWidePortName.
    • Admin queue size. An optional parameter that specifies the size of the admin queue of the controller. The default value is 16.
    • Keepalive timeout. An optional parameter that specifies, in seconds, the keep alive timeout between the adapter and the controller. The default timeout is 60 seconds.
    Note: IO Queue Size and IO Queue Number are optional parameters that can be set only through esxcli.

Results

The controller appears on the list of controllers. Your host can now discover the NVMe namespaces that are associated with the controller. The NVMe storage devices that represent the namespaces in the ESXi environment appear on the storage devices list in the vSphere Client.
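
The discovery and connection steps can also be scripted with ESXCLI. Treat the following as a sketch: the adapter name vmhba67, the target address 192.168.100.20, port 4420, and the subsystem NQN are placeholder values, and option spellings can vary between releases (check esxcli nvme fabrics connect --help).

  # Discover the controllers that the target exposes
  esxcli nvme fabrics discover --adapter vmhba67 --ip-address 192.168.100.20 --port-number 4420
  # Connect to a specific subsystem (placeholder NQN)
  esxcli nvme fabrics connect --adapter vmhba67 --ip-address 192.168.100.20 --port-number 4420 --subsystem-nqn nqn.2016-01.com.example:subsystem1
  # Verify the connected controllers and the namespaces they expose
  esxcli nvme controller list
  esxcli nvme namespace list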