The adapter configuration process on the ESXi host involves setting up VMkernel binding for a TCP network adapter, and then adding a software adapter for NVMe over TCP. After that, you can add an NVMe controller.

The entire configuration process includes these actions.
  1. Install an adapter that supports NVMe over TCP technology on your ESXi host, for example i40en.
  2. Configure VMkernel Binding for the NVMe over TCP Adapter. VMkernel binding for NVMe over TCP involves creating a virtual switch and connecting the physical network adapter and the VMkernel adapter to the virtual switch. Through this connection, the TCP adapter becomes bound to the VMkernel adapter. In the configuration, you can use a vSphere standard switch or a vSphere distributed switch.
  3. Add the Software NVMe over TCP Adapter. Use the vSphere Client to enable the software storage adapters for NVMe over TCP.
  4. Add Controllers for NVMe over Fabrics. Use the vSphere Client to add an NVMe controller. After you add the controller, the NVMe namespaces associated with the controller become available to your ESXi host. The NVMe storage devices that represent the namespaces in the ESXi environment appear on the storage devices list.

Configure VMkernel Binding for the NVMe over TCP Adapter

Port binding for NVMe over TCP involves creating a virtual switch and connecting the physical network adapter and the VMkernel adapter to the virtual switch. Through this connection, the TCP adapter becomes bound to the VMkernel adapter. In the configuration, you can use a vSphere standard switch or a vSphere distributed switch.
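
If you prefer to script this binding, a roughly equivalent esxcli outline is shown below. The switch, port group, vmnic, vmk, and IP values are placeholders, and the exact options and the NVMeTCP tag name can vary between ESXi versions, so verify them with esxcli --help on your host before use.

  # Create a standard switch and attach the physical uplink (placeholder names)
  esxcli network vswitch standard add --vswitch-name=vSwitchNVMe0
  esxcli network vswitch standard uplink add --vswitch-name=vSwitchNVMe0 --uplink-name=vmnic4

  # Create a port group and a VMkernel adapter bound to it
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitchNVMe0 --portgroup-name=NVMe-TCP-PG0
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NVMe-TCP-PG0
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.11 --netmask=255.255.255.0 --type=static

  # Tag the VMkernel adapter for the NVMe over TCP service
  esxcli network ip interface tag add --interface-name=vmk1 --tagname=NVMeTCP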

The following diagram displays the port binding for the NVMe over TCP adapter.

For more information about creating switches, see Create a vSphere Standard Switch or Create a vSphere Distributed Switch in the vSphere Networking documentation.

Example of Network Topology with NVMe over TCP

In this example, two vSphere standard switches and two network adapters (vmnic) on the host provide high availability. They connect to two external switches.

Configuration of the network topology for the NVMe over TCP adapter.
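
To identify the two physical adapters (vmnic) for a topology like this one, you can list the NICs available on the host. This is only a quick check; adapter names, drivers, and link states on your host will differ.

  # List physical NICs, their drivers, and link status
  esxcli network nic list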

Configure VMkernel Binding for the TCP Adapter with a vSphere Standard Switch

You can configure VMkernel binding for the TCP adapter using a vSphere standard switch and one uplink per switch. Configuring the network connection involves creating a virtual VMkernel adapter for each physical network adapter. You use 1:1 mapping between each virtual and physical network adapter.

Procedure

  1. Create a vSphere standard switch with a VMkernel adapter and the network component.
    1. In the vSphere Client, select your host and click the Networks tab.
    2. Click Actions > Add Networking.
    3. Select VMkernel Network Adapter and click NEXT.
    4. Select New standard switch and click NEXT.
    5. Under Assigned adapters, click +.
      The list of available physical adapters is displayed.
    6. Select the required physical adapter vmnic, and click OK.
      Note: Make sure to select the physical network adapter that corresponds to the TCP/IP adapter.
    7. Under VMkernel port settings, enter the required values.
      If you are using VLAN for the storage path, enter the VLAN ID.
    8. In the IP settings list, enter the VMkernel IPv4 settings.
    9. Under Available services, select NVMe over TCP to tag the VMkernel adapter for the NVMe over TCP service.
  2. Verify that your switch is correctly configured.
    1. On the Configure tab, select Virtual switches under Networking.
    2. Expand the switch and verify its configuration.

      The illustration shows that the physical network adapter and the VMkernel adapter are connected to the vSphere standard switch. Through this connection, the TCP adapter is bound to the VMkernel adapter.

  3. Set NIC teaming policies for the vSphere standard switch.
    Note: The NVMe over TCP adapter does not support NIC teaming features such as failover and load balancing. Instead, it relies on Storage Multipathing for these capabilities. However, if you must configure NIC teaming for other network workloads on the uplink that serves the NVMe over TCP adapter, follow these steps.
    1. Click the Configure tab, and select Virtual switches under Networking.
    2. Select the appropriate VMkernel adapter.
    3. From the right-click menu, click Edit Settings.
    4. Select Teaming and Failover.
    5. Under Active adapters, move the required physical adapter vmnic.
    6. Under Standby adapters > Failover order, move the other physical adapters.
    7. Set appropriate load balancing and other properties.
    8. Repeat the steps to configure additional VMkernel adapters.
    To verify that the adapter is configured, click the Configure tab and select VMkernel adapters.
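
As an optional cross-check from the ESXi Shell, you can confirm that each VMkernel adapter exists and carries the NVMe over TCP tag. The vmk name below is an example, and the tag name can differ by release, so treat this as a sketch.

  # List VMkernel adapters and the port groups they attach to
  esxcli network ip interface list

  # Show the service tags on a specific VMkernel adapter (example: vmk1)
  esxcli network ip interface tag get -i vmk1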

Configure VMkernel Binding for the TCP Adapter with a vSphere Distributed Switch

You can configure VMkernel port binding for the TCP adapter using a vSphere distributed switch and one uplink per switch. Configuring the network connection involves creating a virtual VMkernel adapter for each physical network adapter. You use 1:1 mapping between each virtual and physical network adapter.

Procedure

  1. Create a vSphere distributed switch with a VMkernel adapter and the network component.
    1. In the vSphere Client, select Datacenter, and click the Networks tab.
    2. Click Actions, and select Distributed Switch > New Distributed Switch.
    3. Select a name for the switch.
      Ensure that the data center location you select contains your host, and click Next.
    4. Select a compatible ESXi version, and click Next.
    5. Enter the required number of uplinks, and click Finish.
  2. Add one or more hosts to your distributed virtual switch.
    1. In the vSphere Client, select Datacenter, and click Distributed Switches.
      A list of available DSwitches appears.
    2. Right-click the DSwitch, and select Add and Manage Hosts from the menu.
    3. Select Add hosts, and click Next.
    4. Select your host, and click Next.
    5. Select Assign uplink.
    6. Select the relevant uplink to assign to the vmnic.
    7. Assign a VMkernel adapter, and click Next.
    8. In the vSphere Client, select the DSwitch, and click the Ports tab.
      You can view the uplinks created for your switch here.
  3. Create distributed port groups for the NVMe over TCP storage path.
    1. In the vSphere Client, select the required DSwitch.
    2. Click Actions and select Distributed Port Group > New Distributed Port Group.
    3. Under Configure Settings, enter the general properties of the port group.
      If you have configured a specific VLAN, enter it in the VLAN ID field.
      Note: Network connectivity issues might occur if you do not configure the VLAN properly.
  4. Configure the VMkernel adapters.
    1. In the vSphere Client, expand the DSwitch list, and select the distributed port group.
    2. Click Actions > Add VMkernel Adapters.
    3. In the Select Member Hosts dialog box, select your host and click OK.
    4. In the Configure VMkernel Adapter dialog box, ensure that the MTU matches the switch MTU.
    5. Click Finish.
    6. Repeat step b and step c to add multiple TCP-capable NICs.
  5. Set NIC teaming policies for the distributed port groups.
    Note: The NVMe over TCP adapter does not support NIC teaming features such as failover and load balancing. Instead, it relies on Storage Multipathing for these capabilities. However, if you must configure NIC teaming for other network workloads on the uplink that serves the NVMe over TCP adapter, follow these steps.
    1. In the Distributed Port Group, click Actions > Edit Settings.
    2. Click Teaming and Failover, and verify the active uplinks.
    3. Assign one uplink as Active for the port group, and the other uplink as Unused.
      Repeat step c for each of the port groups created.

What to do next

After you complete the configuration, click the Configure tab, and verify that the Physical adapters view on your host lists the distributed switch for the selected NICs.
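
You can also run a quick check from the ESXi Shell. The following commands list the distributed switches that the host participates in and the VMkernel adapters attached to them; output columns vary by ESXi version.

  # List distributed switches visible to this host
  esxcli network vswitch dvs vmware list

  # Confirm the VMkernel adapters and their port groups
  esxcli network ip interface list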

Add Software NVMe over RDMA or NVMe over TCP Adapters

ESXi supports NVMe over RDMA and NVMe over TCP software adapters. Use the vSphere Client to add the software storage adapters for NVMe over RDMA or NVMe over TCP.

Prerequisites

Make sure that the VMkernel binding for the RDMA or TCP network adapter is configured. See Configure VMkernel Binding for the RDMA Adapter and Configure VMkernel Binding for the NVMe over TCP Adapter.

Procedure

  1. In the vSphere Client, navigate to the ESXi host.
  2. Click the Configure tab.
  3. Under Storage, click Storage Adapters, and click the Add Software Adapter icon.
  4. Select the adapter type as required.
    • NVMe over RDMA adapter
    • NVMe over TCP adapter
  5. Depending on your selection in Step 4, select an appropriate RDMA adapter or TCP network adapter (vmnic) from the drop-down menu.
    Note: If you get an error message that prevents you from creating the software adapter, make sure that the VMkernel binding for the adapter is configured correctly. For more information, see Configure VMkernel Binding for the RDMA Adapter and Configure VMkernel Binding for the NVMe over TCP Adapter.

Results

The software NVMe over RDMA and NVMe over TCP adapters appear in the list as vmhba storage adapters. You can remove the adapters if you need to free the underlying RDMA and TCP network adapter for other purposes. See Remove Software NVMe Adapters from the ESXi Host.
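
If you automate host setup, a comparable esxcli flow exists on recent ESXi releases. The commands below are a sketch only; vmnic4 is a placeholder, and the sub-commands and options might differ on your build, so confirm them with esxcli nvme fabrics --help.

  # Enable a software NVMe over TCP adapter on a bound vmnic (placeholder name)
  esxcli nvme fabrics enable --protocol TCP --device vmnic4

  # List NVMe adapters; the new software adapter appears as a vmhba
  esxcli nvme adapter list

  # Disable the adapter later if you need to free the underlying NIC
  esxcli nvme fabrics disable --protocol TCP --device vmnic4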

Add Controllers for NVMe over Fabrics

Use the vSphere Client to add an NVMe controller. After you add the controller, the NVMe namespaces associated with the controller become available to your ESXi host. The NVMe storage devices that represent the namespaces in the ESXi environment appear on the storage devices list.

Prerequisites

Note: With NVMe over Fibre Channel, after you install the required adapter, it automatically connects to all targets that are reachable at the moment. You can later reconfigure the adapter and disconnect its controllers or connect other controllers that were not available during the host boot.

Procedure

  1. In the vSphere Client, navigate to the ESXi host.
  2. Click the Configure tab.
  3. Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
  4. Click the Controllers tab, and click Add Controller.
  5. On the Add controller dialog box, select one of the following discovery methods.
    Option Description
    Automatically This option indicates that your host can discover controllers automatically and accept a connection to any available controller.
    1. Specify the following parameters to discover controllers.
      • For NVMe over RDMA (RoCE v2), the IP address and transport port number.
      • For NVMe over TCP, the IP address, transport port number, and the digest parameter.
    2. Click Discover Controllers.
    3. From the list of controllers, select the controller to use.
    Manually With this method, you manually enter controller details. The host requests a connection to a specific controller using the parameters you specify:
    • Subsystem NQN
    • Target port identification.
      • For NVMe over RDMA (RoCE v2), the IP address and transport port number (optional).
      • For NVMe over TCP, the IP address, transport port number (optional), and the digest parameter (optional).
      • For NVMe over Fibre Channel, the WorldWideNodeName and WorldWidePortName.
    • Admin queue size. An optional parameter that specifies the size of the admin queue of the controller. The default value is 16.
    • Keepalive timeout. An optional parameter that specifies the keep-alive timeout, in seconds, between the adapter and the controller. The default timeout is 60 seconds.
    Note: IO Queue Size and IO Queue Number are optional parameters that can be set only through esxcli.
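
For reference, a roughly equivalent esxcli workflow, including the IO queue parameters mentioned in the note, is sketched below. The addresses, port numbers, subsystem NQN, and vmhba name are placeholders, and option names can differ between ESXi versions, so check esxcli nvme fabrics connect --help before relying on them.

  # Discover controllers behind a discovery service (placeholder address and port)
  esxcli nvme fabrics discover -a vmhba65 -i 192.168.100.20 -p 8009

  # Connect to a specific controller; the IO queue options are optional tuning parameters
  esxcli nvme fabrics connect -a vmhba65 -i 192.168.100.20 -p 4420 -s nqn.2016-01.com.example:subsystem1 --io-queue-number 4 --io-queue-size 64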

Results

The controller appears on the list of controllers. Your host can now discover the NVMe namespaces that are associated with the controller. The NVMe storage devices that represent the namespaces in the ESXi environment appear on the storage devices list in the vSphere Client.
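
To confirm the result from the ESXi Shell, you can list the connected controllers and the namespaces they expose. This is only a verification sketch; output fields depend on the ESXi version and the storage array.

  # List NVMe controllers connected through the host adapters
  esxcli nvme controller list

  # List the namespaces that those controllers present to the host
  esxcli nvme namespace list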