You can configure VMkernel port binding for the NVMe over TCP adapter by using a vSphere Distributed Switch with one uplink per switch. Configuring the network connection involves creating a VMkernel adapter for each physical network adapter, with a 1:1 mapping between each virtual and physical network adapter.

Procedure

  1. Create a vSphere distributed switch with a VMkernel adapter and the network component.
    1. In the vSphere Client, select Datacenter, and click the Networks tab.
    2. Click Actions, and select Distributed Switch > New Distributed Switch.
    3. Enter a name for the switch.
      Ensure that the data center location includes your host, and click Next.
    4. Select ESXi 7.0 and later as the distributed switch version, and click Next.
    5. Enter the required number of uplinks, and click Finish.
  2. Add one or more hosts to your distributed virtual switch.
    1. In the vSphere Client, select Datacenter, and click Distributed Switches.
      A list of available DSwitches appears.
    2. Right-click the DSwitch, and select Add and Manage Hosts from the menu.
    3. Select Add hosts, and click Next.
    4. Select your host, and click Next.
    5. Select Assign uplink.
    6. Select the uplink to assign to the vmnic.
    7. Assign a VMkernel adapter, and click Next.
    8. In the vSphere Client, select the DSwitch, and click the Ports tab.
      You can view the uplinks created for your switch here.
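If you have ESXi Shell or SSH access to the host, you can optionally confirm from the command line that the host now participates in the distributed switch. This is a sketch using standard esxcli commands; it assumes nothing beyond your own switch and vmnic names, and it must be run on the ESXi host itself.

```shell
# List the distributed switches visible to this host, including
# their uplink ports and the vmnics attached to them.
esxcli network vswitch dvs vmware list

# List the host's physical NICs to confirm that the vmnic you
# assigned as an uplink is present and has link.
esxcli network nic list
```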
  3. Create distributed port groups for the NVMe over TCP storage path.
    1. In the vSphere Client, select the required DSwitch.
    2. Click Actions and select Distributed Port Group > New Distributed Port Group.
    3. Under Configure Settings, enter the general properties of the port group.
      If you have configured a specific VLAN, add it in the VLAN ID field.
      Note: Network connectivity issues might occur if the VLAN ID is not configured properly.
  4. Configure the VMkernel adapters.
    1. In the vSphere Client, expand the DSwitch list, and select the distributed port group.
    2. Click Actions > Add VMkernel Adapters.
    3. In the Select Member Hosts dialog box, select your host and click OK.
    4. In the Configure VMkernel Adapter dialog box, ensure that the MTU matches the switch MTU.
    5. Click Finish.
    6. Repeat substeps 2 and 3 to add multiple TCP-capable NICs.
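You can also verify or adjust the VMkernel adapters from the ESXi command line. The following esxcli commands are a sketch; vmk2 and the 9000-byte MTU are example values, so substitute your own adapter name and switch MTU.

```shell
# Show all VMkernel adapters with their MTU and port group bindings.
esxcli network ip interface list

# Example: set the MTU of vmk2 to match a 9000-byte switch MTU.
esxcli network ip interface set -i vmk2 -m 9000

# Tag the VMkernel adapter for NVMe over TCP traffic.
esxcli network ip interface tag add -i vmk2 -t NVMeTCP
```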
  5. Set NIC teaming policies for the distributed port groups.
    Note: The NVMe/TCP adapter does not support NIC teaming features such as failover and load balancing. Instead, it relies on storage multipathing for these capabilities. However, if you must configure NIC teaming for other network workloads on the uplink that serves the NVMe/TCP adapter, follow these steps.
    1. In the Distributed Port Group, click Actions > Edit Settings.
    2. Click Teaming and Failover, and verify the active uplinks.
    3. Assign one uplink as Active for the port group, and the other uplink as Unused.
      Repeat this substep for each of the port groups that you created.

What to do next

After you complete the configuration, click the Configure tab on your host, and verify that the Physical adapters tab lists the DSwitch for the selected NICs.
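You can additionally confirm the storage side from the ESXi command line. The following is a sketch; the adapter name vmhba65 and the controller IP address are placeholders for your own values.

```shell
# List NVMe adapters on the host; a configured NVMe over TCP
# adapter appears with a vmhba name and the TCP transport type.
esxcli nvme adapter list

# Optionally, run discovery against your NVMe/TCP target through
# the new adapter (placeholder adapter name and IP address).
esxcli nvme fabrics discover -a vmhba65 -i 192.168.1.100
```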