The Enhanced mode of the N-VDS switch uses DPDK and vertical NUMA alignment to accelerate workloads. For telco workloads, the Enhanced mode of the N-VDS switch carries the data plane traffic.

Prerequisites

  1. Download the drivers for the NIC cards using the VMware Compatibility Guide.

    Note:

    For example, if your resource pod hosts have Intel X550 cards, update the firmware and driver to the versions that the compatibility guide specifies for the N-VDS Enhanced mode.

  2. Install the VMware Installation Bundle (VIB) downloaded in Step 1 by running one of the following commands:

    • esxcli software vib install -v file:///vmfs/volumes/<datastore>/fileName.vib --no-sig-check
    • esxcli software vib install -d file:///vmfs/volumes/<datastore>/fileName-offline-bundle.zip --no-sig-check
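The two install variants above differ only in the flag: `-v` takes a single `.vib` file, while `-d` takes an offline bundle `.zip`. A minimal sketch that builds the matching command for a file on a host datastore; the datastore and file names are placeholders, and `--no-sig-check` should only be used for VIBs obtained from a trusted source:

```python
# Build the esxcli install command for either a single VIB or an offline bundle.
# Datastore and file names below are illustrative placeholders.
def vib_install_cmd(datastore: str, filename: str) -> str:
    # A .zip is an offline bundle (-d); anything else is treated as a .vib (-v).
    flag = "-d" if filename.endswith(".zip") else "-v"
    return (f"esxcli software vib install {flag} "
            f"file:///vmfs/volumes/{datastore}/{filename} --no-sig-check")

print(vib_install_cmd("datastore1", "nvmxnet3-ens.vib"))
print(vib_install_cmd("datastore1", "nvmxnet3-ens-offline-bundle.zip"))
```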

Procedure

  1. Check which VIB versions are installed on the resource pod hosts. Example:
    #esxcli software vib list | grep -i ens
    nvmxnet3-ens                   2.0.0.22-1vmw.703.0.20.19193900        VMW     VMwareCertified   2022-01-28
  2. Configure NSX-T components.
    1. Transport Zones (Enhanced data path Overlay and VLAN). See Create Transport Zones.
    2. IP Pools for TEP. See Create an IP Pool for Tunnel End Point IP Addresses.
    3. Transport Node Profile. See Add a Transport Node Profile.
    4. Uplink profiles. See Create Uplink Profiles.
  3. Configure NSX for Resource Cluster ESXi hosts. See Configure a Managed Host Transport Node.
  4. Deploy and configure the NSX-T Edge Transport node. See Create an NSX Edge Transport Node.
    1. Configure the NSX-T Edge Cluster. See Create an NSX Edge Cluster.
  5. (Optional) Install and configure a Bare Metal NSX Edge.
  6. Configure NSX-T Logical Segments. See NSX-T Data Center Administration Guide.
    1. Create NSX-T segments.
    2. Create and Configure Logical Routers.
      1. Create a Tier-0 Logical Router and configure the uplink interface.
      2. Configure BGP peering with the ToR switch and enable route redistribution.
      3. Create a Tier-1 Logical Router and attach it to the Tier-0 Logical Router.
      4. Configure downlink interfaces on the Tier-1 Logical Router for the logical switches.
      5. Configure route redistribution on the Tier-1 router.
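The NSX-T objects created in the steps above can also be driven through the NSX Manager REST API. The sketch below builds only the request body for an Enhanced-data-path overlay transport zone (step 2.1); it assumes the Management Plane API endpoint `POST /api/v1/transport-zones` and the `host_switch_mode: ENS` field, so verify the field names against the API guide for your NSX-T release. No call is made here.

```python
import json

# Request body for an Enhanced data path (ENS) transport zone, as would be
# POSTed to /api/v1/transport-zones on the NSX Manager. Field names assume
# the NSX-T Management Plane API; check them against your release's API guide.
def ens_transport_zone(name: str, host_switch: str,
                       transport_type: str = "OVERLAY") -> dict:
    return {
        "display_name": name,
        "host_switch_name": host_switch,
        "transport_type": transport_type,   # "OVERLAY" or "VLAN"
        "host_switch_mode": "ENS",          # Enhanced data path mode
    }

body = ens_transport_zone("tz-ens-overlay", "nvds-ens")
print(json.dumps(body, indent=2))
```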

What to do next

On the ESXi hosts, verify that the outputs are similar to the following:

#esxcfg-nics -e | egrep "Driver|True"
vmnic6  i40en       True          True          True          False         3c:fd:fe:8b:d9:a0 Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic7  i40en       True          True          True          False         3c:fd:fe:8b:d9:a1 Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic8  i40en       True          False         True          False         3c:fd:fe:8b:d9:a2 Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic9  i40en       True          False         True          False         3c:fd:fe:8b:d9:a3 Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
#nsxdp-cli ens port list
portID      ensPID TxQ RxQ hwMAC             numMACs  type         Queue Placement(tx|rx)
------------------------------------------------------------------------------
100663322   0      1   1   00:50:56:60:db:ae 0        GENERIC      4 |4
100663323   1      1   1   00:50:56:63:76:f3 0        GENERIC      2 |2
100663328   2      1   1   00:50:56:65:8b:25 0        GENERIC      5 |5
2248146962  3      8   8   00:00:00:00:00:00 0        UPLINK       0 1 2 3 4 5 6 - |4 1 2 3 4 5 6 0
2248146960  4      8   8   00:00:00:00:00:00 0        UPLINK       0 1 2 3 4 5 6 - |4 0 1 2 3 4 5 6
100663359   6      2   2   00:50:56:01:00:16 0        VNIC         0 0 |0 0
100663341   7      2   2   00:50:56:89:fb:a3 0        VNIC         0 0 |0 0
100663350   8      2   2   00:50:56:89:dc:b5 0        VNIC         0 0 |0 0
100663360   9      1   1   00:50:56:89:d9:26 0        GENERIC      0 |0
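Captured `nsxdp-cli ens port list` output can also be checked programmatically. A minimal sketch, assuming the output has already been saved to a string; the column layout follows the sample above, and the expected result (two UPLINK ports with 8 Tx and 8 Rx queues) reflects this example topology only:

```python
# Parse captured `nsxdp-cli ens port list` output and confirm that the
# physical uplinks joined the ENS datapath with multiple Tx/Rx queues.
# SAMPLE is an abridged copy of the output shown above.
SAMPLE = """\
portID      ensPID TxQ RxQ hwMAC             numMACs  type
2248146962  3      8   8   00:00:00:00:00:00 0        UPLINK
2248146960  4      8   8   00:00:00:00:00:00 0        UPLINK
100663359   6      2   2   00:50:56:01:00:16 0        VNIC
"""

def ens_uplinks(output: str):
    """Return (portID, TxQ, RxQ) for every UPLINK port in the listing."""
    uplinks = []
    for line in output.splitlines()[1:]:          # skip the header row
        fields = line.split()
        if len(fields) >= 7 and fields[6] == "UPLINK":
            uplinks.append((fields[0], int(fields[2]), int(fields[3])))
    return uplinks

uplinks = ens_uplinks(SAMPLE)
# In this example, both uplinks should report 8 Tx and 8 Rx queues.
assert len(uplinks) == 2 and all(t == 8 and r == 8 for _, t, r in uplinks)
```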