Enhanced data path is a networking stack mode that, when configured, provides superior network performance. It is primarily targeted at NFV workloads, which require the performance benefits this mode provides.
The N-VDS switch can be configured in the enhanced data path mode only on an ESXi host. ENS also supports traffic flowing through Edge VMs.
In the enhanced data path mode, you can configure:
- Overlay traffic
- VLAN traffic
Supported VMkernel NICs
With NSX-T Data Center supporting multiple ENS host switches, the maximum number of VMkernel NICs supported per host is 32.
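To see how many VMkernel NICs a host currently defines against this limit, you can list them from the ESXi shell. This is a sketch; it assumes the default `esxcli` list output, in which each interface block begins with an unindented vmkN name:

```shell
# List all VMkernel NICs on the host; each vmkN counts toward the
# per-host limit when multiple ENS host switches are configured.
esxcli network ip interface list

# Count them (assumes each interface entry starts at column 0 with "vmk").
esxcli network ip interface list | grep -c '^vmk'
```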
High-Level Process to Configure Enhanced Data Path
As a network administrator, before you create transport zones supporting N-VDS in the enhanced data path mode, you must prepare the network with supported NIC cards and drivers. To improve network performance, you can make the Load Balanced Source teaming policy NUMA node aware.
The high-level steps are as follows:
- Use NIC cards that support the enhanced data path.
See the VMware Compatibility Guide to find NIC cards that support enhanced data path.
On the VMware Compatibility Guide page, under the IO Devices category, select ESXi 6.7, set the IO Device Type to Network, and set the feature to N-VDS Enhanced Datapath.
- Download and install the latest NIC drivers from the My VMware page.
- Go to Drivers & Tools > Driver CDs.
- Download NIC drivers:
VMware ESXi 6.7 ixgben-ens 1.1.3 NIC Driver for Intel Ethernet Controllers 82599, x520, x540, x550, and x552 family
VMware ESXi 6.7 i40en-ens 1.1.3 NIC Driver for Intel Ethernet Controllers X710, XL710, XXV710, and X722 family
- Create an uplink policy.
- Create a transport zone with N-VDS in the enhanced data path mode. Note: For ENS transport zones configured for overlay traffic:
  - For a Microsoft Windows virtual machine running a VMware Tools version earlier than 11.0.0 with a vNIC of type VMXNET3, ensure the MTU is set to 1500.
  - For a Microsoft Windows virtual machine running vSphere 6.7 U1 and VMware Tools version 11.0.0 or later, ensure the MTU is set to a value less than 8900.
  - For virtual machines running other supported operating systems, ensure the virtual machine MTU is set to a value less than 8900.
- Create a host transport node. Configure the enhanced data path N-VDS with logical cores and NUMA nodes.
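The driver-installation and MTU steps above can be sketched from the ESXi shell as follows. The offline-bundle path, NIC name, netstack name, and peer tunnel endpoint address are placeholders for illustration, not values from this document:

```shell
# Install the downloaded ENS driver offline bundle (path is hypothetical).
esxcli software vib install -d /vmfs/volumes/datastore1/i40en-ens-offline-bundle.zip

# Confirm the uplink is claimed by the ENS variant of the driver.
esxcli network nic get -n vmnic0

# Spot-check the overlay path MTU from this host's tunnel endpoint.
# -d sets "don't fragment"; -s is the ICMP payload size (IP and ICMP
# headers add 28 bytes, so 1472 exercises a 1500-byte MTU).
# The netstack name and the peer TEP IP are assumptions.
vmkping ++netstack=vxlan -d -s 1472 192.168.130.12
```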
Load Balanced Source Teaming Policy Mode Aware of NUMA
The Load Balanced Source teaming policy becomes NUMA aware when the following conditions are met:
- The Latency Sensitivity on VMs is High.
- The network adapter type used is VMXNET3.
If the NUMA node location of either the VM or the physical NIC is not available, the Load Balanced Source teaming policy does not consider NUMA awareness to align VMs and NICs.
The teaming policy also does not apply NUMA awareness in the following conditions:
- The LAG uplink is configured with physical links from multiple NUMA nodes.
- The VM has affinity to multiple NUMA nodes.
- The ESXi host fails to define NUMA information for either the VM or the physical links.
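The two VM-side conditions above map to standard per-VM settings. A minimal sketch of the relevant entries in a VM's .vmx configuration file (the Ethernet index is illustrative; the option names are standard vSphere settings):

```
# Latency Sensitivity set to High for this VM
sched.cpu.latencySensitivity = "high"

# The vNIC uses the VMXNET3 adapter type
ethernet0.virtualDev = "vmxnet3"
```

In practice, Latency Sensitivity is usually set through the vSphere Client (VM Options > Advanced) rather than by editing the .vmx file directly.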
ENS Support for SCTP Applications
In SCTP environments, NFV workloads use multi-homing and redundancy features to increase the resiliency and reliability of application traffic. Multi-homing is the ability to support redundant paths from a source VM to a destination VM.
The number of redundant network paths available to a VM for sending traffic to the target VM depends on the number of physical NICs available as uplinks for the overlay or VLAN network. A redundant path is used when the pNIC pinned to a logical switch fails. In this way, the enhanced data path N-VDS provides redundant network paths for traffic carried over SCTP.
The high-level tasks are:
- Prepare host as an NSX-T Data Center transport node.
- Prepare VLAN or Overlay Transport Zone with two N-VDS switches in Enhanced Data Path mode.
- On N-VDS 1, pin the first physical NIC to the switch.
- On N-VDS 2, pin the second physical NIC to the switch.
The N-VDS in enhanced data path mode ensures that if pNIC1 becomes unavailable, traffic from VM 1 is routed through the redundant path: vNIC 1 → tunnel endpoint 2 → pNIC 2 → VM 2. Note that vNIC1 of VM 1 and vNIC1 of VM 2 are on one subnet. Similarly, vNIC2 of VM 1 and vNIC2 of VM 2 are on another subnet.