Enhanced Data Path (EDP) is a networking stack mode that, when configured, provides superior network performance. It is primarily targeted at NFV workloads, which gain performance benefits by leveraging DPDK capabilities.
A VDS switch can be configured in enhanced data path mode only on an ESXi host. Enhanced Data Path also supports traffic flowing through Edge VMs.
In enhanced data path mode, both of the following traffic modes are supported:
- Overlay traffic
- VLAN traffic
Supported VMkernel NICs
With NSX supporting multiple Enhanced Data Path host switches, the maximum number of VMkernel NICs supported per host is 32.
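To see the VMkernel NICs currently configured on a host, you can list them with the standard ESXi command below (a quick check; output columns vary by ESXi version):
esxcli network ip interface list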
High-Level Process to Configure Enhanced Data Path
As a network administrator, before you create transport zones supporting VDS in enhanced data path mode, you must prepare the network with supported NIC cards and drivers. To improve network performance, you can enable the Load Balanced Source teaming policy, which can be made NUMA node aware.
The high-level steps are as follows:
- Use NIC cards that support the enhanced data path.
See the VMware Compatibility Guide to identify NIC cards that support enhanced data path.
- On the VMware Compatibility Guide page, from the Systems/Servers drop-down menu, select IO Devices.
- On the IO devices page, in the Product Release Version section, select ESXi <version>.
- In the IO device Type section, select Network.
- In the Features section, select Enhanced Datapath - Interrupt Mode or Enhanced Datapath - Poll Mode.
- Click Update and View Results.
- The search results list the supported NIC cards that are compatible with the ESXi version you selected.
- Identify the brand whose driver you want, and click the Model URL to view and download the driver.
- Download and install the latest NIC drivers from the My VMware page.
- Select the VMware vSphere version.
- Go to Drivers & Tools > Driver CDs.
- Download the NIC drivers.
- To use the host as an Enhanced Data Path host, at least one Enhanced Data Path capable NIC must be available on the system. If no Enhanced Data Path capable NICs are present, the management plane does not allow the host to be added to Enhanced Data Path transport zones.
- List the Enhanced Data Path driver.
esxcli software vib list | grep -E "i40|ixgben"
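Typical output lists the installed driver VIBs; the entries below are illustrative, and your names and versions will differ:
i40en    1.9.5-1OEM.670.0.0.8169922     INT    VMwareCertified    2020-01-15
ixgben   1.7.20-1OEM.670.0.0.8169922    INT    VMwareCertified    2020-01-15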
- Verify whether the NIC is capable of processing Enhanced Data Path traffic.
esxcfg-nics -e
Name    Driver  ENS Capable  ENS Driven  MAC Address        Description
vmnic0  ixgben  True         False       e4:43:4b:7b:d2:e0  Intel(R) Ethernet Controller X550
vmnic1  ixgben  True         False       e4:43:4b:7b:d2:e1  Intel(R) Ethernet Controller X550
vmnic2  ixgben  True         False       e4:43:4b:7b:d2:e2  Intel(R) Ethernet Controller X550
vmnic3  ixgben  True         False       e4:43:4b:7b:d2:e3  Intel(R) Ethernet Controller X550
vmnic4  i40en   True         False       3c:fd:fe:7c:47:40  Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic5  i40en   True         False       3c:fd:fe:7c:47:41  Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic6  i40en   True         False       3c:fd:fe:7c:47:42  Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic7  i40en   True         False       3c:fd:fe:7c:47:43  Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
- Install the Enhanced Data Path driver.
esxcli software vib install -v file:///<DriverInstallerURL> --no-sig-check
- Alternatively, download the driver to the host and install it from the local path.
wget <DriverInstallerURL>
esxcli software vib install -v file:///<PathToDownloadedDriver> --no-sig-check
- Reboot the host to load the driver. Proceed to the next step.
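After the reboot, you can confirm that the driver module is loaded; i40en is used here as an example, so substitute your driver name:
esxcli system module list | grep i40en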
- To unload the driver, follow these steps:
vmkload_mod -u i40en
ps | grep vmkdevmgr
kill -HUP <vmkdevmgrProcessID>
Alternatively, find the process ID and send the signal in a single command:
kill -HUP "$(ps | grep vmkdevmgr | awk '{print $1}')"
- To uninstall the Enhanced Data Path driver, run:
esxcli software vib remove --vibname=i40en-ens --force --no-live-install
Note: For Enhanced Data Path transport zones configured for overlay traffic:
- For a Microsoft Windows virtual machine running a VMware Tools version earlier than 11.0.0 with a VMXNET3 vNIC, ensure that the MTU is set to 1500.
- For a Microsoft Windows virtual machine running on vSphere 6.7 U1 with VMware Tools version 11.0.0 or later, ensure that the MTU is set to a value less than 8900.
- For virtual machines running other supported operating systems, ensure that the virtual machine MTU is set to a value less than 8900.
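As a usage sketch for the guest side, on a Linux VM you can verify and adjust the vNIC MTU with standard commands (eth0 is a placeholder interface name, and 8800 is just an example value below the 8900 limit):
ip link show eth0
ip link set eth0 mtu 8800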
- Create a host transport node. Configure the VDS switch in Enhanced Datapath mode, specifying logical cores and NUMA nodes, as sketched below.
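A minimal sketch of the relevant fragment of an NSX transport node API payload follows; the cpu_config entries assign logical cores (lcores) per NUMA node, and all names and values are illustrative, so adapt the switch name, lcore counts, and NUMA indexes to your environment:
"host_switches": [
  {
    "host_switch_name": "ens-vds",
    "host_switch_mode": "ENS",
    "cpu_config": [
      { "numa_node_index": 0, "num_lcores": 2 },
      { "numa_node_index": 1, "num_lcores": 2 }
    ]
  }
]
After the transport node is created, esxcfg-nics -e should report ENS Driven as True for the pNICs attached to the switch.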
Load Balanced Source Teaming Policy Mode Aware of NUMA
The Load Balanced Source teaming policy is NUMA aware when the following conditions are met:
- The Latency Sensitivity on VMs is High.
- The network adapter type used is VMXNET3.
If the NUMA node location of either the VM or the physical NIC is not available, the Load Balanced Source teaming policy does not apply NUMA awareness to align VMs and NICs. NUMA awareness is also not applied in the following conditions:
- The LAG uplink is configured with physical links from multiple NUMA nodes.
- The VM has affinity to multiple NUMA nodes.
- The ESXi host failed to define NUMA information for either VM or physical links.
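To check which NUMA node a physical NIC belongs to, you can inspect the host PCI device list and look for the NUMA Node field in your vmnic's entry (a quick check; field layout varies by ESXi version):
esxcli hardware pci list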
Enhanced Data Path Support for Applications Requiring Traffic Reliability
NFV workloads might use the multi-homing and redundancy features provided by the Stream Control Transmission Protocol (SCTP) to increase the resiliency and reliability of application traffic. Multi-homing is the ability to support redundant paths from a source VM to a destination VM.
The number of redundant network paths available for a VM to send traffic to the target VM depends on how many physical NICs are available as uplinks for the overlay or VLAN network. A redundant path is used when the pNIC pinned to a logical switch fails. The enhanced data path switch provides redundant network paths between the hosts.
The high-level tasks are:
- Prepare host as an NSX transport node.
- Prepare VLAN or Overlay Transport Zone with two VDS switches in Enhanced Data Path mode.
- On VDS 1, pin the first physical NIC to the switch.
- On VDS 2, pin the second physical NIC to the switch.
The VDS in enhanced data path mode ensures that if pNIC 1 becomes unavailable, traffic from VM 1 is routed through the redundant path: vNIC 1 → tunnel endpoint 2 → pNIC 2 → VM 2.
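To confirm which physical NIC is pinned to each switch on the host, you can list the virtual switch configuration (a quick check; per-switch uplink details appear in the output):
esxcfg-vswitch -l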