Starting with vSphere 8.0, vSphere Distributed Services Engine (vDSE) introduces virtual infrastructure as a distributed architecture with the addition of data processing units (DPUs), also known as SmartNICs, which enable offloading infrastructure functions from the host or server CPUs to the DPUs.

Network offloads compatibility allows you to offload networking operations from the ESXi host to the DPU for better performance. A vSphere Distributed Switch backed by ESXi on a DPU supports the following modes:
  • Non-offloading mode before NSX is enabled: The DPU is used as a traditional NIC.
  • Offloading mode after NSX is enabled: Traffic forwarding logic is offloaded from the ESXi host to the vSphere Distributed Switch backed by the DPU.

DPU-backed hosts are associated with a vSphere Distributed Switch. Network offloads compatibility is configured during the creation of the distributed switch and cannot be modified after hosts are associated with it. You can add only DPU-backed hosts to such distributed switches. ESXi on the DPU is used as a traditional NIC until a VMware NSX® transport node is configured. The vSphere Distributed Switch in vCenter Server indicates whether network offloading is permitted when VMware NSX is enabled.
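Before associating a host, you might want to confirm that it is DPU-backed and see which distributed switches it already participates in. The following is a hedged sketch using generic esxcli commands run directly on the ESXi host; output fields vary by release, and this is not a vDSE-specific procedure.

```shell
# Run on the ESXi host (for example, over SSH). Generic inspection
# commands; exact output columns vary by ESXi release.

# List physical NICs; a DPU/SmartNIC appears among the uplinks.
esxcli network nic list

# List PCI devices to help identify the DPU hardware.
esxcli hardware pci list

# Show the distributed switches this host participates in.
esxcli network vswitch dvs vmware list
```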

Features supported by a vSphere Distributed Switch backed by the DPU:
  • Creation and deletion of the vSphere Distributed Switch.
  • Configuration management.
  • vSphere Distributed Switch health check.
  • Link Aggregation Control Protocol (LACP).
  • Port mirroring.
  • Private VLANs.
  • Link Layer Discovery Protocol (LLDP).
Note: The following features are not supported by a vSphere Distributed Switch backed by a DPU:
  • Network I/O Control.
  • Traffic shaping policies.
  • DVFilter.

Dual DPU

Starting with vSphere 8.0 Update 3, you can use vDSE with two data processing units (DPUs) in High Availability mode. For more information on dual DPUs, see Introducing VMware vSphere® Distributed Services Engine™ and Networking Acceleration by Using DPUs.

Dual DPUs can be consumed in High Availability (HA) or Non-High Availability (Non-HA) mode.
  • HA mode: In this mode, both DPUs serve a single offloaded Distributed Virtual Switch (vDS). For instance, if DPU 1 is designated as active, DPU 2 acts as the standby (backup) DPU. If the active DPU fails, the active networking offload switches over to the standby DPU. This provides high availability for the DPU and minimizes the risk of failure for active workloads.

    If dual DPUs are connected to the same network switch at the same time, only one of them processes data packets; the other DPU is in standby mode. However, shadow switches and ports are created on the standby DPU, and networking policies are also applied on it, but the shadow switch does not process any packets. When ESXi detects an active DPU failure, it initiates a failover to the standby DPU and sends a signal to the shadow switch to enable packet processing.

  • Non-HA mode: In this mode, there is no high availability; each DPU is consumed by a separate offloaded vDS. This allows both DPUs to be used for active networking datapath offload.
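The HA behavior described above can be illustrated with a minimal, self-contained model. This is not VMware code and none of these names come from the vSphere API; it only sketches the logic: policies are kept in sync on both DPUs, the standby's shadow switch forwards no packets, and a detected failure flips forwarding to the standby.

```python
# Illustrative model (not VMware code) of active/standby DPU failover:
# the standby DPU carries the same networking policy on its shadow
# switch but forwards no packets until a failover signal arrives.
from dataclasses import dataclass, field

@dataclass
class Dpu:
    name: str
    forwarding: bool = False          # shadow switch present, not processing packets
    policies: dict = field(default_factory=dict)

class DualDpuHA:
    def __init__(self, active: Dpu, standby: Dpu):
        self.active, self.standby = active, standby
        self.active.forwarding = True

    def apply_policy(self, key, value):
        # Policies are pushed to both DPUs so the shadow switch stays in sync.
        self.active.policies[key] = value
        self.standby.policies[key] = value

    def on_active_failure(self):
        # ESXi detects the failure and signals the shadow switch to
        # start processing packets.
        self.active.forwarding = False
        self.active, self.standby = self.standby, self.active
        self.active.forwarding = True

ha = DualDpuHA(Dpu("dpu1"), Dpu("dpu2"))
ha.apply_policy("vlan", 100)
ha.on_active_failure()
print(ha.active.name)  # prints dpu2
```

Because the standby already holds the synchronized policy, the switchover only has to enable forwarding, which is why the failover minimizes disruption to active workloads.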

Enable Network Offloads

To enable network offloads, perform the following steps in vCenter Server and in VMware NSX®.
  • Create a vSphere Distributed Switch. See Create a vSphere Distributed Switch.
  • Associate hosts with the vSphere Distributed Switch. See Add Hosts to a vSphere Distributed Switch.
  • Configure the NSX host transport node. See Configure NSX host transport node on DPU-enabled vSphere Lifecycle Manager cluster.
  • View the topology of the vSphere Distributed Switch with Network Offloads. See View the Topology of Network Offloads Switch.
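The vCenter Server side of this workflow can be sketched with govc (the govmomi CLI). This is a hedged sketch: the switch and host names are examples, it assumes govc is already configured against your vCenter, and network offloads compatibility itself must be selected at switch-creation time (for example, in the vSphere Client wizard); no govc flag for it is assumed here. The NSX transport node step is performed in NSX Manager and is not shown.

```shell
# Hedged sketch of the vCenter-side steps using govc (govmomi CLI).
# Assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD point at your
# vCenter Server; names below are examples.

# Step 1: create a vSphere Distributed Switch.
govc dvs.create -dc /MyDatacenter DPU-DSwitch

# Step 2: add a DPU-backed host to the switch.
govc dvs.add -dvs DPU-DSwitch esxi-dpu-01.example.com

# Verify the switch now appears in the inventory.
govc find / -type DistributedVirtualSwitch
```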