The network architecture supported for automated SDDC deployment and maintenance determines your options for integrating hosts that have multiple physical NICs.
Standard SDDC Architecture
Aligned with VMware best practices, VMware Cloud Foundation can deploy your SDDC on physical servers with two or more physical NICs per server. VMware Cloud Foundation implements the following traffic management configuration:
- VMware Cloud Foundation uses the first two physical NICs in the server, that is, vmnic0 and vmnic1, for all network traffic: ESXi management, vSphere vMotion, storage (VMware vSAN™ or NFS), network virtualization (VXLAN or Geneve), management applications, and so on.
- Traffic is isolated in VLANs that terminate at the top-of-rack (ToR) switches.
- Load-Based Teaming (LBT), also known as Route Based on Physical NIC Load, balances traffic independently of the physical switches. The environment does not use LACP, vPC, or MLAG. For how this teaming policy maps to the vSphere API, see the sketch after this list.
- Network I/O Control helps resolve contention when several types of traffic compete for the same physical resources.
- At the physical network card level, VMware Cloud Foundation works with any NIC on the VMware Hardware Compatibility Guide that is supported by your hardware vendor. While 25-Gb NICs are recommended, 10-Gb NICs are also supported. As a result, you can implement a solution that supports the widest range of network hardware with safeguards for availability, traffic isolation, and contention mitigation.
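The Route Based on Physical NIC Load policy described above corresponds to the loadbalance_loadbased value in the vSphere API. The following minimal pyVmomi sketch applies that policy to an existing distributed port group; the vCenter Server address, credentials, and port group name are placeholders for your environment, and error handling is omitted.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with values from your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the distributed port group by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == "sfo01-m01-vds01-management")
view.Destroy()

# Build a config spec that sets "Route based on physical NIC load" (LBT).
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    policy=vim.StringPolicy(value="loadbalance_loadbased"))
port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    uplinkTeamingPolicy=teaming)
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,
    defaultPortConfig=port_config)

# Apply the change; ReconfigureDVPortgroup_Task runs asynchronously in vCenter Server.
pg.ReconfigureDVPortgroup_Task(spec=spec)

Disconnect(si)
```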
Traffic Types
In an SDDC, several general types of network traffic exist.
- Virtual machine traffic
Traffic for management applications and tenant workloads that run on a host. Virtual machine traffic might be north-south, from the SDDC out to your corporate network and beyond, or east-west, to other virtual machines or logical networking devices such as load balancers.
Virtual machines are typically deployed on virtual wires. Virtual wires are logical networks in NSX that are similar to VLANs in a physical data center. However, virtual wires do not require any changes to your physical data center network because they exist within the NSX SDN.
- Overlay traffic
VXLAN- or Geneve-encapsulated traffic in your data center network. Overlay traffic might encapsulate tenant workload traffic in workload domains or management application traffic in the management domain. Overlay traffic consists of UDP packets to or from the VTEP or TEP interface on your host. For an illustration of this encapsulation, see the sketch after this list.
- VMkernel traffic
Traffic that supports the management and operation of the ESXi hosts, including ESXi management, vSphere vMotion, and storage traffic (vSAN or NFS). This traffic originates from the hypervisor itself to support management of and operations in the SDDC, or the storage needs of management and tenant virtual machines.
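To make the overlay traffic type concrete, the following sketch uses Scapy, which is assumed to be installed, to build a VXLAN-encapsulated frame of the kind that VTEP interfaces exchange. All MAC addresses, IP addresses, and the VNI are illustrative placeholders; Geneve encapsulation is structured similarly but uses UDP destination port 6081 and a different overlay header.

```python
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: traffic between two virtual machines on the same logical segment.
inner = (Ether(src="00:50:56:aa:aa:01", dst="00:50:56:aa:aa:02") /
         IP(src="10.0.0.11", dst="10.0.0.12") /
         UDP(sport=49321, dport=80))

# Outer frame: UDP from the source host's VTEP to the destination host's VTEP.
# VXLAN uses UDP destination port 4789; Geneve would use 6081 instead.
outer = (Ether() /
         IP(src="172.16.31.11", dst="172.16.31.12") /  # VTEP addresses (placeholders)
         UDP(sport=49152, dport=4789) /
         VXLAN(vni=5001) /                             # logical segment ID (placeholder)
         inner)

outer.show()  # print the layered encapsulation
```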
vSphere Distributed Switch and N-VDS
NSX for vSphere extends the capabilities of vSphere Distributed Switch, enabling applications across clusters or sites to reside on the same logical network segment without the need to manage or extend that network segment in the physical data center. VXLAN encapsulates these logical network segments and enables routing across data center network boundaries. The entities that exchange VXLAN-encapsulated packets are the VTEP VMkernel adapters in NSX for vSphere. vSphere Distributed Switch supports both VLAN-backed and VXLAN-backed port groups. Although one host can have one or more vSphere Distributed Switches, you can assign a physical NIC to only one vSphere Distributed Switch.
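Because a physical NIC can belong to only one vSphere Distributed Switch, it can be useful to verify how the physical NICs of a host are currently assigned. A minimal pyVmomi sketch, with placeholder vCenter Server and host names, reads the proxy-switch configuration of the host:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Look up the ESXi host by name (placeholder).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
view.Destroy()

# Each proxy switch represents one distributed switch that the host participates in.
# The pnic list shows which physical NICs back the uplinks of that switch.
for proxy in host.config.network.proxySwitch:
    nics = [key.split("-")[-1] for key in (proxy.pnic or [])]  # e.g. 'vmnic0', 'vmnic1'
    print(f"{proxy.dvsName}: uplinks backed by {nics}")

Disconnect(si)
```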
N-VDS in NSX-T Data Center is the functional equivalent of vSphere Distributed Switch. Like the vSphere Distributed Switch, it provides logical network segmentation across clusters without the need to manage or extend a segment in the physical data center network. N-VDS uses Geneve instead of VXLAN, and TEPs instead of VTEPs as the VMkernel interfaces. Because NSX-T Data Center is designed to also work with non-vSphere clusters, NSX-T itself is responsible for N-VDS management instead of VMware vCenter Server®.
Like the vSphere Distributed Switch, the N-VDS supports both VLAN-backed and overlay-backed port groups, using Geneve encapsulation for the overlay. Although a host typically has only a single N-VDS, you can map traffic types to individual physical NICs by using N-VDS uplink teaming policies, as the sketch after this paragraph shows. The host might also be connected to a vSphere Distributed Switch, but the vSphere Distributed Switch must use a dedicated physical NIC. You cannot share a physical NIC between a vSphere Distributed Switch and an N-VDS.
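On an N-VDS, the mapping of traffic types to individual physical NICs is driven by the teaming policies in the uplink profile of the transport node. The following sketch posts an uplink profile to the NSX-T Manager REST API with the Python requests library. The manager address, credentials, and profile contents are illustrative assumptions rather than values from this guide; verify the UplinkHostSwitchProfile fields against the API reference for your NSX-T Data Center version.

```python
import requests

NSX_MANAGER = "https://nsx01.example.com"  # placeholder manager address
AUTH = ("admin", "password")               # placeholder credentials

# Default teaming spreads overlay traffic across both physical NICs, while a
# named teaming policy pins VLAN-backed traffic to uplink-1 with uplink-2 as standby.
profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "wld01-uplink-profile",
    "teaming": {
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "named_teamings": [
        {
            "name": "vlan-backed-traffic",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
            "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
        }
    ],
    "transport_vlan": 0,
}

resp = requests.post(f"{NSX_MANAGER}/api/v1/host-switch-profiles",
                     json=profile, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json()["id"])
```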