Logical vSphere Networking Design for VMware Cloud Foundation

VMware Cloud Foundation uses vSphere Distributed Switch for virtual networking. When you design vSphere networking, consider the configuration of the vSphere Distributed Switches, distributed port groups, and VMkernel adapters in the VMware Cloud Foundation environment.
vSphere Distributed Switch Design
The default cluster in a workload domain uses a single vSphere Distributed Switch with a configuration for system traffic types, NIC teaming, and MTU size.
VMware Cloud Foundation supports NSX Overlay traffic over a single vSphere Distributed Switch per cluster. Additional distributed switches are supported for other traffic types.
When using vSAN ReadyNodes, you must define the number of vSphere Distributed Switches at workload domain deployment time. You cannot add more vSphere Distributed Switches after deployment.
vSphere Distributed Switch Configuration | Management Domain Options | VI Workload Domain Options | Benefits | Drawbacks
---|---|---|---|---
Single vSphere Distributed Switch for hosts with two physical NICs | | | Requires the least number of physical NICs and switch ports. | All traffic shares the same two uplinks.
Single vSphere Distributed Switch for hosts with four or six physical NICs | | | |
Multiple vSphere Distributed Switches | | | |
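The switch settings in this table can also be expressed in automation. The following minimal sketch, assuming pyVmomi (the vSphere Python SDK) and placeholder vCenter Server details, shows how a single distributed switch with two uplinks and a jumbo-frame MTU might be created; in VMware Cloud Foundation, SDDC Manager normally performs this step for you.

```python
# Minimal sketch, assuming pyVmomi and placeholder vCenter Server details.
# The switch name, credentials, and datacenter lookup are illustrative.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="user", pwd="secret", sslContext=context)
try:
    datacenter = si.RetrieveContent().rootFolder.childEntity[0]  # assume first datacenter

    config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    config.name = "sfo-m01-cl01-vds01"  # illustrative switch name
    config.maxMtu = 9000                # jumbo frames, per the MTU recommendation
    config.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["uplink1", "uplink2"]  # two physical NICs per host
    )

    task = datacenter.networkFolder.CreateDVS_Task(
        vim.DistributedVirtualSwitch.CreateSpec(configSpec=config)
    )
finally:
    Disconnect(si)
```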
Distributed Port Group Design
Function | Teaming Policy | Configuration
---|---|---
 | Route based on physical NIC load. | Recommended.
 | Recommended. |
 | Not applicable. | Not applicable.
 | Use explicit failover order. | Required.
 | Not applicable. | Not applicable.
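To illustrate the teaming policies in this table, the following pyVmomi sketch builds a distributed port group specification that uses the Route based on physical NIC load algorithm (API value loadbalance_loadbased). The port group name, VLAN ID, and binding type are hypothetical placeholders; for uplink port groups that require explicit failover order, the policy value failover_explicit would be used instead.

```python
from pyVmomi import vim

def make_portgroup_spec(name: str, vlan_id: int, binding: str = "earlyBinding"):
    """Build a distributed port group spec that uses the recommended
    'Route based on physical NIC load' teaming algorithm."""
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.policy = vim.StringPolicy(inherited=False, value="loadbalance_loadbased")

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.uplinkTeamingPolicy = teaming
    port_config.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        inherited=False, vlanId=vlan_id
    )

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.name = name
    spec.type = binding  # "earlyBinding" (static) or "ephemeral"
    spec.defaultPortConfig = port_config
    return spec

# dvs is a vim.DistributedVirtualSwitch object located elsewhere:
# task = dvs.AddDVPortgroup_Task([make_portgroup_spec("vmotion-pg", 1612)])
```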
VMkernel Network Adapter Design
The VMkernel networking layer provides connectivity to hosts and handles the system traffic for management, vSphere vMotion, vSphere HA, vSAN, NFS, and others.
VMkernel Adapter Service | Connected Port Group | Activated Services | Recommended MTU Size (Bytes)
---|---|---|---
Management | Management Port Group | Management Traffic | 1500 (Default)
vMotion | vMotion Port Group | vMotion Traffic | 9000
vSAN | vSAN Port Group | vSAN | 9000
NFS | NFS Port Group | NFS | 9000
Host TEPs | Not applicable | Not applicable | 9000
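The VMkernel adapter settings above can also be applied programmatically. The sketch below, again using pyVmomi with assumed inputs, adds a vMotion VMkernel adapter on a distributed port group with the recommended 9000-byte MTU; the host object, port group key, switch UUID, and IP values must come from your environment.

```python
from pyVmomi import vim

def add_vmotion_vmk(host, portgroup_key: str, dvs_uuid: str, ip: str, netmask: str) -> str:
    """Add a VMkernel adapter for vMotion on a distributed port group,
    using the 9000-byte MTU recommended in the table above."""
    nic_spec = vim.host.VirtualNic.Specification()
    nic_spec.distributedVirtualPort = vim.dvs.PortConnection(
        portgroupKey=portgroup_key, switchUuid=dvs_uuid
    )
    nic_spec.mtu = 9000
    nic_spec.ip = vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=netmask)

    # An empty string for the port group name means the adapter attaches to
    # the distributed port in the spec, not a standard switch port group.
    vmk_device = host.configManager.networkSystem.AddVirtualNic("", nic_spec)

    # Tag the new adapter so the host uses it for vMotion traffic.
    host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk_device)
    return vmk_device
```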
vSphere Distributed Switch Data Path Modes
vSphere Distributed Switch supports three data path modes: Standard Datapath, Enhanced Datapath Interrupt, and Enhanced Datapath. A data path is a networking stack mode that is configured on a vSphere Distributed Switch when an NSX Transport Node Profile is applied during the installation of NSX on an ESXi cluster. Each data path mode has performance characteristics that suit a particular type of workload running on the cluster. The following table details the modes available in VMware Cloud Foundation, along with the recommended cluster workload types for each mode.
Data Path Mode Name | Description | Use Cases | Requirements
---|---|---|---
Standard | | Compute workload domains or clusters | The driver-firmware combination must be on the VMware Compatibility Guide for I/O Devices and must support the following features:
Enhanced Datapath Interrupt (referred to as Enhanced Datapath - Standard in the NSX Manager UI) | | vSphere clusters running NSX Edge nodes | The driver-firmware combination must be on the VMware Compatibility Guide for I/O Devices with Enhanced Data Path - Interrupt mode support.
Enhanced Datapath (referred to as Enhanced Datapath - Performance in the NSX Manager UI) | | Telco or NFV workloads | The driver-firmware combination must be on the VMware Compatibility Guide for I/O Devices with Enhanced Data Path - Poll mode support.
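Because the data path mode is chosen per cluster workload type, automation often encodes this mapping as a simple lookup. The helper below is a hypothetical plain-Python sketch of the table above, not an NSX or vSphere API call.

```python
# Hypothetical lookup helper: encodes the mode-to-workload mapping from the
# table above for use in automation. The workload-type keys are assumptions.
DATAPATH_MODE_BY_WORKLOAD = {
    "compute": "Standard",
    "edge": "Enhanced Datapath Interrupt",  # Enhanced Datapath - Standard in the NSX Manager UI
    "telco-nfv": "Enhanced Datapath",       # Enhanced Datapath - Performance in the NSX Manager UI
}

def recommended_datapath_mode(workload_type: str) -> str:
    """Return the data path mode recommended for a cluster workload type."""
    try:
        return DATAPATH_MODE_BY_WORKLOAD[workload_type]
    except KeyError:
        raise ValueError(f"unknown workload type: {workload_type!r}") from None

assert recommended_datapath_mode("edge") == "Enhanced Datapath Interrupt"
```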
vSphere Networking Design Requirements and Recommendations for VMware Cloud Foundation
Consider the requirements and recommendations for vSphere networking in VMware Cloud Foundation, such as distributed port group configuration, MTU size, port binding, teaming policy, and traffic-specific network shares.
vSphere Networking Design Requirements for VMware Cloud Foundation
You must meet the following design requirements in your vSphere networking design for VMware Cloud Foundation.
Requirement ID | Design Requirement | Justification | Implication
---|---|---|---
VCF-VDS-L3MR-REQD-CFG-001 | For each rack, create a port group on the vSphere Distributed Switch for the cluster for the following traffic types: | Required for using separate VLANs per rack. | None.
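To show what this per-rack requirement implies, the sketch below generates one port group specification per rack and traffic type, reusing the hypothetical make_portgroup_spec helper from the earlier port group sketch. The rack names, the traffic-type list (which the table above leaves unspecified), and the VLAN numbering scheme are all illustrative assumptions.

```python
# Hypothetical sketch of the per-rack requirement: one port group per rack
# and traffic type, each on its own VLAN.
RACKS = ["rack1", "rack2", "rack3"]
TRAFFIC_TYPES = ["management", "vmotion", "vsan"]  # assumed traffic types

specs = []
for rack_index, rack in enumerate(RACKS):
    for traffic_index, traffic in enumerate(TRAFFIC_TYPES):
        vlan_id = 1000 + rack_index * 10 + traffic_index  # illustrative per-rack VLANs
        specs.append(make_portgroup_spec(f"{rack}-{traffic}-pg", vlan_id))

# dvs located as in the earlier sketches:
# task = dvs.AddDVPortgroup_Task(specs)
```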
vSphere Networking Design Recommendations for VMware Cloud Foundation
In your vSphere networking design for VMware Cloud Foundation, you can apply certain best practices for vSphere Distributed Switch and distributed port groups.
Recommendation ID | Design Recommendation | Justification | Implication
---|---|---|---
VCF-VDS-RCMD-CFG-001 | Use a single vSphere Distributed Switch per cluster. | | Reduces the number of vSphere Distributed Switches that must be managed per cluster.
VCF-VDS-RCMD-CFG-002 | Do not share a vSphere Distributed Switch across clusters. | | For multiple clusters, you manage more vSphere Distributed Switches.
VCF-VDS-RCMD-CFG-003 | Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames. | | When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.
VCF-VDS-RCMD-DPG-001 | Use ephemeral port binding for the VM management port group. | Ephemeral port binding provides the option to recover the vCenter Server instance that manages the distributed switch. The VM management network is not required for a multi-rack compute-only cluster in a VI workload domain. | Port-level permissions and controls are lost across power cycles, and no historical context is saved.
VCF-VDS-RCMD-DPG-002 | Use static port binding for all non-management port groups. | Static binding ensures that a virtual machine connects to the same port on the vSphere Distributed Switch. This configuration provides support for historical data and port-level monitoring. | None.
VCF-VDS-RCMD-DPG-003 | | Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads. | None.
VCF-VDS-RCMD-DPG-004 | Use the | Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads. | None.
VCF-VDS-RCMD-DPG-005 | Use the | Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads. | None.
VCF-VDS-RCMD-DPG-006 | Use the | Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads. | None.
VCF-VDS-RCMD-NIO-001 | Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster. Do not enable Network I/O Control on dedicated vSphere clusters for NSX Edge nodes. | Increases resiliency and performance of the network. | Network I/O Control might impact network performance for critical traffic types if misconfigured.
VCF-VDS-RCMD-NIO-002 | Set the share value for management traffic to Normal. | By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion traffic but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention. | None.
VCF-VDS-RCMD-NIO-003 | Set the share value for vSphere vMotion traffic to Low. | During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic. | During times of network contention, vMotion takes longer than usual to complete.
VCF-VDS-RCMD-NIO-004 | Set the share value for virtual machines to High. | Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need. | None.
VCF-VDS-RCMD-NIO-005 | Set the share value for vSAN traffic to High. | During times of network contention, vSAN traffic needs guaranteed bandwidth to support virtual machine performance. | None.
VCF-VDS-RCMD-NIO-006 | Set the share value for other traffic types to Low. | By default, VMware Cloud Foundation does not use other traffic types, like vSphere Fault Tolerance traffic. Hence, these traffic types can be set to the lowest priority. | None.
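The Network I/O Control recommendations VCF-VDS-RCMD-NIO-002 through VCF-VDS-RCMD-NIO-006 lend themselves to an automated audit. The helper below is a hypothetical plain-Python sketch that compares exported share levels against those recommendations; it makes no vSphere API calls, and the traffic-type keys are assumptions based on common vSphere naming.

```python
# Hypothetical audit helper: compares exported Network I/O Control share
# levels against the recommendations above. Traffic-type keys are assumed
# ("management", "vmotion", "virtualMachine", "vsan"); any key not listed
# falls under the Low rule for other traffic types (NIO-006).
RECOMMENDED_SHARES = {
    "management": "normal",    # VCF-VDS-RCMD-NIO-002
    "vmotion": "low",          # VCF-VDS-RCMD-NIO-003
    "virtualMachine": "high",  # VCF-VDS-RCMD-NIO-004
    "vsan": "high",            # VCF-VDS-RCMD-NIO-005
}

def audit_nioc_shares(actual: dict) -> list:
    """Return a finding for every traffic type whose share level deviates
    from the recommended value; unknown types are expected to be 'low'."""
    findings = []
    for traffic, level in sorted(actual.items()):
        expected = RECOMMENDED_SHARES.get(traffic, "low")
        if level != expected:
            findings.append(f"{traffic}: set to {level!r}, recommended {expected!r}")
    return findings

print(audit_nioc_shares({"management": "normal", "vmotion": "high", "vsan": "high"}))
# -> ["vmotion: set to 'high', recommended 'low'"]
```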