VMware Cloud Foundation uses vSphere Distributed Switch for virtual networking.
Logical vSphere Networking Design for VMware Cloud Foundation
When you design vSphere networking, consider the configuration of the vSphere Distributed Switches, distributed port groups, and VMkernel adapters in the VMware Cloud Foundation environment.
vSphere Distributed Switch Design
The default cluster in a workload domain uses a single vSphere Distributed Switch with a configuration for system traffic types, NIC teaming, and MTU size.
VMware Cloud Foundation supports NSX traffic over a single vSphere Distributed Switch per cluster. Additional distributed switches are supported for other traffic types.
When using vSAN ReadyNodes, you must define the number of vSphere Distributed Switches at workload domain deployment time. You cannot add additional vSphere Distributed Switches post deployment.
| vSphere Distributed Switch Configuration | Management Domain | VI Workload Domain | Benefits | Drawbacks |
|---|---|---|---|---|
| Single vSphere Distributed Switch | | | Requires the least number of physical NICs and switch ports. | All traffic shares the same uplinks. |
| Multiple vSphere Distributed Switches | | | Provides support for traffic separation across different uplinks or vSphere Distributed Switches. | You must provide additional physical NICs and switch ports. |
To use physical NICs other than vmnic0 and vmnic1 as the uplinks for the first vSphere Distributed Switch, you must use a JSON file for management domain deployment in Cloud Builder or use the SDDC Manager API for VI workload domain deployment.
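For illustration only, the following sketch shows what such a vmnic-to-switch mapping might look like as a deployment-specification fragment. The field names (dvsSpecs, dvsName, vmnics, mtu, networks) and the object names are assumptions for this example; the authoritative schema is defined by the Cloud Builder deployment parameter JSON and the SDDC Manager API reference for your VMware Cloud Foundation version.

```python
import json

# Hypothetical fragment of a management domain deployment specification.
# Field names and values are illustrative assumptions; verify them against
# the Cloud Builder / SDDC Manager API reference for your VCF version.
dvs_spec_fragment = {
    "dvsSpecs": [
        {
            "dvsName": "sfo-m01-cl01-vds01",   # example switch name
            "vmnics": ["vmnic2", "vmnic3"],    # uplinks other than vmnic0/vmnic1
            "mtu": 9000,                       # jumbo frames on the switch
            "networks": ["MANAGEMENT", "VMOTION", "VSAN"],
        }
    ]
}

print(json.dumps(dvs_spec_fragment, indent=2))
```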
In a deployment that has multiple availability zones, you must provide new networks or extend the existing networks.
Distributed Port Group Design
VMware Cloud Foundation requires several port groups on the vSphere Distributed Switch for the workload domain. The VMkernel adapter for the host TEP is connected to the host overlay VLAN, but does not require a dedicated port group on the distributed switch. The VMkernel network adapter for host TEP is automatically created when you configure the ESXi host as a transport node.
| Function | Teaming Policy | Number of Physical NIC Ports | MTU Size (Bytes) |
|---|---|---|---|
| | Route based on physical NIC load | | |
| | Not applicable. | | |
| | Use explicit failover order. | | |
| | Not applicable. | | |
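To illustrate the teaming policies listed above, the following Python (pyVmomi) sketch creates a distributed port group with static (early) binding and the Route based on physical NIC load teaming policy. It is a minimal example, not VMware Cloud Foundation automation; the vCenter Server address, credentials, switch name, and port group name are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Minimal sketch: create a distributed port group with static binding and
# "Route based on physical NIC load" teaming. All names are placeholders.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "sfo-m01-cl01-vds01")

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.name = "sfo-m01-cl01-vds01-pg-vmotion"   # placeholder port group name
    spec.type = "earlyBinding"                    # static port binding
    spec.numPorts = 8

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.policy = vim.StringPolicy(value="loadbalance_loadbased")  # NIC load based
    port_config.uplinkTeamingPolicy = teaming
    spec.defaultPortConfig = port_config

    dvs.AddDVPortgroup_Task([spec])               # returns a task; monitor as needed
finally:
    Disconnect(si)
```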
VMkernel Network Adapter Design
The VMkernel networking layer provides connectivity to hosts and handles the system traffic for management, vSphere vMotion, vSphere HA, vSAN, and others.
| VMkernel Adapter Service | Connected Port Group | Activated Services | Recommended MTU Size (Bytes) |
|---|---|---|---|
| Management | Management Port Group | Management Traffic | 1500 (Default) |
| vMotion | vMotion Port Group | vMotion Traffic | 9000 |
| vSAN | vSAN Port Group | vSAN | 9000 |
| Host TEPs | Not applicable | Not applicable | 9000 |
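As a quick way to compare an environment with this table, the following Python (pyVmomi) sketch lists each host's VMkernel adapters and their MTU values. It is a read-only, illustrative check rather than part of VMware Cloud Foundation; the vCenter Server address and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Read-only sketch: print every VMkernel adapter with its backing and MTU so
# the values can be compared with the recommendations above.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vnic in host.config.network.vnic:
            # vnic.portgroup is empty for adapters backed by a distributed port group
            backing = vnic.portgroup or "(distributed port group)"
            print(f"{host.name}  {vnic.device}  {backing}  MTU={vnic.spec.mtu}")
finally:
    Disconnect(si)
```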
vSphere Networking Design Recommendations for VMware Cloud Foundation
Consider the recommendations for vSphere networking in VMware Cloud Foundation, such as MTU size, port binding, teaming policy and traffic-specific network shares.
| Recommendation ID | Design Recommendation | Justification | Implication |
|---|---|---|---|
| VCF-VDS-RCMD-CFG-001 | Use a single vSphere Distributed Switch per cluster. | | Increases the number of vSphere Distributed Switches that must be managed. |
| VCF-VDS-RCMD-CFG-002 | Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames. | | When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size. |
| VCF-VDS-RCMD-DPG-001 | Use ephemeral port binding for the management port group. | Using ephemeral port binding provides the option for recovery of the vCenter Server instance that is managing the distributed switch. | Port-level permissions and controls are lost across power cycles, and no historical context is saved. |
| VCF-VDS-RCMD-DPG-002 | Use static port binding for all non-management port groups. | Static binding ensures a virtual machine connects to the same port on the vSphere Distributed Switch. This allows for historical data and port-level monitoring. | None. |
| VCF-VDS-RCMD-DPG-003 | Use the | Reduces the complexity of the network design and increases resiliency and performance. | None. |
| VCF-VDS-RCMD-DPG-004 | Use the | Reduces the complexity of the network design and increases resiliency and performance. | None. |
| VCF-VDS-RCMD-NIO-001 | Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster. | Increases resiliency and performance of the network. | If configured incorrectly, Network I/O Control might impact network performance for critical traffic types. |
| VCF-VDS-RCMD-NIO-002 | Set the share value for management traffic to Normal. | By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention. | None. |
| VCF-VDS-RCMD-NIO-003 | Set the share value for vSphere vMotion traffic to Low. | During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic. | During times of network contention, vMotion takes longer than usual to complete. |
| VCF-VDS-RCMD-NIO-004 | Set the share value for virtual machines to High. | Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need. | None. |
| VCF-VDS-RCMD-NIO-005 | Set the share value for vSAN traffic to High. | During times of network contention, vSAN traffic needs guaranteed bandwidth to support virtual machine performance. | None. |
| VCF-VDS-RCMD-NIO-006 | Set the share value for other traffic types to Low. | By default, VMware Cloud Foundation does not use other traffic types, like vSphere FT traffic. Hence, these traffic types can be set to the lowest priority. | None. |
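As an illustration of the Network I/O Control recommendations (VCF-VDS-RCMD-NIO-002 through VCF-VDS-RCMD-NIO-006) and the MTU recommendation (VCF-VDS-RCMD-CFG-002), the following Python (pyVmomi) sketch reads the current settings from a distributed switch and flags share levels that differ from the recommended values. It is an illustrative audit sketch under stated assumptions, not VMware Cloud Foundation tooling; the switch name, vCenter Server address, and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Recommended Network I/O Control share levels from the table above; traffic
# types not listed here default to "low" per VCF-VDS-RCMD-NIO-006.
RECOMMENDED = {"management": "normal", "vmotion": "low",
               "virtualMachine": "high", "vsan": "high"}

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "sfo-m01-cl01-vds01")  # placeholder name

    print(f"MTU: {dvs.config.maxMtu} (recommended: 9000)")
    print(f"Network I/O Control enabled: {dvs.config.networkResourceManagementEnabled}")
    for res in dvs.config.infrastructureTrafficResourceConfig:
        level = res.allocationInfo.shares.level
        expected = RECOMMENDED.get(res.key, "low")
        flag = "" if level == expected else f"  <-- recommended: {expected}"
        print(f"{res.key:16s} shares={level}{flag}")
finally:
    Disconnect(si)
```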