VMware Cloud Foundation uses vSphere Distributed Switch for virtual networking.

Logical vSphere Networking Design for VMware Cloud Foundation

When you design vSphere networking, consider the configuration of the vSphere Distributed Switches, distributed port groups, and VMkernel adapters in the VMware Cloud Foundation environment.

vSphere Distributed Switch Design

The default cluster in a workload domain uses a single vSphere Distributed Switch with a configuration for system traffic types, NIC teaming, and MTU size.

VMware Cloud Foundation supports NSX traffic over a single vSphere Distributed Switch per cluster. Additional distributed switches are supported for other traffic types.

You must define the number of vSphere Distributed Switches at workload domain deployment time. You cannot add vSphere Distributed Switches after deployment.

Table 1. Configuration Options for vSphere Distributed Switch for VMware Cloud Foundation

Single vSphere Distributed Switch

  • Management Domain: One vSphere Distributed Switch for all traffic.

  • VI Workload Domain: One vSphere Distributed Switch for all traffic.

  • Benefits: Requires the least number of physical NICs and switch ports.

  • Drawbacks: All traffic shares the same uplinks.

Multiple vSphere Distributed Switches

  • Management Domain:

    - Maximum one vSphere Distributed Switch for NSX traffic and one vSphere Distributed Switch for non-NSX traffic by using the Deployment Parameters Workbook in Cloud Builder.

    - Maximum one vSphere Distributed Switch for NSX traffic and two vSphere Distributed Switches for non-NSX traffic by using a JSON file in Cloud Builder.

  • VI Workload Domain:

    - Maximum one vSphere Distributed Switch for NSX traffic by using the SDDC Manager UI.

    - Maximum 15 additional vSphere Distributed Switches for non-NSX traffic by using the SDDC Manager API.

  • Benefits: Provides support for traffic separation across different uplinks or vSphere Distributed Switches.

  • Drawbacks: You must provide additional physical NICs and switch ports.

Note:

To use physical NICs other than vmnic0 and vmnic1 as the uplinks for the first vSphere Distributed Switch, you must use a JSON file for management domain deployment in Cloud Builder or use the SDDC Manager API for VI workload domain deployment.

In a deployment that has multiple availability zones, you must provide new networks or extend the existing networks.
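
For reference, the following is a minimal pyVmomi sketch of the vSphere API objects that this single-switch design maps to: one distributed switch with two uplinks, jumbo-frame MTU, and Network I/O Control version 3. It is illustrative only; in VMware Cloud Foundation the distributed switches are created by Cloud Builder or SDDC Manager, and the vCenter Server address, credentials, switch name, and datacenter lookup below are assumptions.

```python
# Illustrative only: in VMware Cloud Foundation, Cloud Builder and SDDC Manager
# create the distributed switches. This sketch shows the equivalent vSphere API
# objects by using pyVmomi; the host name, credentials, and names are assumptions.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]      # assumption: first datacenter
network_folder = datacenter.networkFolder

# One distributed switch per cluster: two uplinks, MTU 9000 for jumbo frames,
# and Network I/O Control version 3.
config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
config.name = "sfo-m01-cl01-vds01"                  # assumption: example switch name
config.maxMtu = 9000
config.networkResourceControlVersion = "version3"
config.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=["uplink1", "uplink2"])

create_spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=config)
task = network_folder.CreateDVS_Task(create_spec)   # returns a Task object

Disconnect(si)
```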

Distributed Port Group Design

VMware Cloud Foundation requires several port groups on the vSphere Distributed Switch for the workload domain. The VMkernel adapter for the host TEP is connected to the host overlay VLAN, but does not require a dedicated port group on the distributed switch. The VMkernel network adapter for host TEP is automatically created when you configure the ESXi host as a transport node.

Table 2. Distributed Port Group Configuration for VMware Cloud Foundation

Function: Management, vSphere vMotion, vSAN

Teaming Policy: Route based on physical NIC load

  • Failover Detection: Link status only

  • Failback: Yes. Occurs only on saturation of the active uplink.

  • Notify Switches: Yes

Number of Physical NIC Ports:

  • Maximum four ports for the management domain by using the Deployment Parameters Workbook in Cloud Builder.

  • Maximum two ports for a VI workload domain by using the SDDC Manager UI.

  • Maximum six ports for all domain types by using the API.

MTU Size (Bytes): 1700 minimum, 9000 recommended

Function: Host Overlay

Teaming Policy: Not applicable.

Function: Edge Uplinks and Overlay

Teaming Policy: Use explicit failover order.

Function: Edge RTEP (NSX Federation Only)

Teaming Policy: Not applicable.
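
As a reference for the teaming settings in Table 2, the following pyVmomi sketch builds a distributed port group specification that uses Route based on physical NIC load with failback and switch notification enabled. It is illustrative only; VMware Cloud Foundation creates these port groups automatically, and the port group name, VLAN ID, and ephemeral binding (shown here for a management port group) are assumptions.

```python
# Illustrative only: VMware Cloud Foundation creates the distributed port groups
# for you. This pyVmomi sketch mirrors the teaming policy from Table 2; the port
# group name, VLAN ID, and binding type are assumptions for a management port group.
from pyVmomi import vim

def management_portgroup_spec(name="sfo-m01-cl01-vds01-pg-mgmt", vlan_id=1611):
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.policy = vim.StringPolicy(value="loadbalance_loadbased")  # Route based on physical NIC load
    teaming.notifySwitches = vim.BoolPolicy(value=True)               # Notify Switches: Yes
    teaming.rollingOrder = vim.BoolPolicy(value=False)                # Failback: Yes
    # Failover detection stays at the default of link status only (no beacon probing).

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.uplinkTeamingPolicy = teaming
    port_config.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=vlan_id, inherited=False)

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.name = name
    spec.type = "ephemeral"            # management port group; others use "earlyBinding"
    spec.defaultPortConfig = port_config
    return spec

# Applied to an existing distributed switch with:
# task = dvs.AddDVPortgroup_Task([management_portgroup_spec()])
```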

VMkernel Network Adapter Design

The VMkernel networking layer provides connectivity to hosts and handles the system traffic for management, vSphere vMotion, vSphere HA, vSAN, and others.

Table 3. Default VMkernel Adapters for a Workload Domain per Availability Zone

Management

  • Connected Port Group: Management Port Group

  • Activated Services: Management Traffic

  • Recommended MTU Size (Bytes): 1500 (default)

vMotion

  • Connected Port Group: vMotion Port Group

  • Activated Services: vMotion Traffic

  • Recommended MTU Size (Bytes): 9000

vSAN

  • Connected Port Group: vSAN Port Group

  • Activated Services: vSAN

  • Recommended MTU Size (Bytes): 9000

Host TEPs

  • Connected Port Group: Not applicable

  • Activated Services: Not applicable

  • Recommended MTU Size (Bytes): 9000
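
As a reference for Table 3, the following pyVmomi sketch adds a vMotion VMkernel adapter on an existing distributed port group with an MTU of 9000 and tags it for vMotion traffic. It is illustrative only; VMware Cloud Foundation configures VMkernel adapters during bring-up and host commissioning, and the host, switch, port group, and IP parameters are assumptions.

```python
# Illustrative only: VMware Cloud Foundation creates the VMkernel adapters during
# bring-up and host commissioning. This pyVmomi sketch mirrors the vMotion row of
# Table 3; the host, switch, port group, and IP parameters are assumptions.
from pyVmomi import vim

def add_vmotion_vmk(host, dvs, portgroup, ip_address, netmask):
    """host: vim.HostSystem, dvs and portgroup: existing distributed switch objects."""
    spec = vim.host.VirtualNic.Specification()
    spec.mtu = 9000                                     # recommended MTU for vMotion
    spec.ip = vim.host.IpConfig(dhcp=False, ipAddress=ip_address, subnetMask=netmask)
    spec.distributedVirtualPort = vim.dvs.PortConnection(
        switchUuid=dvs.uuid, portgroupKey=portgroup.key)

    # Empty port group name because the adapter connects to a distributed port.
    device = host.configManager.networkSystem.AddVirtualNic("", spec)

    # Tag the new adapter for vMotion ("Activated Services" column in Table 3).
    host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", device)
    return device
```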

vSphere Networking Design Recommendations for VMware Cloud Foundation

Consider the recommendations for vSphere networking in VMware Cloud Foundation, such as MTU size, port binding, teaming policy, and traffic-specific network shares.

Table 4. vSphere Networking Design Recommendations for VMware Cloud Foundation

VCF-VDS-RCMD-CFG-001

  • Design Recommendation: Use a single vSphere Distributed Switch per cluster.

  • Justification:

    - Reduces the complexity of the network design.

    - Reduces the size of the fault domain.

  • Implication: Reduces the number of vSphere Distributed Switches that must be managed.

VCF-VDS-RCMD-CFG-002

  • Design Recommendation: Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames.

  • Justification:

    - Supports the MTU size required by system traffic types.

    - Improves traffic throughput.

  • Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.

VCF-VDS-RCMD-DPG-001

  • Design Recommendation: Use ephemeral port binding for the management port group.

  • Justification: Ephemeral port binding provides the option to recover the vCenter Server instance that manages the distributed switch.

  • Implication: Port-level permissions and controls are lost across power cycles, and no historical context is saved.

VCF-VDS-RCMD-DPG-002

  • Design Recommendation: Use static port binding for all non-management port groups.

  • Justification: Static binding ensures that a virtual machine connects to the same port on the vSphere Distributed Switch, which allows for historical data and port-level monitoring.

  • Implication: None.

VCF-VDS-RCMD-DPG-003

  • Design Recommendation: Use the Route based on physical NIC load teaming algorithm for the management port group.

  • Justification: Reduces the complexity of the network design and increases resiliency and performance.

  • Implication: None.

VCF-VDS-RCMD-DPG-004

  • Design Recommendation: Use the Route based on physical NIC load teaming algorithm for the vSphere vMotion port group.

  • Justification: Reduces the complexity of the network design and increases resiliency and performance.

  • Implication: None.

VCF-VDS-RCMD-NIO-001

  • Design Recommendation: Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster.

  • Justification: Increases resiliency and performance of the network.

  • Implication: If configured incorrectly, Network I/O Control might impact network performance for critical traffic types.

VCF-VDS-RCMD-NIO-002

  • Design Recommendation: Set the share value for management traffic to Normal.

  • Justification: By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion traffic but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.

  • Implication: None.

VCF-VDS-RCMD-NIO-003

  • Design Recommendation: Set the share value for vSphere vMotion traffic to Low.

  • Justification: During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.

  • Implication: During times of network contention, vMotion takes longer than usual to complete.

VCF-VDS-RCMD-NIO-004

  • Design Recommendation: Set the share value for virtual machines to High.

  • Justification: Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.

  • Implication: None.

VCF-VDS-RCMD-NIO-005

  • Design Recommendation: Set the share value for vSAN traffic to High.

  • Justification: During times of network contention, vSAN traffic needs guaranteed bandwidth to support virtual machine performance.

  • Implication: None.

VCF-VDS-RCMD-NIO-006

  • Design Recommendation: Set the share value for other traffic types to Low.

  • Justification: By default, VMware Cloud Foundation does not use other traffic types, such as vSphere Fault Tolerance traffic. Therefore, these traffic types can be assigned the lowest priority.

  • Implication: None.
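
As a reference for recommendations VCF-VDS-RCMD-NIO-001 through VCF-VDS-RCMD-NIO-006, the following pyVmomi sketch enables Network I/O Control and applies the recommended share values. It is illustrative only; SDDC Manager applies these settings in VMware Cloud Foundation, and the numeric share values that accompany each level (25, 50, 100) are assumed common defaults.

```python
# Illustrative only: SDDC Manager applies these settings in VMware Cloud Foundation.
# This pyVmomi sketch mirrors recommendations VCF-VDS-RCMD-NIO-001 to NIO-006; the
# numeric share values per level (25/50/100) are assumed common defaults.
from pyVmomi import vim

# Share level per system traffic type from Table 4; anything else drops to Low.
DESIRED_SHARES = {
    "management": ("normal", 50),       # VCF-VDS-RCMD-NIO-002
    "vmotion": ("low", 25),             # VCF-VDS-RCMD-NIO-003
    "virtualMachine": ("high", 100),    # VCF-VDS-RCMD-NIO-004
    "vsan": ("high", 100),              # VCF-VDS-RCMD-NIO-005
}

def apply_nioc_recommendations(dvs):
    """dvs: an existing vim.dvs.VmwareDistributedVirtualSwitch managed object."""
    dvs.EnableNetworkResourceManagement(enable=True)    # VCF-VDS-RCMD-NIO-001

    # Reuse the switch's current traffic resource list and adjust the shares.
    resources = dvs.config.infrastructureTrafficResourceConfig
    for res in resources:
        level, shares = DESIRED_SHARES.get(res.key, ("low", 25))  # NIO-006 default
        res.allocationInfo.shares = vim.SharesInfo(level=level, shares=shares)

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion       # required for reconfiguration
    spec.infrastructureTrafficResourceConfig = resources
    return dvs.ReconfigureDvs_Task(spec)
```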