Use this list of requirements and recommendations for reference related to the configuration of the vSphere Distributed Switch instances and VMkernel adapters in a VMware Cloud Foundation environment.

For full design details, see vSphere Networking Design for VMware Cloud Foundation.

Table 1. vSphere Networking Design Requirements for a Multi-Rack Compute VI Workload Domain Cluster for VMware Cloud Foundation

Requirement ID

Design Requirement

Justification

Implication

VCF-VDS-L3MR-REQD-CFG-001

For each rack, create a port group on the vSphere Distributed Switch for the cluster for the following traffic types:

  • Host management

  • vSAN

  • vSphere vMotion

Required for using separate VLANs per rack.

None.
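For illustration, the following Python sketch uses pyVmomi to create the per-rack port groups, each tagged with that rack's VLAN ID. This is a minimal example, not part of the SDDC Manager workflow; the switch object, rack label, traffic-type names, and VLAN IDs are placeholders that you must replace with the values from your own VLAN plan.

```python
# Minimal pyVmomi sketch (placeholder names and VLAN IDs): create host
# management, vSAN, and vSphere vMotion port groups for one rack. "dvs" is
# assumed to be a vim.DistributedVirtualSwitch object retrieved from an
# authenticated vCenter Server session.
from pyVim.task import WaitForTask
from pyVmomi import vim

def create_rack_port_groups(dvs, rack, vlans):
    """Create one port group per traffic type for a rack, for example
    vlans = {"management": 1611, "vsan": 1612, "vmotion": 1613}."""
    specs = []
    for traffic_type, vlan_id in vlans.items():
        # Tag the port group with the rack-specific VLAN ID.
        port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
                vlanId=vlan_id, inherited=False
            )
        )
        specs.append(vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
            name=f"{rack}-{traffic_type}",
            type="earlyBinding",  # static binding, see VCF-VDS-RCMD-DPG-002
            numPorts=8,
            defaultPortConfig=port_config,
        ))
    WaitForTask(dvs.AddDVPortgroup_Task(spec=specs))

# Example with placeholder values:
# create_rack_port_groups(dvs, "rack1",
#                         {"management": 1611, "vsan": 1612, "vmotion": 1613})
```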

Table 2. vSphere Networking Design Requirements for Dedicated Edge Scale and Performance for VMware Cloud Foundation

Requirement ID

Design Requirement

Justification

Implication

VCF-VDS-DES-REQD-CFG-001

Configure Enhanced Datapath - Interrupt Mode on the vSphere Distributed Switch for the cluster.

Provides the best bandwidth and packets-per-second performance for the edge nodes running on the cluster.

The physical NIC must support the Enhanced Datapath - Interrupt Mode feature.
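If you need to check which datapath mode is active on the hosts, the following Python sketch queries the NSX Manager API for transport nodes and prints the host switch mode. This is an assumption-laden illustration only: the /api/v1/transport-nodes endpoint, the host_switch_spec, host_switches, and host_switch_mode fields, and the ENS_INTERRUPT value are taken from the NSX Manager API and can vary between NSX versions; it is not an official verification procedure for this requirement.

```python
# Hedged sketch: list the host switch mode of each transport node through the
# NSX Manager API. Field names and the "ENS_INTERRUPT" value are assumptions
# based on the NSX Manager API schema.
import requests

def host_switch_modes(nsx_manager, user, password):
    resp = requests.get(
        f"https://{nsx_manager}/api/v1/transport-nodes",
        auth=(user, password),
        verify=False,  # lab only; use a trusted CA certificate in production
    )
    resp.raise_for_status()
    for node in resp.json().get("results", []):
        spec = node.get("host_switch_spec", {})
        for host_switch in spec.get("host_switches", []):
            # "ENS_INTERRUPT" corresponds to Enhanced Datapath - Interrupt Mode.
            print(node.get("display_name"),
                  host_switch.get("host_switch_name"),
                  host_switch.get("host_switch_mode"))
```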

VCF-VDS-DES-REQD-CFG-002

Deactivate Network I/O Control on a vSphere Distributed Switch used for edge VM traffic.

Maximizes the packets-per-second rate that the edge nodes can achieve.

You must deactivate Network I/O Control manually after the cluster is deployed.
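As an illustration only, Network I/O Control can be toggled programmatically with pyVmomi through the EnableNetworkResourceManagement method of the distributed switch. In the sketch, edge_dvs is assumed to be the vim.DistributedVirtualSwitch object for the edge VM traffic switch, retrieved from an authenticated vCenter Server session.

```python
# Minimal pyVmomi sketch: deactivate Network I/O Control on the distributed
# switch that carries edge VM traffic.
def set_network_io_control(dvs, enabled):
    """Enable or disable Network I/O Control on a vSphere Distributed Switch."""
    dvs.EnableNetworkResourceManagement(enable=enabled)

# Run manually after the cluster is deployed (see the implication above):
# set_network_io_control(edge_dvs, False)
```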

Table 3. vSphere Networking Design Recommendations for VMware Cloud Foundation

Recommendation ID

Design Recommendation

Justification

Implication

VCF-VDS-RCMD-CFG-001

Use a single vSphere Distributed Switch per cluster.

  • Reduces the complexity of the network design.

  • Reduces the number of vSphere Distributed Switches that must be managed per cluster.

None.

VCF-VDS-RCMD-CFG-002

Do not share a vSphere Distributed Switch across clusters.

  • Enables independent lifecycle management of the vSphere Distributed Switch per cluster.

  • Reduces the size of the fault domain.

For multiple clusters, you manage more vSphere Distributed Switches.

VCF-VDS-RCMD-CFG-003

Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames.

  • Supports the MTU size required by system traffic types.

  • Improves traffic throughput.

When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.
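The following pyVmomi sketch shows one way to set the switch-wide MTU. It is a minimal example and assumes dvs is a vim.dvs.VmwareDistributedVirtualSwitch object from an authenticated vCenter Server session; the physical network path must be configured for the same MTU, as noted in the implication.

```python
# Minimal pyVmomi sketch: set a jumbo-frame MTU on a vSphere Distributed Switch.
from pyVim.task import WaitForTask
from pyVmomi import vim

def set_dvs_mtu(dvs, mtu=9000):
    """Reconfigure the distributed switch with the given maximum MTU."""
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion  # required for reconfiguration
    spec.maxMtu = mtu
    WaitForTask(dvs.ReconfigureDvs_Task(spec=spec))
```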

VCF-VDS-RCMD-DPG-001

Use ephemeral port binding for the VM management port group.

Using ephemeral port binding provides the option for recovery of the vCenter Server instance that is managing the distributed switch.

The VM management network is not required for a multi-rack compute-only cluster in a VI workload domain.

Port-level permissions and controls are lost across power cycles, and no historical context is saved.

VCF-VDS-RCMD-DPG-002

Use static port binding for all non-management port groups.

Static binding ensures a virtual machine connects to the same port on the vSphere Distributed Switch. This configuration provides support for historical data and port-level monitoring.

None.
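The two port binding recommendations above map directly to the type property of a distributed port group. The following pyVmomi sketch is a minimal illustration with placeholder port group names; dvs is assumed to be a vim.DistributedVirtualSwitch object from an authenticated vCenter Server session.

```python
# Minimal pyVmomi sketch: create port groups with the recommended binding types.
from pyVim.task import WaitForTask
from pyVmomi import vim

def create_port_group(dvs, name, binding):
    """Create a distributed port group.

    binding is "ephemeral" for the VM management port group
    (VCF-VDS-RCMD-DPG-001) or "earlyBinding" (static) for all other port
    groups (VCF-VDS-RCMD-DPG-002).
    """
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, type=binding, numPorts=8
    )
    WaitForTask(dvs.AddDVPortgroup_Task(spec=[spec]))

# Placeholder port group names:
# create_port_group(dvs, "vm-management", "ephemeral")
# create_port_group(dvs, "vmotion", "earlyBinding")
```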

VCF-VDS-RCMD-DPG-003

  • Use the Route based on physical NIC load teaming algorithm for the VM management port group.

  • The VM management network is not required for a compute-only L3 multi-rack deployment.

Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.

None.

VCF-VDS-RCMD-DPG-004

Use the Route based on physical NIC load teaming algorithm for the ESXi management port group.

Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.

None.

VCF-VDS-RCMD-DPG-005

Use the Route based on physical NIC load teaming algorithm for the vSphere vMotion port group.

Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.

None.

VCF-VDS-RCMD-DPG-006

Use the Route based on physical NIC load teaming algorithm for the vSAN port group.

Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.

None.
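Recommendations VCF-VDS-RCMD-DPG-003 through VCF-VDS-RCMD-DPG-006 all apply the same teaming algorithm, which the vSphere API identifies as loadbalance_loadbased. The following pyVmomi sketch updates an existing distributed port group; it is a minimal illustration and assumes pg is a vim.dvs.DistributedVirtualPortgroup object from an authenticated vCenter Server session.

```python
# Minimal pyVmomi sketch: apply Route based on physical NIC load
# ("loadbalance_loadbased") to an existing distributed port group.
from pyVim.task import WaitForTask
from pyVmomi import vim

def set_load_based_teaming(pg):
    """Set the uplink teaming policy of a distributed port group to
    Route based on physical NIC load."""
    port_config = pg.config.defaultPortConfig
    port_config.uplinkTeamingPolicy.policy.value = "loadbalance_loadbased"
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,  # required for reconfiguration
        defaultPortConfig=port_config,
    )
    WaitForTask(pg.ReconfigureDVPortgroup_Task(spec=spec))
```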

VCF-VDS-RCMD-NIO-001

  • Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster.

  • Do not enable Network I/O Control on dedicated vSphere clusters for NSX Edge nodes.

Increases resiliency and performance of the network.

Network I/O Control might impact network performance for critical traffic types if misconfigured.

VCF-VDS-RCMD-NIO-002

Set the share value for management traffic to Normal.

By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.

None.

VCF-VDS-RCMD-NIO-003

Set the share value for vSphere vMotion traffic to Low.

During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.

During times of network contention, vMotion takes longer than usual to complete.

VCF-VDS-RCMD-NIO-004

Set the share value for virtual machines to High.

Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.

None.

VCF-VDS-RCMD-NIO-005

Set the share value for vSAN traffic to High.

During times of network contention, vSAN traffic needs guaranteed bandwidth to support virtual machine performance.

None.

VCF-VDS-RCMD-NIO-006

Set the share value for other traffic types to Low.

By default, VMware Cloud Foundation does not use other traffic types, such as vSphere Fault Tolerance traffic. Hence, these traffic types can be assigned the lowest priority.

None.
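The share values in recommendations VCF-VDS-RCMD-NIO-002 through VCF-VDS-RCMD-NIO-006 correspond to the infrastructure traffic resource configuration of the distributed switch. The following pyVmomi sketch reads the current configuration and applies the recommended levels; it is a minimal illustration, the traffic-type keys follow the vSphere API names, and dvs is assumed to be a vim.dvs.VmwareDistributedVirtualSwitch object with Network I/O Control enabled.

```python
# Minimal pyVmomi sketch: apply the recommended Network I/O Control share
# levels. Traffic types not listed here keep their current settings.
from pyVim.task import WaitForTask
from pyVmomi import vim

# vSphere API traffic-type keys mapped to the recommended share levels.
SHARE_LEVELS = {
    "management": "normal",      # VCF-VDS-RCMD-NIO-002
    "vmotion": "low",            # VCF-VDS-RCMD-NIO-003
    "virtualMachine": "high",    # VCF-VDS-RCMD-NIO-004
    "vsan": "high",              # VCF-VDS-RCMD-NIO-005
    "faultTolerance": "low",     # VCF-VDS-RCMD-NIO-006 (other traffic types)
    "nfs": "low",
    "iSCSI": "low",
    "hbr": "low",
    "vdp": "low",
}

def set_nioc_shares(dvs):
    """Apply the recommended share levels to the distributed switch."""
    traffic_config = dvs.config.infrastructureTrafficResourceConfig
    for resource in traffic_config:
        level = SHARE_LEVELS.get(resource.key)
        if level:
            # Update the share level in place; other allocation settings
            # (limits, reservations) are left unchanged.
            resource.allocationInfo.shares.level = level
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion
    spec.infrastructureTrafficResourceConfig = traffic_config
    WaitForTask(dvs.ReconfigureDvs_Task(spec=spec))
```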