Use this design decision list as a reference for configuring the physical network in an environment with a single or multiple VMware Cloud Foundation instances. The design also considers whether an instance contains a single or multiple availability zones.

For full design details, see Physical Network Infrastructure Design for VMware Cloud Foundation.

Table 1. Leaf-Spine Physical Network Design Requirements for VMware Cloud Foundation

Requirement ID

Design Requirement

Justification

Implication

VCF-NET-REQD-CFG-001

Do not use EtherChannel (LAG, LACP, or vPC) configuration for ESXi host uplinks.

  • Simplifies configuration of top-of-rack switches.

  • Teaming options available with vSphere Distributed Switch provide load balancing and failover.

  • EtherChannel implementations might have vendor-specific limitations.

None.

VCF-NET-REQD-CFG-002

Use VLANs to separate physical network functions.

  • Supports physical network connectivity without requiring many NICs.

  • Isolates the different network functions in the SDDC so that you can have differentiated services and prioritized traffic as needed.

Requires uniform configuration and presentation on all the trunks that are made available to the ESXi hosts.

VCF-NET-REQD-CFG-003

Configure the VLANs as members of an 802.1Q trunk.

All VLANs become available on the same physical network adapters on the ESXi hosts.

Optionally, the management VLAN can act as the native VLAN.

VCF-NET-REQD-CFG-004

Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types:

  • Overlay (Geneve)

  • vSAN

  • vSphere vMotion

  • Improves traffic throughput.

  • Supports Geneve by increasing the MTU size to a minimum of 1,600 bytes.

  • Geneve is an extensible protocol. The MTU size might increase with future capabilities. While 1,600 bytes is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.

When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size.

In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones.
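The arithmetic behind the 1,600-byte minimum and the 1,700-byte recommendation can be sketched as follows. This is a minimal Python illustration, not VMware tooling: the header sizes are standard Geneve-over-IPv4 values, and the 100-byte option headroom is an assumed figure used only for the example.

```python
# Header sizes for a workload frame encapsulated in Geneve over IPv4/UDP.
INNER_ETHERNET = 14   # encapsulated workload Ethernet header (bytes)
GENEVE_BASE = 8       # Geneve base header
OUTER_UDP = 8         # outer UDP header
OUTER_IPV4 = 20       # outer IPv4 header

def required_physical_mtu(guest_mtu: int, geneve_options: int = 0) -> int:
    """Underlay MTU needed to carry a guest frame without fragmentation."""
    return (INNER_ETHERNET + guest_mtu + GENEVE_BASE
            + geneve_options + OUTER_UDP + OUTER_IPV4)

# A standard 1,500-byte workload frame needs a 1,550-byte underlay MTU,
# which fits within the 1,600-byte minimum. With an assumed 100 bytes of
# Geneve options, 1,650 bytes still fits within the recommended 1,700.
print(required_physical_mtu(1500))       # 1550
print(required_physical_mtu(1500, 100))  # 1650
```

This shows why 1,600 bytes is sufficient today and why 1,700 bytes leaves room for future Geneve options without touching the physical infrastructure again.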

Table 2. Leaf-Spine Physical Network Design Requirements for Multi-Rack Compute VI Workload Domain Cluster for VMware Cloud Foundation

Requirement ID

Design Requirement

Justification

Implication

VCF-NET-L3MR-REQD-CFG-001

For a multi-rack compute VI workload domain cluster, provide separate VLANs per rack for each of the following network functions:

  • Host management

  • vSAN

  • vSphere vMotion

  • Host overlay

A Layer 3 leaf-spine fabric terminates Layer 2 at the leaf switches in each rack, creating a Layer 3 boundary between racks.

Requires additional VLANs for each rack.

VCF-NET-L3MR-REQD-CFG-002

For a multi-rack compute VI workload domain cluster, the subnets for each of the following networks per rack must be routable to and reachable from the leaf switches in the other racks:

  • Host management

  • vSAN

  • vSphere vMotion

  • Host overlay

Ensures that the traffic for each network can flow between racks.

Requires additional physical network configuration to make networks routable between racks.
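Because each rack uses its own subnets for the same network functions, a per-rack addressing plan must avoid overlaps before the subnets can be routed between racks. The following Python sketch checks a hypothetical plan with the standard `ipaddress` module; the rack names and CIDR ranges are illustrative assumptions, not VMware guidance.

```python
import ipaddress

# Hypothetical per-rack subnet plan for a multi-rack compute VI workload
# domain cluster (illustrative addresses only).
rack_networks = {
    "rack1": {"management": "172.16.11.0/24", "vsan": "172.16.12.0/24",
              "vmotion": "172.16.13.0/24", "host_overlay": "172.16.14.0/24"},
    "rack2": {"management": "172.16.21.0/24", "vsan": "172.16.22.0/24",
              "vmotion": "172.16.23.0/24", "host_overlay": "172.16.24.0/24"},
}

def overlapping_subnets(plan):
    """Return pairs of (rack, function) whose subnets overlap.

    Overlapping per-rack subnets cannot be routed between racks.
    """
    flat = [((rack, fn), ipaddress.ip_network(cidr))
            for rack, functions in plan.items()
            for fn, cidr in functions.items()]
    clashes = []
    for i, (key_a, net_a) in enumerate(flat):
        for key_b, net_b in flat[i + 1:]:
            if net_a.overlaps(net_b):
                clashes.append((key_a, key_b))
    return clashes

print(overlapping_subnets(rack_networks))  # [] — no overlaps in this plan
```

An empty result means every subnet can be advertised to the leaf switches in the other racks without address conflicts.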

Table 3. Leaf-Spine Physical Network Design Requirements for Multi-Rack NSX Edge Availability for VMware Cloud Foundation

Requirement ID

Design Requirement

Justification

Implication

VCF-NET-MRE-REQD-CFG-001

For multi-rack NSX Edge availability, in each rack that is dedicated for edge nodes, configure the leaf switches with the following VLANs:

  • VM management

  • Edge Uplink 1

  • Edge Uplink 2

  • Edge overlay

A Layer 3 leaf-spine fabric terminates Layer 2 at the leaf switches in each rack, creating a Layer 3 boundary between racks.

Requires additional VLANs for each rack.

VCF-NET-MRE-REQD-CFG-002

For multi-rack NSX Edge availability, in each rack that is dedicated for edge nodes, the subnets for the following networks must be routable to and reachable from the leaf switches in the other rack:

  • VM management

  • Edge overlay

Ensures that the traffic for the edge TEP network can flow between racks.

Requires additional physical network configuration to ensure networks are routable between racks.

Table 4. Leaf-Spine Physical Network Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID

Design Requirement

Justification

Implication

VCF-NET-REQD-CFG-005

Set the MTU size to at least 1,500 bytes (1,700 bytes preferred; 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances for the following traffic types:

  • NSX Edge RTEP

  • Jumbo frames are not required between VMware Cloud Foundation instances. However, increased MTU improves traffic throughput.

  • Increasing the RTEP MTU to 1,700 bytes minimizes fragmentation for standard-size workload packets between VMware Cloud Foundation instances.

When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers to support the same MTU packet size.

VCF-NET-REQD-CFG-006

Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 500 ms.

A latency lower than 500 ms is required for NSX Federation.

None.

VCF-NET-REQD-CFG-007

Provide a routed connection between the NSX Manager clusters in VMware Cloud Foundation instances that are connected in an NSX Federation.

Configuring NSX Federation requires connectivity between the NSX Global Manager instances, NSX Local Manager instances, and NSX Edge clusters.

You must assign unique routable IP addresses for each fault domain.

Table 5. Leaf-Spine Physical Network Design Recommendations for VMware Cloud Foundation

Recommendation ID

Design Recommendation

Justification

Implication

VCF-NET-RCMD-CFG-001

Use two ToR switches for each rack.

Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity.

Requires two ToR switches per rack which might increase costs.

VCF-NET-RCMD-CFG-002

Implement the following physical network architecture:

  • At least one 25-GbE (10-GbE minimum) port on each ToR switch for ESXi host uplinks (Host to ToR).

  • Layer 3 device that supports BGP.

  • Provides availability during a switch failure.

  • Provides support for the BGP dynamic routing protocol.

  • Might limit the hardware choices.

  • Requires dynamic routing protocol configuration in the physical network.

VCF-NET-RCMD-CFG-003

Use a physical network that is configured for BGP routing adjacency.

  • Supports design flexibility for routing multi-site and multi-tenancy workloads.

  • BGP is the only dynamic routing protocol that is supported for NSX Federation.

  • Supports failover between ECMP Edge uplinks.

Requires BGP configuration in the physical network.

VCF-NET-RCMD-CFG-004

Assign persistent IP configurations for NSX tunnel endpoints (TEPs) by using static IP pools instead of dynamic IP pool addressing.

  • Ensures that endpoints have a persistent TEP IP address.

  • In VMware Cloud Foundation, TEP IP assignment by using static IP pools is recommended for all topologies.

  • This configuration removes any requirement for external DHCP services.

Adding more hosts to the cluster might require the static IP pools to be increased.
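When sizing a static TEP IP pool, it helps to estimate how many hosts a given subnet can accommodate. The sketch below is a rough Python illustration under stated assumptions: two TEPs per host (one per uplink) and three reserved addresses (network, broadcast, gateway); the actual TEP count depends on the uplink profile.

```python
import ipaddress

def tep_pool_capacity(pool_cidr: str, teps_per_host: int = 2) -> int:
    """Estimate how many hosts a static TEP pool subnet can serve.

    Assumes teps_per_host TEP addresses per ESXi host and reserves
    three addresses (network, broadcast, and gateway).
    """
    usable = ipaddress.ip_network(pool_cidr).num_addresses - 3
    return usable // teps_per_host

# A /24 pool (256 addresses, 253 usable) supports 126 hosts at 2 TEPs each.
print(tep_pool_capacity("172.16.14.0/24"))  # 126
```

Checking this before cluster expansion indicates whether the static IP pool must be enlarged when more hosts are added.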

VCF-NET-RCMD-CFG-005

Configure the trunk ports connected to ESXi NICs as trunk PortFast.

Reduces the time to transition ports over to the forwarding state.

Although this design does not use STP, switches usually have STP configured by default.

VCF-NET-RCMD-CFG-006

Configure VRRP, HSRP, or another Layer 3 gateway availability method for the following networks:

  • Management

  • Edge overlay

Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway. Otherwise, a failure in the Layer 3 gateway causes disruption to the traffic in the SDN setup.

Requires configuration of a high availability technology for the Layer 3 gateways in the data center.

VCF-NET-RCMD-CFG-007

Use separate VLANs for the network functions for each cluster.

Reduces the size of the Layer 2 broadcast domain to a single vSphere cluster.

Increases the overall number of VLANs that are required for a VMware Cloud Foundation instance.

Table 6. Leaf-Spine Physical Network Design Recommendations for Dedicated Edge Scale and Performance for VMware Cloud Foundation

Recommendation ID

Design Recommendation

Justification

Implication

VCF-NET-DES-RCMD-CFG-001

Implement the following physical network architecture:

  • Two 100-GbE ports on each ToR switch for ESXi host uplinks (Host to ToR).

  • Layer 3 device that supports BGP.

Supports the requirements for high bandwidth and packets per second for large-scale deployments.

Requires 100-GbE network switches.

Table 7. Leaf-Spine Physical Network Design Recommendations for NSX Federation in VMware Cloud Foundation

Recommendation ID

Design Recommendation

Justification

Implication

VCF-NET-RCMD-CFG-008

Provide BGP routing between all VMware Cloud Foundation instances that are connected in an NSX Federation setup.

BGP is the supported routing protocol for NSX Federation.

None.

VCF-NET-RCMD-CFG-009

Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 150 ms for workload mobility.

A latency lower than 150 ms is required for the following features:

  • Cross vCenter Server vMotion

None.