Use this list of design decisions as a reference for configuring the physical network in an environment with one or more VMware Cloud Foundation instances. The design also considers whether an instance contains one or more availability zones.
For full design details, see Physical Network Infrastructure Design for VMware Cloud Foundation.
| Requirement ID | Design Requirement | Justification | Implication |
|---|---|---|---|
| VCF-NET-REQD-CFG-001 | Do not use EtherChannel (LAG, LACP, or vPC) configuration for ESXi host uplinks. | | None. |
| VCF-NET-REQD-CFG-002 | Use VLANs to separate physical network functions. | | Requires uniform configuration and presentation on all the trunks that are made available to the ESXi hosts. |
| VCF-NET-REQD-CFG-003 | Configure the VLANs as members of an 802.1Q trunk. | All VLANs become available on the same physical network adapters on the ESXi hosts. | Optionally, the management VLAN can act as the native VLAN. |
| VCF-NET-REQD-CFG-004 | Set the MTU size to at least 1,700 bytes (9,000 bytes recommended for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types: | | When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size. In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones. |
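A mismatched MTU on a single switch or distributed switch in the path is a common cause of overlay problems, so it can help to verify VCF-NET-REQD-CFG-004 after configuration. The following Python sketch is an illustration only, not part of the VMware Cloud Foundation tooling: it assumes the pyVmomi library is installed, and the vCenter Server address, credentials, and the 1,700-byte threshold are placeholders to adapt to your environment. It reads the configured MTU of each vSphere Distributed Switch and flags values below the minimum; raise the threshold to 9,000 if you use jumbo frames end to end.

```python
# Read-only MTU check sketch for vSphere Distributed Switches (assumptions:
# pyVmomi installed; placeholder vCenter address, credentials, and threshold).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.local"          # hypothetical vCenter FQDN
USERNAME = "administrator@vsphere.local"   # placeholder credentials
PASSWORD = "********"
MIN_MTU = 1700                             # use 9000 if jumbo frames are configured

context = ssl._create_unverified_context()  # lab only; use a trusted certificate in production
si = SmartConnect(host=VCENTER, user=USERNAME, pwd=PASSWORD, sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        # maxMtu is exposed on the VMware vSphere Distributed Switch configuration.
        mtu = dvs.config.maxMtu
        status = "OK" if mtu >= MIN_MTU else "TOO LOW"
        print(f"{dvs.name}: MTU {mtu} ({status})")
finally:
    Disconnect(si)
```

The script only reports values; changing the MTU, if needed, should still follow the end-to-end guidance in the implication column above.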
| Requirement ID | Design Requirement | Justification | Implication |
|---|---|---|---|
| VCF-NET-REQD-CFG-005 | Set the MTU size to at least 1,500 bytes (1,700 bytes preferred; 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances for the following traffic types. | | When adjusting the MTU packet size, you must also configure the entire network path (virtual interfaces, virtual switches, physical switches, and routers) to support the same MTU packet size. |
| VCF-NET-REQD-CFG-006 | Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 150 ms. | A latency lower than 150 ms is required for the following features: | None. |
| VCF-NET-REQD-CFG-007 | Provide a routed connection between the NSX Manager clusters in VMware Cloud Foundation instances that are connected in an NSX Federation. | Configuring NSX Federation requires connectivity between the NSX Global Manager instances, NSX Local Manager instances, and NSX Edge clusters. | You must assign unique routable IP addresses for each fault domain. |
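As a quick spot check for the 150 ms limit in VCF-NET-REQD-CFG-006, the following Python sketch measures the average round-trip time to an address in the remote instance. It is an illustration only and does not replace a proper network assessment; it assumes a Linux jump host with routed reachability to the second instance and the iputils version of `ping`, and the target address is a placeholder.

```python
# Latency spot-check sketch (assumptions: Linux host, iputils ping,
# placeholder target address in the remote VMware Cloud Foundation instance).
import re
import subprocess

REMOTE_ENDPOINT = "192.0.2.10"   # hypothetical address in the second instance
LATENCY_LIMIT_MS = 150.0
SAMPLES = 20

result = subprocess.run(
    ["ping", "-c", str(SAMPLES), REMOTE_ENDPOINT],
    capture_output=True, text=True, check=True)

# iputils prints a summary line such as "rtt min/avg/max/mdev = 1.2/1.5/2.0/0.3 ms".
match = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/", result.stdout)
if match is None:
    raise RuntimeError("Could not parse ping output; check the ping version.")

avg_ms = float(match.group(2))
verdict = "within" if avg_ms < LATENCY_LIMIT_MS else "exceeds"
print(f"Average RTT {avg_ms:.1f} ms {verdict} the {LATENCY_LIMIT_MS:.0f} ms limit.")
```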
| Recommendation ID | Design Recommendation | Justification | Implication |
|---|---|---|---|
| VCF-NET-RCMD-CFG-001 | Use two ToR switches for each rack. | Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity. | Requires two ToR switches per rack, which might increase costs. |
| VCF-NET-RCMD-CFG-002 | Implement the following physical network architecture: | | |
| VCF-NET-RCMD-CFG-003 | Use a physical network that is configured for BGP routing adjacency. | | Requires BGP configuration in the physical network. |
| VCF-NET-RCMD-CFG-004 | Assign persistent IP configurations for NSX tunnel endpoints (TEPs) that use dynamic IP allocation rather than static IP pool addressing. | | Requires DHCP server availability on the TEP VLAN. |
| VCF-NET-RCMD-CFG-005 | Set the lease duration for the DHCP scope for the host overlay network to at least 7 days. | The IP addresses of the host overlay VMkernel ports are assigned by using a DHCP server. | Requires configuration and management of a DHCP server. |
| VCF-NET-RCMD-CFG-006 | Designate the trunk ports connected to ESXi NICs as trunk PortFast. | Reduces the time to transition ports over to the forwarding state. | Although this design does not use the Spanning Tree Protocol (STP), switches usually have STP configured by default. |
| VCF-NET-RCMD-CFG-007 | Configure VRRP, HSRP, or another Layer 3 gateway availability method for these networks. | Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway. Otherwise, a failure in the Layer 3 gateway causes traffic disruption in the SDN setup. | Requires configuration of a high availability technology for the Layer 3 gateways in the data center. |
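Because VCF-NET-RCMD-CFG-004 and VCF-NET-RCMD-CFG-005 both depend on DHCP behaving as expected on the host overlay VLAN, a periodic report of how VMkernel adapters obtained their addresses can be useful. The following Python sketch reuses the same pyVmomi session pattern as the earlier MTU example; the vCenter address and credentials are placeholders, and the output is informational only. For every ESXi host it lists each VMkernel adapter, whether its IP address was assigned by DHCP, and its MTU.

```python
# VMkernel adapter report sketch (assumptions: pyVmomi installed; placeholder
# vCenter address and credentials; read-only, no configuration changes).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=context)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        for vnic in host.config.network.vnic:
            ip = vnic.spec.ip
            source = "DHCP" if ip.dhcp else "static/pool"
            print(f"{host.name} {vnic.device}: {ip.ipAddress} "
                  f"({source}), MTU {vnic.spec.mtu}")
finally:
    Disconnect(si)
```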
| Recommendation ID | Design Recommendation | Justification | Implication |
|---|---|---|---|
| VCF-NET-RCMD-CFG-008 | Provide high availability for the DHCP server or DHCP Helper used for VLANs requiring DHCP services that are stretched between availability zones. | Ensures that a failover operation of a single availability zone does not impact DHCP availability. | Requires configuration of a highly available DHCP server. |
| Recommendation ID | Design Recommendation | Justification | Implication |
|---|---|---|---|
| VCF-NET-RCMD-CFG-009 | Provide BGP routing between all VMware Cloud Foundation instances that are connected in an NSX Federation setup. | BGP is the supported routing protocol for NSX Federation. | None. |