The physical network design decisions determine the physical layout of the network and the use of VLANs. They also include decisions on jumbo frames and on other network-related requirements such as DNS and NTP.

Physical Network Design Decisions

Routing protocols
Base the selection of the external routing protocol on your current implementation or on the available expertise among the IT staff, and consider performance requirements. Possible options are OSPF, BGP, and IS-IS. Although each routing protocol has a complex set of advantages and disadvantages, this validated design uses BGP as its routing protocol.
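For illustration, a minimal BGP adjacency on a ToR switch might look like the following FRRouting-style sketch. The AS numbers, router ID, neighbor address, and advertised prefix are placeholders, not values from this design; consult your switch vendor's documentation for the equivalent syntax on your platform.

```
! Hypothetical FRRouting configuration for a ToR switch peering with an
! upstream neighbor. All numbers and addresses below are placeholders.
router bgp 65000
 bgp router-id 192.0.2.2
 neighbor 192.0.2.1 remote-as 65001
 !
 address-family ipv4 unicast
  network 198.51.100.0/24
 exit-address-family
```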
DHCP proxy
Set the DHCP proxy to point to a DHCP server by IPv4 address. See the VMware Validated Design Planning and Preparation document for details on the DHCP server.
Table 1. Design Decisions on the Physical Network

Design Decision: Implement the following physical network architecture:

  • A minimum of one 10-GbE port (one 25-GbE port recommended) on each ToR switch for ESXi host uplinks

  • No EtherChannel (LAG/vPC) configuration for ESXi host uplinks

  • Layer 3 device with BGP and IGMP support

Design Justification:

  • Guarantees availability during a switch failure.

  • Provides compatibility with vSphere host profiles because they do not store link-aggregation settings.

  • Supports BGP as the dynamic routing protocol in the SDDC.

  • Provides compatibility with NSX hybrid mode replication, which requires IGMP.

Design Implication: Hardware choices might be limited. Requires dynamic routing protocol configuration in the physical networking stack.


Design Decision: Use a physical network that is configured for BGP routing adjacency.

Design Justification: This design uses BGP as its routing protocol and supports flexibility in network design for routing multi-site and multi-tenancy workloads.

Design Implication: Requires BGP configuration in the physical networking stack.


Design Decision: Use two ToR switches for each rack.

Design Justification: This design uses a minimum of two 10-GbE links (two 25-GbE links recommended) to each ESXi host, which provides redundancy and reduces the overall design complexity.

Design Implication: Requires two ToR switches per rack, which can increase costs.


Design Decision: Use VLANs to segment physical network functions.

Design Justification:

  • Supports physical network connectivity without requiring many NICs.

  • Isolates the different network functions of the SDDC so that you can have differentiated services and prioritized traffic as needed.

Design Implication: Requires uniform configuration and presentation on all the trunks made available to the ESXi hosts.
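As a concrete, hedged illustration of per-function VLAN tagging on a host, the esxcli commands below create a standard-switch port group and tag it with a VLAN ID. The switch name, port group name, and VLAN ID are placeholders, and a production deployment of this design would configure the equivalent port groups on a vSphere Distributed Switch instead.

```shell
# Hypothetical example: create a port group and tag it with VLAN 1611.
# vSwitch0, Mgmt-PG, and 1611 are placeholders, not values from this design.
esxcli network vswitch standard portgroup add --portgroup-name=Mgmt-PG --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=Mgmt-PG --vlan-id=1611
```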

Additional Design Decisions

Additional design decisions deal with static IP addresses, DNS records, and the required NTP time source.

Table 2. Additional Design Decisions on the Physical Network

Design Decision: Assign static IP addresses to all management components in the SDDC infrastructure except for NSX VTEPs. NSX VTEPs are assigned addresses by a DHCP server. Set the lease duration for the VTEP DHCP scope to at least 7 days.

Design Justification: Ensures that interfaces such as management and storage always have the same IP address. In this way, you provide support for continuous management of ESXi hosts using vCenter Server and for provisioning IP storage by storage administrators.

NSX VTEPs do not have an administrative endpoint. As a result, they can use DHCP for automatic IP address assignment. You also cannot directly assign a static IP address to the VMkernel port of an NSX VTEP. IP pools are an option, but the NSX administrator must create them. If you must change or expand the subnet, changing the DHCP scope is simpler than creating an IP pool and assigning it to the ESXi hosts.

In a vSAN stretched configuration, the VLAN ID is the same in both availability zones. If you use IP pools, VTEPs in different availability zones cannot communicate. By using DHCP, each availability zone can have a different subnet associated with the same VLAN ID, so the NSX VTEPs can communicate over Layer 3.

Design Implication: Requires precise IP address management.


Design Decision: Create DNS records for all management nodes to enable forward, reverse, short, and FQDN resolution.

Design Justification: Ensures consistent resolution of management nodes using both IP address (reverse lookup) and name resolution.

Design Implication: None.


Design Decision: Use an NTP time source for all management nodes.

Design Justification: It is critical to maintain accurate and synchronized time between management nodes.

Design Implication: None.


Jumbo Frames Design Decisions

IP storage throughput can benefit from the configuration of jumbo frames. Increasing the per-frame payload from 1500 bytes to the jumbo frame setting improves the efficiency of data transfer. Jumbo frames must be configured end-to-end, which is feasible in a LAN environment. When you enable jumbo frames on an ESXi host, you have to select an MTU that matches the MTU of the physical switch ports.
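To make the efficiency claim concrete, the following sketch compares wire efficiency at the default and jumbo MTU. It is an illustrative back-of-the-envelope calculation, not part of the design: it assumes each frame carries a payload equal to the MTU and counts only the fixed per-frame 802.3 overhead.

```python
# Per 802.3, each Ethernet frame carries fixed overhead around its payload:
# 14-byte header, 4-byte FCS, 7-byte preamble, 1-byte start-of-frame
# delimiter, and a 12-byte interframe gap, or 38 bytes of wire time per frame.
OVERHEAD = 14 + 4 + 7 + 1 + 12  # bytes of non-payload wire time per frame

def wire_efficiency(mtu: int) -> float:
    """Fraction of wire time spent on payload for a given MTU."""
    return mtu / (mtu + OVERHEAD)

def frames_needed(total_bytes: int, mtu: int) -> int:
    """Frames (and per-frame processing events) required for a transfer."""
    return -(-total_bytes // mtu)  # ceiling division

if __name__ == "__main__":
    for mtu in (1500, 9000):
        print(f"MTU {mtu}: {wire_efficiency(mtu):.2%} efficient, "
              f"{frames_needed(10 * 1024**2, mtu)} frames per 10 MiB")
```

The jumbo setting raises wire efficiency only a few percent, but the roughly sixfold drop in frame count also reduces per-packet CPU and interrupt load on the hosts.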

The workload determines whether it makes sense to configure jumbo frames on a virtual machine. If the workload consistently transfers large amounts of network data, configure jumbo frames, if possible. In that case, confirm that both the virtual machine operating system and the virtual machine NICs support jumbo frames.

Using jumbo frames also improves the performance of vSphere vMotion.

Note: VXLAN needs an MTU value of at least 1600 bytes on the switches and routers that carry the transport zone traffic.
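The 1600-byte minimum in the note above can be sanity-checked with simple arithmetic. The following sketch (illustrative, not part of the design) sums the headers that VXLAN adds around an inner frame, per RFC 7348; the result shows why a 1500-byte transport MTU is insufficient and why 1600 bytes leaves headroom for variations such as an inner 802.1Q tag.

```python
# Illustrative arithmetic: headers VXLAN adds around an inner Ethernet frame.
INNER_PAYLOAD  = 1500  # standard guest MTU
INNER_ETHERNET = 14    # original frame header, carried inside the tunnel
VXLAN_HEADER   = 8
OUTER_UDP      = 8
OUTER_IPV4     = 20

# Size of the outer IP packet that the transport network's MTU must carry:
encapsulated = INNER_PAYLOAD + INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4
with_vlan_tag = encapsulated + 4  # if the inner frame carries an 802.1Q tag

print(encapsulated, with_vlan_tag)  # 1550 1554
```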
Table 3. Design Decisions on Jumbo Frames

Design Decision: Configure the MTU size to at least 9000 bytes (jumbo frames) on the physical switch ports and distributed switch port groups that support the following traffic types:

  • vSAN

  • vMotion

  • vSphere Replication

  • NFS

Design Justification: Improves traffic throughput. To support VXLAN, the MTU setting must be increased to a minimum of 1600 bytes. Setting the MTU to 9000 bytes has no effect on VXLAN, but provides consistency across port groups that are adjusted from the default MTU size.

Design Implication: When you adjust the MTU packet size, you must also configure the entire network path (VMkernel port, distributed switch, physical switches, and routers) to support the same MTU packet size.