The physical network design includes defining the network topology for connecting physical switches and the ESXi hosts, determining switch port settings for VLANs, and designing routing or services architecture.

Top-of-Rack Physical Switches

When configuring Top-of-Rack (ToR) switches, consider the following best practices:

  • Configure redundant physical switches to enhance availability.

  • Manually configure switch ports that connect to ESXi hosts as trunk ports. Virtual switches are passive devices and do not support trunking protocols such as the Dynamic Trunking Protocol (DTP).

  • Modify the Spanning Tree Protocol (STP) on any port that is connected to an ESXi NIC to reduce the time it takes to transition ports over to the forwarding state, for example, using the Trunk PortFast feature on a Cisco physical switch.

  • Configure jumbo frames on all switch ports, Inter-Switch Link (ISL), and Switched Virtual Interfaces (SVIs).
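As an illustration only, the practices above might map to a host-facing port configuration like the following on a Cisco-style switch; the interface name, VLAN IDs, and MTU value are placeholders, and the exact commands (for example, per-interface `mtu` versus a global `system mtu`) vary by platform.

```
interface TenGigabitEthernet1/0/1
 description esxi-host-01-uplink
 switchport mode trunk                     ! static trunk; the vSwitch cannot negotiate
 switchport trunk allowed vlan 100,200,300
 switchport nonegotiate                    ! suppress DTP frames toward the passive vSwitch
 spanning-tree portfast trunk              ! transition to forwarding without the STP delay
 mtu 9216                                  ! jumbo frames on the host-facing port
```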

Top-of-Rack Connectivity and Network Settings

Each ESXi host is redundantly connected to the ToR switches of the network fabric through multiple physical interfaces. For information about the design and number of these interfaces, see the Network Virtualization Design section.

The ToR switches are configured to provide all necessary VLANs through an 802.1Q trunk. These redundant connections use the features of vSphere Distributed Switch to guarantee that a physical interface is not overrun and redundant paths are used if they are available.

  • Leaf-Spine architecture: Modern data centers are deployed with a leaf-spine architecture, in which each leaf switch connects to every available spine switch, providing an efficient configuration with maximum redundancy and load sharing.

  • Spanning Tree Protocol (STP): Although the recommended design does not use STP, switches usually have STP enabled by default. Designate the ports connected to ESXi hosts as trunks and use features such as Trunk PortFast to reduce uplink convergence time.

  • Trunking: Configure the switch ports as members of an 802.1Q trunk.

  • MTU: Set the MTU on all switch ports, VLANs, and SVIs to support jumbo frames consistently. The data center fabric is typically configured with an MTU of 9100, leaving headroom above the 9000-byte MTU used on the hosts.

Jumbo Frames

IP storage, network function payload, and other services can benefit from the configuration of jumbo frames. Increasing the per-frame payload from 1500 bytes to a jumbo frame setting improves the efficiency of data transfer. Jumbo frames must be configured end-to-end. When enabling jumbo frames on an ESXi host, select an MTU size that matches the MTU size of the physical switch ports.

The workload determines whether to configure jumbo frames on a VM. If the workload consistently transfers large amounts of network data, configure jumbo frames. Also, ensure that both the VM operating system and the VM NICs support jumbo frames. Jumbo frames also improve the performance of vSphere vMotion.


The vSwitches are configured with a 9000-byte MTU (jumbo frames), and the vMotion and vSAN VMkernel ports benefit from the increased throughput that jumbo frames provide. To avoid MTU or MSS mismatches between management components and the ESXi host management interface, leave the management VMkernel port (vmk0) at the default MTU of 1500.
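On the host side, these jumbo-frame settings can be applied and verified from the ESXi shell. This is a sketch: `vSwitch1`, `vmk1`, and the peer address are placeholders, and on a vSphere Distributed Switch the MTU is set in vCenter rather than with `esxcli`.

```
# Set a 9000-byte MTU on a standard vSwitch (vDS MTU is configured in vCenter)
esxcli network vswitch standard set --mtu 9000 --vswitch-name vSwitch1

# Set a 9000-byte MTU on the vMotion VMkernel port; vmk0 (management) stays at 1500
esxcli network ip interface set --mtu 9000 --interface-name vmk1

# Verify the end-to-end path: 8972-byte payload + 8-byte ICMP header
# + 20-byte IP header = 9000 bytes; -d prevents fragmentation
vmkping -d -s 8972 192.168.10.20
```

If the `vmkping` test fails while a standard ping succeeds, some device in the path is still at a smaller MTU.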

Recommended Physical Network Design

The following recommendations summarize the physical network design. Each recommendation is listed with its design justification and design implications.

Design Recommendation: Use the Layer 3 Leaf-Spine data center architecture.
Design Justification:
  • You can select Layer 3 switches from different vendors for the physical switching fabric.
  • This approach is cost-effective because it uses only the basic functionality of the physical switches.
Design Implication: VLANs are restricted to a single rack; they can be reused across racks without spanning across domains.

Design Recommendation: Implement the following physical network architecture:
  • Two physical interfaces, one per NUMA node, on each ToR switch for ESXi host uplinks.
  • A Layer 3 device, such as a data center gateway, with BGP support.
Design Justification:
  • Guarantees availability during a switch failure.
  • Provides compatibility with vSphere host profiles, because they do not store link-aggregation settings.
  • Supports BGP as the dynamic routing protocol; BGP is the only dynamic routing protocol supported by NSX.
Design Implication: Hardware choices might be limited.
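As a sketch of the BGP portion of this recommendation: on a leaf switch running FRRouting or a similar BGP stack, the eBGP peering toward the spines could look like the following. The ASNs, router ID, prefixes, and addresses are hypothetical.

```
router bgp 65101                          ! private ASN for this leaf/rack
 bgp router-id 10.0.0.11
 neighbor 172.16.0.0 remote-as 65000      ! uplink to spine 1
 neighbor 172.16.0.2 remote-as 65000      ! uplink to spine 2
 address-family ipv4 unicast
  network 10.1.1.0/24                     ! rack-local subnet advertised into the fabric
 exit-address-family
```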

Design Recommendation: Use two ToR switches for each rack. This design uses multiple physical interfaces per NUMA node on each ESXi host.
Design Justification: Provides redundancy and reduces the overall design complexity.
Design Implication: Two ToR switches per rack can increase costs.

Design Recommendation: Use VLANs to segment physical network functions.
Design Justification:
  • Supports physical network connectivity without requiring many NICs.
  • Isolates the different network functions of the Software-Defined Data Center (SDDC) so that you can have differentiated services and prioritized traffic as needed.
Design Implication: Requires uniform configuration and presentation on all the switch ports made available to the ESXi hosts.

Design Recommendation: Assign static IP addresses to all management components.
Design Justification: Ensures that interfaces such as management and storage always have the same IP address. In this way, you provide support for continuous management of ESXi hosts using vCenter Server and for provisioning IP storage by storage administrators.
Design Implication: Requires precise IP address management.

Design Recommendation: Create DNS records for all ESXi hosts and management VMs to enable forward, reverse, short-name, and FQDN resolution.
Design Justification: Ensures consistent resolution of management components using both IP address (reverse lookup) and name resolution.
Design Implication: Adds administrative overhead.
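The four resolution paths in this recommendation can be spot-checked from any management host. The host name, domain, and address below are placeholders for your environment.

```
nslookup esxi01.example.com    # forward lookup by FQDN
nslookup esxi01                # short-name lookup via the DNS search domain
nslookup 192.168.1.101         # reverse lookup; must return the FQDN
```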

Design Recommendation: Use an NTP or Precision Time Protocol (PTP) time source for all management components.
Design Justification: It is critical to maintain accurate and synchronized time between management components.


Design Recommendation: Configure the MTU size to at least 9000 bytes (jumbo frames) on the physical switch ports, VLANs, SVIs, vSphere Distributed Switches, and VMkernel ports.
Design Justification:
  • Improves traffic throughput.
  • A minimum MTU of 1700 bytes is required for NSX Data Center deployments.
Design Implication: When you adjust the MTU size, you must also configure the entire network path (VMkernel port, distributed switch, physical switches, and routers) to support the same MTU size.