The physical network design includes defining the network topology for connecting physical switches and the ESXi hosts, determining switch port settings for VLANs, and designing routing.

Top-of-Rack Physical Switches

When configuring Top-of-Rack (ToR) switches, consider the following best practices:

  • Configure redundant physical switches to enhance availability.

  • Statically configure the switch ports that connect to ESXi hosts as trunk ports. Virtual switches are passive devices and do not support trunking protocols such as Dynamic Trunking Protocol (DTP). (A sketch for verifying the trunked VLANs from the ESXi side follows this list.)

  • Modify the Spanning Tree Protocol (STP) configuration on any port that is connected to an ESXi NIC to reduce the time it takes for the port to transition to the forwarding state, for example, by using the Trunk PortFast feature on a Cisco physical switch.

  • Configure jumbo frames on all switch ports, Inter-Switch Links (ISLs), and Switched Virtual Interfaces (SVIs).
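As a sanity check from the ESXi side, the observed-network hints that vCenter Server collects for each physical NIC can confirm that the trunk carries the expected VLANs. The following Python sketch uses pyVmomi for this; it is a minimal example, and the vCenter Server address, credentials, and unverified SSL context are placeholders for your environment, not part of this design.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with values for your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Walk every ESXi host and print the VLAN IDs observed on each physical uplink.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    network_system = host.configManager.networkSystem
    for hint in network_system.QueryNetworkHint():
        vlans = sorted({s.vlanId for s in (hint.subnet or []) if s.vlanId is not None})
        print("{} {}: observed VLANs {}".format(host.name, hint.device, vlans))
view.Destroy()
Disconnect(si)
```

If an uplink does not list the VLANs you expect, check the trunk and allowed-VLAN configuration on the corresponding ToR switch port.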

Top-of-Rack Connectivity and Network Settings

Each ESXi host is connected redundantly to the ToR switches of the network fabric by using a minimum of two 10 GbE ports (25 GbE or faster ports are recommended). Configure the ToR switches to provide all necessary VLANs through an 802.1Q trunk. These redundant connections use features of the vSphere Distributed Switch to guarantee that no single physical interface is overrun and that redundant paths are used when they are available.

  • Spanning Tree Protocol (STP): Although this design does not use STP, switches usually have STP configured by default. Designate the ports connected to ESXi hosts as trunk PortFast ports.

  • Trunking: Configure the switch ports as members of an 802.1Q trunk.

  • MTU: Set the MTU on all switch ports, VLANs, and SVIs to support jumbo frames for consistency. (A sketch for applying a matching MTU on the vSphere Distributed Switch follows this list.)
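The same jumbo-frame value must be applied on the virtual side so that it matches the switch-port, VLAN, and SVI settings above. The following pyVmomi sketch raises the MTU of every vSphere Distributed Switch to 9000 bytes; the vCenter Server address and credentials are placeholders, and the snippet is a minimal sketch rather than a production workflow.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with values for your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find every vSphere Distributed Switch and raise its MTU to 9000 bytes if needed.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in view.view:
    if dvs.config.maxMtu < 9000:
        spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
        spec.configVersion = dvs.config.configVersion  # guards against concurrent changes
        spec.maxMtu = 9000
        print("Setting MTU 9000 on {}".format(dvs.name))
        dvs.ReconfigureDvs_Task(spec)  # returns a vCenter task you can monitor
view.Destroy()
Disconnect(si)
```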

Jumbo Frames

IP storage throughput can benefit from the configuration of jumbo frames. Increasing the per-frame payload from 1500 bytes to the jumbo frame setting improves the efficiency of data transfer. Jumbo frames must be configured end-to-end. When you enable jumbo frames on an ESXi host, select an MTU size that matches the MTU size of the physical switch ports.

The workload determines whether to configure jumbo frames on a VM. If the workload consistently transfers large amounts of network data, configure jumbo frames, if possible. Also, ensure that both the VM operating system and the VM NICs support jumbo frames. Jumbo frames also improve the performance of vSphere vMotion.
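One common way to verify the end-to-end MTU is vmkping from the ESXi Shell with the don't-fragment bit set. The sketch below wraps that command in Python (ESXi includes a Python interpreter); the VMkernel interface vmk2 and the target address 192.168.10.20 are placeholder values for an IP-storage path. An 8972-byte payload plus 28 bytes of ICMP and IP headers yields a 9000-byte packet, so the ping succeeds only if every hop on the path supports the jumbo MTU.

```python
import subprocess

# Placeholders: vmk2 is the IP-storage VMkernel port, 192.168.10.20 is the storage target.
# 8972-byte payload + 28 bytes of ICMP/IP headers = 9000-byte packet; -d forbids fragmentation.
result = subprocess.run(
    ["vmkping", "-I", "vmk2", "-d", "-s", "8972", "192.168.10.20"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
print(result.stdout)
if result.returncode == 0:
    print("Jumbo frames are configured end-to-end on this path.")
else:
    print("Jumbo frames are NOT configured end-to-end on this path.")
```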

Table 1. Recommended Physical Network Design

Design Decision: Use a layer 3 transport.

Design Justification:

  • You can select layer 3 switches from different vendors for the physical switching fabric.

  • You can mix switches from different vendors because of the general interoperability between their implementations of routing protocols.

  • This approach is cost-effective because it uses only the basic functionality of the physical switches.

Design Implication: VLANs are restricted to a single rack.

Design Decision: Implement the following physical network architecture:

  • A minimum of one 10 GbE port (one 25 GbE port recommended) on each ToR switch for ESXi host uplinks.

  • No EtherChannel (LAG/vPC) configuration for ESXi host uplinks.

  • Layer 3 devices with BGP support.

Design Justification:

  • Guarantees availability during a switch failure.

  • Provides compatibility with vSphere host profiles because they do not store link-aggregation settings.

  • Supports BGP as the dynamic routing protocol. BGP is the only dynamic routing protocol supported by NSX-T Data Center.

Design Implication: Hardware choices might be limited.

Design Decision: Use two ToR switches for each rack.

Design Justification:

  • This design uses a minimum of two 10 GbE links (two 25 GbE links recommended) to each ESXi host.

  • Provides redundancy and reduces the overall design complexity.

Design Implication: Two ToR switches per rack can increase costs.

Design Decision: Use VLANs to segment physical network functions.

Design Justification:

  • Supports physical network connectivity without requiring many NICs.

  • Isolates the different network functions of the Software-Defined Data Center (SDDC) so that you can have differentiated services and prioritized traffic as needed.

Design Implication: Requires uniform configuration and presentation on all the switch ports made available to the ESXi hosts.

Design Decision: Assign static IP addresses to all management components.

Design Justification: Ensures that interfaces such as management and storage always have the same IP address. In this way, you provide support for continuous management of ESXi hosts by using vCenter Server and for provisioning IP storage by storage administrators.

Design Implication: Requires precise IP address management.

Design Decision: Create DNS records for all ESXi hosts and management VMs to enable forward, reverse, short, and FQDN resolution.

Design Justification: Ensures consistent resolution of management components by using both IP address (reverse lookup) and name resolution. (A resolution check is sketched after this table.)

Design Implication: Adds administrative overhead.

Design Decision: Use an NTP time source for all management components.

Design Justification: Maintains accurate and synchronized time across the management components.

Design Implication: None.

Design Decision: Configure the MTU size to at least 9000 bytes (jumbo frames) on the physical switch ports, VLANs, SVIs, vSphere Distributed Switches, NSX-T N-VDS, and VMkernel ports.

Design Justification: Improves traffic throughput.

Design Implication: When you adjust the MTU size, you must also configure the entire network path (VMkernel ports, distributed switches, physical switches, and routers) to support the same MTU size.
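The DNS design decision in the table is straightforward to validate. The following Python sketch checks forward and reverse resolution for a few management FQDNs; the host names are placeholders for the ESXi hosts and management VMs in your environment.

```python
import socket

# Placeholder FQDNs -- replace with the ESXi hosts and management VMs in your environment.
fqdns = ["esxi01.example.com", "vcenter.example.com", "nsx-manager.example.com"]

for fqdn in fqdns:
    try:
        ip = socket.gethostbyname(fqdn)                 # forward lookup (name -> IP address)
        reverse_name, _, _ = socket.gethostbyaddr(ip)   # reverse lookup (IP address -> name)
        print("{} -> {} -> {}".format(fqdn, ip, reverse_name))
    except (socket.gaierror, socket.herror) as err:
        print("{}: resolution failed ({})".format(fqdn, err))
```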