Setup of the physical environment requires careful consideration. Follow best practices for physical switches, switch connectivity, VLANs and subnets, and access port settings.

Top of Rack Physical Switches

When configuring Top of Rack (ToR) switches, consider the following best practices.

  • Configure redundant physical switches to enhance availability.

  • Manually configure switch ports that connect to ESXi hosts as trunk ports. Virtual switches are passive devices and do not send or receive trunking protocols, such as the Dynamic Trunking Protocol (DTP).

  • Modify the Spanning Tree Protocol (STP) configuration on any port that connects to an ESXi NIC to reduce the time it takes for the port to transition to the forwarding state, for example, by using the Trunk PortFast feature on Cisco physical switches.

  • Provide DHCP or DHCP Helper capabilities on all VLANs that are used by Management and VXLAN VMkernel ports. This setup simplifies the configuration by using DHCP to assign IP addresses based on the IP subnet in use.

  • Configure jumbo frames on all switch ports, inter-switch links (ISLs), and switched virtual interfaces (SVIs).
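On a Cisco IOS-style ToR switch, the practices above might look as follows. This is an illustrative sketch only; the interface name, VLAN ID, MTU value, and DHCP server address are assumptions, not values mandated by this design.

```
! Illustrative sketch; interface, VLAN, and address values are assumptions.
interface TenGigabitEthernet1/0/1
 description ESXi host uplink
 switchport mode trunk              ! static trunk; no DTP negotiation with the host
 switchport trunk native vlan 1611
 spanning-tree portfast trunk       ! transition the port to forwarding quickly
 mtu 9216                           ! jumbo frames on the host-facing port
!
interface Vlan1611
 description Management SVI
 mtu 9216
 ip helper-address 172.16.11.4      ! relay DHCP requests to the DHCP server
```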

Top of Rack Connectivity and Network Settings

Each ESXi host is connected redundantly to the SDDC network fabric ToR switches by means of two 10 GbE ports. Configure the ToR switches to provide all necessary VLANs via an 802.1Q trunk. These redundant connections are not part of an EtherChannel (LAG/vPC). Instead, they use features of the vSphere Distributed Switch and NSX for vSphere to ensure that no physical interface is overrun and that redundant paths are used as long as they are available.

Figure 1. Host to ToR connectivity

VLANs and Subnets

Each ESXi host uses VLANs and corresponding subnets.  

Follow these guidelines.

  • Use only /24 subnets to reduce confusion and mistakes when dealing with IPv4 subnetting.

  • Use the IP address .253 as the (floating) gateway interface, with .251 and .252 for the Virtual Router Redundancy Protocol (VRRP) or Hot Standby Router Protocol (HSRP).

  • Use the RFC 1918 IPv4 address space for these subnets, allocating one octet by region and another octet by function. For example, the mapping 172.regionid.function.0/24 results in the sample subnets shown in Table 1.
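The octet-based mapping above can be sketched as a small helper. The region and function IDs used here (16, 11, 31) are illustrative sample values, not assignments required by this design.

```python
import ipaddress

def sample_subnet(region_id: int, function_id: int) -> ipaddress.IPv4Network:
    """Map a region octet and a function octet into the
    172.regionid.function.0/24 RFC 1918 sample scheme."""
    return ipaddress.ip_network(f"172.{region_id}.{function_id}.0/24")

def gateway_addresses(subnet: ipaddress.IPv4Network) -> dict:
    """Apply the addressing convention from the guidelines:
    .253 as the floating gateway, .251 and .252 for the two routers."""
    base = subnet.network_address
    return {
        "floating": base + 253,
        "router_a": base + 251,
        "router_b": base + 252,
    }

net = sample_subnet(16, 11)   # e.g. region octet 16, management function octet 11
print(net)                    # 172.16.11.0/24
print(gateway_addresses(net)["floating"])  # 172.16.11.253
```

Deriving addresses from the two octets this way keeps the VLAN-to-subnet relationship predictable across regions and functions.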


The following VLANs and IP ranges are meant as samples. Your actual implementation depends on your environment.

Table 1. Sample Values for VLANs and IP Ranges



Cluster                    Sample VLAN      Sample IP range
Management                 1611 (Native)    172.16.11.0/24
Shared Edge and Compute    1631 (Native)    172.16.31.0/24

Access Port Network Settings

Configure additional network settings on the access ports that connect the leaf switch to the corresponding servers.

Spanning-Tree Protocol (STP)

Although this design does not use the Spanning Tree Protocol, switches usually come with STP enabled by default. Designate the access ports as trunk PortFast.


Trunking

Configure the VLANs as members of an 802.1Q trunk with the management VLAN acting as the native VLAN.


MTU

Set the MTU for all VLANs and SVIs (Management, vMotion, VXLAN, and Storage) to jumbo frames for consistency.

DHCP helper

Configure the VIFs of the Management, vMotion, and VXLAN subnets as DHCP proxies.


Multicast

Configure IGMP snooping on the ToR switches and include an IGMP querier on each VLAN.
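Taken together, the DHCP helper and multicast settings above might be sketched as follows in Cisco IOS-style syntax. The VLAN ID and the DHCP server address are illustrative assumptions.

```
! Illustrative sketch; VLAN ID and helper address are assumptions.
interface Vlan1612
 description vMotion SVI
 mtu 9216
 ip helper-address 172.16.11.4   ! forward DHCP requests for this subnet
!
ip igmp snooping                 ! IGMP snooping on the ToR switch
ip igmp snooping querier         ! act as IGMP querier on snooped VLANs
```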

Region Interconnectivity

The SDDC management networks, VXLAN VMkernel ports, and the edge and compute VXLAN VMkernel ports of the two regions must be connected. These connections can use VPN tunnels, point-to-point circuits, MPLS, and so on. End users must be able to reach the public-facing network segments (public management and tenant networks) of both regions.

The region interconnectivity design must support jumbo frames and ensure that latency is less than 150 ms. For more details on the requirements for region interconnectivity, see the Cross-VC NSX Design Guide.
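Jumbo-frame support and latency across the interconnect can be spot-checked from a Linux host in one region. The peer address below is an illustrative assumption for a host in the other region.

```
# Send an 8972-byte ICMP payload (9000 bytes with the 8-byte ICMP and
# 20-byte IP headers) with the Don't Fragment bit set; a successful reply
# implies end-to-end jumbo-frame support on the path.
ping -M do -s 8972 -c 4 172.17.11.4

# The round-trip times in the summary line should stay below 150 ms.
```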

The design of a region connection solution is out of scope for this VMware Validated Design.