Follow best practices for physical switches, switch connectivity, setup of VLANs and subnets, and access port settings.

Top of Rack Physical Switches

When configuring top of rack (ToR) switches, consider the following best practices.

  • Configure redundant physical switches to enhance availability.

  • Configure switch ports that connect to ESXi hosts manually as trunk ports. Virtual switches are passive devices and do not support trunking protocols, such as Dynamic Trunking Protocol (DTP).

  • Modify the Spanning Tree Protocol (STP) on any port that is connected to an ESXi NIC to reduce the time it takes to transition ports over to the forwarding state, for example, using the Trunk PortFast feature on a Cisco physical switch.

  • Provide DHCP or DHCP Helper capabilities on all VLANs that are used by the management and VXLAN VMkernel ports. This setup simplifies the configuration by using DHCP to assign IP addresses based on the IP subnet in use.

  • Configure jumbo frames on all switch ports, inter-switch link (ISL) and switched virtual interfaces (SVIs).
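The trunk, PortFast, and jumbo frame practices above can be sketched on a host-facing ToR port as follows. This is an illustrative fragment assuming Cisco IOS-style syntax; the interface name, VLAN range, and MTU value are placeholders that depend on your platform and VLAN plan.

```
! Illustrative host-facing ToR port settings (Cisco IOS-style syntax).
! Interface name, VLAN range, and MTU are placeholders; adapt to your platform.
interface TwentyFiveGigE1/0/1
 description Uplink to ESXi host
 switchport mode trunk
 switchport trunk allowed vlan 1611-1614
 spanning-tree portfast trunk
 mtu 9216
```

Because virtual switches do not participate in DTP, the port is statically set to trunk mode rather than relying on negotiation.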

Top of Rack Connectivity and Network Settings

Each ESXi host is connected redundantly to the SDDC network fabric ToR switches by using a minimum of two 10-GbE ports (two 25-GbE ports are recommended). Configure the ToR switches to provide all necessary VLANs via an 802.1Q trunk. These redundant connections use features of vSphere Distributed Switch and NSX for vSphere to guarantee no physical interface is overrun and redundant paths are used as long as they are available.

Figure 1. Host-to-ToR Connection

In this Validated Design, each host is connected to a pair of ToR switches using a 25-GbE connection.

VLANs and Subnets

Each ESXi host uses VLANs and corresponding subnets. 

Follow these guidelines:

  • Use only /24 subnets to reduce confusion and mistakes when dealing with IPv4 subnetting.

  • Use the IP address .254 as the floating (virtual) gateway address, with .252 and .253 assigned to the redundant physical interfaces running Virtual Router Redundancy Protocol (VRRP) or Hot Standby Router Protocol (HSRP).

  • Use the RFC 1918 IPv4 address space for these subnets and allocate the second octet by region and the third octet by function. For example, the mapping 172.regionid.function.0/24 results in the following sample subnets.
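The gateway addressing guideline above can be sketched as follows, using the sample management subnet. This assumes Cisco IOS-style HSRP syntax; VRRP is configured analogously, and the VLAN ID and addresses are sample values.

```
! First-hop redundancy on a management SVI: .252 and .253 on the two
! physical switches, .254 as the shared floating gateway address.
interface Vlan1611
 ip address 172.16.11.252 255.255.255.0   ! peer switch uses .253
 standby 1 ip 172.16.11.254               ! floating (virtual) gateway
 standby 1 preempt
```

Hosts and VMkernel ports use .254 as their default gateway, so a failure of either physical switch does not change the gateway address in use.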

Note:

The following VLANs and IP ranges are samples. Your actual implementation depends on your environment.

Table 1. Sample Values for VLANs and IP Ranges

Cluster                 | Availability Zone                           | Function   | Sample VLAN              | Sample IP range
------------------------|---------------------------------------------|------------|--------------------------|----------------
Management              | Availability Zone 1 and Availability Zone 2 | Management | 1611 (Native, Stretched) | 172.16.11.0/24
Management              | Availability Zone 1                         | vMotion    | 1612                     | 172.16.12.0/24
Management              | Availability Zone 1                         | VXLAN      | 1614                     | 172.16.14.0/24
Management              | Availability Zone 1                         | vSAN       | 1613                     | 172.16.13.0/24
Management              | Availability Zone 2                         | Management | 1621                     | 172.16.21.0/24
Management              | Availability Zone 2                         | vMotion    | 1622                     | 172.16.22.0/24
Management              | Availability Zone 2                         | VXLAN      | 1614                     | 172.16.24.0/24
Management              | Availability Zone 2                         | vSAN       | 1623                     | 172.16.23.0/24
Shared Edge and Compute | Availability Zone 1 and Availability Zone 2 | Management | 1631 (Native, Stretched) | 172.16.31.0/24
Shared Edge and Compute | Availability Zone 1                         | vMotion    | 1632                     | 172.16.32.0/24
Shared Edge and Compute | Availability Zone 1                         | VXLAN      | 1634                     | 172.16.34.0/24
Shared Edge and Compute | Availability Zone 1                         | vSAN       | 1633                     | 172.16.33.0/24
Shared Edge and Compute | Availability Zone 2                         | Management | 1641                     | 172.16.41.0/24
Shared Edge and Compute | Availability Zone 2                         | vMotion    | 1642                     | 172.16.42.0/24
Shared Edge and Compute | Availability Zone 2                         | VXLAN      | 1634                     | 172.16.44.0/24
Shared Edge and Compute | Availability Zone 2                         | vSAN       | 1643                     | 172.16.43.0/24

Note:

Because NSX prepares the stretched cluster upon installation, the VXLAN VLANs have the same VLAN ID in both Availability Zone 1 and Availability Zone 2. The VLAN ID must exist in both availability zones but must map to a different routable IP space in each zone so that NSX virtual tunnel endpoints (VTEPs) can communicate.

Access Port Network Settings

Configure additional network settings on the access ports that connect the ToR switches to the corresponding servers.

Spanning Tree Protocol (STP)

Although this design does not use STP, switches usually come with it enabled by default. Designate the access ports as trunk PortFast so that they transition to the forwarding state immediately.

Trunking

Configure the VLANs as members of an 802.1Q trunk with the management VLAN acting as the native VLAN.

MTU

Set the MTU for all VLANs and SVIs to the jumbo frame value (for example, 9000 bytes) for consistency.

DHCP helper

Configure the virtual interface (VIF) of the management and VXLAN subnets as a DHCP relay (helper).

Multicast

Configure IGMP snooping on the ToR switches and include an IGMP querier on each VXLAN VLAN.
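Taken together, the access port settings in this section might look like the following. This is a sketch assuming Cisco IOS-style syntax; the interface names, VLAN IDs, and the DHCP server address 172.16.11.4 are hypothetical placeholders, and per-VLAN IGMP querier syntax varies by platform.

```
! Illustrative access-port and SVI settings (Cisco IOS-style syntax; values are placeholders).
interface TwentyFiveGigE1/0/1
 switchport mode trunk
 switchport trunk native vlan 1611        ! management VLAN as native VLAN
 switchport trunk allowed vlan 1611-1614
 spanning-tree portfast trunk
 mtu 9216
!
! DHCP relay on a VXLAN SVI
interface Vlan1614
 mtu 9216
 ip address 172.16.14.253 255.255.255.0
 ip helper-address 172.16.11.4            ! hypothetical DHCP server address
!
! IGMP snooping with a querier; per-VLAN querier configuration varies by platform
ip igmp snooping
ip igmp snooping querier
```

The same DHCP relay configuration would be repeated on the management SVI, and the IGMP querier must be present on each VXLAN VLAN.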

Connectivity Between Regions

The SDDC management networks, VXLAN kernel ports, and the edge and compute VXLAN kernel ports of the two regions must be connected. These connections can be over a VPN tunnel, point-to-point circuits, MPLS, and so on. End users must be able to reach the public-facing network segments (public management and tenant networks) of both regions.

The region interconnectivity design must support jumbo frames, and ensure that latency is less than 150 ms. For more details on the requirements for region interconnectivity, see the Cross-VC NSX Design Guide.

The design of a solution for region interconnectivity is out of scope for this VMware Validated Design.

Connectivity Between Availability Zones

Consider the following connectivity requirements for multiple availability zones:

  • The latency between availability zones in the SDDC must be less than 5 ms.
  • The network bandwidth must be 10 Gbps or higher.
  • To support failover of VLAN-backed appliances such as vCenter Server, NSX Manager, and NSX Controller nodes, the management VLAN must be stretched between availability zones.

The design of a solution for availability zone interconnectivity is out of scope for this VMware Validated Design.