In an environment with multiple availability zones, the physical infrastructure must stretch Layer 2 networks between the availability zones. You must also provide a Layer 3 gateway that is highly available across the availability zones. The method for stretching these Layer 2 networks and for providing a highly available Layer 3 gateway is vendor-specific.

VLANs and Subnets for Multiple Availability Zones

This section shows a sample configuration for an environment with multiple availability zones. The management, Uplink01, Uplink02, and Edge Overlay networks must be stretched between the availability zones to facilitate failover of the NSX-T Edge appliances between availability zones. The Layer 3 gateway for the management and Edge Overlay networks must be highly available across the availability zones.
Note: The management network VLAN can be the same for the management domain and VI workload domains, although the tables below show an example where these VLANs are different (1611 for the management domain and 1631 for the workload domain).
Table 1. Management Domain VLAN and IP Subnet Requirements

| Function | Availability Zone 1 | Availability Zone 2 | VLAN ID | IP Range | HA Layer 3 Gateway | Recommended MTU |
|----------|---------------------|---------------------|---------|----------|--------------------|-----------------|
| Management (AZ1 and AZ2) | ✓ | ✓ | 1611 (Stretched) | 172.16.11.0/24 | ✓ | 1500 |
| vSphere vMotion | ✓ | ✗ | 1612 | 172.16.12.0/24 | | 9000 |
| vSAN | ✓ | ✗ | 1613 | 172.16.13.0/24 | | 9000 |
| NSX-T Host Overlay | ✓ | ✗ | 1614 | 172.16.14.0/24 | | 9000 |
| NSX-T Edge Uplink01 | ✓ | ✓ | 2711 (Stretched) | 172.27.11.0/24 | ✗ | 9000 |
| NSX-T Edge Uplink02 | ✓ | ✓ | 2712 (Stretched) | 172.27.12.0/24 | ✗ | 9000 |
| NSX-T Edge Overlay | ✓ | ✓ | 2713 (Stretched) | 172.27.13.0/24 | ✓ | 9000 |
| vSphere vMotion | ✗ | ✓ | 1622 | 172.16.22.0/24 | | 9000 |
| vSAN | ✗ | ✓ | 1623 | 172.16.23.0/24 | | 9000 |
| NSX-T Host Overlay | ✗ | ✓ | 1624 | 172.16.24.0/24 | | 9000 |
Note: If a VLAN is stretched between AZ1 and AZ2, the data center must provide appropriate routing and failover of the Layer 3 gateway for that network.
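
A plan like Table 1 can be checked mechanically before deployment. The following is a minimal sketch, in Python, of such a validation: unique VLAN IDs, non-overlapping subnets, HA Layer 3 gateways on the stretched management and Edge Overlay networks, and jumbo MTU on the overlay networks. The script and its validate function are illustrative only, not part of any VMware tooling; the plan data mirrors the sample values above.

```python
import ipaddress

# (function, vlan_id, subnet, stretched, ha_gateway, mtu) -- sample values from Table 1
PLAN = [
    ("Management",               1611, "172.16.11.0/24", True,  True,  1500),
    ("vSphere vMotion (AZ1)",    1612, "172.16.12.0/24", False, False, 9000),
    ("vSAN (AZ1)",               1613, "172.16.13.0/24", False, False, 9000),
    ("NSX-T Host Overlay (AZ1)", 1614, "172.16.14.0/24", False, False, 9000),
    ("NSX-T Edge Uplink01",      2711, "172.27.11.0/24", True,  False, 9000),
    ("NSX-T Edge Uplink02",      2712, "172.27.12.0/24", True,  False, 9000),
    ("NSX-T Edge Overlay",       2713, "172.27.13.0/24", True,  True,  9000),
    ("vSphere vMotion (AZ2)",    1622, "172.16.22.0/24", False, False, 9000),
    ("vSAN (AZ2)",               1623, "172.16.23.0/24", False, False, 9000),
    ("NSX-T Host Overlay (AZ2)", 1624, "172.16.24.0/24", False, False, 9000),
]

def validate(plan):
    problems = []
    seen_vlans = set()
    seen_subnets = []
    for name, vlan, cidr, stretched, ha_gw, mtu in plan:
        net = ipaddress.ip_network(cidr)
        if vlan in seen_vlans:
            problems.append(f"{name}: VLAN ID {vlan} is already in use")
        seen_vlans.add(vlan)
        for other_name, other_net in seen_subnets:
            if net.overlaps(other_net):
                problems.append(f"{name}: {net} overlaps {other_name} ({other_net})")
        seen_subnets.append((name, net))
        # Stretched networks need an HA Layer 3 gateway; per Table 1, the
        # NSX-T Edge uplink VLANs are the exception.
        if stretched and not ha_gw and "Uplink" not in name:
            problems.append(f"{name}: stretched VLAN without an HA Layer 3 gateway")
        # Geneve-carrying networks need an MTU of 1600 or greater.
        if "Overlay" in name and mtu < 1600:
            problems.append(f"{name}: overlay MTU {mtu} is below the Geneve minimum of 1600")
    return problems

issues = validate(PLAN)
for issue in issues:
    print(issue)
if not issues:
    print("Plan is consistent.")
```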
Table 2. Workload Domain VLAN and IP Subnet Requirements

| Function | Availability Zone 1 | Availability Zone 2 | VLAN ID | IP Range | HA Layer 3 Gateway |
|----------|---------------------|---------------------|---------|----------|--------------------|
| Management (AZ1 and AZ2) | ✓ | ✓ | 1631 (Stretched) | 172.16.31.0/24 | ✓ |
| vSphere vMotion | ✓ | ✗ | 1632 | 172.16.32.0/24 | |
| vSAN | ✓ | ✗ | 1633 | 172.16.33.0/24 | |
| Host Overlay | ✓ | ✗ | 1634 | 172.16.34.0/24 | |
| vSphere vMotion | ✗ | ✓ | 2732 | 172.27.32.0/24 | |
| vSAN | ✗ | ✓ | 2733 | 172.27.33.0/24 | |
| Host Overlay | ✗ | ✓ | 2734 | 172.27.34.0/24 | |

Networking for Multiple Availability Zones

There are specific physical data center network requirements for a topology with multiple availability zones.

Table 3. Physical Network Requirements for Multiple Availability Zones

MTU

  • VLANs that are stretched between availability zones must meet the same requirements as the VLANs for intra-zone connections, including MTU.

  • The MTU value must be consistent end-to-end, including the components on the inter-zone networking path.

  • Set the MTU for all VLANs and SVIs (management, vMotion, Geneve, and storage) to jumbo frames for consistency. The Geneve overlay requires an MTU of 1600 or greater (see the sketch after this list).
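
The 1600-byte minimum follows from the overhead that Geneve encapsulation adds to every workload frame: an outer IPv4 header, an outer UDP header, the Geneve base header, and the encapsulated inner Ethernet header. A minimal sketch of the arithmetic, assuming a standard 1500-byte workload MTU and no Geneve options:

```python
# Standard header sizes, in bytes, added by Geneve encapsulation.
OUTER_IPV4 = 20      # outer IPv4 header
OUTER_UDP = 8        # outer UDP header
GENEVE_BASE = 8      # Geneve base header (options add more)
INNER_ETHERNET = 14  # encapsulated inner Ethernet header

def required_underlay_mtu(workload_mtu=1500, geneve_options=0):
    """Smallest underlay MTU that carries a workload frame unfragmented."""
    return (workload_mtu + INNER_ETHERNET + GENEVE_BASE
            + geneve_options + OUTER_UDP + OUTER_IPV4)

print(required_underlay_mtu())      # 1550 -> hence the >= 1600 guidance,
                                    # which leaves headroom for Geneve options
print(required_underlay_mtu(8950))  # a 9000-byte underlay MTU carries
                                    # workload frames of about 8950 bytes
```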

Layer 3 gateway availability

For VLANs that are stretched between availability zones, configure a data center-provided method, for example, VRRP or HSRP, to fail over the Layer 3 gateway between availability zones.

DHCP availability

For VLANs that are stretched between availability zones, provide high availability for the DHCP server so that the failover of a single availability zone does not impact DHCP availability.
Note: You cannot stretch a cluster that uses static IP addresses for the NSX-T Host Overlay Network TEPs.

BGP routing

The data center in each availability zone must have its own Autonomous System Number (ASN).

Ingress and egress traffic

  • For VLANs that are stretched between availability zones, traffic flows in and out of a single zone. Local egress is not supported.

  • For VLANs that are not stretched between availability zones, traffic flows in and out of the zone where the VLAN is located.

  • For NSX-T virtual network segments that are stretched between availability zones, traffic flows in and out of a single availability zone. Local egress is not supported.

Latency

  • Maximum network latency between NSX-T Managers is 10 ms.

  • Maximum network latency between the NSX-T Manager cluster and transport nodes is 150 ms.
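
A minimal sketch of verifying these limits with the Linux ping utility follows. The addresses are placeholders for your NSX-T Manager and transport node IPs, and the script assumes you run it from one endpoint of each path (for example, from an NSX-T Manager appliance) with ping on the PATH.

```python
import re
import subprocess

# Latency limits from this section, in milliseconds.
LIMITS_MS = {
    "NSX-T Manager to NSX-T Manager": 10.0,
    "NSX-T Manager cluster to transport node": 150.0,
}

def avg_rtt_ms(host, count=5):
    """Average ICMP round-trip time to host, parsed from Linux ping output."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    # Summary line looks like: rtt min/avg/max/mdev = 0.321/0.512/0.802/0.110 ms
    return float(re.search(r"= [\d.]+/([\d.]+)/", out).group(1))

# Placeholder addresses -- substitute your own NSX-T Manager and host TEP IPs.
CHECKS = [
    ("NSX-T Manager to NSX-T Manager", "172.16.11.66"),
    ("NSX-T Manager cluster to transport node", "172.16.14.101"),
]

for kind, host in CHECKS:
    rtt = avg_rtt_ms(host)
    verdict = "within limit" if rtt <= LIMITS_MS[kind] else "EXCEEDS LIMIT"
    print(f"{kind} ({host}): {rtt:.2f} ms, {verdict}")
```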