VMware Validated Design provides alternative guidance for implementing an SDDC that contains two availability zones. You configure vSAN stretched clusters in the management domain and the workload domains to create a second availability zone. The SDDC continues operating during host maintenance or if a loss of one availability zone occurs.

In a stretched cluster configuration, both availability zones are active. Because virtual machine writes are committed synchronously to both availability zones, if a failure occurs in either availability zone, the virtual machines are restarted in the operational availability zone.

Overview of vSAN Stretched Cluster

Virtual machine write operations are performed synchronously across both availability zones. Each availability zone holds a copy of the data, and witness components are placed on a witness host in a third location in the SDDC. Because of the distance and latency requirements, multiple availability zones are typically used in metropolitan or campus environments.

Extending the management cluster to a vSAN stretched cluster provides the following advantages:

  • Increased availability with minimal downtime and data loss

  • Inter-site load balancing

Using a vSAN stretched cluster for the management components has the following disadvantages:

  • Increased footprint

  • Symmetrical host configuration in the two availability zones

  • Distance and latency requirements between the two availability zones

  • Additional setup and more complex Day-2 operations

  • Licensing requirements

Regions and Availability Zones

In the multi-availability zone version of the VMware Validated Design, you have two availability zones in Region A.


| Availability Zone | Availability Zone and Region Identifier | Region-Specific Domain Name |
| Region A, Availability Zone 1 | | |
| Region A, Availability Zone 2 | | |
| Region B | | |

Physical Infrastructure

You must use homogeneous physical servers across availability zones. In each availability zone, you replicate the hosts for the first cluster in the management domain and for the shared edge and workload cluster in a workload domain, and you place the hosts in the same rack.

Figure 1. Infrastructure Architecture for Multiple Availability Zones

Component Layout with Two Availability Zones

The management components of the SDDC run in Availability Zone 1. If an outage or overload occurs in Availability Zone 1, they can be migrated to Availability Zone 2.

You can start deploying the SDDC in a single availability zone configuration, and then extend the environment with the second availability zone.

Figure 2. vSphere Logical Cluster Layout for Multiple Availability Zones for the Management Domain

Network Configuration

NSX-T Edge nodes connect to the top of rack switches in each data center to support northbound uplinks and route peering for SDN network advertisement. Each uplink connection is specific to the top of rack switch that it terminates on.

Figure 3. Dynamic Routing in Multiple Availability Zones

If an outage of an availability zone occurs, vSphere HA fails over the NSX-T Edge appliances to the other availability zone. Availability Zone 2 must provide a network infrastructure analogous to the one that the edge nodes connect to in Availability Zone 1.

The management network in the primary availability zone and the Uplink 01, Uplink 02, and Edge Overlay networks in each availability zone must be stretched to facilitate failover of the NSX-T Edge appliances between availability zones. The Layer 3 gateway for the management and Edge Overlay networks must be highly available across the availability zones.

The network between the availability zones must support jumbo frames, and its round-trip latency must be less than 5 ms. For the best and most predictable vSAN performance (IOPS), use a 25-GbE connection between the availability zones.
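As a quick check of these requirements, you can verify end-to-end jumbo-frame support and latency from an ESXi host by using vmkping. A minimal sketch, assuming vmk1 is the local vSAN VMkernel adapter and 172.16.11.10 is a vSAN VMkernel address of a host in the other availability zone (both names are illustrative):

```shell
# Send 8972-byte payloads (9000-byte MTU minus IP/ICMP headers) with
# the don't-fragment flag (-d) from the vSAN VMkernel adapter (-I vmk1).
# Packet loss here indicates jumbo frames are not supported end to end.
vmkping -I vmk1 -d -s 8972 172.16.11.10
```

The round-trip times that vmkping reports must stay below the 5 ms requirement for the stretched cluster to be supportable.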

Table 1. Networks That Are Stretched Across Availability Zones

| Stretched Network | Requires HA Layer 3 Gateway |
| Management for Availability Zone 1 | Yes |
| Uplink 01 | No |
| Uplink 02 | No |
| Edge overlay | Yes |

Witness Appliance

When using two availability zones, deploy a vSAN witness appliance in a location that is not local to the ESXi hosts in any of the availability zones.

VMware Validated Design uses vSAN witness traffic separation, which lets you place vSAN witness traffic on a VMkernel adapter other than the adapter that carries vSAN data traffic. In this design, you configure vSAN witness traffic in the following way:

  • On each management ESXi host in both availability zones, the vSAN witness traffic is placed on the management VMkernel adapter.
  • On the vSAN witness appliance, you use the same VMkernel adapter for both management and witness traffic.
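The witness traffic placement above can be sketched with esxcli commands. A sketch, assuming vmk0 is the management VMkernel adapter on a management ESXi host (the adapter name is illustrative and depends on your host configuration):

```shell
# On each management ESXi host in both availability zones, tag the
# management VMkernel adapter to carry vSAN witness traffic.
esxcli vsan network ip add -i vmk0 -T=witness

# Verify which traffic types are assigned to each VMkernel adapter.
esxcli vsan network list
```

vSAN data traffic remains on its dedicated VMkernel adapter; only the witness traffic to the third site travels over the management network.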
Figure 4. vSAN Witness Network Design in the Management Domain

Figure 5. vSAN Witness Network Design in a Virtual Infrastructure Workload Domain