NSX-T Data Center provides networking services, such as load balancing, routing, and virtual networking, to SDDC management workloads. NSX-T Data Center is connected to the region-specific Workspace ONE Access instance for central user management.

Figure 1. NSX-T Logical Design for the Management Domain

The three-node NSX-T Manager cluster is the central component of the logical design. It is connected to the two-node NSX-T Edge cluster and to the ESXi host transport nodes. For identity management, NSX-T Manager is connected to Workspace ONE Access. For access to the management hosts, NSX-T Manager is connected to the management domain vCenter Server. Users access NSX-T Manager by using the user interface or the API.
Figure 2. NSX-T Logical Design for the Management Domain for a Multi-Region SDDC
The three-node NSX-T Global Manager and NSX-T Local Manager clusters in each region are the central components of the logical design. The NSX-T Global Manager cluster in Region A is active, and the NSX-T Global Manager cluster in Region B is in standby mode. In each region, the NSX-T Local Manager instance is connected to the two-node NSX-T Edge cluster and to the ESXi host transport nodes. For identity management, all NSX-T Manager instances are connected to a region-specific Workspace ONE Access instance. For access to the management hosts in each region, the NSX-T Local Manager is connected to the management domain vCenter Server. Users access NSX-T Manager by using the user interface or the API.
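Programmatic access typically targets the NSX-T Manager REST API through the cluster VIP. As a minimal sketch, assuming a reachable VIP with basic authentication enabled (the FQDN and credentials below are placeholders), a request for the overall cluster status might be built like this:

```python
import base64

def build_cluster_status_request(vip_fqdn, user, password):
    """Build the URL and headers for an NSX-T cluster status query.

    GET /api/v1/cluster/status reports the health of the management
    cluster; basic authentication is one supported scheme.
    """
    url = f"https://{vip_fqdn}/api/v1/cluster/status"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Accept": "application/json",
    }
    return url, headers

# Placeholder VIP FQDN and credentials:
url, headers = build_cluster_status_request("nsx01.rainpole.io", "admin", "secret")
print(url)  # https://nsx01.rainpole.io/api/v1/cluster/status
```

In practice the request is sent with any HTTP client; certificate validation against the cluster VIP certificate applies as usual.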

An NSX-T Data Center deployment consists of these components:

  • Unified appliances that have both the NSX-T Local Manager and NSX-T Controller roles. They provide management and control plane capabilities.

  • NSX-T Edge nodes that provide north-south connectivity and advanced services such as load balancing.

  • ESXi hosts within the management domain, registered as NSX-T transport nodes to provide distributed routing and firewall services to management workloads.
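The Manager appliances above run as a three-node cluster, and cluster services remain available only while a majority of nodes is healthy, which is why the design tolerates exactly one node failure. A small illustrative sketch of the majority rule (plain Python, not NSX-T code):

```python
def has_quorum(total_nodes, healthy_nodes):
    """A cluster keeps quorum while a strict majority of its nodes is healthy."""
    return healthy_nodes > total_nodes // 2

# Three-node NSX-T Manager cluster:
print(has_quorum(3, 3))  # True: all nodes up
print(has_quorum(3, 2))  # True: survives a single node failure
print(has_quorum(3, 1))  # False: cluster services become unavailable
```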

To support a multi-region SDDC architecture, you add the following components:

  • An NSX-T Global Manager cluster in each of the first two regions.

    You deploy the NSX-T Global Manager cluster in each region so that you can use NSX-T Federation for global management of networking and security services.

  • An additional infrastructure VLAN in each region to carry region-to-region traffic.

Table 1. NSX-T Logical Components

For each component, the configuration is listed for a single region, for multiple availability zones, and for multiple regions.

NSX-T Manager Cluster

Single region:

  • Three medium-size NSX-T Local Manager nodes with an internal virtual IP (VIP) address for high availability

  • vSphere HA protects the NSX-T Manager cluster nodes by applying high restart priority

  • A vSphere DRS rule keeps the NSX-T Manager nodes running on different hosts

Multiple availability zones:

  • Three medium-size NSX-T Local Manager nodes in Availability Zone 1 with an internal virtual IP (VIP) address for high availability

  • A vSphere should-run DRS rule keeps the NSX-T Manager nodes running in Availability Zone 1. Failover to Availability Zone 2 occurs only if Availability Zone 1 fails.

  • In the availability zone, vSphere HA protects the cluster nodes by applying high restart priority

  • In the availability zone, a vSphere DRS rule keeps the nodes running on different hosts

Multiple regions:

In Region A:

  • Three medium-size NSX-T Global Manager nodes with an internal VIP address for high availability. The NSX-T Global Manager cluster is set as Active.

In Region B:

  • Three medium-size NSX-T Global Manager nodes with an internal VIP address for high availability. The NSX-T Global Manager cluster is set as Standby.

In each region:

  • Three medium-size NSX-T Local Manager nodes with an internal virtual IP (VIP) address for high availability

  • vSphere HA protects the nodes of both the NSX-T Global Manager and NSX-T Local Manager clusters by applying high restart priority

  • A vSphere DRS rule keeps the nodes of both the NSX-T Global Manager and NSX-T Local Manager clusters running on different hosts
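The effect of the DRS anti-affinity rule can be illustrated with a simple placement check (the node and host names are hypothetical, and this is plain Python, not a vSphere API call):

```python
from collections import Counter

def violates_anti_affinity(placement):
    """Return the hosts running more than one node of the same cluster.

    placement maps node name -> ESXi host. A DRS VM anti-affinity rule
    keeps each cluster node on a different host, so a compliant
    placement returns an empty list.
    """
    counts = Counter(placement.values())
    return sorted(host for host, n in counts.items() if n > 1)

# Hypothetical placements of the three Manager nodes:
ok = {"nsx-mgr-01": "esxi-01", "nsx-mgr-02": "esxi-02", "nsx-mgr-03": "esxi-03"}
bad = {"nsx-mgr-01": "esxi-01", "nsx-mgr-02": "esxi-01", "nsx-mgr-03": "esxi-03"}
print(violates_anti_affinity(ok))   # []
print(violates_anti_affinity(bad))  # ['esxi-01']
```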

NSX-T Edge Cluster

Single region:

  • Two medium-size NSX-T Edge nodes

  • vSphere HA protects the NSX-T Edge nodes by applying high restart priority

  • A vSphere DRS rule keeps the NSX-T Edge nodes running on different hosts

Multiple availability zones:

  • Two medium-size NSX-T Edge nodes in Availability Zone 1

  • A vSphere should-run DRS rule keeps the NSX-T Edge nodes running in Availability Zone 1. Failover to Availability Zone 2 occurs only if Availability Zone 1 fails.

  • In the availability zone, vSphere HA protects the NSX-T Edge nodes by applying high restart priority

  • In the availability zone, a vSphere DRS rule keeps the NSX-T Edge nodes running on different hosts

Multiple regions:

In each region:

  • Two medium-size NSX-T Edge nodes

  • vSphere HA protects the NSX-T Edge nodes by applying high restart priority

  • A vSphere DRS rule keeps the NSX-T Edge nodes running on different hosts

Transport Nodes

Single region:

  • Four ESXi host transport nodes

  • Two Edge transport nodes

Multiple availability zones:

  • Four ESXi host transport nodes in each availability zone

  • Two Edge transport nodes in Availability Zone 1

Multiple regions:

In each region:

  • Four ESXi host transport nodes

  • Two Edge transport nodes

Transport Zones

Single region:

  • One VLAN transport zone for NSX-T Edge uplink traffic

  • One overlay transport zone for SDDC management components and NSX-T Edge nodes

Multiple availability zones:

  • One VLAN transport zone for NSX-T Edge uplink traffic

  • One overlay transport zone for SDDC management components and NSX-T Edge nodes

Multiple regions:

In each region:

  • One VLAN transport zone for NSX-T Edge uplink traffic

  • One overlay transport zone for SDDC management components and NSX-T Edge nodes

VLANs and IP Subnets

Single region:

  • Management

  • vSphere vMotion

  • vSAN

  • Host Overlay

  • NFS

  • Uplink01

  • Uplink02

  • Edge Overlay

Multiple availability zones:

Networks for Availability Zone 1:

  • Management (stretched)

  • vSAN

  • vSphere vMotion

  • Host Overlay

  • NFS

  • Uplink01 (stretched)

  • Uplink02 (stretched)

  • Edge Overlay (stretched)

Networks for Availability Zone 2:

  • Management (stretched)

  • vSAN

  • vSphere vMotion

  • Host Overlay

Multiple regions:

In each region:

  • Management

  • vSphere vMotion

  • vSAN

  • Host Overlay

  • NFS

  • Uplink01

  • Uplink02

  • Edge Overlay

  • Edge RTEP, for a configuration with a single availability zone in each region

  • For a configuration with multiple availability zones in Region A: Edge RTEP (stretched) in Region A and Edge RTEP (regular) in the other regions

Routing Configuration

Single region:

  • BGP

Multiple availability zones:

  • BGP, with ingress and egress traffic routed through Availability Zone 1, with limited exceptions

Multiple regions:

  • BGP
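In a stretched-cluster design, egress traffic is typically steered toward Availability Zone 1 by assigning a higher BGP local preference to routes learned there, while ingress is steered by prepending the AS path on advertisements from Availability Zone 2. A minimal sketch of the resulting path choice (illustrative only; real BGP best-path selection has more steps than these two attributes):

```python
def preferred_path(paths):
    """Pick the path with the highest LOCAL_PREF; break ties on
    the shortest AS path. Each path is a dict with 'local_pref'
    and 'as_path' keys."""
    return max(paths, key=lambda p: (p["local_pref"], -len(p["as_path"])))

# Hypothetical routes for the same prefix, one per availability zone:
paths = [
    {"zone": "az1", "local_pref": 200, "as_path": [65001, 65100]},
    {"zone": "az2", "local_pref": 100, "as_path": [65002, 65100]},
]
print(preferred_path(paths)["zone"])  # az1
```

With equal local preference, a prepended (longer) AS path from Availability Zone 2 would produce the same outcome for the reverse direction.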