The NSX-T design uses a management cluster and a shared edge and compute cluster. You can add more compute clusters for scale-out, or to support different workload types or SLAs.

The logical NSX-T design considers the vSphere clusters and defines where each NSX-T component runs.

Figure 1. NSX-T Cluster Design

Management Cluster

The management cluster contains all components for managing the SDDC. This cluster is a core component of the VMware Validated Design for Software-Defined Data Center. For information about the management cluster design, see the Architecture and Design documentation in VMware Validated Design for Software-Defined Data Center.

NSX-T Edge Node Cluster

The NSX-T Edge cluster is a logical grouping of NSX-T Edge virtual machines. These NSX-T Edge virtual machines run in the vSphere shared edge and compute cluster and provide North-South routing for the workloads in the compute clusters.
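
For illustration, the following Python sketch groups two edge virtual machines that are already registered as transport nodes into an NSX-T Edge cluster by calling the NSX-T Manager REST API. The manager FQDN, credentials, display name, and transport node IDs are placeholders, not values from this design.

```python
# Sketch: create an NSX-T Edge cluster from two existing edge transport nodes
# through the NSX-T Manager API (POST /api/v1/edge-clusters).
import requests

NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "example-password")            # replace with real credentials
EDGE_NODE_IDS = ["<edge-tn-uuid-1>", "<edge-tn-uuid-2>"]  # existing transport nodes

body = {
    "display_name": "edge-cluster-01",          # hypothetical cluster name
    "members": [{"transport_node_id": tn} for tn in EDGE_NODE_IDS],
}

resp = requests.post(f"{NSX_MANAGER}/api/v1/edge-clusters",
                     json=body, auth=AUTH,
                     verify=False)              # lab only; verify certificates in production
resp.raise_for_status()
print("Edge cluster ID:", resp.json()["id"])
```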

Shared Edge and Compute Cluster

In the shared edge and compute cluster, ESXi hosts are prepared for NSX-T. As a result, they can be configured as transport nodes and can participate in the overlay network. All tenant workloads and NSX-T Edge virtual machines run in this cluster.
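
As a sketch of the transport node preparation, the following Python example attaches an ESXi host that is already part of the NSX-T fabric to an overlay transport zone. The IDs, N-VDS name, physical NIC, and IP pool are placeholders, and the exact payload fields vary between NSX-T versions, so treat this as an outline rather than a definitive call.

```python
# Sketch: configure an ESXi fabric node as a transport node so that it can
# participate in the overlay network (POST /api/v1/transport-nodes).
import requests

NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "example-password")

body = {
    "display_name": "esx01-transport-node",
    "node_id": "<esxi-fabric-node-uuid>",       # host already registered in the fabric
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [{
            "host_switch_name": "nvds01",
            # An uplink profile is normally referenced here as well via
            # host_switch_profile_ids; omitted to keep the sketch short.
            "pnics": [{"device_name": "vmnic2", "uplink_name": "uplink-1"}],
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec",
                "ip_pool_id": "<tep-ip-pool-uuid>",   # tunnel endpoint addresses
            },
            "transport_zone_endpoints": [
                {"transport_zone_id": "<overlay-tz-uuid>"}
            ],
        }],
    },
}

resp = requests.post(f"{NSX_MANAGER}/api/v1/transport-nodes",
                     json=body, auth=AUTH, verify=False)  # lab only
resp.raise_for_status()
```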

Table 1. Cluster Design Decisions

Decision ID: NSXT-VI-SDN-009
Design Decision: For the compute stack, do not dedicate an edge cluster.
Design Justification: Simplifies configuration and minimizes the number of ESXi hosts required for initial deployment.
Design Implications: The NSX-T Edge virtual machines are deployed in the shared edge and compute cluster. Because of the shared nature of the cluster, you must scale out the cluster as compute workloads are added to avoid an impact on network performance.

Decision ID: NSXT-VI-SDN-010
Design Decision: Deploy at least two large-size NSX-T Edge virtual machines in the shared edge and compute cluster.
Design Justification: Creates the NSX-T Edge cluster and meets availability and scale requirements.
Design Implications: You must add the edge virtual machines as transport nodes before you add them to the NSX-T Edge cluster.

Decision ID: NSXT-VI-SDN-011
Design Decision: Apply vSphere Distributed Resource Scheduler (vSphere DRS) anti-affinity rules to the NSX-T Controllers.
Design Justification: Prevents the controllers from running on the same ESXi host, which would compromise their high availability.
Design Implications: Requires at least four physical hosts to guarantee that the three NSX-T Controllers continue to run if an ESXi host fails. Additional configuration is required to set up the anti-affinity rules.

Decision ID: NSXT-VI-SDN-012
Design Decision: Apply vSphere DRS anti-affinity rules to the virtual machines of the NSX-T Edge cluster.
Design Justification: Prevents the NSX-T Edge virtual machines from running on the same ESXi host, which would compromise their high availability.
Design Implications: Additional configuration is required to set up the anti-affinity rules.
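
Decisions NSXT-VI-SDN-011 and NSXT-VI-SDN-012 can be implemented in several ways. As a minimal sketch only, the following pyVmomi code creates a DRS VM-VM anti-affinity rule for two NSX-T Edge virtual machines; the same pattern applies to the NSX-T Controllers in the management cluster. The vCenter Server address, credentials, cluster name, and VM names are all hypothetical placeholders.

```python
# Sketch: DRS VM-VM anti-affinity rule that keeps the NSX-T Edge virtual
# machines on separate ESXi hosts. All names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",            # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="example-password",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()

def find_by_name(vim_type, name):
    # Walk the inventory and return the first managed object with this name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim_type], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "shared-edge-and-compute")
edge_vms = [find_by_name(vim.VirtualMachine, n) for n in ("edge01", "edge02")]

# A mandatory rule keeps the VMs apart even under DRS resource pressure.
rule = vim.cluster.AntiAffinityRuleSpec(
    name="anti-affinity-nsxt-edge", enabled=True, mandatory=True, vm=edge_vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)
```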

High Availability of NSX-T Components

The NSX-T Manager runs on the management cluster. vSphere HA protects the NSX-T Manager by restarting its virtual machine on a different ESXi host if the host that runs it fails.

The NSX-T Controller nodes also run on the management cluster. vSphere DRS anti-affinity rules prevent NSX-T Controller nodes from running on the same ESXi host.

The data plane remains active during outages in the management and control planes, although provisioning and modification of virtual networks are impaired until those planes become available again.

The NSX-T Edge virtual machines are deployed on the shared edge and compute cluster. vSphere DRS anti-affinity rules prevent NSX-T Edge virtual machines that belong to the same NSX-T Edge cluster from running on the same ESXi host.

NSX-T service routers (SRs) that provide North-South routing are configured in equal-cost multi-path (ECMP) mode, which supports route failover in seconds.
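
One way to realize this configuration is to set the Tier-0 gateway's HA mode to active-active through the NSX-T Policy API, which spreads North-South traffic across the SRs on the edge nodes. In this hedged sketch, the manager FQDN, credentials, and gateway ID are placeholders.

```python
# Sketch: configure a Tier-0 gateway in ACTIVE_ACTIVE (ECMP) mode through
# the NSX-T Policy API (PATCH /policy/api/v1/infra/tier-0s/<id>).
import requests

NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "example-password")

tier0 = {
    "display_name": "t0-gateway-01",            # hypothetical gateway name
    "ha_mode": "ACTIVE_ACTIVE",                 # ECMP across the edge node SRs
}

resp = requests.patch(f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/t0-gateway-01",
                      json=tier0, auth=AUTH, verify=False)  # lab only
resp.raise_for_status()
```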