Following the vSphere design, the NSX for vSphere design consists of a management stack and a compute/edge stack in each region.

Management Stack

In the management stack, the underlying hosts are prepared for NSX for vSphere. The management stack has these components.

  • NSX Manager instances for both stacks (management stack and compute/edge stack)

  • NSX Controller cluster for the management stack

  • NSX ESG and DLR control VMs for the management stack

Compute/Edge Stack

In the compute/edge stack, the underlying hosts are prepared for NSX for vSphere. The compute/edge stack has these components.

  • NSX Controller cluster for the compute stack

  • All NSX Edge service gateways and DLR control VMs of the compute stack that are dedicated to handling the north/south traffic in the data center. A shared edge and compute stack helps prevent VLAN sprawl because any external VLANs need only be trunked to the hosts in this cluster.

Table 1. vSphere Cluster Design Decisions

Decision ID: SDDC-VI-SDN-007
Design Decision: For the compute stack, do not use a dedicated edge cluster.
Design Justification: Simplifies configuration and minimizes the number of hosts required for initial deployment.
Design Implications: The NSX Controller instances, NSX Edge services gateways, and DLR control VMs of the compute stack are deployed in the shared edge and compute cluster. Because the cluster is shared, it must be scaled out as compute workloads are added so that network performance is not affected.

Decision ID: SDDC-VI-SDN-008
Design Decision: For the management stack, do not use a dedicated edge cluster.
Design Justification: The number of supported management applications does not justify the cost of a dedicated edge cluster in the management stack.
Design Implications: The NSX Controller instances, NSX Edge service gateways, and DLR control VMs of the management stack are deployed in the management cluster.

Decision ID: SDDC-VI-SDN-009
Design Decision: Apply vSphere Distributed Resource Scheduler (DRS) anti-affinity rules to the NSX components in both stacks.
Design Justification: DRS prevents the controllers from running on the same ESXi host, which would jeopardize their high availability.
Design Implications: Additional configuration is required to set up anti-affinity rules.

The logical design of NSX considers the vCenter Server clusters and defines where each NSX component runs.

Figure 1. Cluster Design for NSX for vSphere


NSX Manager instances for both management and compute workloads run in the management cluster. The NSX Controller nodes, NSX Edge services gateways, and distributed logical routers for the management applications run in the management cluster. The NSX Controller nodes, NSX Edge services gateways, and distributed logical routers for the compute workloads run in the shared edge and compute cluster.

High Availability of NSX for vSphere Components

The NSX Manager instances of both stacks run on the management cluster. vSphere HA protects the NSX Manager instances by restarting an NSX Manager VM on a different host if the host it runs on fails.

The NSX Controller nodes of the management stack run on the management cluster. The NSX for vSphere Controller nodes of the compute stack run on the shared edge and compute cluster. In both clusters, vSphere Distributed Resource Scheduler (DRS) rules ensure that NSX for vSphere Controller nodes do not run on the same host.
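The intent of these DRS anti-affinity rules can be illustrated with a small placement check. This is a hypothetical sketch only, not vSphere or NSX API code; the VM and host names are made up. A rule is satisfied only when no two of its member VMs run on the same host.

```python
# Illustrative model of a DRS VM-VM anti-affinity rule for NSX Controller
# nodes. Real rules are configured in vSphere; this sketch only checks a
# proposed VM-to-host placement against the rule's intent.

def violates_anti_affinity(placement):
    """Return True if any two rule members share an ESXi host.

    placement: dict mapping VM name -> host name.
    """
    hosts_seen = set()
    for vm, host in placement.items():
        if host in hosts_seen:
            return True
        hosts_seen.add(host)
    return False

# Three controllers spread across three hosts: the rule is satisfied.
ok = {"nsx-controller-1": "esxi-01",
      "nsx-controller-2": "esxi-02",
      "nsx-controller-3": "esxi-03"}

# Two controllers placed on the same host: the rule is violated.
bad = dict(ok, **{"nsx-controller-3": "esxi-01"})
```

A single host failure then takes down at most one controller node, which is what preserves the control-plane quorum.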

The data plane remains active during outages in the management and control planes, although provisioning and modification of virtual networks are impaired until those planes become available again.

The NSX Edge service gateways and DLR control VMs of the compute stack are deployed on the shared edge and compute cluster. The NSX Edge service gateways and DLR control VMs of the management stack run on the management cluster.

NSX Edge components that are deployed for north/south traffic are configured in equal-cost multi-path (ECMP) mode that supports route failover in seconds. NSX Edge components deployed for load balancing utilize NSX HA. NSX HA provides faster recovery than vSphere HA alone because NSX HA uses an active/passive pair of NSX Edge devices. By default the passive Edge device becomes active within 15 seconds. All NSX Edge devices are also protected by vSphere HA.
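The ECMP failover behavior described above can be sketched as a toy model. This is illustrative only: real ECMP hashing and convergence happen in the routing stack, and the edge names and flow identifier below are hypothetical. A flow is hashed onto one of the healthy edges; when that edge fails, the same flow deterministically converges onto one of the remaining edges.

```python
# Toy model of ECMP next-hop selection across NSX Edge devices.
# Illustrative only: it demonstrates why losing one ECMP member leaves
# traffic flowing, not how the DLR or physical routers actually hash.

import hashlib

def pick_edge(flow_id, edges):
    """Deterministically map a flow to one healthy edge."""
    healthy = sorted(e for e, up in edges.items() if up)
    if not healthy:
        raise RuntimeError("no north/south path available")
    digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    return healthy[digest % len(healthy)]

edges = {"edge-01": True, "edge-02": True, "edge-03": True, "edge-04": True}
flow = "10.0.0.5->198.51.100.7"

before = pick_edge(flow, edges)
edges[before] = False          # simulate failure of the chosen edge
after = pick_edge(flow, edges) # the flow lands on a surviving edge
```

In contrast, the active/passive NSX HA pair used for load balancing keeps state on the passive device and promotes it on failure, which is why it recovers in roughly 15 seconds rather than at routing speed.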

Scalability of NSX Components

NSX Manager instances map one-to-one to vCenter Server instances. If the inventory of either the management stack or the compute stack exceeds the limits supported by a single vCenter Server, you can deploy a new vCenter Server instance, and you must also deploy a new NSX Manager instance. You can extend transport zones by adding shared edge and compute clusters and compute clusters until you reach the vCenter Server limits. Consider the limit of 100 DLRs per ESXi host, although an environment usually reaches other vCenter Server limits before the DLR limit.
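As a rough illustration of this capacity check, the sketch below flags hosts that exceed the per-host DLR limit stated above. The 100-DLR figure comes from the text; the host names and counts are hypothetical, and other vCenter Server maximums vary by version and are deliberately not modeled.

```python
# Illustrative capacity check based on the stated limit of at most
# 100 DLR instances per ESXi host. Host names and counts are made up.

MAX_DLRS_PER_HOST = 100

def hosts_over_dlr_limit(dlrs_per_host, limit=MAX_DLRS_PER_HOST):
    """Return the hosts whose DLR count exceeds the per-host limit.

    dlrs_per_host: dict mapping host name -> number of DLR instances.
    """
    return sorted(h for h, n in dlrs_per_host.items() if n > limit)

# esxi-02 exceeds the limit and would be flagged during capacity planning.
flagged = hosts_over_dlr_limit({"esxi-01": 40, "esxi-02": 120})
```

A check like this belongs in capacity planning rather than day-to-day operations, since in practice other inventory limits are usually reached first.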