As a best practice, deploy a highly available NSX-T Manager instance so that the NSX-T central control plane can continue propagating configuration to the transport nodes. You also select an NSX-T Manager appliance size according to the number of ESXi hosts required to run the SDDC management components.

Deployment Type

You can deploy NSX-T Manager in a one-node configuration or as a cluster for high availability.

Table 1. Design Decisions on the NSX-T Manager Deployment Type

Decision ID: SDDC-MGMT-VI-SDN-011

Design Decision: Deploy three NSX-T Manager nodes in the first cluster in the management domain for configuring and managing the network services for the SDDC management components.

Design Justification: SDDC management components can be placed on isolated virtual networks and can use load balancing, logical switching, dynamic routing, and logical firewall services.

Design Implication:

  • You must turn on vSphere HA in the first cluster in the management domain.

  • The first cluster in the management domain requires four physical ESXi hosts for vSphere HA and for high availability of the NSX-T Manager cluster.
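To validate that the three-node NSX-T Manager cluster has formed and is healthy before you continue with the network design, you can query the NSX-T REST API. The following is a minimal sketch, not part of the design itself: the manager FQDN and credential handling are placeholders, and it assumes the GET /api/v1/cluster/status call of the NSX-T Data Center API, whose response fields can vary between NSX-T versions.

```python
# Minimal sketch: check that the NSX-T Manager cluster reports a stable status.
# Assumptions: manager FQDN nsx-mgmt.example.local (placeholder), admin
# credentials in the NSX_USER and NSX_PASSWORD environment variables, and the
# GET /api/v1/cluster/status endpoint (response fields may vary by version).
import os

import requests

NSX_MANAGER = "https://nsx-mgmt.example.local"

session = requests.Session()
session.auth = (os.environ["NSX_USER"], os.environ["NSX_PASSWORD"])
session.verify = False  # lab only; keep certificate verification on in production

response = session.get(f"{NSX_MANAGER}/api/v1/cluster/status")
response.raise_for_status()
status = response.json()

# A healthy three-node cluster is expected to report a STABLE management and
# control cluster status.
print("Management cluster:", status.get("mgmt_cluster_status", {}).get("status"))
print("Control cluster:   ", status.get("control_cluster_status", {}).get("status"))
```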

Sizing Compute and Storage Resources for NSX-T Manager

When you size the resources for NSX-T management components, consider the compute and storage requirements for each component, and the number of nodes per component type.

Table 2. NSX-T Manager Resource Specification

Appliance Size | vCPU | Memory (GB) | Storage (GB) | Scale
Extra-Small    | 2    | 8           | 300          | Cloud Service Manager only
Small          | 4    | 16          | 300          | Proof of concept
Medium         | 6    | 24          | 300          | Up to 64 ESXi hosts
Large          | 12   | 48          | 300          | More than 64 ESXi hosts
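The appliance size is driven mainly by the number of ESXi hosts that the NSX-T Manager cluster must serve. As an illustration only (not a VMware tool), the following helper encodes the sizing thresholds from Table 2 so that the selection can be expressed in automation; the function name and its return format are hypothetical.

```python
# Illustrative helper (hypothetical, not a VMware tool): map an ESXi host count
# to an NSX-T Manager appliance size according to Table 2. The extra-small size
# is excluded because it applies to the Cloud Service Manager only.
def nsx_manager_appliance_size(esxi_host_count: int, proof_of_concept: bool = False) -> dict:
    """Return the appliance size and resource footprint for a given host count."""
    if proof_of_concept:
        return {"size": "Small", "vcpu": 4, "memory_gb": 16, "storage_gb": 300}
    if esxi_host_count <= 64:
        return {"size": "Medium", "vcpu": 6, "memory_gb": 24, "storage_gb": 300}
    return {"size": "Large", "vcpu": 12, "memory_gb": 48, "storage_gb": 300}


# A four-host management domain maps to the medium size, which matches design
# decision SDDC-MGMT-VI-SDN-012 below.
print(nsx_manager_appliance_size(4))    # Medium
print(nsx_manager_appliance_size(100))  # Large
```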

Table 3. Design Decisions on Sizing Resources for NSX-T Manager

Decision ID: SDDC-MGMT-VI-SDN-012

Design Decision: Deploy each node in the NSX-T Manager cluster for the management domain as a medium-size appliance or larger.

Design Justification: A medium-size appliance is sufficient for providing network services to the SDDC management components.

Design Implication: If you extend the management domain, increasing the size of the NSX-T Manager appliances might be required.

High Availability of NSX-T Manager in a Single Region

The NSX-T Manager cluster runs on the first cluster in the management domain. vSphere HA protects the NSX-T Manager appliances by restarting an appliance on a different ESXi host if the host on which the appliance runs fails.

Table 4. Design Decisions on the High Availability Configuration for NSX-T Manager

Decision ID: SDDC-MGMT-VI-SDN-013

Design Decision: Create a virtual IP (VIP) address for the NSX-T Manager cluster for the management domain.

Design Justification: Provides high availability of the user interface and API of NSX-T Manager.

Design Implication:

  • The VIP address feature provides high availability only. It does not load balance requests across the cluster.

  • When using the VIP address feature, all NSX-T Manager nodes must be deployed on the same Layer 2 network.

Decision ID: SDDC-MGMT-VI-SDN-014

Design Decision: Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX-T Manager appliances.

Design Justification: Keeps the NSX-T Manager appliances running on different ESXi hosts for high availability.

Design Implication:

  • You must allocate at least four physical hosts so that the three NSX-T Manager appliances continue running if an ESXi host failure occurs.

  • You must perform additional configuration for the anti-affinity rules.

Decision ID: SDDC-MGMT-VI-SDN-015

Design Decision: In vSphere HA, set the restart priority policy for each NSX-T Manager appliance to high.

Design Justification:

  • NSX-T Manager also implements the control plane for virtual network segments. vSphere HA restarts the NSX-T Manager appliances first so that virtual machines that are powered on or migrated by using vSphere vMotion while the control plane is offline lose connectivity only until the control plane quorum is re-established.

  • Setting the restart priority to high instead of highest reserves the highest priority for services that must be started before NSX-T Manager.

Design Implication: If the restart priority for another management appliance is set to highest, the connectivity delays for management appliances will be longer.
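Decisions SDDC-MGMT-VI-SDN-014 and SDDC-MGMT-VI-SDN-015 above can be applied programmatically through the vSphere API. The following pyVmomi sketch creates the VM-VM anti-affinity rule and sets the vSphere HA restart priority to high for the NSX-T Manager appliances; the vCenter Server FQDN, cluster name, and virtual machine names are placeholders, and error handling and task monitoring are omitted.

```python
# Sketch only: create a VM-VM anti-affinity rule and set the vSphere HA restart
# priority to "high" for the NSX-T Manager appliances. The vCenter address,
# cluster name, and VM names below are placeholders for this design.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

NSX_VM_NAMES = ["nsx-mgmt-01", "nsx-mgmt-02", "nsx-mgmt-03"]

ssl_context = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl_context)
content = si.RetrieveContent()


def find_by_name(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()


cluster = find_by_name(vim.ClusterComputeResource, "mgmt-cluster-01")
nsx_vms = [find_by_name(vim.VirtualMachine, name) for name in NSX_VM_NAMES]

spec = vim.cluster.ConfigSpecEx()

# VM-VM anti-affinity rule: keep the appliances on different ESXi hosts
# (SDDC-MGMT-VI-SDN-014).
anti_affinity = vim.cluster.AntiAffinityRuleSpec(
    name="anti-affinity-rule-nsx-managers", enabled=True, vm=nsx_vms)
spec.rulesSpec = [vim.cluster.RuleSpec(operation="add", info=anti_affinity)]

# Per-VM vSphere HA override: restart priority "high", not "highest"
# (SDDC-MGMT-VI-SDN-015).
spec.dasVmConfigSpec = [
    vim.cluster.DasVmConfigSpec(
        operation="add",
        info=vim.cluster.DasVmConfigInfo(
            key=vm,
            dasSettings=vim.cluster.DasVmSettings(restartPriority="high")))
    for vm in nsx_vms
]

cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
# In practice, wait for the task to complete before disconnecting.
Disconnect(si)
```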

High Availability of NSX-T Manager in Multiple Availability Zones

In an environment with multiple availability zones, the NSX-T Manager cluster runs in Availability Zone 1. If a failure in Availability Zone 1 occurs, the NSX-T Manager cluster is failed over to Availability Zone 2.

Table 5. Design Decisions on the High Availability Configuration for NSX-T Manager for Multiple Availability Zones

Decision ID: SDDC-MGMT-VI-SDN-016

Design Decision: When using two availability zones, create a virtual machine group for the NSX-T Manager appliances.

Design Justification: Ensures that the NSX-T Manager appliances can be managed as a group.

Design Implication: You must add virtual machines to the allocated group manually.

Decision ID: SDDC-MGMT-VI-SDN-017

Design Decision: When using two availability zones, create a should-run VM-Host affinity rule to run the group of NSX-T Manager appliances on the group of hosts in Availability Zone 1.

Design Justification:

  • Ensures that the NSX-T Manager appliances are located only in Availability Zone 1.

  • Failover of the NSX-T Manager appliances to Availability Zone 2 occurs only if a failure in Availability Zone 1 occurs.

  • Splitting NSX-T Manager appliances across availability zones does not improve availability.

Design Implication: Creating the rules is a manual task and adds administrative overhead.
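Decisions SDDC-MGMT-VI-SDN-016 and SDDC-MGMT-VI-SDN-017 can also be scripted against the vSphere API. The following pyVmomi sketch creates the virtual machine group, a host group for the Availability Zone 1 hosts, and the should-run VM-Host affinity rule between them; the cluster name, virtual machine names, and the convention used to identify Availability Zone 1 hosts are placeholders for this design.

```python
# Sketch only: create a VM group for the NSX-T Manager appliances, a host group
# for the Availability Zone 1 hosts, and a should-run (non-mandatory) VM-Host
# affinity rule between the two groups. All names below are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ssl_context = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl_context)
content = si.RetrieveContent()


def find_by_name(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()


cluster = find_by_name(vim.ClusterComputeResource, "mgmt-cluster-01")
nsx_vms = [find_by_name(vim.VirtualMachine, name)
           for name in ("nsx-mgmt-01", "nsx-mgmt-02", "nsx-mgmt-03")]
# Placeholder convention: Availability Zone 1 hosts share a name prefix.
az1_hosts = [host for host in cluster.host if host.name.startswith("az1-esxi")]

spec = vim.cluster.ConfigSpecEx()
spec.groupSpec = [
    # VM group for the NSX-T Manager appliances (SDDC-MGMT-VI-SDN-016).
    vim.cluster.GroupSpec(
        operation="add",
        info=vim.cluster.VmGroup(name="nsx-manager-vm-group", vm=nsx_vms)),
    # Host group for the ESXi hosts in Availability Zone 1.
    vim.cluster.GroupSpec(
        operation="add",
        info=vim.cluster.HostGroup(name="az1-host-group", host=az1_hosts)),
]
spec.rulesSpec = [
    # Should-run rule: mandatory=False so that the appliances can be failed
    # over to Availability Zone 2 if Availability Zone 1 becomes unavailable
    # (SDDC-MGMT-VI-SDN-017).
    vim.cluster.RuleSpec(
        operation="add",
        info=vim.cluster.VmHostRuleInfo(
            name="nsx-managers-should-run-az1",
            enabled=True, mandatory=False,
            vmGroupName="nsx-manager-vm-group",
            affineHostGroupName="az1-host-group")),
]

cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
# In practice, wait for the task to complete before disconnecting.
Disconnect(si)
```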