As a best practice, deploy NSX-T Manager in a highly available configuration so that the NSX-T central control plane can continue propagating configuration to the transport nodes. You also select an NSX-T Manager appliance size according to the number of ESXi hosts that are required to run the SDDC tenant workloads.

Deployment Type

Table 1. Design Decisions on NSX-T Manager Deployment Type for a Workload Domain

Decision ID: SDDC-WLD-VI-SDN-011

Design Decision: Deploy three NSX-T Manager nodes for the workload domain in the first cluster in the management domain for configuring and managing the network services for SDDC tenant workloads.

Design Justification: SDDC tenant workloads can be placed on isolated virtual networks and can use load balancing, logical switching, dynamic routing, and logical firewall services.

Design Implication: None.

Sizing Compute and Storage Resources for NSX-T Manager

When you size the resources for NSX-T management components, consider the compute and storage requirements for each component, and the number of nodes per component type.
Table 2. NSX-T Manager Resource Specification

Appliance Size    vCPU    Memory (GB)    Storage (GB)    Scale
Extra-Small        2       8             200             Cloud Service Manager only
Small              4      16             200             Proof of concept
Medium             6      24             200             Up to 64 ESXi hosts
Large             12      48             200             More than 64 ESXi hosts
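
To make the host-count rule in Table 2 concrete, the following Python sketch maps an expected number of ESXi hosts to an appliance size. It is only an illustration of the table above: the function name is hypothetical, the thresholds simply mirror Table 2, and the Extra-Small and Small sizes are left out because they are not selected by host count.

```python
def nsxt_manager_appliance_size(esxi_host_count: int) -> str:
    """Illustrative helper: pick an NSX-T Manager appliance size from Table 2.

    Not a VMware tool; the break points simply mirror the table above.
    Extra-Small (Cloud Service Manager only) and Small (proof of concept)
    are excluded because they are not sized by ESXi host count.
    """
    if esxi_host_count <= 0:
        raise ValueError("Host count must be a positive number")
    if esxi_host_count <= 64:
        return "Medium"   # up to 64 ESXi hosts
    return "Large"        # more than 64 ESXi hosts


# Example: a workload domain expected to grow to 96 ESXi hosts
print(nsxt_manager_appliance_size(96))  # -> Large
```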

Table 3. Design Decisions on Sizing Resources for NSX-T Manager for a Workload Domain

Decision ID: SDDC-WLD-VI-SDN-012

Design Decision: Deploy each node in the NSX-T Manager cluster for the workload domain as a large-size appliance.

Design Justification: A large-size appliance is sufficient for providing network services to the SDDC tenant workloads.

Design Implication: You must provide enough compute and storage resources in the management domain to support this NSX-T Manager cluster.
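
As a rough check of that implication, the totals for a three-node cluster of large appliances follow directly from Table 2. This is a back-of-the-envelope calculation, not an official sizing formula.

```python
# Per-node figures for the large appliance size, taken from Table 2.
nodes, vcpu, memory_gb, storage_gb = 3, 12, 48, 200

print(f"vCPU total:    {nodes * vcpu}")          # 36 vCPU
print(f"Memory total:  {nodes * memory_gb} GB")  # 144 GB
print(f"Storage total: {nodes * storage_gb} GB") # 600 GB
```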

High Availability of NSX-T Manager

The NSX-T Manager cluster for the workload domain runs on the first cluster in the management domain. vSphere HA protects the NSX-T Manager appliances by restarting an appliance on a different ESXi host if the ESXi host on which it runs fails.

Table 4. Design Decisions on the High Availability Configuration for NSX-T Manager for a Workload Domain

Decision ID: SDDC-WLD-VI-SDN-013

Design Decision: Create a virtual IP (VIP) address for the NSX-T Manager cluster for the workload domain.

Design Justification: Provides high availability of the user interface and API of NSX-T Manager.

Design Implication:
  • The VIP address feature provides high availability only. It does not load balance requests across the cluster.
  • When using the VIP address feature, all NSX-T Manager nodes must be deployed on the same Layer 2 network.

Decision ID: SDDC-WLD-VI-SDN-014

Design Decision: Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX-T Manager appliances.

Design Justification: Keeps the NSX-T Manager appliances running on different ESXi hosts for high availability.

Design Implication:
  • You must allocate at least four physical hosts so that the three NSX-T Manager appliances continue running if an ESXi host failure occurs.
  • You must perform additional configuration for the anti-affinity rules.

Decision ID: SDDC-WLD-VI-SDN-015

Design Decision: In vSphere HA, set the restart priority policy for each NSX-T Manager appliance to high.

Design Justification:
  • NSX-T Manager implements the control plane for virtual network segments. If the NSX-T Manager cluster is restarted, applications that are connected to the NSX-T VLAN or virtual network segments lose connectivity only for a short time until the control plane quorum is re-established.
  • Setting the restart priority to high reserves the highest restart priority for future needs.

Design Implication: If the restart priority for another management appliance is set to highest, the connectivity delays for services will be longer.
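
After the three NSX-T Manager nodes form a cluster, the VIP from decision SDDC-WLD-VI-SDN-013 can be assigned in the NSX-T Manager UI or through the NSX-T REST API. The Python sketch below shows the API approach. The endpoint follows the NSX-T Data Center API (cluster/api-virtual-ip), but verify it against the API reference for your NSX-T version, and treat the manager FQDN, credentials, and VIP address as placeholders.

```python
import requests

# Placeholders: replace with your NSX-T Manager FQDN, credentials, and VIP address.
NSX_MANAGER = "nsx-wld01-a.example.local"
VIP_ADDRESS = "172.16.11.70"
AUTH = ("admin", "changeme")

# Assign the cluster VIP:
# POST /api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=<VIP>
resp = requests.post(
    f"https://{NSX_MANAGER}/api/v1/cluster/api-virtual-ip",
    params={"action": "set_virtual_ip", "ip_address": VIP_ADDRESS},
    auth=AUTH,
    verify=False,  # lab only; use a trusted CA-signed certificate in production
)
resp.raise_for_status()
print(resp.json())  # returns the cluster VIP configuration
```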
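
Decisions SDDC-WLD-VI-SDN-014 and SDDC-WLD-VI-SDN-015 are vSphere cluster settings that you can apply in the vSphere Client or automate. The sketch below uses pyVmomi against the management domain vCenter Server. The vCenter, cluster, and appliance names are hypothetical, and the sketch only outlines the shape of the configuration; it is not a supported VMware Cloud Foundation workflow.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical names for illustration; replace with your environment's values.
VCENTER = "vcenter-mgmt.example.local"
CLUSTER_NAME = "mgmt-cluster01"
NSXT_VM_NAMES = ["nsx-wld01-a", "nsx-wld01-b", "nsx-wld01-c"]

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host=VCENTER, user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Look up a managed object of the given type by its display name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

cluster = find_by_name(vim.ClusterComputeResource, CLUSTER_NAME)
nsxt_vms = [find_by_name(vim.VirtualMachine, n) for n in NSXT_VM_NAMES]

# VM-VM anti-affinity rule: keep the three NSX-T Manager appliances on different hosts.
anti_affinity_rule = vim.cluster.RuleSpec(
    operation="add",
    info=vim.cluster.AntiAffinityRuleSpec(
        name="anti-affinity-rule-nsxt-managers", enabled=True, vm=nsxt_vms),
)

# vSphere HA restart priority "high" for each NSX-T Manager appliance.
das_vm_specs = [
    vim.cluster.DasVmConfigSpec(
        operation="add",
        info=vim.cluster.DasVmConfigInfo(
            key=vm, dasSettings=vim.cluster.DasVmSettings(restartPriority="high")),
    )
    for vm in nsxt_vms
]

spec = vim.cluster.ConfigSpecEx(rulesSpec=[anti_affinity_rule],
                                dasVmConfigSpec=das_vm_specs)
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```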

High Availability of NSX-T Manager in Multiple Availability Zones

In an environment with multiple availability zones, the NSX-T Manager cluster runs in Availability Zone 1. If a failure in Availability Zone 1 occurs, the NSX-T Manager cluster is failed over to Availability Zone 2. All three appliances are placed in the same availability zone because separating the NSX-T Manager appliances across availability zones does not improve the high availability of the cluster.

Table 5. Design Decisions on the High Availability Configuration for NSX-T Manager for Multiple Availability Zones for a Workload Domain

Decision ID: SDDC-WLD-VI-SDN-016

Design Decision: When using two availability zones, create a virtual machine group for the NSX-T Manager appliances.

Design Justification: Ensures that the NSX-T Manager appliances can be managed as a group.

Design Implication: You must add virtual machines to the allocated group manually.

Decision ID: SDDC-WLD-VI-SDN-017

Design Decision: When using two availability zones, create a should-run VM-Host affinity rule to run the NSX-T Manager appliances in Availability Zone 1.

Design Justification:
  • Ensures that the NSX-T Manager appliances are located only in Availability Zone 1.
  • Failover of the NSX-T Manager appliances to Availability Zone 2 happens only if a failure in Availability Zone 1 occurs.
  • Splitting the NSX-T Manager appliances across availability zones does not improve availability.

Design Implication: Creating the rules is a manual task and adds administrative overhead.
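
Decisions SDDC-WLD-VI-SDN-016 and SDDC-WLD-VI-SDN-017 translate into a DRS virtual machine group, a host group for Availability Zone 1, and a should-run VM-Host rule. The pyVmomi sketch below reuses the connection, cluster object, NSX-T Manager VM list, and find_by_name helper from the earlier sketch; the group, host, and rule names are hypothetical, and the sketch outlines the objects involved rather than a definitive procedure.

```python
# Reuses si, content, cluster, nsxt_vms, and find_by_name from the previous sketch.
az1_hosts = [find_by_name(vim.HostSystem, h)
             for h in ("esxi-az1-01.example.local", "esxi-az1-02.example.local",
                       "esxi-az1-03.example.local", "esxi-az1-04.example.local")]

# DRS groups: one for the NSX-T Manager appliances, one for the Availability Zone 1 hosts.
group_specs = [
    vim.cluster.GroupSpec(
        operation="add",
        info=vim.cluster.VmGroup(name="nsxt-managers-wld01", vm=nsxt_vms)),
    vim.cluster.GroupSpec(
        operation="add",
        info=vim.cluster.HostGroup(name="az1-hosts", host=az1_hosts)),
]

# Should-run VM-Host rule: mandatory=False makes it a "should" rule, so the appliances
# can still fail over to Availability Zone 2 if Availability Zone 1 becomes unavailable.
vm_host_rule = vim.cluster.RuleSpec(
    operation="add",
    info=vim.cluster.VmHostRuleInfo(
        name="nsxt-managers-should-run-az1",
        enabled=True,
        mandatory=False,
        vmGroupName="nsxt-managers-wld01",
        affineHostGroupName="az1-hosts"),
)

spec = vim.cluster.ConfigSpecEx(groupSpec=group_specs, rulesSpec=[vm_host_rule])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```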