As a best practice, deploy a highly available NSX Manager cluster so that the NSX-T central control plane can continue propagating configuration to the transport nodes. Select the NSX Manager appliance size according to the number of ESXi hosts required to run the SDDC tenant workloads.

Deployment Type

You can deploy NSX Manager in a one-node configuration or as a three-node cluster for high availability.
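The high-availability value of the three-node option comes from majority quorum: the cluster remains operational while a majority of its nodes is up, so a three-node cluster tolerates the loss of one node, while a one-node deployment tolerates none. A minimal arithmetic sketch (the function names are illustrative, not part of any VMware tooling):

```python
# Illustrative quorum arithmetic for an NSX Manager cluster.
# A cluster needs a strict majority of nodes to stay operational.

def quorum(nodes: int) -> int:
    """Smallest node count that forms a majority."""
    return nodes // 2 + 1

def tolerated_failures(nodes: int) -> int:
    """Node failures the cluster can absorb while keeping quorum."""
    return nodes - quorum(nodes)
```

For a three-node cluster, `quorum(3)` is 2 and `tolerated_failures(3)` is 1; a one-node deployment tolerates zero failures, which is why the three-node cluster is the high-availability option.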

Table 1. Design Decisions on NSX Manager Deployment Type

Decision ID: VCF-WLD-NSX-CFG-001
Design Decision: Deploy three NSX Manager nodes for the VI workload domain in the first cluster in the management domain for configuring and managing the network services for customer workloads.
Design Justification: Customer workloads can be placed on isolated virtual networks, using load balancing, logical switching, dynamic routing, and logical firewall services.
Design Implication: None.

Sizing Compute and Storage Resources for NSX Manager

When you size the resources for the management components in an NSX-T Data Center deployment, consider the compute and storage requirements for each component, and the number of nodes per component type.
Table 2. NSX Manager Resource Specification

Appliance Size   vCPU   Memory (GB)   Storage (GB)   Scale
Extra-Small        2       8             300         Cloud Service Manager only
Small              4      16             300         Proof of concept
Medium             6      24             300         Up to 128 ESXi hosts
Large             12      48             300         Up to 1,024 ESXi hosts
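The scale limits in Table 2 can be expressed as a simple lookup. The sketch below is this document's own illustration (the function name is invented, not a VMware tool); it covers only the Medium and Large sizes, since Extra-Small and Small are not sized for production host scale:

```python
# Illustrative sketch: choose the smallest NSX Manager appliance size
# that supports a given ESXi host count, per the limits in Table 2.

def choose_appliance_size(esxi_hosts: int) -> str:
    """Return the smallest production appliance size for the host count."""
    if esxi_hosts < 1:
        raise ValueError("host count must be at least 1")
    if esxi_hosts <= 128:
        return "Medium"   # Table 2: up to 128 ESXi hosts
    if esxi_hosts <= 1024:
        return "Large"    # Table 2: up to 1,024 ESXi hosts
    raise ValueError("beyond the documented appliance scale limits")
```

For example, `choose_appliance_size(64)` returns `"Medium"`, while a 500-host estate requires `"Large"`, which is the size selected in decision VCF-WLD-NSX-CFG-002 below.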

Table 3. Design Decisions on Sizing Resources for NSX Manager

Decision ID: VCF-WLD-NSX-CFG-002
Design Decision: Deploy each node in the NSX Manager cluster for the workload domain as a large-size appliance.
Design Justification: A large-size appliance is sufficient for providing network services to the SDDC tenant workloads.
Design Implication: You must provide enough compute and storage resources in the management domain to support this NSX Manager cluster.

High Availability of NSX Manager for a Single VMware Cloud Foundation Instance

The NSX Manager cluster for the VI workload domain runs on the default management vSphere cluster. vSphere HA protects the NSX Manager appliances by restarting an appliance on a different ESXi host if the host it is running on fails.

Table 4. Design Decisions on High Availability for NSX Manager

Decision ID: VCF-WLD-NSX-CFG-003
Design Decision: Create a virtual IP (VIP) address for the NSX Manager cluster for the VI workload domain.
Design Justification: Provides high availability of the user interface and API of NSX Manager.
Design Implications:
  • The VIP address feature provides high availability only. It does not load balance requests across the cluster.
  • When using the VIP address feature, all NSX Manager nodes must be deployed on the same Layer 2 network.

Decision ID: VCF-WLD-NSX-CFG-004
Design Decision: Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Manager appliances.
Design Justification: Keeps the NSX Manager appliances running on different ESXi hosts for high availability.
Design Implications:
  • You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.
  • You must perform additional configuration for the anti-affinity rules.

Decision ID: VCF-WLD-NSX-CFG-005
Design Decision: In vSphere HA, set the restart priority policy for each NSX Manager appliance to high.
Design Justification:
  • NSX Manager implements the control plane for virtual network segments. If the NSX Manager cluster is restarted, applications that are connected to NSX VLAN-backed or overlay-backed segments lose connectivity only for a short time until the control plane quorum is re-established.
  • Setting the restart priority to high reserves the highest priority setting for future needs.
Design Implication: If the restart priority for another management appliance is set to highest, the connectivity delays for services will be longer.
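Two of the constraints above lend themselves to simple pre-deployment checks: the four-host minimum behind the anti-affinity rule (VCF-WLD-NSX-CFG-004), and the requirement that the cluster VIP and all NSX Manager nodes share the same Layer 2 network (VCF-WLD-NSX-CFG-003). The sketch below is illustrative only; the function names and the approximation of "same Layer 2 network" as "same IP subnet" are this document's assumptions, not VMware-provided checks:

```python
import ipaddress

def min_hosts_for_anti_affinity(appliances: int, host_failures_to_tolerate: int = 1) -> int:
    """With VM-VM anti-affinity, each appliance needs its own ESXi host;
    spare hosts keep all appliances running through host failures."""
    return appliances + host_failures_to_tolerate

def vip_on_same_l2(vip: str, manager_ips: list[str], subnet: str) -> bool:
    """Approximate the same-Layer-2 requirement: the VIP and every
    NSX Manager node address must fall inside one IP subnet."""
    network = ipaddress.ip_network(subnet)
    return all(ipaddress.ip_address(ip) in network for ip in [vip, *manager_ips])
```

`min_hosts_for_anti_affinity(3)` returns 4, matching the design implication that at least four physical hosts are needed for three appliances to survive a single host failure. All IP addresses passed to `vip_on_same_l2` are hypothetical examples.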

High Availability of NSX Manager for a Single VMware Cloud Foundation Instance with Multiple Availability Zones

The NSX Manager cluster runs on the default management vSphere cluster in the first availability zone. If a failure in this zone occurs, the NSX Manager cluster is failed over to the second availability zone.

Table 5. Design Decisions on High Availability for NSX Manager for Multiple Availability Zones

Decision ID: VCF-WLD-NSX-CFG-006
Design Decision: Add the NSX Manager appliances to the virtual machine group for the first availability zone.
Design Justification: Ensures that, by default, the NSX Manager appliances are powered on within the primary availability zone hosts group.
Design Implication: None.