Use this list of design decisions for reference when configuring NSX-T Data Center in an environment with a single or multiple VMware Cloud Foundation instances. The design also considers whether an instance contains a single or multiple availability zones.

The NSX-T Data Center design covers the following areas:

  • Physical network infrastructure

  • Deployment of and secure access to the NSX-T Data Center nodes

  • Dynamic routing configuration and load balancing

  • NSX segment organization

  • NSX Federation

After you set up the physical network infrastructure, the configuration tasks for most design decisions are automated in VMware Cloud Foundation. You must perform the configuration manually only for a limited number of decisions as noted in the design implication.

For full design details, see NSX-T Data Center Design for the Management Domain.

Physical Network Infrastructure Design

Table 1. Design Decisions on the Physical Network Infrastructure for NSX-T Data Center

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-PHY-001

Use two ToR switches for each rack.

Supports the use of two 10 GbE (25 GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity.

Requires two ToR switches per rack which can increase costs.

VCF-MGMT-NSX-PHY-002

Implement the following physical network architecture:

  • One 25 GbE (10 GbE minimum) port on each ToR switch for ESXi host uplinks (Host to ToR).

  • Layer 3 device that supports BGP.

  • Provides availability during a switch failure.

  • Provides support for BGP, which is the only dynamic routing protocol supported by NSX-T Data Center.

  • Might limit the hardware choices.

  • Requires dynamic routing protocol configuration in the physical network.

VCF-MGMT-NSX-PHY-003

Do not use EtherChannel (LAG, LACP, or vPC) configuration for ESXi host uplinks

  • Simplifies configuration of top of rack switches.

  • Teaming options available with vSphere Distributed Switch and N-VDS provide load balancing and failover.

  • EtherChannel implementations might have vendor-specific limitations.

None.

VCF-MGMT-NSX-PHY-004

Use a physical network that is configured for BGP routing adjacency

  • Supports design flexibility for routing multi-site and multi-tenancy workloads.

  • BGP is the only dynamic routing protocol that is supported by NSX-T with NSX Federation.

  • Supports failover between ECMP Edge uplinks.

Requires BGP configuration in the physical network.

Table 2. Design Decisions on Access Ports for NSX-T Data Center

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-PHY-005

Assign persistent IP configurations for NSX tunnel endpoints (TEPs) that use dynamic IP allocation.

Ensures that endpoints have a persistent management IP address.

In VMware Cloud Foundation, TEP IP assignment over DHCP is required for advanced deployment topologies such as a management domain with multiple availability zones.

Requires precise IP address management.

VCF-MGMT-NSX-PHY-006

Set the lease duration for the DHCP scope for the host overlay network to at least 7 days.

The IP addresses of the host overlay VMkernel ports are assigned by using a DHCP server.

  • Host overlay VMkernel ports do not have an administrative endpoint. As a result, they can use DHCP for automatic IP address assignment. If you must change or expand the subnet, changing the DHCP scope is simpler than creating an IP pool and assigning it to the ESXi hosts.

  • DHCP simplifies the configuration of a default gateway for host overlay VMkernel ports if hosts within same cluster are on separate Layer 2 domains.

Requires configuration and management of a DHCP server.

VCF-MGMT-NSX-PHY-007

Use VLANs to separate physical network functions.

  • Supports physical network connectivity without requiring many NICs.

  • Isolates the different network functions in the SDDC so that you can have differentiated services and prioritized traffic as needed.

Requires uniform configuration and presentation on all the trunks that are made available to the ESXi hosts.

Table 3. Design Decisions on Jumbo Frames for NSX-T Data Center

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-PHY-008

Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types:

  • Overlay (Geneve)

  • vSAN

  • vSphere vMotion

  • NFS

  • Improves traffic throughput.

  • Supports Geneve by increasing the MTU size to a minimum of 1,600 bytes.

  • Geneve is an extensible protocol. The MTU size might increase with future capabilities. While 1,600 is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.

When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size.
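The 1,600-byte minimum and 1,700-byte recommendation follow from the encapsulation overhead. The following is a minimal sketch of the arithmetic, assuming Geneve over IPv4 with typical header sizes; the helper name is illustrative:

```python
# Sketch: the MTU arithmetic behind this decision. Header sizes are the
# typical values for Geneve over IPv4; Geneve options are variable length.
INNER_ETHERNET = 14   # encapsulated inner Ethernet header
GENEVE_BASE = 8       # fixed Geneve header
OUTER_UDP = 8         # outer UDP header
OUTER_IPV4 = 20       # outer IPv4 header

def required_underlay_mtu(workload_mtu: int = 1500, geneve_options: int = 0) -> int:
    """Underlay MTU needed to carry an encapsulated workload packet unfragmented."""
    return workload_mtu + INNER_ETHERNET + GENEVE_BASE + geneve_options + OUTER_UDP + OUTER_IPV4

print(required_underlay_mtu())          # 1550 -- why 1,600 bytes is sufficient
print(1700 - required_underlay_mtu())   # 150  -- headroom left for Geneve options
```

With a standard 1,500-byte workload MTU and no options, the encapsulated packet is 1,550 bytes, which is why 1,600 bytes is sufficient and 1,700 bytes leaves room for future Geneve options.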

Table 4. Design Decisions on the Physical Network Infrastructure for Multiple Available Zones for NSX-T Data Center

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-PHY-009

Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical inter-availability zone network components that are part of the network path between availability zones for the following traffic types.

  • Overlay (Geneve)

  • vSAN

  • vSphere vMotion

  • NFS

  • Improves traffic throughput.
  • Geneve packets are tagged as do-not-fragment.
  • For optimal performance, provides a consistent MTU size across the environment.
  • Geneve is an extensible protocol. The MTU size might increase with future capabilities. While 1,600 is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.

When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.

In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones.

VCF-MGMT-NSX-PHY-010

Configure VRRP, HSRP, or another Layer 3 gateway availability method for these networks.

  • Management
  • Edge overlay

Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway. Otherwise, a failure in the Layer 3 gateway causes disruption in the traffic in the SDN setup.

Requires configuration of a high availability technology for the Layer 3 gateways in the data center.

Table 5. Design Decisions on the Physical Network Infrastructure between the VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-PHY-011

Set the MTU size to at least 1,500 bytes (1,700 bytes preferred, 9,000 bytes recommended for jumbo frames) on the components of the physical network that are part of the network path between the VMware Cloud Foundation instances for the following traffic types.

  • NSX Edge RTEP
  • Jumbo frames are not required between VMware Cloud Foundation instances. However, increased MTU improves traffic throughput.
  • Increasing the RTEP MTU to 1,700 bytes minimizes fragmentation for standard-size workload packets between VMware Cloud Foundation instances.

When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers to support the same MTU packet size.

VCF-MGMT-NSX-PHY-012

Provide a routed connection between each NSX Manager cluster in each VMware Cloud Foundation instance.

Configuring NSX Federation requires connectivity between the NSX Global Manager instances, NSX Local Manager instances, and NSX Edge clusters.

Requires unique routable IP addresses for each fault domain.

VCF-MGMT-NSX-PHY-013

Ensure that the latency between VMware Cloud Foundation instances is less than 150 ms

A latency lower than 150 ms is required for the following features.

  • Cross vCenter Server vMotion
  • NSX Federation

None.

VCF-MGMT-NSX-PHY-014

Provide BGP routing between all VMware Cloud Foundation instances.

Automated failover of networks requires a dynamic routing protocol, such as BGP.

None.

NSX Manager Deployment Specification

Table 6. Design Decisions on the NSX Manager Deployment Type

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-CFG-001

Deploy three NSX Manager nodes in the default vSphere cluster in the management domain for configuring and managing the network services for the management domain.

The management components can be placed on isolated virtual networks, using load balancing, logical switching, dynamic routing, and logical firewall services.

  • You must turn on vSphere HA in the default cluster in the management domain.

  • The default management cluster requires four physical ESXi hosts for vSphere HA and for high availability of the NSX Manager cluster.

Table 7. Design Decisions on Sizing Resources for NSX Manager

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-CFG-002

Deploy each node in the NSX Manager cluster for the management domain as a medium-size appliance or larger.

A medium-size appliance is sufficient for providing network services to the management domain components.

If you extend the management domain, increasing the size of the NSX Manager appliances might be required.

Table 8. Design Decisions on High Availability for NSX Manager

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-CFG-003

Create a virtual IP (VIP) address for the NSX Manager cluster for the management domain.

Provides high availability of the user interface and API of NSX Manager.

  • The VIP address feature provides high availability only. It does not load-balance requests across the cluster.

  • When using the VIP address feature, all NSX Manager nodes must be deployed on the same Layer 2 network.

VCF-MGMT-NSX-CFG-004

Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Manager appliances.

Keeps the NSX Manager appliances running on different ESXi hosts for high availability.

  • You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.

VCF-MGMT-NSX-CFG-005

In vSphere HA, set the restart priority policy for each NSX Manager appliance to high.

  • NSX Manager implements the control plane for virtual network segments. vSphere HA restarts the NSX Manager appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the control plane is offline lose connectivity only until the control plane quorum is re-established.

  • Setting the restart priority to high reserves the highest priority for flexibility for adding services that must be started before NSX Manager.

If the restart priority for another management appliance is set to highest, the connectivity delay for management appliances will be longer.
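The four-host requirement that accompanies the anti-affinity decision (VCF-MGMT-NSX-CFG-004) follows from simple arithmetic: anti-affinity places each appliance on its own host, and tolerating a host failure needs one spare host on top of that. A sketch, with the helper name being illustrative:

```python
# Sketch: the host-count reasoning behind the VM-VM anti-affinity decision.
# With anti-affinity, each appliance must run on a distinct ESXi host, and
# tolerating a host failure requires one additional host of spare capacity.
def min_hosts(appliances: int, tolerated_host_failures: int = 1) -> int:
    """Minimum ESXi hosts so all appliances keep running after host failures."""
    return appliances + tolerated_host_failures

print(min_hosts(3))  # 4 -- three NSX Manager appliances require four hosts
```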

Table 9. Design Decisions on High Availability for NSX Manager for a Single VMware Cloud Foundation Instance with Multiple Availability Zones

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-CFG-006

Add the NSX Manager appliances to the virtual machine group for the first availability zone.

Ensures that, by default, the NSX Manager appliances are powered on on a host in the primary availability zone.

None.

NSX Manager Network Design

Table 10. Design Decisions on the Network Segment for NSX Manager

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-NET-001

Place the appliances of the NSX Manager cluster on the management VLAN network in the management domain.

  • Provides direct secure connection to the ESXi hosts and vCenter Server for edge node management and distributed network services.

  • Reduces the number of required VLANs because a single VLAN can be allocated to both vCenter Server and NSX.

None.

Table 11. Design Decisions on the IP Addressing Scheme for NSX Manager

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-NET-002

Allocate a statically assigned IP address and host name to the nodes of the NSX Manager cluster.

Ensures stability across the private cloud, simplifies maintenance and tracking, and supports implementing a DNS configuration.

Requires precise IP address management.

Table 12. Design Decisions on Name Resolution for NSX Manager

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-NET-003

Configure forward and reverse DNS records for the nodes of the NSX Manager cluster for the management domain.

The NSX Manager nodes and VIP address are accessible by using fully qualified domain names instead of by using IP addresses only.

You must provide DNS records for the NSX Manager nodes for the management domain in each VMware Cloud Foundation instance.
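Forward and reverse record consistency can be spot-checked programmatically before deployment. The following sketch uses hypothetical host names and addresses, not values from this design:

```python
# Sketch: a consistency check for forward and reverse DNS records.
# Host names and IP addresses are hypothetical examples.
forward = {
    "nsx-mgr-01.example.local": "172.16.11.65",
    "nsx-mgr-02.example.local": "172.16.11.66",
    "nsx-mgr-03.example.local": "172.16.11.67",
}
# PTR records as they should appear in the reverse zone.
reverse = {
    "172.16.11.65": "nsx-mgr-01.example.local",
    "172.16.11.66": "nsx-mgr-02.example.local",
    "172.16.11.67": "nsx-mgr-03.example.local",
}

def mismatches(forward: dict, reverse: dict) -> list:
    """Return FQDNs whose reverse record does not point back to the same name."""
    return [fqdn for fqdn, ip in forward.items() if reverse.get(ip) != fqdn]

print(mismatches(forward, reverse))  # [] when every record pair is consistent
```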

Table 13. Design Decisions on Time Synchronization for NSX Manager

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-NET-004

Configure NTP on each NSX Manager appliance.

NSX Manager depends on time synchronization.

None.

NSX Global Manager Deployment Specification

Table 14. Design Decisions on the NSX Global Manager Deployment Type for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-FED-CFG-001

Deploy three NSX Global Manager nodes for the management domain in the default cluster in the domain for configuring and managing the network services for the management domain components.

Some management components must be placed on isolated virtual networks, using load balancing, logical switching, dynamic routing, and logical firewall services.

  • You must turn on vSphere HA in the default vSphere cluster in the management domain.

  • The default vSphere cluster in the management domain requires four physical ESXi hosts for vSphere HA and for high availability of the NSX Global Manager cluster.

Table 15. Design Decisions on Sizing Resources for NSX Global Manager for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-FED-CFG-002

Deploy each node in the NSX Global Manager cluster for the management domain as a medium-size appliance or larger.

A medium-size appliance is sufficient for providing network services to the management components of the private cloud.

If you extend the management domain, increasing the size of the NSX Manager appliances might be required.

Table 16. Design Decisions on High Availability for NSX Global Manager for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-FED-CFG-003

Create a virtual IP (VIP) address for the NSX Global Manager cluster for the management domain.

Provides high availability of the user interface and API of NSX Global Manager.

  • The VIP address feature provides high availability only. It does not load-balance requests across the cluster.

  • When using the VIP address feature, all NSX Global Manager nodes must be deployed on the same Layer 2 network.

VCF-MGMT-NSX-FED-CFG-004

Apply VM-VM anti-affinity rules in vSphere DRS to the NSX Global Manager appliances.

Keeps the NSX Global Manager appliances running on different ESXi hosts for high availability.

  • You must allocate at least four physical hosts so that the three NSX Global Manager appliances continue running if an ESXi host failure occurs.

VCF-MGMT-NSX-FED-CFG-005

In vSphere HA, set the restart priority policy for each NSX Global Manager appliance to medium.

  • NSX Global Manager implements the management plane for global segments and firewalls.

    NSX Global Manager is not required for control plane and data plane connectivity.

  • Setting the restart priority to medium reserves the high priority for services that impact the NSX control or data planes.

  • Management of NSX global components will be unavailable until the NSX Global Manager virtual machines restart.

  • The NSX Global Manager cluster is deployed in the management domain, where the total number of virtual machines is limited and where it competes with other management components for restart priority.

Table 17. Design Decisions on High Availability for NSX Global Manager for Multiple Availability Zones

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-FED-CFG-006

Add the NSX Global Manager appliances to the virtual machine group for the first availability zone.

Ensures that, by default, the NSX Global Manager appliances are powered on on a host in the primary availability zone.

None.

Table 18. Design Decisions on High Availability for NSX Global Manager for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-FED-CFG-007

Deploy an additional NSX Global Manager Cluster in the second VMware Cloud Foundation instance.

Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first VMware Cloud Foundation instance occurs.

Requires additional NSX Global Manager nodes in the second VMware Cloud Foundation instance.

VCF-MGMT-NSX-FED-CFG-008

Set the NSX Global Manager cluster in the second VMware Cloud Foundation instance as standby for the management domain.

Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first VMware Cloud Foundation instance occurs.

None.

NSX Global Manager Network Design

Table 19. Design Decisions on the Network Segment for NSX Global Manager for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-FED-NET-001

Place the appliances of the NSX Global Manager cluster on the management VLAN network in each VMware Cloud Foundation instance.

Reduces the number of required VLANs because a single VLAN can be allocated to both vCenter Server and NSX-T Data Center.

None.

Table 20. Design Decisions on IP Addressing for NSX Global Manager for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-FED-NET-002

Allocate a statically assigned IP address and host name to the nodes of the NSX Global Manager cluster.

Ensures stability across the private cloud, simplifies maintenance and tracking, and supports implementing a DNS configuration.

Requires precise IP address management.

Table 21. Design Decisions on Name Resolution for NSX Global Manager for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-FED-NET-003

Configure forward and reverse DNS records for the nodes of the NSX Global Manager cluster for the management domain.

The NSX Global Manager nodes and VIP address are accessible by using fully qualified domain names instead of by using IP addresses only.

You must provide DNS records for the NSX Global Manager nodes for the management domain in each VMware Cloud Foundation instance.

Table 22. Design Decisions on Time Synchronization for NSX Global Manager for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-FED-NET-004

Configure NTP on each NSX Global Manager appliance.

NSX Global Manager depends on time synchronization across all private cloud components.

None.

NSX Edge Deployment Specification

Table 23. Design Decisions on the Form Factor and Sizing for the NSX Edge Nodes

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-EDGE-CFG-001

Use large-size NSX Edge virtual appliances.

The large-size appliance provides the performance characteristics for supporting the SDDC management components in the management domain.

None.

Table 24. Design Decisions on the NSX Edge Cluster Configuration

Decision ID

Design Decision

Design Justification

Design Implications

VCF-MGMT-NSX-EDGE-CFG-002

Deploy the NSX Edge virtual appliances in the default management vSphere cluster, sharing the cluster between the management workloads and the edge appliances.

  • Because of the prescriptive nature of the management domain, resource contention from unknown workloads is minimized.

  • Simplifies the configuration and minimizes the number of ESXi hosts required for initial deployment.

  • Keeps the management components located in the same domain and cluster, isolated from tenant workloads.

None.

VCF-MGMT-NSX-EDGE-CFG-003

Deploy two NSX Edge appliances in an edge cluster in the default vSphere cluster in the management domain.

Creates the NSX Edge cluster for satisfying the requirements for availability and scale.

None.

VCF-MGMT-NSX-EDGE-CFG-004

Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster.

Keeps the NSX Edge nodes running on different ESXi hosts for high availability.

None.

VCF-MGMT-NSX-EDGE-CFG-005

In vSphere HA, set the restart priority policy for each NSX Edge appliance to high.

  • The NSX Edge nodes are part of the north-south data path for overlay segments for management components. vSphere HA restarts the NSX Edge appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the edge nodes are offline lose connectivity only for a short time.

  • Setting the restart priority to high reserves the highest priority for services that must be started before the NSX Edge appliances.

If the restart priority for another management appliance is set to highest, the connectivity delays for management appliances will be longer.

VCF-MGMT-NSX-EDGE-CFG-006

Configure all edge nodes as transport nodes.

Enables the participation of edge nodes in the overlay network for delivery of services to the SDDC management components such as routing and load balancing.

None.

VCF-MGMT-NSX-EDGE-CFG-007

Create an NSX Edge cluster with the default Bidirectional Forwarding Detection (BFD) configuration between the NSX Edge nodes in the cluster.

  • Satisfies the availability requirements by default.

  • Edge nodes must remain available to create services such as NAT, routing to physical networks, and load balancing.

None.

Table 25. Design Decisions on High Availability of the NSX Edge Nodes for a Single VMware Cloud Foundation Instance with Multiple Availability Zones

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-EDGE-CFG-008

Add the NSX Edge appliances to the virtual machine group for the first availability zone.

Ensures that, by default, the NSX Edge appliances are powered on on a host in the primary availability zone.

None.

NSX Edge Network Design

Table 26. Design Decisions on the Network Configuration of the NSX Edge Appliances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-EDGE-NET-001

Connect the management interface eth0 of each NSX Edge node to the management VLAN.

Provides connection to the NSX Manager cluster.

None.

VCF-MGMT-NSX-EDGE-NET-002

  • Connect the fp-eth0 interface of each NSX Edge appliance to a VLAN trunk port group pinned to physical NIC 0 of the host, with the ability to failover to physical NIC 1.

  • Connect the fp-eth1 interface of each NSX Edge appliance to a VLAN trunk port group pinned to physical NIC 1 of the host, with the ability to failover to physical NIC 0.

  • Leave the fp-eth2 interface of each NSX Edge appliance unused.

  • Because VLAN trunk port groups pass traffic for all VLANs, VLAN tagging can occur in the NSX Edge node itself for easy post-deployment configuration.

  • By using two separate VLAN trunk port groups, you can direct traffic from the edge node to a particular host network interface and top of rack switch as needed.

  • If a top of rack switch fails, the VLAN trunk port group fails over to the other physical NIC, ensuring that both fp-eth0 and fp-eth1 remain available.

None.

VCF-MGMT-NSX-EDGE-NET-003

Use a single N-VDS in the NSX Edge nodes.

  • Simplifies deployment of the edge nodes.

  • The same N-VDS switch design can be used regardless of edge form factor.

  • Supports multiple TEP interfaces in the edge node.

  • vSphere Distributed Switch is not supported in the edge node.

None.

VCF-MGMT-NSX-EDGE-NET-004

Use a dedicated VLAN for edge overlay that is different from the host overlay VLAN.

A dedicated edge overlay network enables edge mobility in support of advanced deployments such as multiple availability zones or multi-rack clusters.

  • You must have routing between the VLANs for edge overlay and host overlay.

  • You must allocate another VLAN in the data center infrastructure for edge overlay.

Table 27. Design Decisions on the Network Configuration of the NSX Edge Appliances for an Environment with Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-EDGE-NET-005

Allocate a separate VLAN for edge RTEP overlay that is different from the edge overlay VLAN.

  • The RTEP network must be on a VLAN that is different from the edge overlay VLAN.
  • Provides a dedicated VLAN for inter-site communication.

You must allocate another VLAN in the data center infrastructure.

NSX Edge Uplink Policy Design

Table 28. Design Decisions on the NSX Edge Uplink Policy

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-EDGE-NET-006

Create one uplink profile for the edge nodes with three teaming policies.

  • Default teaming policy of load balance source with both active uplinks uplink-1 and uplink-2.

  • Named teaming policy of failover order with a single active uplink uplink-1 without standby uplinks.

  • Named teaming policy of failover order with a single active uplink uplink-2 without standby uplinks.

  • An NSX Edge node that uses a single N-VDS can have only one uplink profile.

  • For increased resiliency and performance, supports the concurrent use of both edge uplinks through both physical NICs on the ESXi hosts.

  • The default teaming policy increases overlay performance and availability by using multiple TEPs, and balancing of overlay traffic.

  • By using named teaming policies, you can connect an edge uplink to a specific host uplink and from there to a specific top of rack switch in the data center.

  • Enables ECMP in each availability zone because the NSX Edge nodes can uplink to the physical network over two different VLANs.

You can use this policy only with ESXi hosts. Edge virtual machines must use the failover order teaming policy.
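The decision above can be expressed as data. The following sketch models the one default and two named teaming policies, loosely following the shape of an NSX-T uplink profile; the field names and profile name are illustrative, not an exact API payload:

```python
# Sketch of the three teaming policies as data, modeled loosely on the
# NSX-T uplink profile schema. Names and field layout are illustrative.
edge_uplink_profile = {
    "display_name": "edge-uplink-profile",  # hypothetical profile name
    "teaming": {  # default policy: load balance source over both uplinks
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "named_teamings": [
        {   # failover order, pinned to uplink-1, no standby uplinks
            "name": "uplink-1-only",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        },
        {   # failover order, pinned to uplink-2, no standby uplinks
            "name": "uplink-2-only",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
        },
    ],
}

# One default policy plus two named policies: the three in this decision.
print(1 + len(edge_uplink_profile["named_teamings"]))  # 3
```

Structuring the profile this way makes the intent of the named teaming policies explicit: each pins traffic to exactly one uplink, with no standby, so a VLAN segment can be steered to a specific top of rack switch.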

Life Cycle Management Design

Table 29. Design Decisions on Life Cycle Management of NSX-T Data Center

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-LCM-001

Use SDDC Manager to perform the life cycle management of NSX Manager and related components in the management domain.

Because the deployment scope of SDDC Manager covers the full SDDC stack, SDDC Manager performs patching, update, or upgrade of the management domain as a single process.

The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager.

Table 30. Design Decisions on Life Cycle Management of NSX-T Data Center for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-LCM-FED-001

Use the upgrade coordinator in NSX-T Data Center to perform life cycle management on the NSX Global Manager appliances.

The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager.

You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation.

You must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and the underlying SDDC Manager infrastructure first.

VCF-MGMT-NSX-LCM-FED-002

Establish an operations practice to ensure that prior to the upgrade of any workload domain, the impact of any version upgrades is evaluated regarding the need to upgrade NSX Global Manager.

The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other.

Because the version of SDDC Manager in this design does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented.

The administrator must establish and follow an operational practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.

VCF-MGMT-NSX-LCM-FED-003

Establish an operations practice to ensure that prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and workload domains.

The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other.

Because the version of SDDC Manager in this design does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented.

The administrator must establish and follow an operational practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.

Routing Design for a Single VMware Cloud Foundation Instance

Table 31. Design Decisions on the High Availability Mode of Tier-0 Gateways

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-001

Deploy an active-active Tier-0 gateway.

Supports ECMP north-south routing on all Edge nodes in the NSX Edge cluster.

Active-active Tier-0 gateways cannot provide stateful services such as NAT.

Table 32. Design Decisions on Edge Uplink Configuration for North-South Routing

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-002

To enable ECMP between the Tier-0 gateway and the Layer 3 devices (ToR switches or upstream devices), create two VLANs.

The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANS and each Edge node in the cluster has an interface on each VLAN.

Supports multiple equal-cost routes on the Tier-0 gateway and provides more resiliency and better bandwidth use in the network.

Additional VLANs are required.

VCF-MGMT-NSX-SDN-003

Assign a named teaming policy to the VLAN segments to the Layer 3 device pair.

Pins the VLAN traffic on each segment to its target edge node interface. From there the traffic is directed to the host physical NIC that is connected to the target top of rack switch.

None.

VCF-MGMT-NSX-SDN-004

Create a VLAN transport zone for edge uplink traffic.

Enables the configuration of VLAN segments on the N-VDS in the edge nodes.

Additional VLAN transport zones are required if the edge nodes are not connected to the same top of rack switch pair.

Table 33. Design Decisions on Dynamic Routing

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-005

Use BGP as the dynamic routing protocol.

  • Enables dynamic routing by using NSX-T Data Center.

  • BGP offers increased scale and flexibility.

  • BGP is a proven protocol that is designed for peering between networks under independent administrative control - data center networks and the NSX-T Data Center SDN.

  • An SDDC architecture with multiple availability zones or multiple VMware Cloud Foundation instances requires BGP.

In environments where BGP cannot be used, you must configure and manage static routes.

VCF-MGMT-NSX-SDN-006

Configure the BGP Keep Alive Timer to 4 and Hold Down Timer to 12 or lower between the top of rack switches and the Tier-0 gateway.

These timers must be aligned with the data center fabric design of your organization.

Balances fast failure detection between the top of rack switches and the Tier-0 gateway against overburdening the top of rack switches with keep-alive traffic.

By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down.
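The 4/12 values in this decision follow the conventional BGP relationship of a hold timer at least three times the keepalive interval. A minimal sketch of that check (the helper function is hypothetical, the values are from the decision above):

```python
def validate_bgp_timers(keepalive_s: int, hold_s: int) -> None:
    """Check the conventional BGP timer relationship (hold >= 3 x keepalive)."""
    if hold_s < 3 * keepalive_s:
        raise ValueError("hold timer should be at least 3x the keepalive timer")

# Values from this design: keepalive 4 seconds, hold 12 seconds.
validate_bgp_timers(keepalive_s=4, hold_s=12)

# Compared with common defaults of 60/180 seconds, a failed neighbor is
# declared down after 12 seconds instead of 180, at the cost of more
# frequent keepalive messages on the top of rack switches.
```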

VCF-MGMT-NSX-SDN-007

Do not enable Graceful Restart between BGP neighbors.

Avoids loss of traffic.

On the Tier-0 gateway, BGP peers from all the gateways are always active. On a failover, the Graceful Restart capability increases the time a remote neighbor takes to select an alternate Tier-0 gateway. As a result, BFD-based convergence is delayed.

None.

VCF-MGMT-NSX-SDN-008

Enable helper mode for Graceful Restart mode between BGP neighbors.

Avoids loss of traffic.

During a router restart, helper mode works with the graceful restart capability of upstream routers to maintain the forwarding table, which in turn continues to forward packets to a restarting neighbor even after the BGP timers have expired, avoiding loss of traffic.

None.

VCF-MGMT-NSX-SDN-009

Enable Inter-SR iBGP routing.

If all northbound eBGP sessions on an edge node are down, north-south traffic continues to flow by routing through a different edge node.

None.

Table 34. Design Decisions on the Tier-1 Gateway Configuration

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-010

Deploy a Tier-1 gateway and connect it to the Tier-0 gateway.

Creates a two-tier routing architecture.

Abstracts the NSX logical components which interact with the physical data center from the logical components which provide SDN services.

A Tier-1 gateway can only be connected to a single Tier-0 gateway.

In cases where multiple Tier-0 gateways are required, you must create multiple Tier-1 gateways.

VCF-MGMT-NSX-SDN-011

Deploy a Tier-1 gateway to the NSX Edge cluster.

Enables stateful services, such as load balancers and NAT, for SDDC management components.

Because a Tier-1 gateway always works in active-standby mode, the gateway supports stateful services.

None.

VCF-MGMT-NSX-SDN-012

Deploy a Tier-1 gateway in non-preemptive failover mode.

Ensures that after a failed NSX Edge transport node is back online, it does not take over the gateway services thus causing a short service outage.

None.

VCF-MGMT-NSX-SDN-013

Enable standby relocation of the Tier-1 gateway.

Ensures that if an edge failure occurs, a standby Tier-1 gateway is created on another edge node.

None.

Table 35. Design Decisions on North-South Routing for Multiple Availability Zones

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-014

Extend the uplink VLANs to the top of rack switches so that the VLANs are stretched between both availability zones.

Because the NSX Edge nodes fail over between the availability zones, extending the VLANs ensures uplink connectivity to the top of rack switches in both availability zones regardless of the zone in which the NSX Edge nodes currently reside.

You must configure a stretched Layer 2 network between the availability zones by using physical network infrastructure.

VCF-MGMT-NSX-SDN-015

Provide this SVI configuration on the top of rack switches or upstream Layer 3 devices.

  • In the second availability zone, configure the top of rack switches or upstream Layer 3 devices with an SVI on each of the two uplink VLANs.

  • Make the top of rack switch SVI in both availability zones part of a common stretched Layer 2 network between the availability zones.

Enables the communication of the NSX Edge nodes to the top of rack switches in both availability zones over the same uplink VLANs.

You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure.

VCF-MGMT-NSX-SDN-016

Provide this VLAN configuration.

  • Use two VLANs to enable ECMP between the Tier-0 gateway and the Layer 3 devices (top of rack switches or upstream devices).

  • The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs and each NSX Edge node has an interface on each VLAN.

Supports multiple equal-cost routes on the Tier-0 gateway, and provides more resiliency and better bandwidth use in the network.

  • Extra VLANs are required.

  • Requires stretching uplink VLANs between availability zones.

VCF-MGMT-NSX-SDN-017

Create an IP prefix list that permits access to route advertisement by any network instead of using the default IP prefix list.

Used in a route map to prepend a path to one or more autonomous system (AS-path prepend) for BGP neighbors in the second availability zone.

You must manually create an IP prefix list that is identical to the default one.

VCF-MGMT-NSX-SDN-018

Create a route map-out that contains the custom IP prefix list and an AS-path prepend value set to the Tier-0 local AS added twice.

  • Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.

  • Ensures that all ingress traffic passes through the first availability zone.

You must manually create the route map.

The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

VCF-MGMT-NSX-SDN-019

Create an IP prefix list that permits access to route advertisement by network 0.0.0.0/0 instead of using the default IP prefix list.

Used in a route map to configure local-preference on the learned default route for BGP neighbors in the second availability zone.

You must manually create an IP prefix list that is identical to the default one.

VCF-MGMT-NSX-SDN-020

Apply a route map-in that contains the IP prefix list for the default route 0.0.0.0/0 and assigns a lower local-preference, for example, 80, to the learned default route and a lower local-preference, for example, 90, to any other learned routes.

  • Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.

  • Ensures that all egress traffic passes through the first availability zone.

You must manually create the route map.

The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

VCF-MGMT-NSX-SDN-021

Configure the neighbors of the second availability zone to use the route maps as In and Out filters respectively.

Makes the path in and out of the second availability zone less preferred because the AS path is longer. As a result, all traffic passes through the first zone.

The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.
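Decisions VCF-MGMT-NSX-SDN-017 through 021 rely on two BGP best-path attributes: local-preference steers egress traffic and AS-path prepending steers ingress traffic. The following sketch models a simplified best-path selection with illustrative route values, not output from a live fabric:

```python
def best_path(routes):
    """Simplified BGP selection: highest local-preference, then shortest AS path."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

LOCAL_AS = 65003  # hypothetical Tier-0 local AS

# Egress: the route map-in gives the default route learned from the second
# availability zone local-preference 80, so the default learned from the
# first zone (default local-preference 100) wins.
default_az1 = {"via": "az1", "local_pref": 100, "as_path": [65001]}
default_az2 = {"via": "az2", "local_pref": 80,  "as_path": [65002]}
assert best_path([default_az1, default_az2])["via"] == "az1"

# Ingress: the route map-out prepends the Tier-0 local AS twice toward the
# second availability zone, so upstream routers see a longer AS path via AZ2.
advert_az1 = {"via": "az1", "local_pref": 100, "as_path": [LOCAL_AS]}
advert_az2 = {"via": "az2", "local_pref": 100,
              "as_path": [LOCAL_AS, LOCAL_AS, LOCAL_AS]}
assert best_path([advert_az1, advert_az2])["via"] == "az1"
```

If the BGP sessions to the first availability zone are lost, only the AZ2 routes remain and traffic fails over automatically, which is the behavior the design implications describe.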

Routing Design for Multiple VMware Cloud Foundation Instances

Table 36. Design Decisions on the Tier-0 Gateway Configuration for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-FED-001

Extend the management domain active-active Tier-0 gateway to the second VMware Cloud Foundation instance.

  • Supports ECMP north-south routing on all nodes in the NSX Edge cluster.

  • Enables support for cross-instance Tier-1 gateways and cross-instance network segments.

Active-active Tier-0 gateways cannot provide stateful services such as NAT.

VCF-MGMT-NSX-SDN-FED-002

Set the Tier-0 gateway as primary for all VMware Cloud Foundation instances.

  • In NSX Federation, a Tier-0 gateway allows egress traffic from connected Tier-1 gateways only in its primary locations.

  • Local ingress and egress traffic is controlled independently at the Tier-1 level. No segments are provisioned directly to the Tier-0 gateway.

  • A mixture of network spans (local to a VMware Cloud Foundation instance or spanning multiple instances) is enabled without requiring additional Tier-0 gateways and hence edge nodes.

  • If a failure in a VMware Cloud Foundation instance occurs, the local-instance networking in the other instance remains available without manual intervention.

None.

Table 37. Design Decisions on Dynamic Routing for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-FED-003

From the global Tier-0 gateway, establish BGP neighbor peering to the ToR switches connected to the second VMware Cloud Foundation instance.

  • Enables the learning and advertising of routes in the second VMware Cloud Foundation instance.
  • Facilitates a potential automated failover of networks from the first to the second VMware Cloud Foundation instance.

None.

Table 38. Design Decisions on the Tier-1 Gateway Configuration for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-FED-004

Use Tier-1 gateways to control the span of networks and ingress and egress traffic in the VMware Cloud Foundation instances.

Enables a mixture of network spans (isolated to a VMware Cloud Foundation instance or spanning multiple instances) without requiring additional Tier-0 gateways and hence edge nodes.

To control location span, a Tier-1 gateway must be assigned to an edge cluster and therefore has the Tier-1 SR component. East-west traffic between Tier-1 gateways with SRs needs to physically traverse an edge node.

VCF-MGMT-NSX-SDN-FED-005

Use a global cross-instance Tier-1 gateway and connect it to the Tier-0 gateway for cross-instance networking.

  • Enables network span between the VMware Cloud Foundation instances because NSX network segments follow the span of the gateway they are attached to.

  • Creates a two-tier routing architecture.

None.

VCF-MGMT-NSX-SDN-FED-006

Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the global cross-instance Tier-1 gateway. Set the first VMware Cloud Foundation instance as primary and the second instance as secondary.

  • Enables cross-instance network span between the first and second VMware Cloud Foundation instances.
  • Enables deterministic ingress and egress traffic for the cross-instance network.
  • If a VMware Cloud Foundation instance failure occurs, enables deterministic failover of the Tier-1 traffic flow.
  • During the recovery of the inaccessible VMware Cloud Foundation instance, enables deterministic failback of the Tier-1 traffic flow, preventing unintended asymmetrical routing.
  • Eliminates the need to use BGP attributes in the first and second VMware Cloud Foundation instances to influence location preference and failover.

You must manually fail over and fail back the cross-instance network from the standby NSX Global Manager.

VCF-MGMT-NSX-SDN-FED-007

Allocate a Tier-1 gateway in each instance for instance-specific networks and connect it to the cross-instance Tier-0 gateway.

  • Creates a two-tier routing architecture.
  • Enables local-instance networks that do not span the VMware Cloud Foundation instances.
  • Guarantees that local-instance networks remain available if a failure occurs in another VMware Cloud Foundation instance.

None.

VCF-MGMT-NSX-SDN-FED-008

Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the instance-specific Tier-1 gateway for that VMware Cloud Foundation instance.

  • Enables instance-specific networks to be isolated to their specific instances.
  • Enables deterministic flow of ingress and egress traffic for the instance-specific networks.

You can use the service router that is created for the Tier-1 gateway for networking services. However, such configuration is not required for network connectivity.

VCF-MGMT-NSX-SDN-FED-009

Set each local-instance Tier-1 gateway only as primary in the home instance. Avoid setting the gateway as secondary in the other instances.

Prevents the need to use BGP attributes in primary and secondary instances to influence the instance ingress-egress preference.

None.

Load Balancing Design

Table 39. Design Decisions on Load Balancing

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-022

Deploy a standalone Tier-1 gateway to support advanced stateful services such as load balancing for other management components.

Provides independence between north-south Tier-1 gateways to support advanced deployment scenarios.

You must add a separate Tier-1 gateway.

VCF-MGMT-NSX-SDN-023

Connect the standalone Tier-1 gateway to cross-instance NSX segments.

Provides load balancing to applications connected to the cross-instance network.

For information on the NSX segment configuration for vRealize Suite, see VMware Validated Design for vRealize Lifecycle and Access Management.

You must connect the gateway to each network that requires load balancing.

VCF-MGMT-NSX-SDN-024

Configure the standalone Tier-1 gateway with static routes to the gateways of the networks it is connected to.

Because the Tier-1 gateway is standalone, it does not autoconfigure its routes.

You must configure the gateway for each network that requires load balancing.
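Because the standalone Tier-1 gateway does not autoconfigure its routes, each network that requires load balancing needs an explicit static route whose next hop is reachable on a connected segment. A minimal sanity check of that rule, with hypothetical segment and gateway addresses:

```python
import ipaddress

# Hypothetical static routes on the standalone Tier-1 gateway.
static_routes = {
    "0.0.0.0/0": "192.168.11.1",       # default route via the segment gateway
    "192.168.31.0/24": "192.168.11.1",  # another network via the same next hop
}

# Hypothetical segment the standalone gateway is connected to.
connected_segment = ipaddress.ip_network("192.168.11.0/24")

# Every next hop must sit on a network the standalone gateway is connected
# to; otherwise the route can never be resolved and traffic is dropped.
for prefix, next_hop in static_routes.items():
    assert ipaddress.ip_address(next_hop) in connected_segment
```

Repeating this pattern for each connected network is the manual work the design implication calls out.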

Table 40. Design Decisions on Load Balancing for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-FED-010

Deploy a standalone Tier-1 gateway in the second VMware Cloud Foundation instance.

Provides a cold-standby non-global service router instance for the second VMware Cloud Foundation instance to support services on the cross-instance network which require advanced services not currently supported as NSX-T global objects.

  • You must add a separate Tier-1 gateway.

  • You must manually configure any services and synchronize them between the non-global service router instances in the first and second VMware Cloud Foundation instances.

  • To avoid a network conflict between the two VMware Cloud Foundation instances, make sure that primary and standby networking services are not both active at the same time.

VCF-MGMT-NSX-SDN-FED-011

Connect the standalone Tier-1 gateway in the second VMware Cloud Foundation instance to the cross-instance NSX segment.

Provides load balancing to applications connected to the cross-instance network in the second VMware Cloud Foundation instance.

You must connect the gateway to each network that requires load balancing.

VCF-MGMT-NSX-SDN-FED-012

Configure the standalone Tier-1 gateway in the second VMware Cloud Foundation instance with static routes to the gateways of the networks it is connected to.

Because the Tier-1 gateway is standalone, it does not autoconfigure its routes.

You must configure the gateway for each network that requires load balancing.

Table 41. Design Decisions on Load Balancer Operations for Multiple VMware Cloud Foundation Instances

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-FED-013

Establish an operational practice to reproduce any changes made on the network service configuration on the load balancer instance in the first VMware Cloud Foundation instance to the disconnected failover load balancer in the second instance.

Keeps the network service in the failover load balancer instance ready for activation if a failure in the first VMware Cloud Foundation instance occurs.

Because network services are not supported as global objects, you must configure them manually in each VMware Cloud Foundation instance. The load balancer service in one instance must be connected and active, while the service in the other instance must be disconnected and inactive.

  • Because of incorrect configuration between the VMware Cloud Foundation instances, the load balancer service in the second instance might come online with an invalid or incomplete configuration.

  • If both VMware Cloud Foundation instances are online and active at the same time, a conflict between services could occur resulting in a potential outage.

  • The administrator must establish and follow an operational practice by using a runbook or automated process to ensure that configuration changes are reproduced in each VMware Cloud Foundation instance.

VCF-MGMT-NSX-SDN-FED-014

Establish an operational practice to ensure that during failure of the first VMware Cloud Foundation instance, a service is manually brought online in the second VMware Cloud Foundation instance.

Provides support for the management applications that are failed over to the second VMware Cloud Foundation instance.

Because network services are not supported as global objects, you must configure them manually in each VMware Cloud Foundation instance. The load balancer service in one instance must be connected and active, while the service in the other instance must be disconnected and inactive.

The administrator must establish and follow an operational practice by using a runbook or automated process to ensure the correct services are brought online.

VCF-MGMT-NSX-SDN-FED-015

Establish an operational practice to manually bring offline the load balancer services in the first VMware Cloud Foundation instance after failover to the second instance and during the recovery of the first instance.

During the recovery of the first VMware Cloud Foundation instance, the service might come back online and cause a potential conflict with the active services still running on the first instance.

Because network services are not supported as global objects, you must manually fail over services in the recovery VMware Cloud Foundation instance. The load balancer service in one instance must be connected and active, while the service in the other instance must be disconnected and inactive.

  • The administrator must establish and follow an operational practice by using a runbook or automated process to ensure the correct services are brought offline as quickly as practical.

  • If both VMware Cloud Foundation instances are online and active at the same time, a conflict between the services might occur resulting in a potential outage.

Overlay Design

Table 42. Design Decisions on ESXi Host Transport Nodes

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-025

Enable all ESXi hosts in the management domain as transport nodes in NSX-T Data Center.

Enables distributed routing, logical segments, and distributed firewall.

None.

VCF-MGMT-NSX-SDN-026

Configure each ESXi host as a transport node without using transport node profiles.

Enables the participation of ESXi hosts and the virtual machines on them in NSX overlay and VLAN networks.

Transport node profiles can only be applied at the cluster level. Because in an environment with multiple availability zones each availability zone is connected to a different set of VLANs, you cannot use a transport node profile.

You must configure each transport node with an uplink profile individually.

Table 43. Design Decisions on Host TEP Addressing

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-027

Use DHCP to assign IP addresses to the host TEP interfaces.

Required for deployments where a cluster spans Layer 3 network domains, such as multiple availability zones or management clusters that span Layer 3 domains.

A DHCP server is required for the host overlay VLANs.

Table 44. Design Decision on Virtual Switches for NSX-T Data Center

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-028

Use the vSphere Distributed Switch for the default cluster in the management domain that is enabled for NSX-T Data Center.

  • Uses the existing vSphere Distributed Switch.

  • Provides NSX logical segment capabilities to support advanced use cases.

To use features such as distributed routing, management workloads must be connected to NSX segments.

Management occurs jointly from the vSphere Client and NSX Manager. However, you must perform all network monitoring in the NSX Manager user interface or another solution.

Table 45. Design Decisions on Geneve Overlay

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-029

To provide virtualized network capabilities to management workloads, use overlay networks with NSX Edge nodes and distributed routing.

  • Creates isolated, multi-tenant broadcast domains across data center fabrics to deploy elastic, logical networks that span physical network boundaries.

  • Enables advanced deployment topologies by introducing Layer 2 abstraction from the data center networks.

Requires configuring transport networks with an MTU size of at least 1,600 bytes.
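The 1,600-byte minimum follows from the Geneve encapsulation overhead added to a standard 1,500-byte guest frame. A back-of-the-envelope calculation, assuming an IPv4 underlay and no Geneve options:

```python
# Geneve overhead for an IPv4 underlay, per packet (bytes).
inner_payload = 1500   # guest (inner) MTU
inner_ethernet = 14    # encapsulated inner Ethernet header
geneve_header = 8      # base Geneve header, no options
outer_udp = 8          # outer UDP header
outer_ipv4 = 20        # outer IPv4 header

required_mtu = (inner_payload + inner_ethernet + geneve_header
                + outer_udp + outer_ipv4)
assert required_mtu == 1550

# 1,600 bytes covers a full guest frame with headroom for Geneve options;
# jumbo frames (for example, 9,000 bytes) are common in practice.
```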

Table 46. Design Decision on the Transport Zone Configuration for NSX-T Data Center

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-030

Create a single overlay transport zone for all overlay traffic across the management domain and NSX Edge nodes.

  • Ensures that overlay segments are connected to an NSX Edge node for services and north-south routing.

  • Ensures that all segments are available to all ESXi hosts and NSX Edge nodes configured as transport nodes.

None.

VCF-MGMT-NSX-SDN-031

Create a single VLAN transport zone for uplink VLAN traffic that is applied only to NSX Edge nodes.

Ensures that uplink VLAN segments are configured on the NSX Edge transport nodes.

If VLAN segments are needed on hosts, you must create another VLAN transport zone for the host transport nodes only.

Table 47. Design Decisions on the Uplink Profile for ESXi Transport Nodes

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SDN-032

Create an uplink profile with a load balance source teaming policy with two active uplinks for ESXi hosts.

For increased resiliency and performance, supports the concurrent use of both physical NICs on the ESXi hosts that are configured as transport nodes.

None.

Table 48. Design Decisions on Segment Replication Mode

Decision ID

Design Decision

Design Justification

Design Implications

VCF-MGMT-NSX-SDN-033

Use hierarchical two-tier replication on all overlay segments.

Hierarchical two-tier replication is more efficient because it reduces the number of ESXi hosts to which the source ESXi host must replicate traffic.

None.
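The efficiency gain can be sketched by counting the copies the source host sends for one broadcast, unknown unicast, or multicast (BUM) packet. The host counts below are illustrative assumptions, not a sizing recommendation:

```python
def head_end_copies(hosts_per_segment, source_segment):
    """Head-end mode: source replicates directly to every other transport node."""
    return sum(hosts_per_segment.values()) - 1

def two_tier_copies(hosts_per_segment, source_segment):
    """Two-tier mode: source replicates to local hosts plus one proxy per
    remote Layer 2 segment; each proxy fans out to its own segment."""
    local = hosts_per_segment[source_segment] - 1
    remote_proxies = len(hosts_per_segment) - 1
    return local + remote_proxies

# Illustrative: 4 hosts in each of 4 TEP Layer 2 segments.
hosts = {"seg-a": 4, "seg-b": 4, "seg-c": 4, "seg-d": 4}
assert head_end_copies(hosts, "seg-a") == 15
assert two_tier_copies(hosts, "seg-a") == 6  # 3 local + 3 remote proxies
```

The source host's replication load thus grows with the number of segments rather than the number of hosts, which is why the design selects this mode.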

Information Security and Access Control Design

Table 49. Design Decisions on Certificate Management in NSX Manager

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SEC-001

Replace the default self-signed certificate of the NSX Manager instance for the management domain with a certificate that is signed by a third-party certificate authority.

Ensures that the communication between administrators and NSX Manager is encrypted by using a trusted certificate.

Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests.

VCF-MGMT-NSX-SEC-002

Use a SHA-2 algorithm or stronger when signing certificates.

The SHA-1 algorithm is considered less secure and has been deprecated.

Not all certificate authorities support SHA-2.

Table 50. Design Decisions on Certificate Management in NSX Global Manager

Decision ID

Design Decision

Design Justification

Design Implication

VCF-MGMT-NSX-SEC-FED-001

Replace the default self-signed certificate of the NSX Global Manager instance for the management domain with a certificate that is signed by a third-party certificate authority.

Ensures that the communication between administrators and the NSX Global Manager instance is encrypted by using a trusted certificate.

Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests.

VCF-MGMT-NSX-SEC-FED-002

Establish an operational practice to capture and update the thumbprint of the NSX Local Manager certificate on NSX Global Manager every time the certificate is updated by using SDDC Manager.

Ensures secured connectivity between the NSX Manager instances.

Each certificate has its own unique thumbprint. NSX Global Manager stores the unique thumbprint of the NSX Local Manager instances for enhanced security.

If an authentication failure between NSX Global Manager and NSX Local Manager occurs, objects that are created from NSX Global Manager will not be propagated on to the SDN.

The administrator must establish and follow an operational practice by using a runbook or automated process to ensure that the thumbprint is up to date.