NSX-T Data Center provides networking services, such as load balancing, routing, and virtual networking, to the management workloads in VMware Cloud Foundation.

Figure 1. NSX-T Data Center Logical Design for an Environment with a Single VMware Cloud Foundation Instance

The three-node NSX Manager cluster is the central component of the logical design. It is connected to the two-node NSX Edge cluster and to the ESXi host transport nodes. For access to the management hosts, NSX Manager is connected to the management domain vCenter Server. Users access NSX Manager by using the user interface or the API.
Figure 2. NSX-T Data Center Logical Design for an Environment with Multiple VMware Cloud Foundation Instances
The three-node NSX Global Manager and NSX Local Manager clusters in each VMware Cloud Foundation instance are the central components of the logical design. The NSX Global Manager cluster in VMware Cloud Foundation instance A is active, and the NSX Global Manager cluster in VMware Cloud Foundation instance B is in standby mode. In each VMware Cloud Foundation instance, the NSX Local Manager cluster is connected to the two-node NSX Edge cluster and to the ESXi host transport nodes. For access to the management hosts in each VMware Cloud Foundation instance, the NSX Local Manager cluster is connected to the management domain vCenter Server. Users access NSX Manager by using the user interface or the API.
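Besides the user interface, NSX Manager exposes a REST API. A minimal sketch of querying the manager cluster status endpoint (`/api/v1/cluster/status` in the NSX-T Data Center REST API); the FQDN and credentials shown are placeholders:

```python
import base64
import json
import ssl
import urllib.request


def cluster_status_url(manager_fqdn: str) -> str:
    """Build the NSX-T cluster status endpoint URL for a manager node or VIP."""
    return f"https://{manager_fqdn}/api/v1/cluster/status"


def get_cluster_status(manager_fqdn: str, user: str, password: str) -> dict:
    """Query the NSX Manager cluster status using basic authentication.

    Lab-only sketch: TLS verification is disabled below; in production,
    verify the manager certificate instead.
    """
    req = urllib.request.Request(cluster_status_url(manager_fqdn))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # lab-only: skip certificate checks
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.loads(resp.read())


# Example (hypothetical values):
# status = get_cluster_status("nsx01.sfo.rainpole.io", "admin", "****")
# print(status)
```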

An NSX-T Data Center deployment consists of these components:

  • Unified appliances that have both the NSX Local Manager and NSX Controller roles. They provide management and control plane capabilities.

  • NSX Edge nodes that provide north-south connectivity and advanced services such as load balancing.

  • ESXi hosts in the management domain, registered as NSX transport nodes to provide distributed routing and firewall services to management workloads.

To support the requirements for NSX Federation with multiple VMware Cloud Foundation instances, you add the following components:

  • An NSX Global Manager cluster in each of the first two VMware Cloud Foundation instances.

    You deploy the NSX Global Manager cluster in each VMware Cloud Foundation instance so that you can use NSX Federation for global management of networking and security services.

  • An additional infrastructure VLAN in each VMware Cloud Foundation instance to carry instance-to-instance traffic.

Table 1. NSX-T Data Center Logical Components

The table compares three topologies: a single VMware Cloud Foundation instance with a single availability zone, a single instance with multiple availability zones, and multiple VMware Cloud Foundation instances.
NSX Manager Cluster

Single VMware Cloud Foundation Instance with a Single Availability Zone:

  • Three medium-size NSX Local Manager nodes with an internal virtual IP (VIP) address for high availability

  • vSphere HA protects the NSX Manager cluster nodes, applying high restart priority

  • A vSphere DRS rule keeps the NSX Manager nodes running on different hosts

Single VMware Cloud Foundation Instance with Multiple Availability Zones:

  • Three medium-size NSX Local Manager nodes in the first availability zone with an internal VIP address for high availability

  • A vSphere should-run DRS rule keeps the NSX Manager nodes running in the first availability zone. Failover to the second availability zone occurs only if the first zone fails.

  • Within the availability zone, vSphere HA protects the cluster nodes, applying high restart priority

  • Within the availability zone, a vSphere DRS rule keeps the nodes running on different hosts

Multiple VMware Cloud Foundation Instances:

In the first VMware Cloud Foundation instance:

  • Three medium-size NSX Global Manager nodes with an internal VIP address for high availability. The NSX Global Manager cluster is set as active.

In the second VMware Cloud Foundation instance:

  • Three medium-size NSX Global Manager nodes with an internal VIP address for high availability. The NSX Global Manager cluster is set as standby.

In each VMware Cloud Foundation instance:

  • Three medium-size NSX Local Manager nodes with an internal VIP address for high availability

  • vSphere HA protects the nodes of both the NSX Global Manager and NSX Local Manager clusters, applying high restart priority

  • A vSphere DRS rule keeps the nodes of both clusters running on different hosts
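The DRS anti-affinity intent above, keeping manager nodes on different hosts, can be expressed as a simple placement check; the node and host names below are hypothetical:

```python
from collections import Counter


def anti_affinity_violations(placement: dict[str, str]) -> list[str]:
    """Return the hosts that run more than one NSX Manager node.

    `placement` maps a manager node name to the ESXi host it runs on.
    An empty result means the anti-affinity rule is satisfied.
    """
    host_counts = Counter(placement.values())
    return sorted(h for h, n in host_counts.items() if n > 1)


# Hypothetical placements:
ok = {"nsx-mgr-01": "esxi-01", "nsx-mgr-02": "esxi-02", "nsx-mgr-03": "esxi-03"}
bad = {"nsx-mgr-01": "esxi-01", "nsx-mgr-02": "esxi-01", "nsx-mgr-03": "esxi-03"}

print(anti_affinity_violations(ok))   # []
print(anti_affinity_violations(bad))  # ['esxi-01']
```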

NSX Edge Cluster

Single VMware Cloud Foundation Instance with a Single Availability Zone:

  • Two medium-size NSX Edge nodes

  • vSphere HA protects the NSX Edge nodes, applying high restart priority

  • A vSphere DRS rule keeps the NSX Edge nodes running on different hosts

Single VMware Cloud Foundation Instance with Multiple Availability Zones:

  • Two medium-size NSX Edge nodes in the first availability zone

  • A vSphere should-run DRS rule keeps the NSX Edge nodes running in the first availability zone. Failover to the second availability zone occurs only if the first zone fails.

  • Within the availability zone, vSphere HA protects the NSX Edge nodes, applying high restart priority

  • Within the availability zone, a vSphere DRS rule keeps the NSX Edge nodes running on different hosts

Multiple VMware Cloud Foundation Instances:

In each VMware Cloud Foundation instance:

  • Two medium-size NSX Edge nodes

  • vSphere HA protects the NSX Edge nodes, applying high restart priority

  • A vSphere DRS rule keeps the NSX Edge nodes running on different hosts

Transport Nodes

Single VMware Cloud Foundation Instance with a Single Availability Zone:

  • Four ESXi host transport nodes

  • Two Edge transport nodes

Single VMware Cloud Foundation Instance with Multiple Availability Zones:

  • Four ESXi host transport nodes in each availability zone

  • Two Edge transport nodes in the first availability zone

Multiple VMware Cloud Foundation Instances:

In each VMware Cloud Foundation instance:

  • Four ESXi host transport nodes

  • Two Edge transport nodes

Transport Zones

Single VMware Cloud Foundation Instance with a Single Availability Zone:

  • One VLAN transport zone for NSX Edge uplink traffic

  • One overlay transport zone for SDDC management components and NSX Edge nodes

Single VMware Cloud Foundation Instance with Multiple Availability Zones:

  • One VLAN transport zone for NSX Edge uplink traffic

  • One overlay transport zone for SDDC management components and NSX Edge nodes

Multiple VMware Cloud Foundation Instances:

In each VMware Cloud Foundation instance:

  • One VLAN transport zone for NSX Edge uplink traffic

  • One overlay transport zone for SDDC management components and NSX Edge nodes
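The transport zone layout above can be modeled to show which nodes attach where: per this design, ESXi host transport nodes join the overlay transport zone, while NSX Edge transport nodes join both the overlay zone and the VLAN zone for uplinks. A sketch with hypothetical node and zone names:

```python
# Hypothetical transport zone membership, following the layout described above.
OVERLAY_TZ = "overlay-tz"   # SDDC management components and NSX Edge nodes
VLAN_TZ = "edge-uplink-tz"  # NSX Edge uplink traffic

memberships = {
    "esxi-01": {OVERLAY_TZ},
    "esxi-02": {OVERLAY_TZ},
    "esxi-03": {OVERLAY_TZ},
    "esxi-04": {OVERLAY_TZ},
    "edge-01": {OVERLAY_TZ, VLAN_TZ},
    "edge-02": {OVERLAY_TZ, VLAN_TZ},
}


def nodes_in_zone(zone: str) -> list[str]:
    """List the transport nodes attached to a transport zone."""
    return sorted(n for n, zones in memberships.items() if zone in zones)


print(nodes_in_zone(VLAN_TZ))  # ['edge-01', 'edge-02']
```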

VLANs and IP Subnets Allocated to NSX-T Data Center

For information about the networks for virtual infrastructure management, see Distributed Port Group and VMkernel Adapter Design for the Management Domain.

Single VMware Cloud Foundation Instance with a Single Availability Zone:

  • Host Overlay

  • Uplink01

  • Uplink02

  • Edge Overlay

See VLANs and Subnets for a Single VMware Cloud Foundation Instance with a Single Availability Zone.

Single VMware Cloud Foundation Instance with Multiple Availability Zones:

Networks for the first availability zone:

  • Host Overlay

  • Uplink01 (stretched)

  • Uplink02 (stretched)

  • Edge Overlay (stretched)

Networks for the second availability zone:

  • Host Overlay

See Networking for a Single VMware Cloud Foundation Instance with Multiple Availability Zones.

Multiple VMware Cloud Foundation Instances:

In each VMware Cloud Foundation instance in an SDDC with two or more instances:

  • Host Overlay

  • Uplink01

  • Uplink02

  • Edge Overlay

  • Edge RTEP, for a configuration with a single availability zone in each VMware Cloud Foundation instance

  • Edge RTEP (stretched), for a configuration with multiple availability zones in a VMware Cloud Foundation instance
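For illustration only, a hypothetical allocation for the networks listed above; all VLAN IDs and subnets are placeholders, not values prescribed by this design:

```
Function        VLAN ID   Subnet
Host Overlay    1634      172.16.34.0/24
Uplink01        1632      172.16.32.0/24
Uplink02        1633      172.16.33.0/24
Edge Overlay    1635      172.16.35.0/24
Edge RTEP       1636      172.16.36.0/24
```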

Routing Configuration

Single VMware Cloud Foundation Instance with a Single Availability Zone:

  • BGP

Single VMware Cloud Foundation Instance with Multiple Availability Zones:

  • BGP, with ingress and egress traffic directed to the first availability zone, with limited exceptions

Multiple VMware Cloud Foundation Instances:

  • BGP
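The BGP design above typically means eBGP peering between the uplink interfaces of the NSX Edge nodes and the top-of-rack switches. As a hypothetical sketch, not part of this design, one ToR switch's side of such a peering in FRR-style syntax, with placeholder ASNs and addresses:

```
router bgp 65001
 ! Peer with the uplink interfaces of the two NSX Edge nodes
 neighbor 172.16.32.11 remote-as 65002
 neighbor 172.16.32.12 remote-as 65002
 !
 address-family ipv4 unicast
  ! Advertise a default route so egress traffic can leave the SDDC
  neighbor 172.16.32.11 default-originate
  neighbor 172.16.32.12 default-originate
 exit-address-family
```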