NSX-T Data Center provides networking services, such as load balancing, routing, and virtual networking, to customer workloads in VMware Cloud Foundation.

Figure 1. NSX-T Data Center Logical Design for an Environment with a Single VMware Cloud Foundation Instance

The three-node NSX Manager cluster is connected to the two-node NSX Edge cluster, the ESXi host transport nodes, and the VI workload domain vCenter Server.
Figure 2. NSX-T Data Center Logical Design for an Environment with Multiple VMware Cloud Foundation Instances
Each VMware Cloud Foundation instance contains a three-node NSX Global Manager cluster and a three-node NSX Local Manager cluster. The Global Manager cluster in instance A is active, and the Global Manager cluster in instance B is standby. Each Local Manager cluster is connected to the NSX Edge cluster in its instance.

An NSX-T Data Center deployment consists of these components:

  • Unified appliances that have both the NSX Local Manager and NSX Controller roles. They provide management and control plane capabilities.

  • NSX Edge nodes that provide north-south connectivity and advanced services such as load balancing.

  • ESXi hosts in the VI workload domain that are registered as NSX transport nodes to provide distributed routing and firewall services to customer workloads.
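
The following is a minimal verification sketch for these components over the NSX-T management REST API. It assumes the documented /api/v1/cluster/status, /api/v1/transport-nodes, and /api/v1/edge-clusters endpoints of your NSX-T version; the NSX Manager FQDN and credentials are placeholders.

```python
# Minimal sketch: inventory check of an NSX-T deployment through the management API.
# The manager FQDN and credentials are placeholders; verify endpoint details against
# the API reference for your NSX-T version.
import requests

NSX_MANAGER = "nsx-manager.example.local"   # placeholder cluster VIP or node FQDN
AUTH = ("admin", "password")                # placeholder credentials

session = requests.Session()
session.auth = AUTH
session.verify = False                      # lab only; use trusted certificates in production

def get(path):
    """Send a GET request to the NSX Manager and return the parsed JSON body."""
    response = session.get(f"https://{NSX_MANAGER}{path}", timeout=30)
    response.raise_for_status()
    return response.json()

# Management and control plane health of the three-node NSX Manager cluster.
status = get("/api/v1/cluster/status")
print("Management cluster:", status.get("mgmt_cluster_status", {}).get("status"))
print("Control cluster:", status.get("control_cluster_status", {}).get("status"))

# ESXi host and NSX Edge transport nodes registered with this NSX Manager.
for node in get("/api/v1/transport-nodes").get("results", []):
    print("Transport node:", node.get("display_name"))

# NSX Edge clusters that provide north-south connectivity and load balancing.
for cluster in get("/api/v1/edge-clusters").get("results", []):
    print("Edge cluster:", cluster.get("display_name"))
```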

To support the requirements for NSX Federation with multiple VMware Cloud Foundation instances, you add the following components:

  • An NSX Global Manager cluster in each of the first two VMware Cloud Foundation instances.

    You deploy the NSX Global Manager cluster in each VMware Cloud Foundation instance so that you can use NSX Federation for global management of networking and security services.

  • An additional infrastructure VLAN in each VMware Cloud Foundation instance to carry VMware Cloud Foundation instance-to-instance traffic.
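
As a rough orientation for the active and standby roles, the sketch below reads the registered Global Managers through the Global Manager Policy API. The /global-manager/api/v1/global-infra/global-managers path and its mode field are assumptions based on the NSX Federation API; verify them against the API reference for your NSX-T version. The host name and credentials are placeholders.

```python
# Minimal sketch: listing the Global Managers of an NSX Federation and their modes.
# Assumption: the Global Manager Policy API exposes registered Global Managers at
# /global-manager/api/v1/global-infra/global-managers with a "mode" of ACTIVE or STANDBY.
# Verify the path and fields for your NSX-T version; names and credentials are placeholders.
import requests

GLOBAL_MANAGER = "nsx-gm01.instance-a.example.local"   # placeholder Global Manager VIP/FQDN
AUTH = ("admin", "password")                           # placeholder credentials

response = requests.get(
    f"https://{GLOBAL_MANAGER}/global-manager/api/v1/global-infra/global-managers",
    auth=AUTH,
    verify=False,   # lab only
    timeout=30,
)
response.raise_for_status()

for gm in response.json().get("results", []):
    # Expect one ACTIVE Global Manager (first instance) and one STANDBY (second instance).
    print(gm.get("display_name"), gm.get("mode"))
```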

Table 1. NSX-T Data Center Logical Components

The table compares the NSX-T Data Center logical components across three deployment scenarios: a single VMware Cloud Foundation instance with a single availability zone, a single VMware Cloud Foundation instance with multiple availability zones, and multiple VMware Cloud Foundation instances.

NSX Manager Cluster

Single VMware Cloud Foundation instance with a single availability zone:

  • Deployed in the default cluster of the management domain

  • Three large-size NSX Local Manager nodes with an internal virtual IP (VIP) address for high availability

  • vSphere HA protects the NSX Manager cluster nodes, applying high restart priority

  • A vSphere DRS rule keeps the NSX Manager nodes running on different hosts

Single VMware Cloud Foundation instance with multiple availability zones:

  • Deployed in the first availability zone of the default management cluster

  • Three large-size NSX Local Manager nodes with an internal VIP address for high availability

  • A vSphere should-run DRS rule keeps the NSX Manager nodes running in the first availability zone. Failover to the second availability zone occurs only if the first zone fails.

  • In the availability zone, vSphere HA protects the cluster nodes, applying high restart priority

  • In the availability zone, a vSphere DRS rule keeps the nodes running on different hosts

Multiple VMware Cloud Foundation instances:

In the first VMware Cloud Foundation instance:

  • Three medium-size NSX Global Manager nodes in the management domain with an internal VIP address for high availability. The NSX Global Manager cluster is set as active.

In the second VMware Cloud Foundation instance:

  • Three medium-size NSX Global Manager nodes in the management domain with an internal VIP address for high availability. The NSX Global Manager cluster is set as standby.

In each VMware Cloud Foundation instance:

  • Three large-size NSX Local Manager nodes in the management domain with an internal VIP address for high availability

  • vSphere HA protects the nodes of both the NSX Global Manager and NSX Local Manager clusters, applying high restart priority

  • A vSphere DRS rule keeps the nodes of both the NSX Global Manager and NSX Local Manager clusters running on different hosts
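
The anti-affinity placement for the NSX Manager nodes is created automatically by VMware Cloud Foundation. The following pyVmomi sketch only illustrates the underlying vSphere construct, a VM-VM anti-affinity DRS rule; the vCenter Server, cluster, and virtual machine names are placeholders.

```python
# Minimal sketch: a "separate virtual machines" DRS rule that keeps the three NSX Manager
# nodes on different ESXi hosts. VMware Cloud Foundation creates this rule for you; the
# sketch only shows the construct. All names and credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

cluster = find_by_name(vim.ClusterComputeResource, "mgmt-cluster01")        # placeholder
nsx_vms = [find_by_name(vim.VirtualMachine, name)
           for name in ("nsx-mgr01", "nsx-mgr02", "nsx-mgr03")]             # placeholders

rule = vim.cluster.AntiAffinityRuleSpec(
    name="anti-affinity-rule-nsx-managers", enabled=True, vm=nsx_vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])

WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
Disconnect(si)
```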

NSX Edge Cluster

Single VMware Cloud Foundation instance with a single availability zone:

  • Deployed in a shared edge and workload cluster in the VI workload domain

  • Two large-size NSX Edge nodes

  • vSphere HA protects the NSX Edge nodes, applying high restart priority

  • A vSphere DRS rule keeps the NSX Edge nodes running on different hosts

Single VMware Cloud Foundation instance with multiple availability zones:

  • Deployed in the first availability zone in a shared edge and workload cluster in the VI workload domain

  • Two large-size NSX Edge nodes

  • A vSphere should-run DRS rule keeps the NSX Edge nodes running in the first availability zone. Failover to the second availability zone occurs only if the first zone fails.

  • In the availability zone, vSphere HA protects the NSX Edge nodes, applying high restart priority

  • In the availability zone, a vSphere DRS rule keeps the NSX Edge nodes running on different hosts

Multiple VMware Cloud Foundation instances, in each instance:

  • Two large-size NSX Edge nodes in a shared edge and workload cluster in the VI workload domain

  • vSphere HA protects the NSX Edge nodes, applying high restart priority

  • A vSphere DRS rule keeps the NSX Edge nodes running on different hosts
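
The high restart priority for the NSX Edge nodes is a per-VM vSphere HA override that VMware Cloud Foundation also sets for you. The sketch below shows the construct with pyVmomi, reusing the find_by_name() helper and connection from the previous sketch; cluster and virtual machine names are placeholders.

```python
# Minimal sketch: per-VM vSphere HA overrides that give the two NSX Edge nodes a "high"
# restart priority. Reuses the connection and find_by_name() helper from the previous
# sketch; cluster and VM names are placeholders.
from pyVim.task import WaitForTask
from pyVmomi import vim

cluster = find_by_name(vim.ClusterComputeResource, "shared-edge-workload-cl01")       # placeholder
edge_vms = [find_by_name(vim.VirtualMachine, name) for name in ("edge01", "edge02")]  # placeholders

overrides = [
    vim.cluster.DasVmConfigSpec(
        operation="add",
        info=vim.cluster.DasVmConfigInfo(
            key=vm,
            dasSettings=vim.cluster.DasVmSettings(restartPriority="high"),
        ),
    )
    for vm in edge_vms
]

spec = vim.cluster.ConfigSpecEx(dasVmConfigSpec=overrides)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
```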

Transport Nodes

Single VMware Cloud Foundation instance with a single availability zone:

  • Four ESXi host transport nodes

  • Two Edge transport nodes

Single VMware Cloud Foundation instance with multiple availability zones:

  • In each availability zone, four ESXi host transport nodes

  • Two Edge transport nodes in the first availability zone

Multiple VMware Cloud Foundation instances, in each instance:

  • Four ESXi host transport nodes

  • Two Edge transport nodes

Transport Zones

Single VMware Cloud Foundation instance with a single availability zone:

  • One VLAN transport zone for NSX Edge uplink traffic

  • One overlay transport zone for customer workloads and NSX Edge nodes

Single VMware Cloud Foundation instance with multiple availability zones:

  • One VLAN transport zone for NSX Edge uplink traffic

  • One overlay transport zone for customer workloads and NSX Edge nodes

Multiple VMware Cloud Foundation instances, in each instance:

  • One VLAN transport zone for NSX Edge uplink traffic

  • One overlay transport zone for customer workloads and NSX Edge nodes
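
A minimal sketch for confirming this transport zone layout over the NSX-T management API, reusing the get() helper and placeholder credentials from the earlier sketch; the /api/v1/transport-zones endpoint is the one documented for NSX-T.

```python
# Minimal sketch: list transport zones and their types to confirm the expected layout of
# one VLAN transport zone (edge uplinks) and one overlay transport zone (customer
# workloads and NSX Edge nodes). Reuses the get() helper from the earlier sketch.
zones = get("/api/v1/transport-zones").get("results", [])
for zone in zones:
    print(zone.get("display_name"), zone.get("transport_type"))

vlan_zones = [z for z in zones if z.get("transport_type") == "VLAN"]
overlay_zones = [z for z in zones if z.get("transport_type") == "OVERLAY"]
print(f"VLAN transport zones: {len(vlan_zones)}, overlay transport zones: {len(overlay_zones)}")
```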

VLANs and IP Subnets Allocated to NSX-T Data Center

For information about the networks for virtual infrastructure management, see vSphere Networking Design for a Virtual Infrastructure Workload Domain.

Single VMware Cloud Foundation instance with a single availability zone:

  • Host Overlay

  • Uplink01

  • Uplink02

  • Edge Overlay

See VLANs and Subnets in a Single VMware Cloud Foundation Instance with a Single Availability Zone.

Single VMware Cloud Foundation instance with multiple availability zones:

Networks for the first availability zone:

  • Host Overlay

  • Uplink01 (stretched)

  • Uplink02 (stretched)

  • Edge Overlay (stretched)

Networks for the second availability zone:

  • Host Overlay

See VLANs and Subnets in a Single VMware Cloud Foundation Instance with Multiple Availability Zones.

Multiple VMware Cloud Foundation instances, in each instance:

  • Host Overlay

  • Uplink01

  • Uplink02

  • Edge Overlay

  • For a configuration with a single availability zone in each VMware Cloud Foundation instance: Edge RTEP

  • For a configuration with multiple availability zones in a VMware Cloud Foundation instance: Edge RTEP (stretched)

Routing Configuration

  • Single VMware Cloud Foundation instance with a single availability zone: BGP

  • Single VMware Cloud Foundation instance with multiple availability zones: BGP, with ingress and egress traffic to the first availability zone with limited exceptions

  • Multiple VMware Cloud Foundation instances: BGP in each instance
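
The BGP configuration is applied to the Tier-0 gateway that runs on the NSX Edge cluster. The following sketch reads that configuration through the NSX-T Policy API, reusing the get() helper from the earlier sketch; the Tier-0 gateway and locale-services IDs are placeholders.

```python
# Minimal sketch: read the BGP configuration and neighbors of a Tier-0 gateway through the
# NSX-T Policy API (served by the same NSX Manager VIP as the management API). The Tier-0
# and locale-services IDs are placeholders; reuses the get() helper from the earlier sketch.
TIER0_ID = "vcf-t0-gw01"          # placeholder Tier-0 gateway ID
LOCALE_SERVICES_ID = "default"    # placeholder locale-services ID

base = f"/policy/api/v1/infra/tier-0s/{TIER0_ID}/locale-services/{LOCALE_SERVICES_ID}/bgp"

bgp = get(base)
print("BGP enabled:", bgp.get("enabled"), "local AS:", bgp.get("local_as_num"))

for neighbor in get(f"{base}/neighbors").get("results", []):
    print("Neighbor:", neighbor.get("neighbor_address"), "remote AS:", neighbor.get("remote_as_num"))
```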

Logical Design for Multiple VI Workload Domains

When you deploy another VI workload domain in a VMware Cloud Foundation instance, the additional domain can have its own dedicated NSX Manager instance (one NSX Manager instance to one VI workload domain) or share an NSX Manager instance that is already deployed for another VI workload domain (one NSX Manager instance to many VI workload domains).

You set a deployed NSX Manager instance as shared when you create the additional VI workload domain. After you associate a VI workload domain with an NSX Manager instance, the domain cannot be reassigned to another NSX Manager instance unless you delete the domain and create it again.

An NSX Manager instance cannot be shared between the management domain and a VI workload domain.
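
You make the dedicated-or-shared choice in the workload domain creation specification in SDDC Manager. As a rough orientation, the sketch below lists the existing workload domains and NSX Manager clusters through the SDDC Manager public API; the /v1/tokens, /v1/domains, and /v1/nsxt-clusters endpoints and the response fields are assumptions to verify against the API reference for your VMware Cloud Foundation version, and the host name and credentials are placeholders.

```python
# Minimal sketch: list workload domains and NSX Manager clusters from SDDC Manager to see
# how domains map to NSX Manager instances. Endpoints and fields are assumptions to verify
# against your VMware Cloud Foundation API reference; names and credentials are placeholders.
import requests

SDDC_MANAGER = "sddc-manager.example.local"                                         # placeholder FQDN
CREDENTIALS = {"username": "administrator@vsphere.local", "password": "password"}   # placeholders

token = requests.post(f"https://{SDDC_MANAGER}/v1/tokens", json=CREDENTIALS,
                      verify=False, timeout=30).json()["accessToken"]
headers = {"Authorization": f"Bearer {token}"}

domains = requests.get(f"https://{SDDC_MANAGER}/v1/domains",
                       headers=headers, verify=False, timeout=30).json()
for domain in domains.get("elements", []):
    print("Domain:", domain.get("name"), "type:", domain.get("type"))

nsx_clusters = requests.get(f"https://{SDDC_MANAGER}/v1/nsxt-clusters",
                            headers=headers, verify=False, timeout=30).json()
for cluster in nsx_clusters.get("elements", []):
    print("NSX Manager cluster VIP:", cluster.get("vipFqdn"))
```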

Table 2. Dedicated or Shared NSX Manager in a VI Workload Domain

The table compares, feature by feature, a dedicated NSX Manager instance with a shared NSX Manager instance for a VI workload domain.

Overlay network isolation

Dedicated NSX Manager instance:

  • Each VI workload domain has its own overlay transport zone. As a result, overlay-backed segments are isolated to a single workload domain.

  • Each VI workload domain requires its own NSX Edge cluster, which integrates with the data network fabric.

Shared NSX Manager instance:

  • All VI workload domains connected to the shared NSX Manager instance share a single overlay transport zone. As a result, all overlay segments are created in all workload domains.

  • You can use one NSX Edge cluster for all VI workload domains that are connected to the NSX Manager instance, or multiple edge clusters if traffic separation is needed.

Security policy

Dedicated NSX Manager instance:

  • Each VI workload domain has its own security policy. You can use tags only for objects in this VI workload domain.

Shared NSX Manager instance:

  • All VI workload domains can share a common security policy.

  • You can use tags for objects in any of the VI workload domains that are connected to the NSX Manager instance.

Bill of materials (BOM) and life cycle management

Dedicated NSX Manager instance:

  • You can manage the life cycle of the components of each VI workload domain individually.

Shared NSX Manager instance:

  • All VI workload domains under a shared NSX Manager instance must have a consistent BOM. As a result, you must perform life cycle management operations on all of these domains together.

  • All clusters under a shared NSX Manager instance must use the same ESXi life cycle method: either vSphere Lifecycle Manager baselines and baseline groups (formerly known as vSphere Update Manager) or vSphere Lifecycle Manager images. See Life Cycle Management Design for ESXi for a Virtual Infrastructure Workload Domain.

NSX Manager scale

Dedicated NSX Manager instance:

  • Enables greater scale of NSX-T Data Center because each VI workload domain has its own NSX Manager instance and associated scale limits.

Shared NSX Manager instance:

  • All VI workload domains under a shared NSX Manager instance must fit within the scale limits of that NSX Manager instance.

NSX Federation scale

  • NSX Federation supports a limited number of NSX Local Manager instances. If you expect many VI workload domains, you must consider the total number of NSX Manager instances.