As part of the overlay design, you determine the NSX-T Data Center configuration for handling traffic between management workloads. You determine the configuration of the vSphere Distributed Switch and the virtual segments on it, and of the transport zones.

This conceptual design for NSX-T provides the network virtualization design of the logical components that handle the data to and from tenant workloads in the environment.

ESXi Host Transport Nodes

An NSX-T transport node is a node that is capable of participating in an NSX-T overlay or VLAN network. The management domain contains multiple ESXi hosts in a vSphere cluster to support management workloads. You register these ESXi hosts as NSX-T transport nodes so that networks and workloads on those hosts can use the capabilities of NSX-T Data Center. During the preparation process, the native vSphere Distributed Switch for the management domain is extended with NSX-T capabilities.
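The preparation itself is typically performed from SDDC Manager or the NSX-T Manager user interface. As an illustration only, the following Python sketch shows how per-host transport node configuration might be driven through the NSX-T Policy REST API with the requests library. The endpoint path, payload fields, credentials, host names, switch name, and transport zone ID are assumptions for illustration, not a verified schema; confirm them against the API guide for your NSX-T release.

```python
# Illustrative sketch only: per-host transport node preparation through the NSX-T
# Policy API using Python requests. Endpoint path, payload fields, credentials,
# host names, and IDs are assumptions, not a verified schema.
import requests

NSX_MANAGER = "https://nsx-mgmt.example.local"   # hypothetical NSX-T Manager address
AUTH = ("admin", "********")                     # use a dedicated service account

def transport_node_body(host_name: str) -> dict:
    # Assumed payload: attach the existing vSphere Distributed Switch and an
    # overlay transport zone to the host.
    return {
        "display_name": host_name,
        "host_switch_spec": {
            "resource_type": "StandardHostSwitchSpec",
            "host_switches": [{
                "host_switch_name": "sfo-m01-cl01-vds01",
                "host_switch_mode": "STANDARD",
                "transport_zone_endpoints": [{"transport_zone_id": "overlay-tz"}],
            }],
        },
    }

for host in ("esxi-01", "esxi-02", "esxi-03", "esxi-04"):   # hypothetical hosts
    url = (f"{NSX_MANAGER}/policy/api/v1/infra/sites/default/"
           f"enforcement-points/default/host-transport-nodes/{host}")
    response = requests.put(url, json=transport_node_body(host), auth=AUTH, verify=False)
    response.raise_for_status()
```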

Table 1. Design Decisions on ESXi Host Transport Nodes

Decision ID: SDDC-MGMT-VI-SDN-060
Design Decision: Enable all ESXi hosts in the management domain as NSX-T transport nodes.
Design Justification: Enables distributed routing, logical segments, and distributed firewall.
Design Implication: None.

Decision ID: SDDC-MGMT-VI-SDN-061
Design Decision: Configure each ESXi host as a transport node without using transport node profiles.
Design Justification:

  • Enables the participation of ESXi hosts and the virtual machines on them in NSX-T overlay and VLAN networks.

  • Transport node profiles can be applied only at the cluster level. Because each availability zone in an environment with multiple availability zones is connected to a different set of VLANs, you cannot use a transport node profile.

Design Implication: You must configure each transport node with an uplink profile individually.

Virtual Switches

NSX-T segments are logically abstracted segments to which you can connect tenant workloads. A single segment is mapped to a unique Geneve segment that is distributed across the ESXi hosts in a transport zone. The segment supports line-rate switching in the ESXi host without the constraints of VLAN sprawl or spanning tree issues.

Consider the following limitations of distributed switches:

  • Distributed switches are manageable only when the vCenter Server instance is available. You can consider vCenter Server a Tier-1 application.

  • Distributed switches with NSX-T capabilities are manageable only when the vCenter Server instance and the NSX-T Manager cluster are available. You can consider vCenter Server and NSX-T Manager as Tier-1 applications.

  • N-VDS instances are manageable only when the NSX-T Manager cluster is available. You can consider the NSX-T Manager cluster as a Tier-1 application.

Table 2. Design Decision on Virtual Switches for NSX-T Data Center

Decision ID: SDDC-MGMT-VI-SDN-062
Design Decision: Use a vSphere Distributed Switch for the first cluster in the management domain that is enabled for NSX-T Data Center.
Design Justification:

  • Uses the existing vSphere Distributed Switch.

  • Provides NSX-T logical segment capabilities to support advanced use cases.

Design Implication:

  • To use features such as distributed routing, tenant workloads must be connected to NSX-T segments.

  • Management occurs jointly from the vSphere Client and the NSX-T Manager user interface. However, you must perform all network monitoring in the NSX-T Manager user interface or another solution.

Configuration of the vSphere Distributed Switch with NSX-T

The first cluster in the management domain uses a single vSphere Distributed Switch with a configuration for system traffic types, NIC teaming, and MTU size. See vSphere Networking Design for the Management Domain.

To support uplink and overlay traffic for the NSX-T Edge nodes in the management domain, you must create several port groups on the vSphere Distributed Switch for the management domain. The VMkernel adapter for the Host TEP is connected to the host overlay VLAN, but does not require a dedicated port group on the distributed switch. The VMkernel network adapter for the Host TEP is automatically created when you configure the ESXi host as a transport node.

The NSX-T Edge appliances and the VMkernel adapter for the Host TEP must be connected to different VLANs and subnets. The VLAN IDs for the NSX-T Edge nodes are mapped to the VLAN trunk port groups sfo01-m01-cl01-vds01-pg-uplink01 and sfo01-m01-cl01-vds01-pg-uplink02 on the host.
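The trunk port groups themselves are standard vSphere Distributed Switch objects. The following is a minimal sketch, assuming pyVmomi, of creating one such trunk port group; the vCenter address, credentials, port count, and trunked VLAN range are placeholders, and in practice you trunk only the edge uplink and overlay VLANs you use.

```python
# Minimal pyVmomi sketch of one VLAN trunk port group for edge uplink and overlay
# traffic. The vCenter address, credentials, port count, and trunked VLAN range are
# placeholders; trunk only the VLANs you actually use.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()              # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)   # hypothetical vCenter and credentials

# Locate the distributed switch by name with a simple container view.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "sfo-m01-cl01-vds01")

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.name = "sfo01-m01-cl01-vds01-pg-uplink01"
pg_spec.type = "earlyBinding"
pg_spec.numPorts = 8                                # placeholder port count
pg_spec.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    vlan=vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec(
        vlanId=[vim.NumericRange(start=0, end=4094)],   # placeholder VLAN trunk range
        inherited=False,
    )
)

task = dvs.AddDVPortgroup_Task([pg_spec])           # returns a vCenter task to monitor
Disconnect(si)
```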

Table 3. vSphere Distributed Switch Configuration for the Management Domain

Switch Name: sfo-m01-cl01-vds01
Type: vSphere Distributed Switch 7.0
Function:

  • ESXi Management

  • vSphere vMotion

  • vSAN

  • NFS

  • Host Overlay

  • Edge Uplinks and Overlay - VLAN Trunking

Number of Physical NIC Ports: 2
Teaming Policy:

  • Load balance source for the ESXi traffic

  • Failover order for the edge uplinks

MTU: 9000

Table 4. sfo-m01-cl01-vds01 Switch Configuration Per Physical NIC

  • vmnic0: Uplink, connected to top of rack switch 1

  • vmnic1: Uplink, connected to top of rack switch 2

Table 5. Segments on sfo-m01-cl01-vds01 in a Single Availability Zone

  • sfo01-m01-cl01-vds01-pg-mgmt (VLAN): Management traffic

  • sfo01-m01-cl01-vds01-pg-vmotion (VLAN): vSphere vMotion traffic

  • sfo01-m01-cl01-vds01-pg-vsan (VLAN): vSAN traffic

  • sfo-m01-cl01-vds01-pg-uplink01 (VLAN Trunk): Edge node overlay and uplink traffic to the first top of rack switch

  • sfo-m01-cl01-vds01-pg-uplink02 (VLAN Trunk): Edge node overlay and uplink traffic to the second top of rack switch

  • sfo-m01-cl01-vds01-pg-nfs (VLAN): NFS traffic

  • auto created (Host TEP): Host overlay

  • auto created (Host TEP): Host overlay

  • auto created (Hyperbus): -

In a deployment with multiple availability zones, you must provide new networks or extend the existing ones.

Table 6. Segments on sfo-m01-cl01-vds01 for a Second Availability Zone

  • az2_sfo01-m01-cl01-vds01-pg-mgmt (VLAN, Availability Zone 2): Management traffic in Availability Zone 2

  • az2_sfo01-m01-cl01-vds01-pg-vmotion (VLAN, Availability Zone 2): vSphere vMotion traffic in Availability Zone 2

  • az2_sfo01-m01-cl01-vds01-pg-vsan (VLAN, Availability Zone 2): vSAN traffic in Availability Zone 2

  • sfo-m01-cl01-vds01-pg-uplink01 (VLAN Trunk, stretched between Availability Zone 1 and Availability Zone 2): Edge node overlay and uplink traffic to the first top of rack switch in Availability Zone 2

  • sfo-m01-cl01-vds01-pg-uplink02 (VLAN Trunk, stretched between Availability Zone 1 and Availability Zone 2): Edge node overlay and uplink traffic to the second top of rack switch in Availability Zone 2

  • az2_sfo01-m01-vds01-pg-nfs (VLAN, Availability Zone 2): -

  • auto created (Host TEP): Host overlay

  • auto created (Host TEP): Host overlay

  • auto created (Hyperbus): -

Virtual Segments

Geneve provides the overlay capability in NSX-T to create isolated, multi-tenant broadcast domains across data center fabrics, and enables customers to create elastic, logical networks that span physical network boundaries.

The first step in creating these logical networks is to isolate and pool the networking resources. By using the Geneve overlay, NSX-T isolates the network into a pool of capacity and separates the consumption of these services from the underlying physical infrastructure. This model is similar to the model vSphere uses to abstract compute capacity from the server hardware to create virtual pools of resources that can be consumed as a service. You can then organize the pool of network capacity in logical networks that are directly attached to specific applications.

Geneve is a tunneling mechanism which provides extensibility while still using the offload capabilities of NICs for performance improvement.

Geneve works by creating Layer 2 logical networks that are encapsulated in UDP packets. A Segment ID in every frame identifies the Geneve logical networks without the need for VLAN tags. As a result, many isolated Layer 2 networks can coexist on a common Layer 3 infrastructure using the same VLAN ID.
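To make the encapsulation concrete, the following sketch packs the fixed 8-byte Geneve header defined in RFC 8926 and shows where the 24-bit VNI that identifies a segment is carried. NSX-T performs this encapsulation in the hypervisor data path; the example VNI is arbitrary, and the outer Ethernet, IP, and UDP headers are omitted.

```python
# Illustrative sketch of the fixed 8-byte Geneve header (RFC 8926) that carries the
# 24-bit VNI used as the segment ID. NSX-T builds this in the hypervisor data path;
# the sketch shows only the bit layout, not the full outer Ethernet/IP/UDP framing.
import struct

GENEVE_UDP_PORT = 6081          # IANA-assigned Geneve destination port
TRANSPARENT_ETHERNET = 0x6558   # protocol type for bridged Ethernet payloads

def geneve_base_header(vni: int, opt_len_words: int = 0) -> bytes:
    """Pack version (2 bits), option length (6 bits), flags, protocol type,
    the 24-bit VNI, and a reserved byte into the 8-byte base header."""
    version = 0
    first_byte = (version << 6) | (opt_len_words & 0x3F)
    flags = 0                                   # O and C bits clear
    vni_and_reserved = (vni & 0xFFFFFF) << 8    # VNI in the top 24 bits, reserved byte low
    return struct.pack("!BBHI", first_byte, flags, TRANSPARENT_ETHERNET, vni_and_reserved)

header = geneve_base_header(vni=73001)          # example VNI; NSX-T assigns these per segment
assert len(header) == 8
```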

In the vSphere architecture, the encapsulation is performed between the NIC of the virtual machine and the logical port on the virtual switch, making the Geneve overlay transparent to both the guest virtual machines and the underlying Layer 3 network. The Tier-0 Gateway performs gateway services between overlay and non-overlay hosts, for example, a physical server or the Internet router. The NSX-T Edge node translates overlay segment IDs to VLAN IDs, so that non-overlay hosts can communicate with virtual machines on an overlay network.

The edge cluster hosts all NSX-T Edge node instances that connect to the corporate network for secure and centralized network administration.

Table 7. Design Decisions on Geneve Overlay

Decision ID: SDDC-MGMT-VI-SDN-063
Design Decision: To provide virtualized network capabilities to management workloads, use overlay networks with NSX-T Edge nodes and distributed routing.
Design Justification:

  • Creates isolated, multi-tenant broadcast domains across data center fabrics to deploy elastic, logical networks that span physical network boundaries.

  • Enables advanced deployment topologies by introducing Layer 2 abstraction from the data center networks.

Design Implication: Requires configuring transport networks with an MTU size of at least 1,600 bytes (see the sketch after this table).
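A quick calculation shows where the 1,600-byte minimum comes from: a standard 1,500-byte guest frame must fit inside the outer Geneve encapsulation without fragmentation, and the headroom above 1,550 bytes accommodates Geneve options. The design in this document uses an MTU of 9000 on the distributed switch, which more than satisfies this floor.

```python
# Back-of-the-envelope check of the 1,600-byte MTU floor for Geneve transport networks.
inner_ethernet = 14    # inner (guest) Ethernet header carried inside the Geneve payload
guest_payload  = 1500  # standard guest MTU
outer_ipv4     = 20    # outer IPv4 header
outer_udp      = 8     # outer UDP header, destination port 6081
geneve_base    = 8     # fixed Geneve header; variable-length options add more

encapsulated = inner_ethernet + guest_payload + outer_ipv4 + outer_udp + geneve_base
print(encapsulated)    # 1550 -> an MTU of at least 1,600 bytes leaves room for options
```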

Transport Zones

Transport zones determine which hosts can participate in the use of a particular network. A transport zone identifies the type of traffic, VLAN or overlay, and the vSphere Distributed Switch name. You can configure one or more VLAN transport zones and a single overlay transport zone per virtual switch. A transport zone does not represent a security boundary.
Figure 1. Transport Zone Design
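As an illustration of the transport zone decisions below, the following sketch defines one overlay and one VLAN transport zone as NSX-T Policy API payloads. The endpoint path, field names, and transport zone names are assumptions for illustration; in practice, SDDC Manager or the NSX-T Manager user interface creates these objects.

```python
# Illustrative sketch: one overlay and one VLAN transport zone as Policy API payloads.
# Endpoint path, field names, and object names are assumptions; verify against the
# NSX-T API guide for your release.
import requests

BASE = "https://nsx-mgmt.example.local/policy/api/v1"   # hypothetical manager address
AUTH = ("admin", "********")

transport_zones = {
    # Single overlay transport zone shared by ESXi hosts and NSX-T Edge nodes.
    "overlay-tz": {"display_name": "overlay-tz", "tz_type": "OVERLAY_STANDARD"},
    # Single VLAN transport zone applied only to the NSX-T Edge nodes for uplinks.
    "edge-uplink-tz": {"display_name": "edge-uplink-tz", "tz_type": "VLAN_BACKED"},
}

for tz_id, body in transport_zones.items():
    url = f"{BASE}/infra/sites/default/enforcement-points/default/transport-zones/{tz_id}"
    requests.patch(url, json=body, auth=AUTH, verify=False).raise_for_status()
```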
Table 8. Design Decision on the Transport Zone Configuration for NSX-T Data Center

Decision ID: SDDC-MGMT-VI-SDN-064
Design Decision: Create a single overlay transport zone for all overlay traffic across the management domain and NSX-T Edge nodes.
Design Justification:

  • Ensures that overlay segments are connected to an NSX-T Edge node for services and north-south routing.

  • Ensures that all segments are available to all ESXi hosts and NSX-T Edge nodes configured as transport nodes.

Design Implication: None.

Decision ID: SDDC-MGMT-VI-SDN-065
Design Decision: Create a single VLAN transport zone for uplink VLAN traffic that is applied only to NSX-T Edge nodes.
Design Justification: Ensures that uplink VLAN segments are configured on the NSX-T Edge transport nodes.
Design Implication: If VLAN segments are needed on hosts, you must create another VLAN transport zone for the host transport nodes only.

Uplink Policy for ESXi Host Transport Nodes

Uplink profiles define policies for the links from ESXi hosts to NSX-T segments or from NSX-T Edge appliances to top of rack switches. By using uplink profiles, you can apply consistent configuration of capabilities for network adapters across multiple ESXi hosts or NSX-T Edge nodes.

Uplink profiles can use either load balance source or failover order teaming. If using load balance source, multiple uplinks can be active. If using failover order, only a single uplink can be active.
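As a sketch of the difference, the following payload outlines an uplink profile for ESXi transport nodes with the load balance source policy. The field names follow the NSX-T UplinkHostSwitchProfile schema as an assumption; the profile name and transport VLAN ID are placeholders.

```python
# Sketch of an uplink profile for ESXi host transport nodes. Field names follow the
# NSX-T UplinkHostSwitchProfile schema as an assumption; the name and VLAN ID are
# placeholders.
esxi_uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "sfo-m01-host-uplink-profile",   # illustrative name
    "transport_vlan": 1614,                          # placeholder host overlay (TEP) VLAN ID
    "teaming": {
        # "LOADBALANCE_SRCID" keeps both physical NICs active; with "FAILOVER_ORDER"
        # only the first uplink in active_list carries traffic at a time.
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
}
```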

Table 9. Design Decisions on the Uplink Profile for ESXi Transport Nodes

Decision ID: SDDC-MGMT-VI-SDN-066
Design Decision: Create an uplink profile with the load balance source teaming policy with two active uplinks for ESXi hosts.
Design Justification: For increased resiliency and performance, supports the concurrent use of both physical NICs on the ESXi hosts that are configured as transport nodes.
Design Implication: None.

Replication Mode of Segments

The control plane decouples NSX-T Data Center from the physical network. The control plane handles the broadcast, unknown unicast, and multicast (BUM) traffic in the virtual segments.

The following options are available for BUM replication on segments.

Table 10. BUM Replication Modes of NSX-T Segments

Hierarchical Two-Tier: The ESXi host transport nodes are grouped according to their TEP IP subnet. One ESXi host in each subnet is responsible for replication to an ESXi host in another subnet. The receiving ESXi host replicates the traffic to the ESXi hosts in its local subnet. The source ESXi host transport node knows about the groups based on information it has received from the NSX-T Controller. The system can select an arbitrary ESXi host transport node as the mediator for the source subnet if the remote mediator ESXi host node is available.

Head-End: In this mode, the ESXi host transport node at the origin of the frame to be flooded on a segment sends a copy to every other ESXi host transport node that is connected to this segment. The sketch after this table contrasts the replication fan-out of the two modes.
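The following self-contained sketch contrasts the replication fan-out of the two modes from the point of view of the source host. The host names and TEP addresses are hypothetical; the grouping logic mirrors the subnet-based grouping described above.

```python
# Illustrative comparison of the copies a source ESXi host sends for one BUM frame
# under the two replication modes. TEP addressing is hypothetical; the point is that
# two-tier replication sends one copy per remote TEP subnet plus local copies,
# while head-end replication sends one copy per remote transport node.
from collections import defaultdict
from ipaddress import ip_interface

# Hypothetical host TEPs across two TEP subnets (for example, two availability zones).
teps = {
    "esxi-01": "172.16.14.11/24", "esxi-02": "172.16.14.12/24",
    "esxi-03": "172.16.14.13/24", "esxi-04": "172.16.24.11/24",
    "esxi-05": "172.16.24.12/24", "esxi-06": "172.16.24.13/24",
}
source = "esxi-01"

# Head-end: the source replicates directly to every other transport node.
head_end_copies = len(teps) - 1

# Hierarchical two-tier: one copy per host in the source's subnet, plus one copy
# to a mediator host in each remote subnet, which then replicates locally.
subnets = defaultdict(list)
for host, tep in teps.items():
    subnets[ip_interface(tep).network].append(host)
source_subnet = ip_interface(teps[source]).network
local_copies = len(subnets[source_subnet]) - 1
remote_copies = len(subnets) - 1
two_tier_copies = local_copies + remote_copies

print(f"head-end: {head_end_copies} copies, two-tier: {two_tier_copies} copies from the source")
# head-end: 5 copies, two-tier: 3 copies from the source
```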

Table 11. Design Decisions on Segment Replication Mode

Decision ID: SDDC-MGMT-VI-SDN-067
Design Decision: Use hierarchical two-tier replication on all segments.
Design Justification: Hierarchical two-tier replication is more efficient because it reduces the number of ESXi hosts to which the source ESXi host must replicate traffic.
Design Implication: None.