As part of the overlay design, you determine the NSX configuration for handling traffic between workloads, both management and customer workloads, in VMware Cloud Foundation. You also determine the NSX segments and the transport zones.

Logical Overlay Design for VMware Cloud Foundation

This conceptual design describes the network virtualization design of the logical components that handle the data traffic to and from the workloads in the environment. For an environment with multiple VMware Cloud Foundation instances, you replicate the design of the first VMware Cloud Foundation instance to the additional VMware Cloud Foundation instances.

ESXi Host Transport Nodes

A transport node in NSX is a node that is capable of participating in an NSX data plane. The workload domains contain multiple ESXi hosts in a vSphere cluster to support management or customer workloads. You register these ESXi hosts as transport nodes so that networks and workloads on that host can use the capabilities of NSX. During the preparation process, the native vSphere Distributed Switch for the workload domain is extended with NSX capabilities.
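You can verify the preparation state of the host transport nodes through the NSX Manager REST API. The following Python sketch lists the transport nodes and the realization state of each one; the Manager address, the credentials, and the certificate handling are placeholder assumptions, not values from this design.

```python
import requests

# Assumptions: the NSX Manager address and credentials are placeholders.
NSX_MANAGER = "https://nsx-mgmt.example.com"
AUTH = ("admin", "password")  # replace with your credentials

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab only; use the Manager CA certificate in production

# List all transport nodes registered with this NSX instance.
nodes = session.get(f"{NSX_MANAGER}/api/v1/transport-nodes").json()

for node in nodes.get("results", []):
    node_id = node["id"]
    # The state endpoint reports whether host preparation succeeded.
    state = session.get(f"{NSX_MANAGER}/api/v1/transport-nodes/{node_id}/state").json()
    print(f'{node["display_name"]}: {state.get("state")}')
```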

Virtual Segments

Geneve provides the overlay capability in NSX to create isolated, multi-tenant broadcast domains across data center fabrics, enabling you to create elastic, logical networks that span physical network boundaries and physical locations.
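As an illustration of how such a broadcast domain is defined, the following Python sketch creates an overlay segment through the NSX Policy API. The Manager address, credentials, segment name, transport zone ID, and gateway address are hypothetical values, not part of this design.

```python
import requests

NSX_MANAGER = "https://nsx-mgmt.example.com"  # placeholder address
AUTH = ("admin", "password")                  # placeholder credentials

# Hypothetical overlay segment attached to the overlay transport zone.
segment = {
    "display_name": "sfo-w01-seg01",
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default"
        "/transport-zones/<overlay-tz-uuid>"  # replace with your transport zone ID
    ),
    "subnets": [{"gateway_address": "192.168.100.1/24"}],
}

# PATCH against the Policy API creates the segment if it does not exist.
resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/sfo-w01-seg01",
    json=segment,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
```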

Transport Zones

A transport zone identifies the type of traffic, VLAN or overlay, and the vSphere Distributed Switch name. You can configure one or more VLAN transport zones and a single overlay transport zone per virtual switch. A transport zone does not represent a security boundary. VMware Cloud Foundation supports a single overlay transport zone per NSX instance. As a result, all vSphere clusters, within and across workload domains, that share the same NSX instance also share the same overlay transport zone.
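For reference, a transport zone of each type can be created through the NSX Manager API, as in the following minimal Python sketch. The transport zone names and the connection details are assumptions; in older NSX-T versions the request also required a host switch name.

```python
import requests

NSX_MANAGER = "https://nsx-mgmt.example.com"  # placeholder
AUTH = ("admin", "password")                  # placeholder

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab only

# One overlay transport zone per NSX instance for all overlay traffic.
overlay_tz = {"display_name": "sfo-w01-tz-overlay", "transport_type": "OVERLAY"}

# One or more VLAN transport zones, for example for edge uplinks.
vlan_tz = {"display_name": "sfo-w01-tz-vlan", "transport_type": "VLAN"}

for tz in (overlay_tz, vlan_tz):
    resp = session.post(f"{NSX_MANAGER}/api/v1/transport-zones", json=tz)
    resp.raise_for_status()
    print(f'Created transport zone {resp.json()["id"]}')
```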

Figure 1. Transport Zone Design
The workload domain host transport nodes and edge transport nodes are connected to one overlay transport zone. The edge transport nodes are connected to a separate VLAN transport zone for north-south traffic. Optionally, the host transport nodes can be connected to one or more VLAN transport zones for VLAN-backed networks.

Uplink Policy for ESXi Host Transport Nodes

Uplink profiles define policies for the links from ESXi hosts to NSX segments or from NSX Edge appliances to top-of-rack switches. By using uplink profiles, you can apply a consistent configuration of network adapter capabilities across multiple ESXi hosts or NSX Edge nodes.

Uplink profiles can use either a load balance source or a failover order teaming policy. With load balance source teaming, multiple uplinks can be active at the same time. With failover order teaming, only a single uplink can be active.
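A minimal sketch of creating such an uplink profile through the NSX Manager API follows. The two-uplink load balance source policy matches requirement VCF-NSX-OVERLAY-REQD-CFG-005 later in this section; the profile name, the transport VLAN ID, and the connection details are assumptions.

```python
import requests

NSX_MANAGER = "https://nsx-mgmt.example.com"  # placeholder
AUTH = ("admin", "password")                  # placeholder

# Uplink profile with a load balance source teaming policy and two active uplinks.
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "sfo-w01-uplink-profile",
    "transport_vlan": 1614,  # hypothetical host overlay VLAN ID
    "teaming": {
        "policy": "LOADBALANCE_SRCID",  # use FAILOVER_ORDER for a single active uplink
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/host-switch-profiles",
    json=uplink_profile,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
```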

Replication Mode of Segments

The control plane decouples NSX from the physical network. The control plane handles the broadcast, unknown unicast, and multicast (BUM) traffic in the virtual segments.

The following options are available for BUM replication on segments. A configuration sketch follows the table.

Table 1. BUM Replication Modes of NSX Segments

Hierarchical Two-Tier
  The ESXi host transport nodes are grouped according to their TEP IP subnet. One ESXi host in each subnet is responsible for replication to an ESXi host in another subnet. The receiving ESXi host replicates the traffic to the ESXi hosts in its local subnet.
  The source ESXi host transport node knows about the groups based on information it has received from the control plane. The system can select an arbitrary ESXi host transport node as the mediator for the source subnet if the remote mediator ESXi host node is available.

Head-End
  The ESXi host transport node at the origin of the frame to be flooded on a segment sends a copy to every other ESXi host transport node that is connected to this segment.
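The replication mode is a per-segment setting in the NSX Policy API. In the following hedged Python sketch, "MTEP" selects hierarchical two-tier replication and "SOURCE" selects head-end replication; the segment name and the connection details are assumptions.

```python
import requests

NSX_MANAGER = "https://nsx-mgmt.example.com"  # placeholder
AUTH = ("admin", "password")                  # placeholder

# "MTEP" = hierarchical two-tier replication, "SOURCE" = head-end replication.
resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/sfo-w01-seg01",  # hypothetical segment
    json={"replication_mode": "MTEP"},
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
```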

Overlay Design Requirements and Recommendations for VMware Cloud Foundation

Consider the requirements for the configuration of the ESXi hosts in a workload domain as NSX transport nodes, transport zone layout, uplink teaming policies, and the best practices for IP allocation and BUM replication mode in a VMware Cloud Foundation deployment.

Overlay Design Requirements

You must meet the following design requirements in your overlay design.

Table 2. Overlay Design Requirements for VMware Cloud Foundation

VCF-NSX-OVERLAY-REQD-CFG-001
  Design Requirement: Configure all ESXi hosts in the workload domain as transport nodes in NSX.
  Justification: Enables distributed routing, logical segments, and distributed firewall.
  Implication: None.

VCF-NSX-OVERLAY-REQD-CFG-002
  Design Requirement: Configure each ESXi host as a transport node by using transport node profiles.
  Justification:
    • Enables the participation of ESXi hosts and the virtual machines running on them in NSX overlay and VLAN networks.
    • Transport node profiles can be applied only at the cluster level.
  Implication: None.

VCF-NSX-OVERLAY-REQD-CFG-003
  Design Requirement: To provide virtualized network capabilities to workloads, use overlay networks with NSX Edge nodes and distributed routing.
  Justification:
    • Creates isolated, multi-tenant broadcast domains across data center fabrics to deploy elastic, logical networks that span physical network boundaries.
    • Enables advanced deployment topologies by introducing Layer 2 abstraction from the data center networks.
  Implication: Requires configuring transport networks with an MTU size of at least 1,600 bytes. A validation sketch follows this table.

VCF-NSX-OVERLAY-REQD-CFG-004
  Design Requirement: Create a single overlay transport zone in the NSX instance for all overlay traffic across the host and NSX Edge transport nodes of the workload domain.
  Justification:
    • Ensures that overlay segments are connected to an NSX Edge node for services and north-south routing.
    • Ensures that all segments are available to all ESXi hosts and NSX Edge nodes configured as transport nodes.
  Implication: All clusters in all workload domains that share the same NSX Manager share the same transport zone.

VCF-NSX-OVERLAY-REQD-CFG-005
  Design Requirement: Create an uplink profile with a load balance source teaming policy with two active uplinks for ESXi hosts.
  Justification: For increased resiliency and performance, supports the concurrent use of both physical NICs on the ESXi hosts that are configured as transport nodes.
  Implication: None.
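Requirement VCF-NSX-OVERLAY-REQD-CFG-003 depends on the transport network carrying frames of at least 1,600 bytes. One way to validate the path MTU from a Linux jump host is to send a non-fragmentable ICMP packet sized to the Geneve minimum, as in the following sketch; the target TEP address is a placeholder. On an ESXi host itself, the equivalent check is typically vmkping ++netstack=vxlan -d -s 1572 <remote-tep-ip>.

```python
import subprocess

# Hypothetical TEP address of a remote host transport node.
REMOTE_TEP = "172.16.14.101"

# 1600-byte MTU minus 20-byte IPv4 header minus 8-byte ICMP header = 1572-byte payload.
PAYLOAD_SIZE = 1600 - 20 - 8

# "-M do" sets the don't-fragment bit, so the ping fails if any hop has a smaller MTU.
result = subprocess.run(
    ["ping", "-M", "do", "-s", str(PAYLOAD_SIZE), "-c", "3", REMOTE_TEP],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("Path supports at least a 1,600-byte MTU.")
else:
    print("MTU check failed:")
    print(result.stdout or result.stderr)
```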

Overlay Design Recommendations

In your overlay design for VMware Cloud Foundation, you can apply certain best practices.

Table 3. Overlay Design Recommendations for VMware Cloud Foundation

VCF-NSX-OVERLAY-RCMD-CFG-001
  Design Recommendation: Use static IP pools to assign IP addresses to the host TEP interfaces (a configuration sketch follows this table).
  Justification:
    • Removes the need for an external DHCP server for the host overlay VLANs.
    • You can use NSX Manager to verify static IP pool configurations.
  Implication: None.

VCF-NSX-OVERLAY-RCMD-CFG-002
  Design Recommendation: Use hierarchical two-tier replication on all overlay segments.
  Justification: Hierarchical two-tier replication is more efficient because it reduces the number of ESXi hosts to which the source ESXi host must replicate traffic when hosts have different TEP subnets. This is typically the case with more than one cluster and improves performance in that scenario.
  Implication: None.
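A static TEP IP pool such as the one in VCF-NSX-OVERLAY-RCMD-CFG-001 can be defined through the NSX Policy API. The following Python sketch creates a pool with one static subnet; the pool name, CIDR, allocation range, gateway, and connection details are hypothetical.

```python
import requests

NSX_MANAGER = "https://nsx-mgmt.example.com"  # placeholder
AUTH = ("admin", "password")                  # placeholder

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab only

# Create the IP pool object for the host TEP interfaces.
session.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/ip-pools/sfo-w01-host-tep-pool",
    json={"display_name": "sfo-w01-host-tep-pool"},
).raise_for_status()

# Add a static subnet with an allocation range for the host TEPs.
subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "172.16.14.0/24",  # hypothetical host overlay subnet
    "allocation_ranges": [{"start": "172.16.14.10", "end": "172.16.14.200"}],
    "gateway_ip": "172.16.14.1",
}
session.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/ip-pools/sfo-w01-host-tep-pool"
    "/ip-subnets/sfo-w01-host-tep-subnet",
    json=subnet,
).raise_for_status()
```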

Table 4. Overlay Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-OVERLAY-RCMD-CFG-003
  Design Recommendation: Configure an NSX sub-transport node profile.
  Justification:
    • You can use static IP pools for the host TEPs in each availability zone.
    • The NSX transport node profile can remain attached when using two separate VLANs for host TEPs at each availability zone, as required for clusters that are based on vSphere Lifecycle Manager images.
    • An external DHCP server for the host overlay VLANs in both availability zones is not required.
  Implication: Changes to the host transport node configuration are done at the vSphere cluster level.