As part of the overlay design, you determine the NSX configuration for handling traffic between workloads, whether management or customer, in VMware Cloud Foundation. You determine the NSX segments and the transport zones.
Logical Overlay Design for VMware Cloud Foundation
This conceptual design provides the network virtualization design of the logical components that handle the data to and from the workloads in the environment. For an environment with multiple VMware Cloud Foundation instances, you replicate the design of the first VMware Cloud Foundation instance to the additional VMware Cloud Foundation instances.
ESXi Host Transport Nodes
A transport node in NSX is a node that is capable of participating in an NSX data plane. The workload domains contain multiple ESXi hosts in a vSphere cluster to support management or customer workloads. You register these ESXi hosts as transport nodes so that networks and workloads on that host can use the capabilities of NSX. During the preparation process, the native vSphere Distributed Switch for the workload domain is extended with NSX capabilities.
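In VMware Cloud Foundation, host preparation is driven by SDDC Manager during workload domain deployment, so you do not usually register transport nodes by hand. As an illustration only, the following minimal sketch lists the registered transport nodes through the NSX Manager API; the manager FQDN and credentials are placeholder assumptions.

```python
# Minimal sketch, assuming an NSX Manager reachable at a placeholder FQDN with
# placeholder credentials. GET /api/v1/transport-nodes is an NSX Manager API
# endpoint that returns the registered transport nodes.
import requests

NSX_MANAGER = "nsx-manager.example.com"  # assumption: placeholder FQDN
AUTH = ("admin", "password")             # assumption: placeholder credentials

resp = requests.get(
    f"https://{NSX_MANAGER}/api/v1/transport-nodes",
    auth=AUTH,
    verify=False,  # lab-only; verify certificates in production
)
resp.raise_for_status()

# Print the display name and ID of each transport node so you can confirm
# that every ESXi host in the workload domain has been prepared.
for node in resp.json().get("results", []):
    print(node.get("display_name"), node.get("id"))
```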
Virtual Segments
Geneve provides the overlay capability to create isolated, multi-tenant broadcast domains in NSX across data center fabrics, and enables customers to create elastic logical networks that span physical network boundaries and physical locations.
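As an illustrative sketch, the following call creates an overlay segment through the NSX Policy API. The endpoint and the transport_zone_path format follow the Policy API; the segment name, transport zone ID, gateway address, manager FQDN, and credentials are placeholder assumptions.

```python
# Minimal sketch, assuming a placeholder NSX Manager, transport zone ID,
# segment name, and gateway address. PUT /policy/api/v1/infra/segments/{id}
# is the NSX Policy API call that creates or updates a segment.
import requests

NSX_MANAGER = "nsx-manager.example.com"  # assumption: placeholder FQDN
AUTH = ("admin", "password")             # assumption: placeholder credentials

segment = {
    "display_name": "seg-web-01",  # assumption: example segment name
    # Path of an existing overlay transport zone (placeholder ID).
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default"
        "/transport-zones/overlay-tz"
    ),
    # Gateway address of the segment in CIDR format (placeholder subnet).
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

resp = requests.put(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/seg-web-01",
    json=segment,
    auth=AUTH,
    verify=False,  # lab-only; verify certificates in production
)
resp.raise_for_status()
```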
Transport Zones
A transport zone identifies the type of traffic, VLAN or overlay, and the vSphere Distributed Switch name. You can configure one or more VLAN transport zones and a single overlay transport zone per virtual switch. A transport zone does not represent a security boundary. VMware Cloud Foundation supports a single overlay transport zone per NSX instance. All vSphere clusters, within and across workload domains, that share the same NSX instance therefore share the same overlay transport zone.
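Because a single overlay transport zone per NSX instance is the supported layout, a quick check such as the following sketch can confirm the transport zone inventory. The transport_type values OVERLAY and VLAN come from the NSX Manager API schema; the manager FQDN and credentials are placeholder assumptions.

```python
# Minimal sketch, assuming a placeholder NSX Manager and credentials.
# GET /api/v1/transport-zones returns all transport zones; transport_type
# is either "OVERLAY" or "VLAN" in the NSX Manager API schema.
import requests

NSX_MANAGER = "nsx-manager.example.com"  # assumption: placeholder FQDN
AUTH = ("admin", "password")             # assumption: placeholder credentials

resp = requests.get(
    f"https://{NSX_MANAGER}/api/v1/transport-zones",
    auth=AUTH,
    verify=False,  # lab-only; verify certificates in production
)
resp.raise_for_status()

zones = resp.json().get("results", [])
overlay = [z for z in zones if z.get("transport_type") == "OVERLAY"]
vlan = [z for z in zones if z.get("transport_type") == "VLAN"]

# A single overlay transport zone per NSX instance is the supported layout.
print(f"Overlay transport zones: {len(overlay)} (expected: 1)")
print(f"VLAN transport zones: {len(vlan)}")
```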
Uplink Policy for ESXi Host Transport Nodes
Uplink profiles define policies for the links from ESXi hosts to NSX segments or from NSX Edge appliances to top of rack switches. By using uplink profiles, you can apply consistent configuration of capabilities for network adapters across multiple ESXi hosts or NSX Edge nodes.
Uplink profiles can use either load balance source or failover order teaming. If using load balance source, multiple uplinks can be active. If using failover order, only a single uplink can be active.
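The following sketch shows what such an uplink profile looks like as an NSX Manager API payload, using the load balance source policy with two active uplinks. The resource_type and the LOADBALANCE_SRCID teaming policy identifier are API schema values; the profile name, transport VLAN ID, manager FQDN, and credentials are placeholder assumptions.

```python
# Minimal sketch, assuming a placeholder NSX Manager, profile name, and
# transport VLAN ID. POST /api/v1/host-switch-profiles creates an uplink
# profile; LOADBALANCE_SRCID is the API identifier for the load balance
# source teaming policy.
import requests

NSX_MANAGER = "nsx-manager.example.com"  # assumption: placeholder FQDN
AUTH = ("admin", "password")             # assumption: placeholder credentials

uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "esxi-uplink-profile",  # assumption: example name
    "transport_vlan": 1614,                 # assumption: example overlay VLAN
    "teaming": {
        # Load balance source: both uplinks are active at the same time.
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
}

resp = requests.post(
    f"https://{NSX_MANAGER}/api/v1/host-switch-profiles",
    json=uplink_profile,
    auth=AUTH,
    verify=False,  # lab-only; verify certificates in production
)
resp.raise_for_status()
```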
Replication Mode of Segments
The control plane decouples NSX from the physical network and handles the broadcast, unknown unicast, and multicast (BUM) traffic in the virtual segments.

The following options are available for BUM replication on segments. A configuration sketch follows the table.

| BUM Replication Mode | Description |
|---|---|
| Hierarchical Two-Tier | The ESXi host transport nodes are grouped according to their TEP IP subnet. One ESXi host in each subnet is responsible for replication to an ESXi host in another subnet. The receiving ESXi host replicates the traffic to the ESXi hosts in its local subnet. The source ESXi host transport node knows about the groups based on information it has received from the control plane. The system can select an arbitrary ESXi host transport node as the mediator for the source subnet if the remote mediator ESXi host node is available. |
| Head-End | The ESXi host transport node at the origin of the frame to be flooded on a segment sends a copy to every other ESXi host transport node that is connected to this segment. |
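The replication mode is configured per segment. In the NSX Policy API, a segment's replication_mode is MTEP for hierarchical two-tier replication or SOURCE for head-end replication. The following minimal sketch sets hierarchical two-tier on an existing segment; the segment ID, manager FQDN, and credentials are placeholder assumptions.

```python
# Minimal sketch, assuming a placeholder NSX Manager and an existing segment
# named seg-web-01. In the NSX Policy API, replication_mode is "MTEP" for
# hierarchical two-tier replication and "SOURCE" for head-end replication.
import requests

NSX_MANAGER = "nsx-manager.example.com"  # assumption: placeholder FQDN
AUTH = ("admin", "password")             # assumption: placeholder credentials

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/seg-web-01",
    json={"replication_mode": "MTEP"},  # MTEP = hierarchical two-tier
    auth=AUTH,
    verify=False,  # lab-only; verify certificates in production
)
resp.raise_for_status()
```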
Overlay Design Requirements and Recommendations for VMware Cloud Foundation
Consider the requirements for configuring the ESXi hosts in a workload domain as NSX transport nodes, the transport zone layout, and the uplink teaming policies, as well as the best practices for IP allocation and BUM replication mode in a VMware Cloud Foundation deployment.
Overlay Design Requirements
You must meet the following design requirements in your overlay design.
| Requirement ID | Design Requirement | Justification | Implication |
|---|---|---|---|
| VCF-NSX-OVERLAY-REQD-CFG-001 | Configure all ESXi hosts in the workload domain as transport nodes in NSX. | Enables distributed routing, logical segments, and distributed firewall. | None. |
| VCF-NSX-OVERLAY-REQD-CFG-002 | Configure each ESXi host as a transport node without using transport node profiles. | | You must configure each transport node with an uplink profile individually. |
| VCF-NSX-OVERLAY-REQD-CFG-003 | To provide virtualized network capabilities to workloads, use overlay networks with NSX Edge nodes and distributed routing. | | Requires configuring transport networks with an MTU size of at least 1,600 bytes (see the MTU arithmetic after this table). |
| VCF-NSX-OVERLAY-REQD-CFG-004 | Create a single overlay transport zone for all overlay traffic across the workload domain and NSX Edge nodes. | | All clusters in all workload domains that share the same NSX Manager share the same transport zone. |
| VCF-NSX-OVERLAY-REQD-CFG-005 | Create a VLAN transport zone for uplink VLAN traffic that is applied only to NSX Edge nodes. | Ensures that uplink VLAN segments are configured on the NSX Edge transport nodes. | If VLAN segments are required on hosts, you must create another VLAN transport zone for the host transport nodes only. |
| VCF-NSX-OVERLAY-REQD-CFG-006 | Create an uplink profile with a load balance source teaming policy with two active uplinks for ESXi hosts. | For increased resiliency and performance, supports the concurrent use of both physical NICs on the ESXi hosts that are configured as transport nodes. | None. |
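The 1,600-byte MTU in VCF-NSX-OVERLAY-REQD-CFG-003 follows from the Geneve encapsulation overhead. The following back-of-the-envelope sketch uses standard protocol header sizes; the headroom estimate for Geneve options is an illustrative assumption.

```python
# Back-of-the-envelope arithmetic for the 1,600-byte MTU requirement.
# Header sizes are standard protocol constants; the headroom estimate
# for Geneve options is an illustrative assumption.
INNER_ETHERNET = 14  # inner frame header carried inside the tunnel
OUTER_IPV4 = 20      # outer IP header, counted against the interface MTU
OUTER_UDP = 8        # outer UDP header
GENEVE_BASE = 8      # fixed Geneve header; options add more in 4-byte units

inner_payload = 1500  # standard guest MTU

# The transport interface MTU must carry the full outer IP packet:
# the encapsulated inner frame plus the Geneve, UDP, and IP headers.
required = inner_payload + INNER_ETHERNET + GENEVE_BASE + OUTER_UDP + OUTER_IPV4
print(required)  # 1550 before Geneve options; 1600 leaves ~50 bytes of headroom
```

In practice, transport networks are often configured with a larger MTU, such as 9,000 bytes, to leave ample headroom.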
Overlay Design Recommendations
In your overlay design for VMware Cloud Foundation, you can apply certain best practices.
| Recommendation ID | Design Recommendation | Justification | Implication |
|---|---|---|---|
| VCF-NSX-OVERLAY-RCMD-CFG-001 | Use DHCP to assign IP addresses to the host TEP interfaces. | Required for deployments where a cluster spans Layer 3 network domains, such as multiple availability zones or management clusters that span Layer 3 domains. | A DHCP server is required for the host overlay VLANs (see the audit sketch after this table). |
| VCF-NSX-OVERLAY-RCMD-CFG-002 | Use hierarchical two-tier replication on all overlay segments. | Hierarchical two-tier replication is more efficient because it reduces the number of ESXi hosts to which the source ESXi host must replicate traffic when hosts have different TEP subnets. This is typically the case with more than one cluster, and replication performance improves in that scenario. | None. |
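As a closing illustration, the following sketch audits both recommendations: it checks whether each transport node assigns host TEP addresses through DHCP and whether each segment uses hierarchical two-tier replication. The field names come from the NSX API schemas; the manager FQDN and credentials are placeholder assumptions, and treating MTEP as the value for an unset replication mode is also an assumption here.

```python
# Minimal audit sketch, assuming a placeholder NSX Manager and credentials.
# Field names come from the NSX API schemas: ip_assignment_spec.resource_type
# is "AssignedByDhcp" when TEP addresses come from DHCP, and a policy
# segment's replication_mode is "MTEP" for hierarchical two-tier.
import requests

NSX_MANAGER = "nsx-manager.example.com"  # assumption: placeholder FQDN
AUTH = ("admin", "password")             # assumption: placeholder credentials

def list_results(path):
    """Return the results array of a GET call against the NSX Manager."""
    resp = requests.get(f"https://{NSX_MANAGER}{path}", auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json().get("results", [])

# Recommendation 1: host TEP interfaces should get their addresses via DHCP.
for node in list_results("/api/v1/transport-nodes"):
    for switch in node.get("host_switch_spec", {}).get("host_switches", []):
        ip_spec = switch.get("ip_assignment_spec", {})
        dhcp = ip_spec.get("resource_type") == "AssignedByDhcp"
        print(node.get("display_name"), "TEP via DHCP:", dhcp)

# Recommendation 2: all overlay segments should use hierarchical two-tier
# replication (assumption: MTEP is treated as the value when unset).
for seg in list_results("/policy/api/v1/infra/segments"):
    mode = seg.get("replication_mode", "MTEP")
    print(seg.get("display_name"), "replication:", mode)
```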