The NSX Edge cluster runs on the default management vSphere cluster of VMware Cloud Foundation. vSphere HA and vSphere DRS protect the NSX Edge appliances. In an environment with multiple availability zones, you use vSphere DRS to designate the first availability zone as the primary location for the NSX Edge nodes.
NSX Edge Cluster Design
The NSX Edge cluster is a logical grouping of NSX Edge transport nodes. These NSX Edge appliances run on a vSphere cluster and provide north-south routing and network services for the management workloads. You can dedicate this vSphere cluster to the edge appliances only, or share it with the other management appliances.
- Default management vSphere cluster: The default management vSphere cluster contains all components for managing VMware Cloud Foundation. See the vSphere Cluster Design for the Management Domain.
- Dedicated edge vSphere cluster: A dedicated edge vSphere cluster contains only NSX Edge appliances for the management domain.
Decision ID | Design Decision | Design Justification | Design Implications
---|---|---|---
VCF-MGMT-NSX-EDGE-CFG-002 | Deploy the NSX Edge virtual appliances in the default management vSphere cluster, sharing the cluster between the management workloads and the edge appliances. | | None.
VCF-MGMT-NSX-EDGE-CFG-003 | Deploy two NSX Edge appliances in an edge cluster in the default vSphere cluster in the management domain. | Creates the NSX Edge cluster for satisfying the requirements for availability and scale. | None.
VCF-MGMT-NSX-EDGE-CFG-004 | Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster. | Keeps the NSX Edge nodes running on different ESXi hosts for high availability. | None.
VCF-MGMT-NSX-EDGE-CFG-005 | In vSphere HA, set the restart priority policy for each NSX Edge appliance to high. | | If the restart priority for another management appliance is set to highest, the connectivity delays for management appliances will be longer.
VCF-MGMT-NSX-EDGE-CFG-006 | Configure all edge nodes as transport nodes. | Enables the participation of edge nodes in the overlay network for delivery of services to the SDDC management components such as routing and load balancing. | None.
VCF-MGMT-NSX-EDGE-CFG-007 | Create an NSX Edge cluster with the default Bidirectional Forwarding Detection (BFD) configuration between the NSX Edge nodes in the cluster. | | None.
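
If you need to verify or re-create the DRS anti-affinity rule (VCF-MGMT-NSX-EDGE-CFG-004) and the vSphere HA restart priority (VCF-MGMT-NSX-EDGE-CFG-005) outside the VMware Cloud Foundation automation, a minimal pyVmomi sketch might look like the following. The vCenter Server address, cluster name, and edge appliance names are placeholders, not values from this design.

```python
# Illustrative pyVmomi sketch: create a DRS VM-VM anti-affinity rule for the two
# NSX Edge appliances and set their vSphere HA restart priority to "high".
# The vCenter host, cluster name, and VM names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        for obj in view.view:
            if obj.name == name:
                return obj
        raise ValueError(f"{vimtype.__name__} {name!r} not found")
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    cluster = find_obj(content, vim.ClusterComputeResource, "mgmt-cluster-01")
    edge_vms = [find_obj(content, vim.VirtualMachine, name)
                for name in ("mgmt-edge-01", "mgmt-edge-02")]

    # VM-VM anti-affinity rule (VCF-MGMT-NSX-EDGE-CFG-004).
    rule = vim.cluster.AntiAffinityRuleSpec(name="anti-affinity-rule-nsx-edge",
                                            enabled=True, vm=edge_vms)
    rules_spec = [vim.cluster.RuleSpec(operation="add", info=rule)]

    # Per-VM HA restart priority set to "high" (VCF-MGMT-NSX-EDGE-CFG-005).
    das_specs = [vim.cluster.DasVmConfigSpec(
                     operation="add",
                     info=vim.cluster.DasVmConfigInfo(
                         key=vm,
                         dasSettings=vim.cluster.DasVmSettings(restartPriority="high")))
                 for vm in edge_vms]

    spec = vim.cluster.ConfigSpecEx(rulesSpec=rules_spec, dasVmConfigSpec=das_specs)
    task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
    print("Reconfigure task started:", task.info.key)
finally:
    Disconnect(si)
```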
High Availability for a Single VMware Cloud Foundation Instance with Multiple Availability Zones
NSX Edge nodes connect to the top of rack switches in each data center to provide northbound uplinks and route peering for advertising the SDN networks. This connectivity is specific to the top of rack switches to which the edge node is connected.
If an outage of an availability zone occurs, vSphere HA fails over the edge appliances to the other availability zone. The second availability zone must provide an equivalent of the network infrastructure to which the edge nodes are connected in the first availability zone.
To support failover of the NSX Edge appliances, the following networks are stretched across the first and second availability zones. For information about all networks in a management domain with multiple availability zones, see Physical Network Infrastructure Design for NSX-T Data Center for the Management Domain.
Function | HA Layer 3 Gateway Across Availability Zones
---|---
Management for the first availability zone | ✓
Uplink01 | ✗
Uplink02 | ✗
Edge overlay | ✓
The VLAN ID and Layer 3 network must be the same across both availability zones. Additionally, the Layer 3 gateway at the first hop must be highly available such that it tolerates the failure of an entire availability zone.
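
Because a VLAN ID or subnet mismatch between the zones prevents the failed-over edge appliances from reconnecting, you might validate the planned values for the stretched networks before deployment. The following Python sketch is only a planning aid under that assumption; the network names, VLAN IDs, and subnets are hypothetical examples, not values from this design.

```python
# Illustrative planning check: confirm that each stretched network uses the same
# VLAN ID and Layer 3 network in both availability zones. The network names and
# values below are hypothetical examples.
from ipaddress import ip_network

az1 = {
    "management": {"vlan": 1611, "subnet": "172.16.11.0/24"},
    "edge_overlay": {"vlan": 1613, "subnet": "172.16.13.0/24"},
}
az2 = {
    "management": {"vlan": 1611, "subnet": "172.16.11.0/24"},
    "edge_overlay": {"vlan": 1613, "subnet": "172.16.13.0/24"},
}

def check_stretched(zone_a, zone_b, stretched_networks):
    """Report any VLAN or subnet mismatch for networks that must be stretched."""
    problems = []
    for name in stretched_networks:
        a, b = zone_a[name], zone_b[name]
        if a["vlan"] != b["vlan"]:
            problems.append(f"{name}: VLAN {a['vlan']} != {b['vlan']}")
        if ip_network(a["subnet"]) != ip_network(b["subnet"]):
            problems.append(f"{name}: subnet {a['subnet']} != {b['subnet']}")
    return problems

issues = check_stretched(az1, az2, ["management", "edge_overlay"])
print("OK" if not issues else "\n".join(issues))
```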
Decision ID | Design Decision | Design Justification | Design Implication
---|---|---|---
VCF-MGMT-NSX-EDGE-CFG-008 | Add the NSX Edge appliances to the virtual machine group for the first availability zone. | Ensures that, by default, the NSX Edge appliances run on hosts in the first availability zone. | None.
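
Decision VCF-MGMT-NSX-EDGE-CFG-008 assumes that a DRS VM group, tied by a should-run VM-Host rule to the hosts of the first availability zone, already exists. If you add the edge appliances to such a group manually, a minimal pyVmomi sketch might look like the following; the cluster, group, and virtual machine names are placeholders, and the connection and find_obj() helper from the earlier sketch are reused.

```python
# Illustrative pyVmomi sketch: add the NSX Edge appliances to an existing DRS VM
# group that a VM-Host "should run on" rule ties to the hosts of the first
# availability zone. Cluster, group, and VM names are placeholders.
from pyVmomi import vim

def add_vms_to_group(cluster, group_name, vms):
    """Append VMs to an existing DRS VM group and push the change to the cluster."""
    group = next(g for g in cluster.configurationEx.group
                 if isinstance(g, vim.cluster.VmGroup) and g.name == group_name)
    for vm in vms:
        if vm not in group.vm:
            group.vm.append(vm)
    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[vim.cluster.GroupSpec(operation="edit", info=group)])
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

# Reusing the connection and find_obj() helper from the earlier sketch:
# cluster = find_obj(content, vim.ClusterComputeResource, "mgmt-cluster-01")
# edge_vms = [find_obj(content, vim.VirtualMachine, n)
#             for n in ("mgmt-edge-01", "mgmt-edge-02")]
# add_vms_to_group(cluster, "az1-vm-group", edge_vms)
```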
High Availability for Multiple VMware Cloud Foundation Instances
In an environment with multiple VMware Cloud Foundation instances, each instance has its own NSX Edge cluster for the management domain. In each instance, the edge nodes and cluster are deployed with the same design but with instance-specific settings such as IP addressing, VLAN IDs, and names. Each edge cluster is managed by the NSX Local Manager instance for the management domain.
Workload traffic between VMware Cloud Foundation instances traverses the inter-instance overlay tunnel, which terminates on the remote tunnel endpoints (RTEPs) on the NSX Edge nodes. This tunnel is the data plane for inter-instance traffic.
Take into account the following considerations:
- The RTEP network segment has a VLAN ID and Layer 3 range that are specific to the individual data center fault domain.
- If a VMware Cloud Foundation instance is deployed with multiple availability zones, the RTEP network segment must be stretched between the zones with the same VLAN ID and IP range. Additionally, the Layer 3 gateway at the first hop must be highly available such that it tolerates the failure of an entire availability zone.
- In an environment with multiple VMware Cloud Foundation instances, each instance requires an Edge RTEP VLAN configured with a VLAN ID and IP range that are appropriate for that instance.
Function | First Availability Zone | Second Availability Zone | High Availability Layer 3 Gateway
---|---|---|---
Edge RTEP in the first VMware Cloud Foundation instance | ✓ | ✓ | ✓
Edge RTEP in the second VMware Cloud Foundation instance | ✓ | ✓ | ✓
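
To confirm that each VMware Cloud Foundation instance runs its own edge cluster and edge transport nodes, you can query the NSX Local Manager of each instance. The following Python sketch uses the NSX Manager REST API list endpoints for edge clusters and transport nodes; the manager addresses and credentials are placeholders, and the remote_tunnel_endpoint field is an assumption about the transport node payload in your NSX version, so verify it against the NSX API reference.

```python
# Illustrative check: list the edge clusters and their member transport nodes on
# the NSX Local Manager of each VMware Cloud Foundation instance. Manager
# addresses and credentials are placeholders.
import requests

MANAGERS = {
    "instance-1": "https://nsx-lm-instance1.example.local",
    "instance-2": "https://nsx-lm-instance2.example.local",
}
AUTH = ("admin", "********")

for instance, manager in MANAGERS.items():
    session = requests.Session()
    session.auth = AUTH
    session.verify = False  # lab only; validate certificates in production

    edge_clusters = session.get(f"{manager}/api/v1/edge-clusters").json()["results"]
    for cluster in edge_clusters:
        print(f"{instance}: edge cluster {cluster['display_name']}")
        for member in cluster.get("members", []):
            node_id = member["transport_node_id"]
            node = session.get(f"{manager}/api/v1/transport-nodes/{node_id}").json()
            # "remote_tunnel_endpoint" is an assumption about the payload layout;
            # it is present only after RTEPs are configured for NSX Federation.
            rtep = node.get("remote_tunnel_endpoint")
            print(f"  edge node {node['display_name']}, RTEP config: {rtep}")
```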