Following the principles of this design and of each product, you deploy, configure, and connect the NSX Edge nodes to support the networks in the NSX instances of your VMware Cloud Foundation deployment.

Deployment Model for the NSX Edge Nodes for VMware Cloud Foundation

For NSX Edge nodes, you determine the form factor, number of nodes and placement according to the requirements for network services in a VMware Cloud Foundation workload domain.

An NSX Edge node is an appliance that provides centralized networking services that cannot be distributed to hypervisors, such as load balancing, NAT, VPN, and physical network uplinks. Some services, such as Tier-0 gateways, are limited to a single instance per NSX Edge node, but most services can coexist on the same node.

NSX Edge nodes are grouped in one or more edge clusters, representing a pool of capacity for NSX services.

An NSX Edge node can be deployed as a virtual appliance or installed on bare-metal hardware. A bare-metal edge node can provide better performance at the expense of a more complex deployment and a more limited set of supported deployment topologies. For details on the trade-offs between virtual and bare-metal NSX Edge nodes, see the NSX documentation.

Table 1. NSX Edge Deployment Model Considerations

NSX Edge virtual appliance deployed by using SDDC Manager

Benefits:

  • Deployment and life cycle management by using SDDC Manager workflows that call NSX Manager

  • Automated password management by using SDDC Manager

  • Benefits from vSphere HA recovery

  • Can be used across availability zones

  • Easy to scale up by modifying the specification of the virtual appliance

Drawbacks:

  • Might not provide the best performance in individual customer scenarios

NSX Edge virtual appliance deployed by using NSX Manager

Benefits:

  • Benefits from vSphere HA recovery

  • Can be used across availability zones

  • Easy to scale up by modifying the specification of the virtual appliance

Drawbacks:

  • Might not provide the best performance in individual customer scenarios

  • Must be manually deployed by using NSX Manager

  • Manual password management by using NSX Manager

  • Cannot be used to support Application Virtual Networks (AVNs) in the management domain

Bare-metal NSX Edge appliance

Benefits:

  • Might provide better performance in individual customer scenarios

Drawbacks:

  • Has hardware compatibility requirements

  • Requires individual hardware life cycle management and monitoring of failures, firmware, and drivers

  • Manual password management

  • Must be manually deployed and connected to the environment

  • Requires manual recovery after hardware failure

  • Requires deploying a bare-metal NSX Edge appliance in each availability zone for network failover

  • Deploying a bare-metal edge in each availability zone requires considering asymmetric routing

  • Requires edge fault domains if more than one edge is deployed in each availability zone for Active/Standby Tier-0 or Tier-1 gateways

  • Requires redeployment to a new host to scale up

  • Cannot be used to support AVNs in the management domain

Sizing Considerations for NSX Edges for VMware Cloud Foundation

When you deploy NSX Edge appliances, you select a size according to the scale of your environment. The option that you select determines the number of CPUs and the amount of memory of the appliance.

For detailed sizing according to the overall profile of the VMware Cloud Foundation instance you plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.

Table 2. Sizing Considerations for NSX Edges

  • Small: Suitable for proof-of-concept deployments only.

  • Medium: Suitable when only Layer 2 through Layer 4 features, such as NAT, routing, Layer 4 firewall, and Layer 4 load balancing, are required and the total throughput requirement is less than 2 Gbps.

  • Large: Suitable when only Layer 2 through Layer 4 features, such as NAT, routing, Layer 4 firewall, and Layer 4 load balancing, are required and the total throughput is 2 to 10 Gbps. Also suitable when a Layer 7 load balancer, for example for SSL offload, is required.

  • Extra Large: Suitable when the total required throughput is multiple Gbps for Layer 7 load balancing and VPN.
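
The guidance in Table 2 can be expressed as a simple selection rule. The following Python sketch encodes that rule; the resource figures in the comments are assumptions to verify against the documentation for your NSX release, and the thresholds only approximate the table above.

```python
# Minimal sketch of the Table 2 guidance. The thresholds below approximate the
# table; the vCPU and memory figures are assumptions, not authoritative values.

SIZE_FOOTPRINT = {            # assumed footprints, verify for your NSX release
    "SMALL": "2 vCPU, 4 GB RAM",
    "MEDIUM": "4 vCPU, 8 GB RAM",
    "LARGE": "8 vCPU, 32 GB RAM",
    "XLARGE": "16 vCPU, 64 GB RAM",
}

def edge_appliance_size(throughput_gbps: float,
                        needs_layer7_lb: bool = False,
                        proof_of_concept: bool = False) -> str:
    """Suggest an NSX Edge appliance size based on the Table 2 criteria."""
    if proof_of_concept:
        return "SMALL"
    if not needs_layer7_lb:
        # Layer 2-4 features only (NAT, routing, L4 firewall, L4 load balancing).
        return "MEDIUM" if throughput_gbps < 2 else "LARGE"
    # Layer 7 load balancing (for example, SSL offload) is required; the 2 Gbps
    # boundary between Large and Extra Large is an assumption, not from the table.
    return "LARGE" if throughput_gbps <= 2 else "XLARGE"

print(edge_appliance_size(throughput_gbps=5))                        # LARGE
print(edge_appliance_size(throughput_gbps=5, needs_layer7_lb=True))  # XLARGE
```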

Network Design for the NSX Edge Nodes for VMware Cloud Foundation

In each VMware Cloud Foundation instance, you implement an NSX Edge configuration with a single N-VDS. You connect the uplink network interfaces of the edge appliance to VLAN trunk port groups that are connected to particular physical NICs on the host.

NSX Edge Network Configuration

The NSX Edge node contains a virtual switch, called an N-VDS, that is managed by NSX. This internal N-VDS is used to define traffic flow through the interfaces of the edge node. An N-VDS can be connected to one or more interfaces. Interfaces cannot be shared between N-VDS instances.

If you plan to deploy multiple VMware Cloud Foundation instances, apply the same network design to the NSX Edge cluster in the second and other additional VMware Cloud Foundation instances.

Figure 1. NSX Edge Network Configuration

The NSX Edge appliance has a single N-VDS. eth0 is used for management traffic and is connected to the management port group. fp-eth0 and fp-eth1 are used for uplink and overlay traffic and are connected to the VLAN trunk uplink port groups.
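
To make the uplink wiring concrete, the following pyVmomi sketch builds the specification for one such VLAN trunk port group, pinned to a specific distributed switch uplink with explicit failover to the other uplink. This is a minimal sketch: the port group names and uplink labels are placeholders, the `dvs` object is assumed to be retrieved elsewhere, and in a VMware Cloud Foundation deployment these port groups are normally created as part of the automated edge deployment workflow.

```python
from pyVmomi import vim

def make_edge_trunk_pg_spec(name, active_uplink, standby_uplink):
    """Build a spec for a VLAN trunk port group pinned to one VDS uplink,
    with explicit failover to the other uplink (placeholder names)."""
    vlan_trunk = vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec(
        inherited=False,
        vlanId=[vim.NumericRange(start=0, end=4094)])

    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False,
        policy=vim.StringPolicy(inherited=False, value="failover_explicit"),
        uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
            inherited=False,
            activeUplinkPort=[active_uplink],
            standbyUplinkPort=[standby_uplink]))

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vlan_trunk, uplinkTeamingPolicy=teaming)

    return vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, type="earlyBinding", numPorts=8,
        defaultPortConfig=port_config)

# 'dvs' is assumed to be a vim.DistributedVirtualSwitch retrieved elsewhere.
# One trunk port group per edge uplink, each pinned to a different physical NIC.
specs = [
    make_edge_trunk_pg_spec("edge-uplink-01", "uplink1", "uplink2"),  # fp-eth0
    make_edge_trunk_pg_spec("edge-uplink-02", "uplink2", "uplink1"),  # fp-eth1
]
# task = dvs.AddDVPortgroup_Task(specs)
```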

Uplink Policy Design for the NSX Edge Nodes for VMware Cloud Foundation

A transport node can participate in overlay and VLAN networks. Uplink profiles define policies for the links from the NSX Edge transport nodes to the top of rack switches. Uplink profiles are containers for the properties and capabilities of the network adapters. Uplink profiles are applied to the N-VDS of the edge node.

Uplink profiles can use either load balance source or failover order teaming. With load balance source, multiple uplinks can be active. With failover order, only a single uplink can be active.

Teaming can be configured by using the default teaming policy or a user-defined named teaming policy. You can use named teaming policies to pin traffic segments to designated edge uplinks.
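
As an illustration of this design, the following Python sketch creates such an uplink profile through the NSX Manager REST API. This is a minimal sketch under assumptions: the manager address, credentials, uplink names, and transport VLAN are placeholders, and the payload should be checked against the NSX API reference for your version.

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "changeme")                  # placeholder credentials

# Uplink profile with a load balance source default teaming policy and two
# named failover-order teaming policies, one per edge uplink.
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",
    "teaming": {
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink2", "uplink_type": "PNIC"},
        ],
    },
    "named_teamings": [
        {
            "name": "uplink1-only",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink1", "uplink_type": "PNIC"}],
        },
        {
            "name": "uplink2-only",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink2", "uplink_type": "PNIC"}],
        },
    ],
    "transport_vlan": 2711,                   # edge overlay VLAN (placeholder)
}

resp = requests.post(f"{NSX_MANAGER}/api/v1/host-switch-profiles",
                     json=uplink_profile, auth=AUTH, verify=False)
resp.raise_for_status()
```

In a VMware Cloud Foundation deployment, the edge cluster deployment workflow creates this profile for you; the sketch only shows the shape of the default and named teaming policies.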

NSX Edge Node Requirements and Recommendations for VMware Cloud Foundation

Consider the network, N-VDS configuration, and uplink policy requirements for using NSX Edge nodes in VMware Cloud Foundation, and the best practices for operating NSX Edge nodes in an optimal way, such as the number and size of the nodes, high availability, and N-VDS architecture, on a standard or stretched cluster.

NSX Edge Design Requirements

You must meet the following design requirements for standard and stretched clusters in your NSX Edge design for VMware Cloud Foundation.

Table 3. NSX Edge Design Requirements for VMware Cloud Foundation

Requirement ID

Design Requirement

Justification

Implication

VCF-NSX-EDGE-REQD-CFG-001

Connect the management interface of each NSX Edge node to the management VLAN.

Provides connection from the NSX Manager cluster to the NSX Edge.

None.

VCF-NSX-EDGE-REQD-CFG-002

  • Connect the fp-eth0 interface of each NSX Edge appliance to a VLAN trunk port group pinned to physical NIC 0 of the host, with the ability to fail over to physical NIC 1.

  • Connect the fp-eth1 interface of each NSX Edge appliance to a VLAN trunk port group pinned to physical NIC 1 of the host, with the ability to fail over to physical NIC 0.

  • Leave the fp-eth2 interface of each NSX Edge appliance unused.

Justification:

  • Because VLAN trunk port groups pass traffic for all VLANs, VLAN tagging can occur in the NSX Edge node itself for easy post-deployment configuration.

  • By using two separate VLAN trunk port groups, you can direct traffic from the edge node to a particular host network interface and top of rack switch as needed.

  • If a top of rack switch fails, the VLAN trunk port group fails over to the other physical NIC, ensuring that both fp-eth0 and fp-eth1 remain available.

None.

VCF-NSX-EDGE-REQD-CFG-003

Use a dedicated VLAN for edge overlay that is different from the host overlay VLAN.

A dedicated edge overlay network supports edge mobility for advanced deployments such as multiple availability zones or multi-rack clusters.

  • You must have routing between the VLANs for edge overlay and host overlay.

  • You must allocate another VLAN in the data center infrastructure for edge overlay.

VCF-NSX-EDGE-REQD-CFG-004

Create one uplink profile for the edge nodes with three teaming policies:

  • A default teaming policy of load balance source with both uplinks, uplink1 and uplink2, active.

  • A named teaming policy of failover order with a single active uplink, uplink1, and no standby uplinks.

  • A named teaming policy of failover order with a single active uplink, uplink2, and no standby uplinks.

Justification:

  • An NSX Edge node that uses a single N-VDS can have only one uplink profile.

  • Supports the concurrent use of both edge uplinks through both physical NICs on the ESXi hosts for increased resiliency and performance.

  • The default teaming policy increases overlay performance and availability by using multiple TEPs and by balancing overlay traffic.

  • By using named teaming policies, you can connect an edge uplink to a specific host uplink and, from there, to a specific top of rack switch in the data center.

  • Enables ECMP because the NSX Edge nodes can uplink to the physical network over two different VLANs.

None.

Table 4. NSX Edge Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID

Design Requirement

Justification

Implication

VCF-NSX-EDGE-REQD-CFG-005

Allocate a separate VLAN for edge RTEP overlay that is different from the edge overlay VLAN.

The RTEP network must be on a VLAN that is different from the edge overlay VLAN. This is an NSX requirement that makes it possible to configure a different MTU size for each network.

You must allocate another VLAN in the data center infrastructure.

NSX Edge Design Recommendations

In your NSX Edge design for VMware Cloud Foundation, you can apply certain best practices for standard and stretched clusters.

Table 5. NSX Edge Design Recommendations for VMware Cloud Foundation

Recommendation ID

Design Recommendation

Justification

Implications

VCF-NSX-EDGE-RCMD-CFG-001

Use appropriately sized NSX Edge virtual appliances.

Ensures resource availability and usage efficiency per workload domain.

You must provide sufficient compute resources to support the chosen appliance size.

VCF-NSX-EDGE-RCMD-CFG-002

Deploy the NSX Edge virtual appliances to the default vSphere cluster of the workload domain, sharing the cluster between the workloads and the edge appliances.

  • Simplifies the configuration and minimizes the number of ESXi hosts required for initial deployment.

  • Keeps the management components located in the same domain and cluster, isolated from customer workloads.

Workloads and NSX Edges share the same compute resources.

VCF-NSX-EDGE-RCMD-CFG-003

Deploy two NSX Edge appliances in an edge cluster in the default vSphere cluster of the workload domain.

Creates the NSX Edge cluster for satisfying the requirements for availability and scale.

None.

VCF-NSX-EDGE-RCMD-CFG-004

Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster (see the sketch after this table).

Keeps the NSX Edge nodes running on different ESXi hosts for high availability.

None.

VCF-NSX-EDGE-RCMD-CFG-005

In vSphere HA, set the restart priority policy for each NSX Edge appliance to high.

  • The NSX Edge nodes are part of the north-south data path for overlay segments. vSphere HA restarts the NSX Edge appliances first so that virtual machines that are powered on or migrated by using vSphere vMotion while the edge nodes are offline lose connectivity only for a short time.

  • Setting the restart priority to high reserves the highest priority for future needs.

If the restart priority of another management appliance is set to highest, the connectivity delays for management appliances will be longer.

VCF-NSX-EDGE-RCMD-CFG-006

Configure all edge nodes as transport nodes.

Enables the participation of edge nodes in the overlay network for delivery of services to the SDDC management components such as routing and load balancing.

None.

VCF-NSX-EDGE-RCMD-CFG-007

Create an NSX Edge cluster with the default Bidirectional Forwarding Detection (BFD) configuration between the NSX Edge nodes in the cluster.

  • Satisfies the availability requirements by default.

  • Edge nodes must remain available to create services such as NAT, routing to physical networks, and load balancing.

None.

VCF-NSX-EDGE-RCMD-CFG-008

Use a single N-VDS in the NSX Edge nodes.

  • Simplifies deployment of the edge nodes.

  • The same N-VDS switch design can be used regardless of edge form factor.

  • Supports multiple TEP interfaces in the edge node.

  • vSphere Distributed Switch is not supported in the edge node.

None.
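
The following pyVmomi sketch illustrates recommendations VCF-NSX-EDGE-RCMD-CFG-004 and VCF-NSX-EDGE-RCMD-CFG-005 together: it builds a cluster reconfiguration specification that adds a VM-VM anti-affinity rule for the edge appliances and sets their vSphere HA restart priority to high. This is a minimal sketch: the rule name is a placeholder, and the `cluster` and `edge_vms` objects are assumed to be retrieved elsewhere from the vCenter Server inventory.

```python
from pyVmomi import vim

def edge_placement_spec(edge_vms):
    """Build a cluster reconfiguration spec that keeps the NSX Edge appliances
    on different ESXi hosts and restarts them with high priority."""
    anti_affinity = vim.cluster.AntiAffinityRuleSpec(
        name="anti-affinity-nsx-edges",   # placeholder rule name
        enabled=True,
        vm=edge_vms)

    spec = vim.cluster.ConfigSpecEx()
    spec.rulesSpec = [vim.cluster.RuleSpec(operation="add", info=anti_affinity)]
    spec.dasVmConfigSpec = [
        vim.cluster.DasVmConfigSpec(
            operation="add",
            info=vim.cluster.DasVmConfigInfo(
                key=vm,
                dasSettings=vim.cluster.DasVmSettings(restartPriority="high")))
        for vm in edge_vms
    ]
    return spec

# 'cluster' (vim.ClusterComputeResource) and 'edge_vms' (the two edge
# vim.VirtualMachine objects) are assumed to be retrieved elsewhere.
# task = cluster.ReconfigureComputeResource_Task(edge_placement_spec(edge_vms),
#                                                modify=True)
```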

Table 6. NSX Edge Design Recommendations for Stretched Clusters in VMware Cloud Foundation

Recommendation ID

Design Recommendation

Justification

Implications

VCF-NSX-EDGE-RCMD-CFG-009

Add the NSX Edge appliances to the virtual machine group for the first availability zone (see the sketch after this table).

Ensures that, by default, the NSX Edge appliances are powered on a host in the first availability zone.

None.
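
As a sketch of recommendation VCF-NSX-EDGE-RCMD-CFG-009, the following pyVmomi snippet appends the edge appliances to an existing DRS virtual machine group. The group name is a placeholder, and the `cluster` and `edge_vms` objects are assumed to be retrieved elsewhere.

```python
from pyVmomi import vim

def add_edges_to_az1_group(cluster, edge_vms, group_name="az1-vm-group"):
    """Append the NSX Edge appliances to the existing VM group that represents
    the first availability zone (group_name is a placeholder)."""
    # Find the existing VM group in the cluster configuration.
    group = next(g for g in cluster.configurationEx.group
                 if isinstance(g, vim.cluster.VmGroup) and g.name == group_name)
    group.vm = list(group.vm or []) + [vm for vm in edge_vms if vm not in (group.vm or [])]

    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[vim.cluster.GroupSpec(operation="edit", info=group)])
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```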