You can add an NSX-T Edge cluster with 2-tier routing to the management domain or a workload domain to provide north-south routing and network services.

SDDC Manager does not enforce rack failure resiliency for Edge clusters. Make sure that the number of Edge nodes that you add to an NSX-T Edge cluster, and the vSphere clusters to which you deploy the Edge nodes, enable the Edge cluster to continue to provide Edge routing services in case of rack failure.

After you create an NSX-T Edge cluster, SDDC Manager does not support expanding or shrinking it by adding or deleting Edge nodes.

This procedure describes how to use SDDC Manager to create an NSX-T Edge cluster with NSX-T Edge node virtual appliances. If you have latency-sensitive applications in your environment, you can deploy NSX Edge nodes on bare-metal servers. See Deployment of VMware NSX-T Edge Nodes on Bare-Metal Hardware for VMware Cloud Foundation 4.0.x.
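If you prefer to automate Edge cluster creation instead of using the SDDC Manager UI, you can call the SDDC Manager public API. The following is a minimal sketch, assuming that the API in your release exposes a /v1/tokens authentication endpoint and a /v1/edge-clusters creation endpoint that accepts a JSON creation spec; the endpoint paths, spec schema, and field names vary by release, so confirm them against the VMware Cloud Foundation API reference before use. The hostname and credentials shown are placeholders.

import json
import requests

SDDC_MANAGER = "https://sddc-manager.example.com"   # hypothetical FQDN
USERNAME = "administrator@vsphere.local"            # example credentials
PASSWORD = "********"

# Request an API access token from SDDC Manager (assumed /v1/tokens endpoint).
token_resp = requests.post(
    f"{SDDC_MANAGER}/v1/tokens",
    json={"username": USERNAME, "password": PASSWORD},
    verify=False,
)
token_resp.raise_for_status()
headers = {"Authorization": f"Bearer {token_resp.json()['accessToken']}"}

# Load an Edge cluster creation spec prepared from the API reference.
# The spec mirrors the values entered in the UI: cluster name, MTU, ASN,
# tier-0/tier-1 names, passwords, and per-node management/TEP/uplink details.
with open("edge-cluster-spec.json") as f:
    spec = json.load(f)

# Submit the creation request (assumed /v1/edge-clusters endpoint) and print
# the response so the resulting task can be tracked in the Tasks panel.
create_resp = requests.post(
    f"{SDDC_MANAGER}/v1/edge-clusters", json=spec, headers=headers, verify=False
)
create_resp.raise_for_status()
print(create_resp.json())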

Prerequisites

  • Separate VLANs and subnets are available for NSX-T Host Overlay (Host TEP) VLAN and NSX-T Edge Overlay (Edge TEP) VLAN. A DHCP server must be configured on the NSX-T Host Overlay (Host TEP) VLAN. You cannot use DHCP for the NSX-T Edge Overlay (Edge TEP) VLAN.
  • NSX-T Host Overlay (Host TEP) VLAN and NSX-T Edge Overlay (Edge TEP) VLAN are routed to each other.
  • For dynamic routing, set up two Border Gateway Protocol (BGP) Peers on Top of Rack (ToR) switches with an interface IP, BGP autonomous system number (ASN), and BGP password.
  • Reserve a BGP ASN to use for the NSX-T Edge cluster’s Tier-0 gateway.
  • DNS entries for the NSX-T Edge nodes are populated in the customer-managed DNS server.
  • The vSphere cluster hosting an NSX-T Edge cluster must include hosts with identical management, uplink, host TEP, and Edge TEP networks (L2 uniform).
  • You cannot deploy an Edge cluster on a vSphere cluster that is stretched. You can stretch an L2 uniform vSphere cluster that hosts an Edge cluster.
  • The management network and management network gateway for the Edge nodes must be reachable.
  • In Cloud Foundation 4.0, Workload Management supports one Tier-0 gateway per transport zone. When creating an Edge cluster for Workload Management, ensure that its overlay transport zone does not have other Edge clusters (with Tier-0 gateways) connected to it. Starting from Cloud Foundation 4.0.1, this limitation has been removed.
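Several of these prerequisites can be spot-checked before you start the wizard. The following is a minimal sketch in Python that uses hypothetical planned values for the Edge node FQDNs and the Host TEP and Edge TEP subnets; it verifies that forward and reverse DNS records resolve and that the two TEP subnets do not overlap.

import ipaddress
import socket

# Hypothetical planned values; replace with your own.
EDGE_NODE_FQDNS = ["edge1.example.com", "edge2.example.com"]
HOST_TEP_SUBNET = ipaddress.ip_network("172.16.10.0/24")   # NSX-T Host Overlay (Host TEP)
EDGE_TEP_SUBNET = ipaddress.ip_network("172.16.20.0/24")   # NSX-T Edge Overlay (Edge TEP)

# The Host TEP and Edge TEP networks must be separate subnets (and routed to each other).
if HOST_TEP_SUBNET.overlaps(EDGE_TEP_SUBNET):
    print("ERROR: Host TEP and Edge TEP subnets overlap; use separate subnets.")

# DNS entries for the Edge nodes must exist in the customer-managed DNS server.
for fqdn in EDGE_NODE_FQDNS:
    try:
        ip = socket.gethostbyname(fqdn)
        reverse_name = socket.gethostbyaddr(ip)[0]
        print(f"{fqdn} -> {ip} -> {reverse_name}")
    except socket.error as err:
        print(f"ERROR: DNS lookup failed for {fqdn}: {err}")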

Procedure

  1. On the SDDC Manager Dashboard, click Inventory > Workload Domains.
  2. In the Virtual Infrastructure (VI) page, click a domain name in the Domain column.
  3. Select Actions > Add Edge Cluster.
  4. Verify the prerequisites, select Select All, and click Begin.
  5. Enter information for the NSX-T Edge cluster and click Next.
    Edge Cluster Name: Enter a name for the Edge cluster.
    MTU: Enter the MTU for the Edge cluster. The MTU can be 1600-9000.
    ASN: Enter the BGP ASN for the Edge cluster.
    Tier 0 Name: Enter a name for the tier-0 gateway.
    Tier 1 Name: Enter a name for the tier-1 gateway.
    Edge Cluster Profile Type: Select Default or, if your environment requires a specific Bidirectional Forwarding Detection (BFD) configuration, select Custom.
    Edge Cluster Profile Name: Enter an NSX Edge cluster profile name. (Custom Edge cluster profile only)
    BFD Allowed Hop: Enter the number of multi-hop BFD sessions allowed for the profile. (Custom Edge cluster profile only)
    BFD Declare Dead Multiple: Enter the number of times a BFD packet can be missed before the session is flagged as down. (Custom Edge cluster profile only)
    BFD Probe Interval (milliseconds): BFD is a detection protocol used to identify forwarding path failures. Enter the interval at which BFD probes the forwarding path. (Custom Edge cluster profile only)
    Standby Relocation Threshold (minutes): Enter a standby relocation threshold in minutes. (Custom Edge cluster profile only)
    Edge Root Password: Enter and confirm a password.
    Edge Admin Password: Enter and confirm a password.
    Edge Audit Password: Enter and confirm a password.
    Edge cluster passwords must meet the following requirements:
    • At least 12 characters
    • At least one lower-case letter
    • At least one upper-case letter
    • At least one digit
    • At least one special character (!, @, ^, =, *, +)
    • At least five different characters
    • No dictionary words
    • No palindromes
    • No monotonic character sequence longer than four characters
    A sketch that pre-checks a candidate password against these rules is shown after this procedure.
  6. Specify the use case details and click Next.
    Use Case: Select Workload Management to create an Edge cluster that complies with the requirements for running Workload Management. See Working with Workload Management. If you select this option, you cannot modify the Edge form factor or the Tier-0 service high availability setting. Select Custom if you want to modify those settings.
    Edge Form Factor: The default setting is Large.
    • Small: 4 GB memory, 2 vCPU, 200 GB disk space. The NSX Edge Small VM appliance size is suitable for lab and proof-of-concept deployments.
    • Medium: 8 GB memory, 4 vCPU, 200 GB disk space. The NSX Edge Medium appliance size is suitable for a typical production environment.
    • Large: 32 GB memory, 8 vCPU, 200 GB disk space. The NSX Edge Large appliance size is suitable for environments with load balancing.

    Workload Management requires the Large form factor.

    Tier-0 Service High Availability: In active-active mode, traffic is load balanced across all members. In active-standby mode, all traffic is processed by an elected active member; if the active member fails, another member is elected to be active. Workload Management requires Active-Active. Some services are only supported in Active-Standby: NAT, load balancing, stateful firewall, and VPN. If you select Active-Standby, use exactly two Edge nodes in the Edge cluster.
    Tier-0 Routing Type: Select Static or EBGP to determine the route distribution mechanism for the tier-0 gateway. If you select Static, you must manually configure the required static routes in NSX Manager. If you select EBGP, Cloud Foundation configures eBGP settings to allow dynamic route distribution.
  7. Enter the NSX-T Edge node details for the first node and click Add Edge Node.
    Edge Node Name (FQDN): Enter the FQDN for the Edge node. Each node must have a unique FQDN.
    Management IP (CIDR): Enter the CIDR for the management network. Each node must have a unique management IP.
    Management Gateway: Enter the IP address for the management network gateway.
    Edge TEP 1 IP (CIDR): Enter the CIDR for the first Edge TEP. Each node must have a unique Edge TEP 1 IP.
    Edge TEP 2 IP (CIDR): Enter the CIDR for the second Edge TEP. Each node must have a unique Edge TEP 2 IP. The Edge TEP 2 IP must be different from the Edge TEP 1 IP.
    Edge TEP Gateway: Enter the IP address for the Edge TEP gateway.
    Edge TEP VLAN: Enter the Edge TEP VLAN ID.
    Cluster: Select a vSphere cluster to host the Edge node.
    Cluster Type: Select L2 Uniform if all hosts in the vSphere cluster have identical management, uplink, host TEP, and Edge TEP networks.

    Select L2 non-uniform and L3 if any of the hosts in the vSphere cluster have different networks.

    Important: Cloud Foundation does not support Edge cluster creation on L2 non-uniform and L3 vSphere clusters.
    First Uplink VLAN: Enter the VLAN ID for the first uplink. This is a link from the NSX-T Edge node to the first uplink network.
    First Uplink Interface IP (CIDR): Enter the CIDR for the first uplink. Each node must have unique uplink interface IPs.
    Peer IP (CIDR): Enter the CIDR for the first uplink peer. (EBGP only)
    ASN Peer: Enter the ASN for the first uplink peer. (EBGP only)
    BGP Peer Password: Enter and confirm the BGP password. A BGP password is required. (EBGP only)
    Second Uplink VLAN: Enter the VLAN ID for the second uplink. This is a link from the NSX-T Edge node to the second uplink network.
    Second Uplink Interface IP (CIDR): Enter the CIDR for the second uplink. Each node must have unique uplink interface IPs. The second uplink interface IP must be different from the first uplink interface IP.
    Peer IP (CIDR): Enter the CIDR for the second uplink peer. (EBGP only)
    ASN Peer: Enter the ASN for the second uplink peer. (EBGP only)
    BGP Peer Password: Enter and confirm the BGP password. A BGP password is required. (EBGP only)
  8. Click Add More Edge Nodes and enter the Edge node details.
    A minimum of two NSX-T Edge nodes is required. You can have up to 10 NSX-T Edge nodes per Edge cluster. A sketch that pre-checks the Edge node entries for uniqueness is shown after this procedure.
  9. When you are done adding NSX-T Edge nodes, click Next.
  10. Review the summary and click Next.
    SDDC Manager validates the NSX-T Edge node information.
  11. If validation fails, use the Back button to edit your settings and try again.
    To edit or delete any of the Edge nodes, click the three vertical dots next to an Edge node in the table and select an option from the menu.
  12. If validation succeeds, click Finish to create the NSX Edge cluster.
    You can monitor progress in the Tasks panel.
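The password requirements listed in step 5 can be checked before you submit the wizard. The following is a minimal sketch in Python that tests a candidate Edge password against the length, character-class, character-variety, palindrome, and monotonic-sequence rules above; the dictionary-word rule is not checked here.

# Minimal pre-check of an Edge cluster password against the documented rules.
SPECIAL_CHARACTERS = set("!@^=*+")

def check_edge_password(password: str) -> list:
    problems = []
    if len(password) < 12:
        problems.append("fewer than 12 characters")
    if not any(c.islower() for c in password):
        problems.append("no lower-case letter")
    if not any(c.isupper() for c in password):
        problems.append("no upper-case letter")
    if not any(c.isdigit() for c in password):
        problems.append("no digit")
    if not any(c in SPECIAL_CHARACTERS for c in password):
        problems.append("no special character (!, @, ^, =, *, +)")
    if len(set(password)) < 5:
        problems.append("fewer than five different characters")
    if password == password[::-1]:
        problems.append("password is a palindrome")
    # Reject monotonic runs longer than four characters (for example "abcde" or "54321").
    longest = run = 1
    direction = 0
    for prev, cur in zip(password, password[1:]):
        step = ord(cur) - ord(prev)
        if step in (1, -1) and step == direction:
            run += 1
        elif step in (1, -1):
            direction = step
            run = 2
        else:
            direction = 0
            run = 1
        longest = max(longest, run)
    if longest > 4:
        problems.append("monotonic character sequence longer than four characters")
    return problems

# Example candidate password; an empty list means all checked rules pass.
print(check_edge_password("VMware123!VMware"))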
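The per-node values entered in steps 7 and 8 must be unique across the Edge cluster, and the node count must match the selected Tier-0 high availability mode. The following is a minimal sketch with hypothetical node values; it checks FQDN, management IP, Edge TEP, and uplink IP uniqueness across nodes, confirms that each node's two Edge TEP addresses differ, and verifies the two-to-ten node count (exactly two nodes for Active-Standby).

from collections import Counter

# Hypothetical planned Edge node entries; replace with your own values.
edge_nodes = [
    {
        "fqdn": "edge1.example.com",
        "management_ip": "10.0.0.11/24",
        "edge_tep1_ip": "172.16.20.11/24",
        "edge_tep2_ip": "172.16.20.12/24",
        "uplink1_ip": "172.16.30.11/24",
        "uplink2_ip": "172.16.31.11/24",
    },
    {
        "fqdn": "edge2.example.com",
        "management_ip": "10.0.0.12/24",
        "edge_tep1_ip": "172.16.20.13/24",
        "edge_tep2_ip": "172.16.20.14/24",
        "uplink1_ip": "172.16.30.12/24",
        "uplink2_ip": "172.16.31.12/24",
    },
]
tier0_ha_mode = "ACTIVE_ACTIVE"   # or "ACTIVE_STANDBY"

# An Edge cluster requires at least two and at most ten Edge nodes;
# Active-Standby requires exactly two.
if not 2 <= len(edge_nodes) <= 10:
    print("ERROR: an Edge cluster requires 2-10 Edge nodes.")
if tier0_ha_mode == "ACTIVE_STANDBY" and len(edge_nodes) != 2:
    print("ERROR: Active-Standby requires exactly two Edge nodes.")

# Every FQDN and IP entered in the wizard must be unique across nodes,
# and each node's two Edge TEP addresses must differ.
for field in ("fqdn", "management_ip", "edge_tep1_ip", "edge_tep2_ip",
              "uplink1_ip", "uplink2_ip"):
    duplicates = [v for v, n in Counter(node[field] for node in edge_nodes).items() if n > 1]
    if duplicates:
        print(f"ERROR: duplicate {field} values: {duplicates}")
for node in edge_nodes:
    if node["edge_tep1_ip"] == node["edge_tep2_ip"]:
        print(f"ERROR: {node['fqdn']} uses the same address for both Edge TEPs.")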

Example

The following examples show two scenarios with sample data. You can use these examples to guide you in creating NSX-T Edge clusters in your environment.
Figure 1. Two-node NSX-T Edge cluster in a single rack
Figure 2. Four-node NSX-T Edge cluster spanning multiple racks

What to do next

In NSX Manager, you can create segments connected to the NSX-T Edge cluster's tier-1 gateway. You can connect workload VMs to these segments to provide north-south and east-west connectivity.
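For example, you could create such a segment in the NSX Manager UI, or programmatically. The following is a minimal sketch, assuming the NSX-T Policy API segments endpoint and hypothetical names for the NSX Manager, tier-1 gateway, overlay transport zone, and segment; paths and field names can differ between NSX-T versions, so verify them against the NSX-T API documentation.

import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # hypothetical FQDN
AUTH = ("admin", "********")                      # example credentials

# Overlay segment attached to the Edge cluster's tier-1 gateway; names are assumptions.
segment_id = "web-segment"
body = {
    "display_name": "web-segment",
    "connectivity_path": "/infra/tier-1s/my-tier-1",   # tier-1 gateway created with the Edge cluster
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>",
    "subnets": [{"gateway_address": "192.168.100.1/24"}],
}

# Create or update the segment through the assumed Policy API endpoint.
resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/{segment_id}",
    json=body,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
print(f"Segment {segment_id} created or updated: HTTP {resp.status_code}")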