NSX Edge can be installed using ISO, OVA/OVF, or PXE boot. Regardless of the installation method, make sure that the host networking is prepared before you install NSX Edge.

High-Level View of NSX Edge Within a Transport Zone

NSX Edge nodes are service appliances with pools of capacity, dedicated to running network services that cannot be distributed to the hypervisors. Edge nodes can be viewed as empty containers when they are first deployed.

Figure 1. High-Level Overview of NSX Edge

An NSX Edge node is the appliance that provides the physical NICs to connect to the physical infrastructure and the capacity to run the centralized services. These services include:

  • Connectivity to physical infrastructure

  • NAT

  • DHCP server

  • Metadata proxy

  • Edge firewall

When one of these services is configured, or when an uplink is defined on the logical router to connect to the physical infrastructure, a service router (SR) is instantiated on the NSX Edge node. The NSX Edge node is also a transport node, just like the compute nodes in NSX-T Data Center, and like a compute node it can connect to more than one transport zone: one for overlay and one for North-South peering with external devices. There are two transport zones on the NSX Edge:

Overlay Transport Zone - Any traffic that originates from a VM participating in the NSX-T Data Center domain might require reachability to external devices or networks. This is typically described as external north-south traffic. The NSX Edge node is responsible for decapsulating the overlay traffic received from compute nodes, as well as encapsulating the traffic sent to compute nodes.

VLAN Transport Zone - In addition to encapsulating and decapsulating overlay traffic, NSX Edge nodes need a VLAN transport zone to provide uplink connectivity to the physical infrastructure.

By default, the links between the SR and the distributed router (DR) use the 169.254.0.0/28 subnet. These intra-router transit links are created automatically when you deploy a tier-0 or tier-1 logical router. You do not need to configure or modify the link configuration unless the 169.254.0.0/28 subnet is already in use in your deployment. On a tier-1 logical router, the SR is present only if you select an NSX Edge when creating the tier-1 logical router.

The default address space assigned for the tier-0-to-tier-1 connections is 100.64.0.0/10. Each tier-0-to-tier-1 peer connection is provided a /31 subnet within the 100.64.0.0/10 address space. This link is created automatically when you create a tier-1 router and connect it to a tier-0 router. You do not need to configure or modify the interfaces on this link unless the 100.64.0.0/10 subnet is already in use in your deployment.
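As an illustration of how the /31 allocation works, the first tier-1 router connected to a tier-0 router might receive the 100.64.0.0/31 link, with one address (for example, 100.64.0.0) on the tier-0 side and the other (100.64.0.1) on the tier-1 side; the next tier-1 connection would then receive the next available /31, such as 100.64.0.2/31. The specific addresses shown here are only an example, and the actual assignments in your deployment can differ.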

Each NSX-T Data Center deployment has a management plane cluster (MP) and a control plane cluster (CCP). The MP and the CCP push configurations to the local control plane (LCP) of each transport node. When a host or NSX Edge joins the management plane, the management plane agent (MPA) establishes connectivity with the host or NSX Edge, and the host or NSX Edge becomes an NSX-T Data Center fabric node. When the fabric node is then added as a transport node, LCP connectivity is established with the host or NSX Edge.

The High-Level Overview of NSX Edge figure shows an example of two physical NICs (pNIC1 and pNIC2) that are bonded to provide high availability. The datapath manages the physical NICs. They can serve as either VLAN uplinks to an external network or as tunnel endpoint links to internal NSX-T Data Center-managed VM networks.

The best practice is to allocate at least two physical links to each NSX Edge that is deployed as a VM. Optionally, you can overlap the port groups on the same pNIC using different VLAN IDs. The first network link found is used for management. For example, on an NSX Edge VM, the first link found might be vnic1.

On a bare metal installation, the first link found might be eth0 or em0. The remaining links are used for the uplinks and tunnels. For example, one might be for a tunnel endpoint used by NSX-T Data Center-managed VMs. The other might be used for an NSX Edge-to-external TOR uplink.

To view the physical link information of the NSX Edge, log in to the CLI as an administrator and run the get interfaces and get physical-ports commands. In the API, you can use the GET fabric/nodes/<edge-node-id>/network/interfaces API call.
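For example, the checks might look like the following sketch, where nsx-edge> stands for your NSX Edge CLI prompt and the NSX Manager address and edge node ID are placeholders; the API path is assumed to be rooted at /api/v1 on the NSX Manager, so verify it against the API guide for your version.

nsx-edge> get interfaces
nsx-edge> get physical-ports

GET https://<nsx-manager>/api/v1/fabric/nodes/<edge-node-id>/network/interfaces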

Whether you install NSX Edge as a VM appliance or on bare metal, you have multiple options for the network configuration, depending on your deployment.

Transport Zones and N-VDS

Transport zones control the reach of Layer 2 networks in NSX-T Data Center. The N-VDS is a software switch that is created on a transport node and is the primary component of the transport node's data plane. The N-VDS forwards traffic between components running on the transport node (for example, between virtual machines) or between internal components and the physical network. In the latter case, the N-VDS must own one or more physical interfaces (pNICs) on the transport node. As with other virtual switches, an N-VDS cannot share a physical interface with another N-VDS, but it can coexist with another N-VDS that uses a separate set of pNICs.

There are two types of transport zones:

  • Overlay for internal NSX-T Data Center tunneling between transport nodes.

  • VLAN for uplinks external to NSX-T Data Center.

One design option is for the NSX Edge to belong to a single VLAN transport zone that carries all of its uplinks. You might do this if you want each NSX Edge to have only one N-VDS. Another design option is for the NSX Edge to belong to multiple VLAN transport zones, one for each uplink.

The most common design choice is three transport zones: One overlay and two VLAN transport zones for redundant uplinks.
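As an illustration of that three-transport-zone design, the following REST sketch creates one overlay and two VLAN transport zones through the NSX Manager API. The display names, host switch (N-VDS) names, and the /api/v1/transport-zones endpoint are assumptions for this example; verify the request format against the API guide for your NSX-T Data Center version.

POST https://<nsx-manager>/api/v1/transport-zones
{ "display_name": "tz-overlay", "host_switch_name": "nvds-overlay", "transport_type": "OVERLAY" }

POST https://<nsx-manager>/api/v1/transport-zones
{ "display_name": "tz-vlan-uplink-1", "host_switch_name": "nvds-uplink-1", "transport_type": "VLAN" }

POST https://<nsx-manager>/api/v1/transport-zones
{ "display_name": "tz-vlan-uplink-2", "host_switch_name": "nvds-uplink-2", "transport_type": "VLAN" }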

For more information about transport zones, see About Transport Zones.

Virtual-Appliance/VM NSX Edge Networking

An NSX Edge VM has four internal interfaces: eth0, fp-eth0, fp-eth1, and fp-eth2. Eth0 is reserved for management, while the rest of the interfaces are assigned to DPDK fastpath. These interfaces are allocated for uplinks to TOR switches and for NSX-T Data Center overlay tunneling. The interface assignment is flexible for either uplink or overlay. As an example, fp-eth0 could be assigned for overlay traffic with fp-eth1, fp-eth2, or both for uplink traffic.

On a vSphere Distributed Switch or vSphere Standard Switch, you must allocate at least two vmnics to the NSX Edge for redundancy.

In the following sample physical topology, eth0 is used for the management network, fp-eth0 is used for the NSX-T Data Center overlay traffic, fp-eth1 is used for the VLAN uplink, and fp-eth2 is not used. Because fp-eth2 is not used in this topology, you must disconnect it.

Figure 2. One Suggested Link Setup for NSX Edge VM Networking

The NSX Edge shown in this example belongs to two transport zones (one overlay and one VLAN) and therefore has two N-VDS, one for tunnel and one for uplink traffic.

This screenshot shows the virtual machine port groups nsx-tunnel and vlan-uplink.

During deployment, you must specify the network names that match the names configured on your VM port groups. For example, if you use ovftool to deploy NSX Edge and want to match the VM port groups in the example, your ovftool network settings can be as follows:

--net:"Network 0-Mgmt" --net:"Network 1-nsx-tunnel" --net:"Network 2=vlan-uplink"

The example shown here uses the VM port group names Mgmt, nsx-tunnel, and vlan-uplink. You can use any names for your VM port groups.
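For context, a fuller ovftool invocation might look like the following sketch. The appliance name, datastore, OVA file name, and vi:// target are hypothetical placeholders, and the NSX Edge OVA typically also requires additional options (such as the deployment size and password properties) that are omitted here; only the --net: mappings come from the example above.

ovftool --name=nsx-edge-1 --datastore=<datastore-name> --acceptAllEulas --powerOn \
  --net:"Network 0=Mgmt" --net:"Network 1=nsx-tunnel" --net:"Network 2=vlan-uplink" \
  <nsx-edge-ova-file> vi://root@<esxi-host>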

For example, on a standard vSwitch, you configure trunk ports as follows: Host > Configuration > Networking > Add Networking > Virtual Machine > VLAN ID All (4095).
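If you prefer the command line, the following esxcli sketch creates an equivalent trunk port group on a standard vSwitch. The vSwitch name (vSwitch1) and port group name (vlan-uplink) are assumptions for this example:

esxcli network vswitch standard portgroup add --portgroup-name=vlan-uplink --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=vlan-uplink --vlan-id=4095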

An NSX Edge VM can be installed on a vSphere Distributed Switch or a vSphere Standard Switch.

An NSX Edge VM can be installed on an NSX-T Data Center-prepared host and configured as a transport node. There are two types of deployment:

  • The NSX Edge VM can be deployed using VSS/VDS port groups. In this case, the VSS or VDS and the N-VDS of the host transport node coexist on the host, each consuming its own separate pNIC(s). The host TEP (tunnel endpoint) and the NSX Edge TEP can be in the same or different subnets.

  • The NSX Edge VM can be deployed using VLAN-backed logical switches on the N-VDS of the host transport node. In this case, the host TEP and the NSX Edge TEP must be in different subnets.

Multiple NSX Edge VMs can be installed on a single host, leveraging the same management, VLAN, and overlay port groups.

For an NSX Edge VM deployed on an ESXi host that uses a vSphere Standard Switch or vSphere Distributed Switch rather than N-VDS, you must do the following (see the sketch after this list):

  • Enable forged transmits for the DHCP server running on this NSX Edge.

  • Enable promiscuous mode for the NSX Edge VM to receive unknown unicast packets because MAC learning is disabled by default. This is not necessary for vDS 6.6 or later, which has MAC learning enabled by default.
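As a minimal sketch of those two settings on a standard vSwitch port group, assuming a hypothetical port group named edge-vlan-uplink (for a distributed switch, use the vSphere Client or your preferred automation instead, and verify the option names against your ESXi version):

esxcli network vswitch standard portgroup policy security set --portgroup-name=edge-vlan-uplink --allow-forged-transmits=true --allow-promiscuous=true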

Bare-Metal NSX Edge Networking

NSX-T Data Center bare metal NSX Edge runs on a physical server and is installed using an ISO file or PXE boot. The bare metal NSX Edge is recommended for production environments where services like NAT, firewall, and load balancer are needed in addition to Layer 3 unicast forwarding. A bare metal NSX Edge differs from the VM form factor NSX Edge in terms of performance. It provides sub-second convergence, faster failover, and higher throughput.

When a bare metal NSX Edge node is installed, a dedicated interface is retained for management. If redundancy is desired, two NICs can be used for management plane high availability. These management interfaces can also be 1G.

A bare metal NSX Edge node supports a maximum of 8 physical NICs for overlay traffic and uplink traffic to top-of-rack (TOR) switches. For each of these 8 physical NICs on the server, an internal interface is created following the naming scheme "fp-ethX". These internal interfaces are assigned to the DPDK fastpath. There is complete flexibility in assigning fp-eth interfaces for overlay or uplink connectivity.

In the following sample physical topology, fp-eth0 and fp-eth1 are bonded and used for the NSX-T Data Center overlay tunnel. fp-eth2 and fp-eth3 are used as redundant VLAN uplinks to TORs.

Figure 3. One Suggested Link Setup for Bare-Metal NSX Edge Networking