An uplink is a link from an NSX Edge node to a top-of-rack switch or an NSX-T Data Center logical switch. A link runs from a physical network interface on the NSX Edge node to a switch.

An uplink profile defines policies for the uplinks. The settings defined by uplink profiles can include teaming policies, active and standby links, the transport VLAN ID, and the MTU setting.

Configuring uplinks for VM appliance-based NSX Edge nodes and Bare Metal NSX Edge transport nodes:
  • If the Failover teaming policy is configured for an uplink profile, then you can only configure a single active uplink in the teaming policy. Standby uplinks are not supported and must not be configured in the failover teaming policy. If the teaming policy uses more than one uplink (active/standby list), you cannot use the same uplinks in the same or a different uplink profile for a given NSX Edge transport node.
  • If the Load Balanced Source teaming policy is configured for an uplink profile, then you can either configure uplinks associated to different physical NICs or configure an uplink mapped to a LAG that has two physical NICs on the same N-VDS. The IP address assigned to an uplink endpoint is configurable using IP Assignment for the N-VDS. The number of LAGs that you can actually use depends on the capabilities of the underlying physical environment and the topology of the virtual network. For example, if the physical switch supports up to four ports in an LACP port channel, you can connect up to four physical NICs per host to a LAG.

You must use the Load Balanced Source teaming policy for traffic load balancing.
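The choices above also appear as fields when an uplink profile is created through the NSX-T REST API (POST /api/v1/host-switch-profiles). The sketch below builds an illustrative request body for a Load Balanced Source policy with two active uplinks mapped to different physical NICs; the profile name, uplink names, and transport VLAN value are placeholders, not values from this procedure.

```python
import json

# Illustrative request body for POST /api/v1/host-switch-profiles.
# Field names follow the NSX-T Data Center API; the display name,
# uplink names ("uplink-1", "uplink-2"), and VLAN ID are placeholders.
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",
    "teaming": {
        # Load Balanced Source: each active uplink is pinned to a
        # different physical NIC (or to a LAG) on the same N-VDS.
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "transport_vlan": 120,   # placeholder transport VLAN ID
    "mtu": 1600,             # uplink profile default MTU
}
print(json.dumps(uplink_profile, indent=2))
```

The same body, with a `standby_list` and the `FAILOVER_ORDER` policy instead, would express the Failover teaming configuration described in the first bullet.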

Prerequisites

  • See NSX Edge network requirements in NSX Edge Installation Requirements.
  • Each uplink in the uplink profile must correspond to an up and available physical link on your hypervisor host or on the NSX Edge node.

    For example, your hypervisor host has two physical links that are up: vmnic0 and vmnic1. Suppose vmnic0 is used for the management and storage networks, while vmnic1 is unused. This might mean that vmnic1 can be used as an NSX-T Data Center uplink, but vmnic0 cannot. To use link teaming, you must have two unused physical links available, such as vmnic1 and vmnic2.

    For an NSX Edge, tunnel endpoint and VLAN uplinks can use the same physical link. For example, vmnic0/eth0/em0 might be used for your management network and vmnic1/eth1/em1 might be used for your fp-ethX links.

Procedure

  1. From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
  2. Select System > Fabric > Profiles > Uplink Profiles > Add Profile.
  3. Complete the uplink profile details.
    Option: Name and Description
    Enter an uplink profile name. Add an optional uplink profile description.

    Option: LAGs
    (Optional) In the LAGs section, click Add to configure link aggregation groups (LAGs) that use Link Aggregation Control Protocol (LACP) for the transport network.
    Note: You can only configure one LAG on a KVM host. During configuration, you map NSX-T Data Center to a Linux bond, which can have as many physical NICs as the underlying Linux OS supports. In this case, NSX-T Data Center does not manage the LAG but sees the LAG interface as a single interface for the OVS.

    The active and standby uplink names you create can be any text to represent physical links. These uplink names are referenced later when you create transport nodes. The transport node UI/API allows you to specify which physical link corresponds to each named uplink.

    Possible LAG hashing mechanism options:

    • Source MAC address
    • Destination MAC address
    • Source and destination MAC address
    • Source and destination IP address and VLAN
    • Source and destination MAC address, IP address, and TCP/UDP port

    Supported LAG hashing mechanisms by host type:

    • NSX Edge nodes: Source and destination MAC address, IP address, and TCP/UDP port.
    • KVM hosts: Source MAC address.
    • ESXi hosts with N-VDS in Enhanced Networking Stack (ENS) mode: Source MAC address, Destination MAC address, and Source and destination MAC address.

    • ESXi hosts with N-VDS in Standard mode: Source MAC address, Destination MAC address, Source and destination MAC address, and Source and destination IP address and VLAN.

    • ESXi hosts with vSphere Distributed Switch (v 7.0 and later that supports NSX-T Data Center): LACP is not configured in NSX-T Data Center. You need to configure it in vCenter Server.
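In the API representation of an uplink profile, a LAG appears as an entry in a `lags` array, and the hashing mechanisms listed above map to `load_balance_algorithm` values. The sketch below is illustrative; the LAG name and uplink count are placeholders.

```python
import json

# Illustrative "lags" entry for an uplink profile request body.
# load_balance_algorithm selects the LAG hashing mechanism, e.g.:
#   SRCMAC          - source MAC address
#   DSTMAC          - destination MAC address
#   SRCDSTMAC       - source and destination MAC address
#   SRCDSTIPVLAN    - source and destination IP address and VLAN
#   SRCDSTMACIPPORT - source/destination MAC, IP, and TCP/UDP port
lag = {
    "name": "lag-1",                      # placeholder LAG name
    "mode": "ACTIVE",                     # LACP active mode
    "load_balance_algorithm": "SRCDSTMACIPPORT",
    "number_of_uplinks": 2,               # two physical NICs in the LAG
}
print(json.dumps(lag, indent=2))
```

Per the support list above, `SRCDSTMACIPPORT` fits an NSX Edge node; a KVM host would be limited to `SRCMAC`.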

    Option: Teamings
    In the Teamings section, you can either enter a default teaming policy or choose to enter a named teaming policy. Click Add to add a named teaming policy. A teaming policy defines how the N-VDS uses its uplinks for redundancy and traffic load balancing. You can configure a teaming policy in the following modes:
    • Failover Order: An active uplink is specified along with an optional list of standby uplinks. If the active uplink fails, the next uplink in the standby list replaces the active uplink. No actual load balancing is performed with this option. If the teaming policy uses more than one uplink (active/standby list), you cannot use the same uplinks in the same or a different uplink profile for a given NSX Edge transport node. For example, if in Uplink-Profile-1 you use Uplink-3 as the active uplink and Uplink-4 as a standby uplink, you cannot use these two uplinks in the same or a different uplink profile on that NSX Edge transport node. However, if in Uplink-Profile-1 you use Uplink-3 as the active uplink but do not configure any standby uplink, you can use Uplink-3 in another teaming policy.
    • Load Balance Source: Select a list of active uplinks. When you configure a transport node, you can pin each interface of the transport node to one active uplink. This configuration allows use of several active uplinks at the same time.
    • Load Balance Source MAC Address: Select an uplink based on a hash of the source Ethernet MAC address.
    Note:
    • On hypervisor hosts:
      • KVM hosts: Only Failover Order teaming policy is supported, whereas Load Balance Source and Load Balance Source MAC teaming policies are not supported.
      • ESXi hosts: Load Balance Source MAC, Load Balance Source, and Failover Order teaming policies are supported.
    • On NSX Edge: For default teaming policy, Load Balance Source and Failover Order teaming policies are supported. For named teaming policy, only Failover Order policy is supported.
      Important: To manage VLAN traffic, if you configure a default teaming policy in Load Balance Source mode, then on failure of the first uplink, traffic will not fail over to the second uplink interface.
    (ESXi hosts and NSX Edge) You can define the following policies for a transport zone:
    • A Named teaming policy for every VLAN-based logical switch or segment.
    • A Default teaming policy for the entire N-VDS.

    Named teaming policy: A named teaming policy means that for every VLAN-based logical switch or segment, you can define a specific teaming policy mode and uplink names. This policy type gives you the flexibility to select specific uplinks depending on the traffic steering policy, for example, based on bandwidth requirements.

    • If you define a named teaming policy, the N-VDS uses that named teaming policy if it is attached to the VLAN-based transport zone and is selected for a specific VLAN-based logical switch or segment on the host.
    • If you do not define any named teaming policies, N-VDS uses the default teaming policy.
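In API terms, the default teaming policy and any named teaming policies live side by side in the uplink profile body. The sketch below is illustrative; the named policy uses Failover Order, which per the note above is the only named-policy mode supported on NSX Edge, and all names are placeholders.

```python
import json

# Illustrative teaming configuration for an uplink profile body.
# "teaming" is the default policy for the entire N-VDS; each entry in
# "named_teamings" can be selected per VLAN-based segment. All names
# ("uplink-1", "uplink-2", "vlan-traffic") are placeholders.
teaming_config = {
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "named_teamings": [
        {
            # Referenced later when the VLAN transport zone and segment
            # select this policy for traffic steering.
            "name": "vlan-traffic",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
        }
    ],
}
print(json.dumps(teaming_config, indent=2))
```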
  4. Enter a Transport VLAN value. The transport VLAN set in the uplink profile tags overlay traffic only, and the VLAN ID is used by the TEP (tunnel endpoint).
  5. Enter the MTU value.

    The uplink profile MTU default value is 1600.

    The global physical uplink MTU configures the MTU value for all the N-VDS instances in the NSX-T Data Center domain. If the global physical uplink MTU value is not specified, the MTU value is inferred from the uplink profile MTU if configured or the default 1600 is used. The uplink profile MTU value can override the global physical uplink MTU value on a specific host.

    The global logical interface MTU configures the MTU value for all the logical router interfaces. If the global logical interface MTU value is not specified, the MTU value is inferred from the tier-0 logical router. The logical router uplink MTU value can override the global logical interface MTU value on a specific port.
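The fallback order for the physical uplink MTU described above can be summarized in a small helper. This is a hypothetical function written only to encode the precedence rules; it is not part of any NSX-T API.

```python
def effective_uplink_mtu(global_uplink_mtu=None, profile_mtu=None):
    """Hypothetical helper encoding the precedence described above:
    an uplink-profile MTU overrides the global physical uplink MTU
    on a specific host; if neither is set, the default 1600 applies."""
    if profile_mtu is not None:
        return profile_mtu          # uplink profile MTU wins on the host
    if global_uplink_mtu is not None:
        return global_uplink_mtu    # otherwise the global value applies
    return 1600                     # documented default

print(effective_uplink_mtu())                                        # -> 1600
print(effective_uplink_mtu(global_uplink_mtu=9000))                  # -> 9000
print(effective_uplink_mtu(global_uplink_mtu=9000, profile_mtu=1700))  # -> 1700
```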

Results

In addition to the UI, you can also view the uplink profiles with the API call GET /api/v1/host-switch-profiles.

What to do next

Create a transport zone. See Create Transport Zones.