If a cluster of ESXi hosts is registered to a vCenter Server, you can apply transport node profiles on the ESXi cluster to automatically prepare all hosts as NSX-T Data Center transport nodes.

Prerequisites

  • Verify that all hosts in the vCenter Server are powered on.
  • Verify that the system requirements are met. See System Requirements.
  • The reverse proxy service on all nodes of the NSX Manager cluster must be Up and running.

    To verify, run get service http. If the service is down, restart the service by running restart service http on each NSX Manager node. If the service is still down, contact VMware support. (A sketch of these commands follows this list.)

  • Verify that a transport zone is available. See Create Transport Zones.
  • Verify that a transport node profile is configured. See Add a Transport Node Profile.
  • (Host in lockdown mode) If your exception list for vSphere lockdown mode includes expired user accounts, NSX-T Data Center installation on vSphere fails. Ensure that you delete all expired user accounts before you begin installation. For more information on accounts with access privileges in lockdown mode, see Specifying Accounts with Access Privileges in Lockdown Mode in the vSphere Security Guide.
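
To check the reverse proxy prerequisite, the NSX CLI commands referenced in the list above can be run on each NSX Manager node. This is a minimal sketch; the exact status output varies between NSX-T Data Center versions.

    # On each NSX Manager node (NSX CLI), verify that the reverse proxy (http) service is running:
    get service http
    # If the service is reported as down, restart it and check again:
    restart service http
    get service http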

Procedure

  1. From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
  2. (NSX-T Data Center 3.2.1 and previous versions) Select System → Fabric → Nodes → Host Transport Nodes.
  3. (NSX-T Data Center 3.2.1 and previous versions) From the Managed by dropdown field, select the vCenter Server.
  4. (NSX-T Data Center 3.2.2) Select System → Fabric → Hosts → Clusters.
  5. (NSX-T Data Center 3.2.2) To set Sub-clusters in the cluster:
    1. In a cluster, go to the Sub-cluster field, click Set.
    2. In the Sub-cluster window, click Add Sub-cluster.
    3. In the Sub-cluster field, enter a name for the Sub-cluster.
    4. In the Nodes field, click Set.
    5. In the Set Host Nodes window, select the nodes that must be part of the Sub-cluster and click Apply.
  6. Select a cluster and click Configure NSX.
  7. In the NSX Installation pop-up window, from the Transport Node Profile drop-down menu, select the transport node profile to apply to the cluster. If a transport node profile is not created, click Create New Transport Node Profile to create a new one. See Add a Transport Node Profile.
  8. (NSX-T Data Center 3.2.2) In the Sub-cluster Nodes tab, expand each Sub-cluster and assign a Sub-TNP to it. The Sub-TNPs must have been defined in the TNP applied to the cluster. If no Sub-TNP is assigned to a Sub-cluster, the global TNP is applied to that Sub-cluster. After the TNP is applied, the Sub-TNP configuration overrides the TNP configuration for that Sub-cluster.
  9. Click Save to begin the process of transport node creation of all hosts in the cluster.
  10. If you only want to prepare individual hosts as transport nodes, select a host and click Configure NSX.
    The Configure NSX dialog box opens.
    1. Verify the host name in the Host Details panel. Optionally, you can add a description.
    2. Click Next to move to the Configure NSX panel.
  11. In the Prepare Host page, click Add Host Switch.
  12. In the Configure NSX panel, expand New Node Switch.
  13. If you select N-VDS as the host switch type, enter details for the fields described in this step. If you select VDS as the host switch type, skip to the next step.
    Option Description
    Name Enter a name for the N-VDS switch.
    Type Indicates the type of switch that is configured.
    Mode
    Choose between these options:
    • Standard: The standard mode, available on all hypervisors supported by NSX-T Data Center.
    • ENS Interrupt: A variant of the Enhanced Datapath mode.
    • Enhanced Datapath: The mode that provides accelerated networking performance. This mode requires nodes to use VMXNET3 vNIC-enabled network cards. It is not supported on KVM, NSX Edge nodes, or public gateways. The only supported hypervisor is ESXi; ESXi v6.7 U2 or later is recommended.
    Transport Zone

    Shows the transport zones that are realized by the associated host switches. You cannot add a transport zone if it is not realized by any N-VDS in the transport node profile.

    NIOC Profile Select the NIOC profile from the drop-down menu.

    The bandwidth allocations specified in the profile for the traffic resources are enforced.

    Uplink Profile Select an existing uplink profile from the drop-down menu or create a custom uplink profile.

    You can also use the default uplink profile.

    Multiple N-VDS host switches on a transport node can belong to the same VLAN segment or VTEP pool, or they can belong to different VLAN segments or VTEP IP pools.

    Configuring different transport VLAN segments for different N-VDS host switches allows additional traffic isolation in the underlay as well.

    LLDP Profile By default, NSX-T only receives LLDP packets from an LLDP neighbor.

    However, NSX-T can be set to send LLDP packets to and receive LLDP packets from an LLDP neighbor.

    IP Assignment Select between Use DHCP, Use IP Pool, or Use Static IP List to assign an IP address to tunnel endpoints (TEPs) of the transport node.

    If you selected Use IP Pool for an IP assignment, specify the IP pool name and the range of IP addresses that can be used for tunnel endpoints.

    Teaming Policy Uplink Mapping

    Map uplinks defined in the selected NSX-T uplink profile with physical NICs. The number of uplinks that are presented for mapping depends on the uplink profile configuration.

    For example, in the uplink-1 (active) row, go to the Physical NICs column, click the edit icon, and type in the name of the physical NIC to complete mapping it with uplink-1 (active). Likewise, complete the mapping for the other uplinks. (A sketch for listing physical NIC and VMkernel interface names on the host follows this step.)

    PNIC only Migration

    Before setting this field, consider the following points:

    • Know whether the physical NIC defined is a used NIC or a free NIC.
    • Determine whether VMkernel interfaces of a host need to be migrated along with physical NICs.

    Set the field:

    • Enable PNIC only Migration if you only want to migrate physical NICs from a VSS or DVS switch to an N-VDS switch.

    • Disable PNIC only Migration if you want to migrate a used physical NIC and its associated VMkernel interface mapping. A free or available physical NIC is attached to the N-VDS switch when a VMkernel interface migration mapping is specified.

    On a host with multiple host switches:
    • If all host switches are to migrate only PNICs, then you can migrate the PNICs in a single operation.
    • If some host switches are to migrate VMkernel interfaces and the remaining host switches are to migrate only PNICs:
      1. In the first operation, migrate only PNICs.
      2. In the second operation, migrate VMkernel interfaces. Ensure that PNIC only Migration is disabled.
    Both PNIC only migration and VMkernel interface migration are not supported at the same time across multiple hosts.
    Note: To migrate a management network NIC, configure its associated VMkernel network mapping and keep PNIC only Migration disabled. If you only migrate the management NIC, the host loses connectivity.

    For more information, see VMkernel Migration to an N-VDS Switch.

    Network Mappings for Install

    Click Set and click Add.

    Add VMkernel Adapter or Physical Adapter to VLAN Segments mappings and click Apply.
    • From the VMkernel Adapter drop-down menu, select VMkernel Adapter.
    • From the VLAN Segments drop-down menu, select VLAN Segment.

    The NSX Manager migrates the VMkernel Adapters to the mapped VLAN Segment on N-VDS.

    Caution: Ensure that the management NIC and management VMkernel interface are migrated to a segment that is connected to the same VLAN that the management NIC was connected to before migration. If vmnic <n> and VMkernel <n> are migrated to a different VLAN, then connectivity to the host is lost.
    Caution: For pinned physical NICs, ensure that the host switch mapping of physical NIC to a VMkernel interface matches the configuration specified in the transport node profile. As part of the validation procedure, NSX-T Data Center verifies the mapping, and if the validation passes, migration of VMkernel interfaces to an N-VDS switch succeeds. It is also mandatory to configure the network mapping for uninstallation because NSX-T Data Center does not store the mapping configuration of the host switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping is not configured, connectivity to services, such as vSAN, can be lost after migrating back to the VSS or VDS switch.

    For more information, see VMkernel Migration to an N-VDS Switch.

    Network Mappings for Uninstall

    Click Set.

    To revert the migration of VMkernels attached to an N-VDS switch during uninstallation, map VMkernels to port groups on VSS or DVS so that NSX Manager knows which port group each VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure that the port group is of the type Ephemeral.

    To revert the migration of VMkernels attached to a NSX-T port group created on a vSphere Distributed Switch (VDS) during uninstallation, map VMkernels to port groups on VSS or DVS, so that NSX Manager knows which port group the VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure that the port group is of the type Ephemeral.

    Add VMKNIC to PortGroup mappings and click Apply.
    • In the VMkernel Adapter field, enter VMkernel Adapter name.
    • In the PortGroup field, enter portgroup name of the type Ephemeral.
    Add Physical NIC to uplink profile mappings and click Apply.
    • In the Physical NICs field, enter physical NIC name.
    • In the Uplink field, enter uplink name.
    Caution: For pinned physical NICs, ensure that the transport node profile mapping of physical NIC to VMkernel interface matches the configuration specified in the host switch. It is mandatory to configure the network mapping for uninstallation because NSX-T Data Center does not store the mapping configuration of the host switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping is not configured, connectivity to services, such as vSAN, can be lost after migrating back to the VSS or VDS switch.

    For more information, see VMkernel Migration to an N-VDS Switch.
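
    The physical NIC names (for example, vmnic0) and VMkernel interface names (for example, vmk0) used in the mappings described in the preceding table can be listed directly on the ESXi host. The following is a minimal sketch using standard esxcli commands; output columns vary between ESXi versions.

    # On the ESXi host, list physical NICs, their drivers, and link status:
    esxcli network nic list
    # List VMkernel interfaces and the port groups or switches they are attached to:
    esxcli network ip interface list
    # Show the IPv4 configuration of the VMkernel interfaces:
    esxcli network ip interface ipv4 get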

  14. Select VDS as the host switch type and enter the switch details.
    Option Description
    vCenter Select the vCenter Server.
    Type Indicates the type of switch that is configured on the host.
    Mode
    Choose between these options:
    • Standard: The standard mode, available on all hypervisors supported by NSX-T Data Center.
    • ENS Interrupt: A variant of the Enhanced Datapath mode.
    • Enhanced Datapath: The mode that provides accelerated networking performance. This mode requires nodes to use VMXNET3 vNIC-enabled network cards. It is not supported on KVM, NSX Edge nodes, or public gateways. The only supported hypervisor is ESXi; ESXi v6.7 U2 or later is recommended.
    Name

    (Hosts managed by a vSphere cluster) Select the vCenter Server that manages the host switch.

    Select the VDS that is created in vCenter Server.

    Transport Zones

    Shows the transport zones that are realized by the associated host switches. You cannot add a transport zone if it is not realized by any host switch.

    Uplink Profile Select an existing uplink profile from the drop-down menu or create a custom uplink profile.
    Note: Ensure that the MTU value entered in the NSX-T Data Center uplink profile and on the VDS switch is set to at least 1600. If the MTU value in vCenter Server for the VDS switch is lower than the MTU value entered in the uplink profile, NSX-T Data Center displays an error asking you to enter an appropriate MTU value in vCenter Server. (A sketch for checking the VDS MTU from the host follows this step.)

    You can also use the default uplink profile.

    Note: Link Aggregation Groups defined in an uplink profile cannot be mapped to VDS uplinks.
    IP Assignment Select between Use DHCP, Use IP Pool, or Use Static IP List to assign an IP address to tunnel endpoints (TEPs) of the transport node.

    If you selected Use IP Pool for an IP assignment, specify the IP pool name and the range of IP addresses that can be used for tunnel endpoints.

    Teaming Policy Switch Mapping

    Before you map uplink profiles in NSX-T with uplinks in VDS, ensure that uplinks are configured on the VDS switch. To configure or view the VDS switch uplinks, go to the vSphere Distributed Switch in vCenter Server and click Actions → Settings → Edit Settings.

    Map uplinks defined in the selected NSX-T uplink profile with VDS uplinks. The number of NSX-T uplinks that are presented for mapping depends on the uplink profile configuration.

    For example, in the uplink-1 (active) row, go to the Physical NICs column, click the edit icon, and type in the name of the VDS uplink to complete mapping it with uplink-1 (active). Likewise, complete the mapping for the other uplinks.

    Note: For a VDS switch, Uplinks/LAGs, NIOC profile, and LLDP profile can be defined only on the vSphere ESXi host. These configurations are not available in NSX Manager. In addition, in NSX Manager, you cannot configure network mappings for install and uninstall if the host switch is a VDS switch. To manage VMkernel adapters on a VDS switch, go to vCenter Server to attach VMkernel adapters to distributed virtual port groups or NSX port groups.
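
    As a cross-check for the MTU note in the preceding table, the MTU configured on the VDS can be viewed from an ESXi host that is connected to it. This is a minimal sketch; field names in the output vary between ESXi versions.

    # On the ESXi host, list the vSphere Distributed Switches the host is connected to;
    # the output includes the configured MTU for each VDS:
    esxcli network vswitch dvs vmware list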
  15. Click Add and Finish to begin host transport node preparation.
  16. (Optional) View the ESXi connection status.
    # esxcli network ip connection list | grep 1234
    tcp   0   0  192.168.210.53:20514  192.168.110.34:1234   ESTABLISHED  1000144459  newreno  nsx-proxy
    
  17. From the Host Transport Node page, verify that the NSX Manager connectivity status of hosts in the cluster is Up and the NSX-T Data Center configuration state is Success. During the configuration process, each transport node displays the percentage of progress of the installation process. If installation fails at any stage, you can restart the process by clicking the Resolve link that is available against the failed stage of the process. (The configuration state can also be queried over the API; see the sketch after this step.)
    You can also see that the transport zone is applied to the hosts in the cluster.
    Note: If you reconfigure a host that is part of a cluster already prepared by a transport node profile, the configuration state of the node changes to Configuration Mismatch.
    Note: If hosts in a cluster are running ESXi v7.0 or later where the host switch is vSphere Distributed Switch, the Host Transport Node page displays TEP addresses of the host in addition to IP addresses. TEP address is the address assigned to the VMkernel NIC of the host, whereas IP address is the management IP address.
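    As an alternative to the UI, the configuration state of a transport node can also be queried over the NSX Manager API. The following is a hypothetical sketch; verify the endpoint paths and response fields against the NSX-T Data Center API guide for your version, and replace the placeholder address, credentials, and IDs.

    # List the transport nodes registered with this NSX Manager:
    curl -k -u admin https://<nsx-manager-ip-address>/api/v1/transport-nodes
    # Query the configuration state of a specific transport node; the response
    # reports a state value such as success after preparation completes:
    curl -k -u admin https://<nsx-manager-ip-address>/api/v1/transport-nodes/<transport-node-id>/state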
  18. (Optional) Remove the NSX-T Data Center VIBs from the host.
    1. Select one or more hosts and click Actions > Remove NSX.
    The uninstallation takes up to three minutes. Uninstallation of NSX-T Data Center removes the transport node configuration on the hosts, and the hosts are detached from the transport zones and the N-VDS switch. As with the installation process, you can follow the percentage of the uninstallation completed on each transport node. If uninstallation fails at any stage, you can restart the process by clicking the Resolve link that is available against the failed stage of the process. (A sketch for verifying that the VIBs were removed follows this step.)
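    To confirm that the NSX-T Data Center VIBs were removed, the installed VIBs can be listed on the ESXi host. This is a minimal sketch; after a successful uninstallation the filtered list is expected to be empty.

    # On the ESXi host, list installed VIBs and filter for NSX components:
    esxcli software vib list | grep -i nsx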
  19. (Optional) Remove a transport node from the transport zone.
    1. Select a single transport node and click Actions > Remove from Transport Zone.

What to do next

When the hosts are transport nodes, you can create transport zones, logical switches, logical routers, and other network components through the NSX Manager UI or API at any time; these entities are realized on the hosts. When NSX Edge nodes and hosts join the management plane, the NSX-T Data Center logical entities and configuration state are pushed to them automatically.

Create a logical switch and assign logical ports. See the Advanced Switching section in the NSX-T Data Center Administration Guide.
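
As an illustration of creating network components through the API, the following is a hypothetical sketch of creating a VLAN-backed segment through the NSX policy API. The endpoint path, payload fields, and all placeholder values (segment name, VLAN ID, transport zone path, Manager address, and credentials) are assumptions to adapt; verify them against the NSX-T Data Center API guide for your version.

    # Hypothetical example: create or update a VLAN-backed segment through the policy API.
    curl -k -u admin -X PATCH \
      -H 'Content-Type: application/json' \
      -d '{
            "display_name": "example-segment",
            "vlan_ids": ["100"],
            "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-transport-zone-id>"
          }' \
      https://<nsx-manager-ip-address>/policy/api/v1/infra/segments/example-segment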