If you have a vCenter Server, you can automate the installation of NSX-T Data Center and the creation of transport nodes on all hosts instead of configuring them manually.

If the transport node is already configured, then automated transport node creation is not applicable for that node.

Prerequisites

  • Verify that all hosts in the vCenter Server are powered on.
  • Verify that the system requirements are met. See System Requirements.
  • Verify that a transport zone is available. See Create Transport Zones.
  • Verify that a transport node profile is configured. See Add a Transport Node Profile.
  • NSX-T Data Center installation on vSphere fails if your exception list for vSphere lockdown mode includes expired user accounts. Delete all expired user accounts before you begin the installation. For more information on accounts with access privileges in lockdown mode, see Specifying Accounts with Access Privileges in Lockdown Mode in the vSphere Security Guide.
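The transport zone prerequisite can also be checked programmatically before you start. The sketch below is illustrative only: it assumes the transport zones have already been fetched from the NSX Manager REST API (for example, GET /api/v1/transport-zones, which returns a results list), and only checks that a zone with the expected display name is present. Treat the endpoint path and response shape as assumptions to verify against your NSX-T version.

```python
# Hedged sketch: verify that a named transport zone appears in an
# API response (assumed shape of GET /api/v1/transport-zones,
# returning {"results": [...]}).
def has_transport_zone(api_response, zone_name):
    """Return True if a transport zone with the given display_name exists."""
    return any(tz.get("display_name") == zone_name
               for tz in api_response.get("results", []))

# Example response shape (assumed, for illustration only):
sample = {"results": [{"display_name": "overlay-tz", "transport_type": "OVERLAY"}]}
print(has_transport_zone(sample, "overlay-tz"))  # True
```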

Procedure

  1. From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
  2. Select System > Fabric > Nodes > Host Transport Nodes.
  3. From the Managed By drop-down menu, select an existing vCenter Server.
    The page lists the available vSphere clusters and/or ESXi hosts from the selected vCenter Server. You may need to expand a cluster to view the ESXi hosts.
  4. Select a single host from the list and click Configure NSX.
    The Configure NSX dialog box opens.
  5. Verify the host name in the Host Details panel, and click Next.
    Optionally, you can add a description.
  6. In the Configure NSX panel, select one or more of the available transport zones and click the > button to include them in the configuration.
  7. Click the N-VDS tab and enter the switch details.
    Option Description
    N-VDS Name

    If the transport node is attached to a transport zone, then ensure the name entered for the N-VDS is the same as the N-VDS name specified in the transport zone. A transport node can be created without attaching it to a transport zone.

    Associated Transport Zones

    Shows the transport zones that are realized by the associated host switches. You cannot add a transport zone if it is not realized by any N-VDS in the transport node profile.

    NIOC Profile

    Select the NIOC profile from the drop-down menu.

    The bandwidth allocations specified in the profile for the traffic resources are enforced.

    Uplink Profile

    Select an existing uplink profile from the drop-down menu or create a custom uplink profile. You can also use the default uplink profile.

    LLDP Profile

    By default, NSX-T only receives LLDP packets from an LLDP neighbor. However, NSX-T can be set to both send LLDP packets to and receive LLDP packets from an LLDP neighbor.

    IP Assignment

    Select Use DHCP, Use IP Pool, or Use Static IP List to assign an IP address to the virtual tunnel endpoints (VTEPs) of the transport node.

    If you select Use Static IP List, you must specify a comma-separated list of IP addresses, a gateway, and a subnet mask. All the VTEPs of the transport node must be in the same subnet; otherwise, the Bidirectional Forwarding Detection (BFD) session is not established.

    IP Pool

    If you selected Use IP Pool for IP assignment, specify the IP pool name.
    Physical NICs

    Add physical NICs to the transport node. You can use the default uplink or assign an existing uplink from the drop-down menu.

    Click Add PNIC to configure additional physical NICs to the transport node.

    Note: Migration of the physical NICs that you add in this field depends on how you configure PNIC only Migration, Network Mappings for Install, and Network Mappings for Uninstall.
    • To migrate a physical NIC that is in use (for example, by a standard vSwitch or a vSphere Distributed Switch) but has no associated VMkernel mapping, enable PNIC only Migration. Otherwise, the transport node state remains Partial Success, and the fabric node LCP connectivity fails to establish.
    • To migrate a used physical NIC with an associated VMkernel network mapping, disable PNIC only Migration and configure the VMkernel network mapping.

    • To migrate a free physical NIC, enable PNIC only Migration.

    PNIC only Migration

    Before setting this field, consider the following points:

    • Know whether the physical NIC defined is a used NIC or a free NIC.
    • Determine whether VMkernel interfaces of a host need to be migrated along with physical NICs.

    Set the field:

    • Enable PNIC only Migration if you only want to migrate physical NICs from a VSS or DVS switch to an N-VDS switch.

    • Disable PNIC only Migration if you want to migrate a used physical NIC and its associated VMkernel interface mapping. A free or available physical NIC is attached to the N-VDS switch when a VMkernel interface migration mapping is specified.

    On a host with multiple host switches:
    • If all host switches are to migrate only PNICs, then you can migrate the PNICs in a single operation.
    • If some host switches are to migrate VMkernel interfaces and the remaining host switches are to migrate only PNICs:
      1. In the first operation, migrate only PNICs.
      2. In the second operation, migrate VMkernel interfaces. Ensure that PNIC only Migration is disabled.

    PNIC only migration and VMkernel interface migration are not supported at the same time across multiple hosts.

    Note: To migrate a management network NIC, configure its associated VMkernel network mapping and keep PNIC only Migration disabled. If you only migrate the management NIC, the host loses connectivity.

    For more information, see VMkernel Migration to an N-VDS Switch.

    Network Mappings for Install

    To migrate VMkernels to the N-VDS switch during installation, map the VMkernels to an existing logical switch. NSX Manager migrates the VMkernels to the mapped logical switch on the N-VDS.

    Caution: Ensure that the management NIC and management VMkernel interface are migrated to a logical switch that is connected to the same VLAN that the management NIC was connected to before migration. If vmnic <n> and VMkernel <n> are migrated to a different VLAN, then connectivity to the host is lost.
    Caution: For pinned physical NICs, ensure that the host switch mapping of physical NIC to VMkernel interface matches the configuration specified in the transport node profile. As part of the validation procedure, NSX-T Data Center verifies the mapping; if the validation passes, the migration of VMkernel interfaces to the N-VDS switch succeeds. It is also mandatory to configure the network mapping for uninstallation, because NSX-T Data Center does not store the mapping configuration of the host switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping is not configured, connectivity to services, such as vSAN, can be lost after migrating back to the VSS or VDS switch.

    For more information, see VMkernel Migration to an N-VDS Switch.

    Network Mappings for Uninstall

    To revert the migration of VMkernels during uninstallation, map VMkernels to port groups on VSS or DVS, so that NSX Manager knows which port group the VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure that the port group is of the type Ephemeral.

    Caution: For pinned physical NICs, ensure that the transport node profile mapping of physical NIC to VMkernel interface matches the configuration specified in the host switch. It is mandatory to configure the network mapping for uninstallation because NSX-T Data Center does not store the mapping configuration of the host switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping is not configured, connectivity to services, such as vSAN, can be lost after migrating back to the VSS or VDS switch.

    For more information, see VMkernel Migration to an N-VDS Switch.
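The Use Static IP List rule above (all VTEPs of the transport node in the same subnet) can be checked before submitting the configuration. A minimal sketch using Python's ipaddress module, assuming a comma-separated address list and a dotted-quad subnet mask as entered in the dialog box; the addresses shown are placeholders, not values from this guide:

```python
import ipaddress

def vteps_share_subnet(ip_list, gateway, netmask):
    """Return True if every VTEP address and the gateway fall in the
    same subnet implied by the mask (mirrors the Use Static IP List rule)."""
    ips = [ip.strip() for ip in ip_list.split(",")]
    # Derive the network from the first address and the supplied mask.
    network = ipaddress.ip_network(f"{ips[0]}/{netmask}", strict=False)
    return all(ipaddress.ip_address(ip) in network for ip in ips + [gateway])

print(vteps_share_subnet("192.168.250.10,192.168.250.11",
                         "192.168.250.1", "255.255.255.0"))  # True
print(vteps_share_subnet("192.168.250.10,192.168.251.11",
                         "192.168.250.1", "255.255.255.0"))  # False
```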

  8. If you have selected multiple transport zones, click + ADD N-VDS again to configure the switch for the other transport zones.
  9. Click Finish to complete the configuration.
  10. (Optional) View the ESXi connection status.
    # esxcli network ip connection list | grep 1235
    tcp   0   0  192.168.210.53:20514  192.168.110.34:1235   ESTABLISHED  1000144459  newreno  netcpa
    
  11. From the Host Transport Node page, verify that the NSX Manager connectivity status of hosts in the cluster is Up and NSX-T Data Center configuration state is Success.
    You can also see that the transport zone is applied to the hosts in the cluster.
  12. (Optional) Remove an NSX-T Data Center installation and transport node from a host in the transport zone.
    1. Select one or more hosts and click Actions > Remove NSX.
    The uninstallation takes up to three minutes. Uninstalling NSX-T Data Center removes the transport node configuration from the hosts, and the hosts are detached from the transport zones and the N-VDS switch. Any new host added to the vCenter Server cluster is not automatically configured until a transport node profile is reapplied to the cluster.
  13. (Optional) Remove a transport node from the transport zone.
    1. Select a single transport node and click Actions > Remove from Transport Zone.
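The per-host status checked in step 11 can also be read from the API. The sketch below assumes the response shape of GET /api/v1/transport-nodes/&lt;node-id&gt;/state, with a state field carrying values such as success, in_progress, partial_success, or failed; treat the field names and values as assumptions to verify against your NSX-T version.

```python
# Hedged sketch: interpret a transport node state document
# (assumed shape of GET /api/v1/transport-nodes/<node-id>/state).
def realization_ok(state_doc):
    """True only when configuration fully succeeded; 'partial_success'
    (for example, a PNIC migration issue) is treated as a failure to act on."""
    return state_doc.get("state") == "success"

print(realization_ok({"state": "success"}))          # True
print(realization_ok({"state": "partial_success"}))  # False
```

A state of partial_success typically corresponds to the PNIC migration caveats described in step 7, so it should be investigated rather than ignored.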

What to do next

Create a logical switch and assign logical ports. See the Advanced Switching section in the NSX-T Data Center Administration Guide.