You must first add your ESXi host, KVM host, or bare metal server to the NSX-T Data Center fabric and then configure the transport node.

For a host or bare metal server to be part of the NSX-T Data Center overlay, it must first be added to the NSX-T Data Center fabric.

A transport node is a node that participates in an NSX-T Data Center overlay or NSX-T Data Center VLAN networking.

For a KVM host or bare metal server, you can preconfigure the N-VDS, or you can have NSX Manager perform the configuration. For an ESXi host, NSX Manager always configures the N-VDS.

Note: If you plan to create transport nodes from a template VM, make sure that there are no certificates on the host in /etc/vmware/nsx/. nsx-proxy does not create a certificate if a certificate exists.
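
For example, before converting a prepared host VM into a template, you can check the directory from a shell on that host (a minimal check based on the path in the note above):

# ls /etc/vmware/nsx/

If any certificate files are listed, clones of the template reuse them and nsx-proxy does not generate new certificates.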

A bare metal server supports both overlay and VLAN transport zones. You can use the management interface to manage the bare metal server. The application interface allows you to access the applications on the bare metal server.

A single physical NIC provides an IP address for both the management and application IP interfaces.

Dual physical NICs provide a dedicated physical NIC and a unique IP address for the management interface, and a separate physical NIC and unique IP address for the application interface.

Multiple physical NICs in a bonded configuration provide dual physical NICs and a unique IP address for both the management interface and the application interface.
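
As a rough illustration of the bonded option, the following generic Linux sketch creates a bond from two physical NICs and assigns the management IP address to it. The interface names and addresses are placeholders, the bonding mode must match your physical switch configuration, and the application interface is configured separately as described above.

# ip link add bond0 type bond mode active-backup
# ip link set ens1f0 down && ip link set ens1f0 master bond0
# ip link set ens1f1 down && ip link set ens1f1 master bond0
# ip link set bond0 up
# ip addr add 192.168.100.10/24 dev bond0

This sketch is not NSX-T specific; use your distribution's network configuration tooling to make an equivalent configuration persistent.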

You can add a maximum of four N-VDS switches for each configuration:
  • standard N-VDS created for VLAN transport zone
  • enhanced N-VDS created for VLAN transport zone
  • standard N-VDS created for overlay transport zone
  • enhanced N-VDS created for overlay transport zone

In a single host cluster topology running multiple standard overlay N-VDS switches and an edge VM on the same host, NSX-T Data Center provides traffic isolation such that traffic going through the first N-VDS is isolated from traffic going through the second N-VDS. The physical NICs on each N-VDS must be mapped to the edge VM on the host to allow north-south traffic connectivity with the external world. Packets moving from a VM in the first transport zone to a VM in the second transport zone must be routed through an external router or an external VM.

Prerequisites

  • The host must be joined with the management plane, and connectivity must be Up.
  • The reverse proxy service on all nodes of the NSX Manager cluster must be Up and running.

    To verify, run get service http. If the service is down, restart the service by running restart service http on each NSX Manager node. If the service is still down, contact VMware support. See the example after this list.

  • A transport zone must be configured.
  • An uplink profile must be configured, or you can use the default uplink profile.
  • An IP pool must be configured, or DHCP must be available in the network deployment.
  • At least one unused physical NIC must be available on the host node.
  • Hostname
  • Management IP address
  • User name
  • Password
  • (Optional) (KVM) SHA-256 SSL thumbprint
  • (Optional) (ESXi) SHA-256 SSL thumbprint
  • Verify that the required third-party packages are installed.
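
For the reverse proxy prerequisite, a minimal check on each NSX Manager node uses the NSX CLI commands named above (the nsxmgr> prompt is illustrative):

nsxmgr> get service http
nsxmgr> restart service http

Run restart service http only if get service http reports that the service is not running.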

Procedure

  1. (Optional) Retrieve the hypervisor thumbprint so that you can provide it when adding the host to the fabric.
    1. Gather the hypervisor thumbprint information.
      Use a Linux shell.
      # echo -n | openssl s_client -connect <esxi-ip-address>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha256
      
      Use the ESXi CLI in the host.
      [root@host:~] openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
      SHA256 Fingerprint=49:73:F9:A6:0B:EA:51:2A:15:57:90:DE:C0:89:CA:7F:46:8E:30:15:CA:4D:5C:95:28:0A:9E:A2:4E:3C:C4:F4
    2. To retrieve the SHA-256 thumbprint from a KVM hypervisor, run the following command on the KVM host.
      # awk '{print $2}' /etc/ssh/ssh_host_rsa_key.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64
  2. Select System > Fabric > Nodes > Host Transport Nodes.
  3. From the Managed by field, select Standalone Hosts and click + Add.
  4. Enter the standalone host or bare metal server details to add to the fabric.
    Option Description
    Name and Description Enter the name to identify the standalone host or bare metal server.

    You can optionally add the description of the operating system used for the host or bare metal server.

    IP Addresses Enter the host or bare metal server IP address.
    Operating System Select the operating system from the drop-down menu.

    Depending on your host or bare metal server, you can select any of the supported operating systems. See System Requirements.

    Important: Among the different flavors of Linux supported, you must understand the distinction between a bare metal server running a Linux distribution and a Linux distribution used as a hypervisor host. For example, selecting Ubuntu Server as the operating system means setting up a bare metal server running a Linux server, whereas selecting Ubuntu KVM means that the Linux hypervisor deployed is Ubuntu.
    Username and Password Enter the host user name and password.
    SHA-256 Thumbprint Enter the host thumbprint value for authentication.

    If you leave the thumbprint value empty, you are prompted to accept the server provided value. It takes a few seconds for NSX-T Data Center to discover and authenticate the host.

  5. (Required) For a KVM host or bare metal server, select the N-VDS type.
    Option Description
    NSX Created NSX Manager creates the N-VDS.

    This option is selected by default.

    Preconfigured The N-VDS is already configured.
    For an ESXi host, the N-VDS type is always set to NSX Created.
  6. If you select the N-VDS switch to operate in Standard (All hosts) mode, enter values in the following fields. You can configure multiple N-VDS switches on a single host.
    Option Description
    Name Enter a name for the N-VDS host switch.
    Transport Zone From the drop-down menu, select the transport zone that this transport node joins.
    NIOC Profile From the drop-down menu, select an NIOC profile for the ESXi host or create a custom NIOC profile.

    You can also select the default NIOC profile.

    Uplink Profile Select an existing uplink profile from the drop-down menu or create a custom uplink profile.

    You can also use the default uplink profile.

    LLDP Profile By default, NSX-T only receives LLDP packets from an LLDP neighbor.

    However, NSX-T can be set to send LLDP packets to and receive LLDP packets from an LLDP neighbor.

    IP Assignment Select Use DHCP, Use IP Pool, or Use Static IP List.

    If you select Use Static IP List, you must specify a list of comma-separated IP addresses, a gateway, and a subnet mask (see the example after this table).

    IP Pool If you selected Use IP Pool for IP assignment, specify the IP pool name.
    Teaming Policy Switch Mapping

    Add physical NICs to the transport node. You can use the default uplink or assign an existing uplink from the drop-down menu.

    PNIC only Migration

    Before setting this field, consider the following points:

    • Know whether the physical NIC defined is a used NIC or a free NIC.
    • Determine whether VMkernel interfaces of a host need to be migrated along with physical NICs.

    Set the field:

    • Enable PNIC only Migration if you only want to migrate physical NICs from a VSS or DVS switch to an N-VDS switch.

    • Disable PNIC only Migration if you want to migrate a used physical NIC and its associated VMkernel interface mapping. A free or available physical NIC is attached to the N-VDS switch when a VMkernel interface migration mapping is specified.

    On a host with multiple host switches:
    • If all host switches are to migrate only PNICs, then you can migrate the PNICs in a single operation.
    • If some host switches are to migrate VMkernel interfaces and the remaining host switches are to migrate only PNICs:
      1. In the first operation, migrate only PNICs.
      2. In the second operation, migrate VMkernel interfaces. Ensure that PNIC only Migration is disabled.

    Both PNIC only migration and VMkernel interface migration are not supported at the same time across multiple hosts.

    Note: To migrate a management network NIC, configure its associated VMkernel network mapping and keep PNIC only Migration disabled. If you only migrate the management NIC, the host loses connectivity.

    For more information, see VMkernel Migration to an N-VDS Switch.

    Network Mappings for Install

    To migrate VMkernels to the N-VDS switch during installation, map the VMkernels to an existing logical switch. NSX Manager migrates each VMkernel to the mapped logical switch on the N-VDS.

    Caution: Ensure that the management NIC and management VMkernel interface are migrated to a logical switch that is connected to the same VLAN that the management NIC was connected to before migration. If vmnic <n> and VMkernel <n> are migrated to a different VLAN, then connectivity to the host is lost.
    Caution: For pinned physical NICs, ensure that the host switch mapping of physical NIC to a VMkernel interface matches the configuration specified in the transport node profile. As part of the validation procedure, NSX-T Data Center checks the mapping, and if the validation passes, the migration of VMkernel interfaces to the N-VDS switch is successful. It is also mandatory to configure the network mapping for uninstallation because NSX-T Data Center does not store the mapping configuration of the host switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping is not configured, connectivity to services, such as vSAN, can be lost after migrating back to the VSS or VDS switch.

    For more information, see VMkernel Migration to an N-VDS Switch.

    Network Mappings for Uninstall

    To revert the migration of VMkernels during uninstallation, map VMkernels to port groups on VSS or DVS, so that NSX Manager knows which port group the VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure the port group is of the type Ephemeral.

    To revert the migration of VMkernels attached to an NSX-T port group created on a vSphere Distributed Virtual Switch 7.0 during uninstallation, map VMkernels to port groups on VSS or DVS, so that NSX Manager knows which port group the VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure that the port group is of the type Ephemeral.

    Caution: For pinned physical NICs, ensure that the transport node profile mapping of physical NIC to VMkernel interface matches the configuration specified in the host switch. It is mandatory to configure the network mapping for uninstallation because NSX-T Data Center does not store the mapping configuration of the host switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping is not configured, connectivity to services, such as vSAN, can be lost after migrating back to the VSS or VDS switch.

    For more information, see VMkernel Migration to an N-VDS Switch.
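
    As an illustration of the Use Static IP List option described earlier in this table, the values take the following form (the addresses are placeholders):

    IP Addresses: 192.168.150.21,192.168.150.22
    Gateway: 192.168.150.1
    Subnet Mask: 255.255.255.0

    The listed addresses are assigned to the tunnel endpoints of the transport node.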

  7. If you select the N-VDS switch to operate in Performance mode, enter values in the following additional fields. You can configure multiple N-VDS switches on a single host.
    Option Description
    (CPU Config)

    NUMA Node Index

    In the NUMA Node Index drop-down menu, select the NUMA node that you want to assign to an N-VDS switch. The first NUMA node present on the node is represented with the value 0.

    You can find out the number of NUMA nodes on your host by running the esxcli hardware memory get command (see the example after this table).

    Note: If you want to change the number of NUMA nodes that have affinity with an N-VDS switch, you can update the NUMA Node Index value.
    (CPU Config)

    LCores per NUMA Nodes

    In the Lcore per NUMA node drop-down menu, select the number of logical cores that must be used by enhanced datapath.

    You can find out the maximum number of logical cores that can be created on the NUMA node by running the esxcli network ens maxLcores get command.

    Note: If you exhaust the available NUMA nodes and logical cores, any new switch added to the transport node cannot be enabled for ENS traffic.
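    Before filling in the CPU Config fields, you can check both values on the ESXi host with the commands referenced above, for example:

    [root@host:~] esxcli hardware memory get
    [root@host:~] esxcli network ens maxLcores get

    The first command reports the number of NUMA nodes on the host, and the second reports the maximum number of logical cores that can be created for the enhanced datapath.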
  8. For a preconfigured N-VDS, provide the following details.
    Option Description
    N-VDS External ID Must be the same as the N-VDS name of the transport zone that this node belongs to.
    VTEP Virtual tunnel endpoint name.
  9. View the connection status on the Host Transport Nodes page. During the configuration process, each transport node displays the percentage of progress of the installation process. If installation fails at any stage, you can restart the process by clicking the Resolve link that is available next to the failed stage of the process.
    After you add the host or bare metal server, its connection state to NSX Manager displays as UP only after the host is successfully created as a transport node.
  10. Alternatively, view the connection status using CLI commands.
    • For ESXi, enter the esxcli network ip connection list | grep 1234 command.
      # esxcli network ip connection list | grep 1234
      tcp   0   0  192.168.210.53:20514  192.168.110.34:1234   ESTABLISHED  1000144459  newreno  nsx-cfgagent
       
      
    • For KVM, enter the command netstat -anp --tcp | grep 1234.
      user@host:~$ netstat -anp --tcp | grep 1234
      tcp  0   0 192.168.210.54:57794  192.168.110.34:1234   ESTABLISHED -
    • For Windows, from a command prompt, enter netstat | find "1234" and netstat | find "1235".
  11. Verify that the NSX-T Data Center modules are installed on your host or bare metal server.
    As a result of adding a host or bare metal server to the NSX-T Data Center fabric, a collection of NSX-T Data Center modules are installed on the host or bare metal server.

    The modules on different hosts are packaged as follows:

    • KVM on RHEL, CentOS, Oracle Linux, or SUSE - RPMs
    • KVM on Ubuntu - DEBs

    To verify, run the command for your platform:

    • On ESXi, enter the command esxcli software vib list | grep nsx.

      The date is the day you performed the installation.

    • On RHEL, CentOS, or Oracle Linux, enter the command yum list installed or rpm -qa.
    • On Ubuntu, enter the command dpkg --get-selections.
    • On SUSE, enter the command rpm -qa | grep nsx.
    • On Windows, open Task Manager. Or, from the command line, enter tasklist /V | findstr "nsx ovs".
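
    Collected as a quick reference, the verification commands can be run as follows (the added grep filters assume that the NSX-T Data Center package names contain the string nsx):

    ESXi:                              esxcli software vib list | grep nsx
    RHEL, CentOS, or Oracle Linux KVM: yum list installed | grep nsx
    Ubuntu KVM:                        dpkg --get-selections | grep nsx
    SUSE KVM:                          rpm -qa | grep nsx
    Windows (command prompt):          tasklist /V | findstr "nsx ovs"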
  12. (Optional) Change the polling intervals of certain processes, if you have 500 hypervisors or more.
    The NSX Manager might experience high CPU use and performance problems if there are more than 500 hypervisors.
    1. Use the NSX-T Data Center CLI command copy file or the API POST /api/v1/node/file-store/<file-name>?action=copy_to_remote_file to copy the aggsvc_change_intervals.py script to a host.
    2. Run the script, which is located in the NSX-T Data Center file store.
      python aggsvc_change_intervals.py -m '<NSX Manager IP address>' -u 'admin' -p '<password>' -i 900
    3. (Optional) Change the polling intervals back to their default values.
      python aggsvc_change_intervals.py -m '<NSX Manager IP address>' -u 'admin' -p '<password>' -r

Results

Note: For an NSX-T Data Center created N-VDS, after the transport node is created, if you want to change the configuration, such as IP assignment to the tunnel endpoint, you must do it through the NSX Manager GUI and not through the CLI on the host.

What to do next

Migrate network interfaces from a vSphere Standard Switch to an N-VDS switch. See VMkernel Migration to an N-VDS Switch.