You must first add your ESXi or KVM host to the NSX-T Data Center fabric and then configure it as a transport node.

For a host to be part of the NSX-T Data Center overlay, it must first be added to the NSX-T Data Center fabric.

A transport node is a node that participates in an NSX-T Data Center overlay or NSX-T Data Center VLAN networking.

For a KVM host, you can preconfigure the N-VDS, or you can have NSX Manager perform the configuration. For an ESXi host, NSX Manager always configures the N-VDS.

Note: If you plan to create transport nodes from a template VM, make sure that there are no certificates on the host in /etc/vmware/nsx/. nsx-proxy does not create a certificate if a certificate exists.
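
For example, before you convert the host to a template, you can check for leftover certificates from the host shell. A quick spot check using the path from the note above:

  # ls /etc/vmware/nsx/
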
You can add a maximum of four N-VDS switches for each configuration:
  • standard N-VDS created for VLAN transport zone
  • enhanced N-VDS created for VLAN transport zone
  • standard N-VDS created for overlay transport zone
  • enhanced N-VDS created for overlay transport zone

In a single-host cluster topology that runs multiple standard overlay N-VDS switches and an Edge VM on the same host, NSX-T Data Center provides traffic isolation: traffic going through the first N-VDS is isolated from traffic going through the second N-VDS. The physical NICs on each N-VDS must be mapped to the Edge VM on the host to allow north-south traffic connectivity with the external world. Packets moving out of a VM in the first transport zone must be routed through an external router or an external VM to a VM in the second transport zone.

Prerequisites

  • The host must be joined with the management plane, and connectivity must be Up.
  • The reverse proxy service on all nodes of the NSX Manager cluster must be Up and running.

    To verify, run get service http (see the example after the prerequisites list). If the service is down, restart it by running restart service http on each NSX Manager node. If the service is still down, contact VMware support.

  • A transport zone must be configured.
  • An uplink profile must be configured, or you can use the default uplink profile.
  • An IP pool must be configured, or DHCP must be available in the network deployment.
  • At least one unused physical NIC must be available on the host node.
  • Hostname
  • Management IP address
  • User name
  • Password
  • (Optional) (KVM) SHA-256 SSL thumbprint
  • (Optional) (ESXi) SHA-256 SSL thumbprint
  • Verify that the required third-party packages are installed.
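
The following is a minimal sketch of the reverse proxy check described in the prerequisites, run from the NSX Manager CLI on each node; the node name nsxmgr-01 is a placeholder:

  nsxmgr-01> get service http
  nsxmgr-01> restart service http

Run restart service http only if get service http reports that the service is down.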

Procedure

  1. (Optional) Retrieve the hypervisor thumbprint so that you can provide it when adding the host to the fabric.
    1. Gather the hypervisor thumbprint information.
      Use a Linux shell.
      # echo -n | openssl s_client -connect <esxi-ip-address>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha256
      
      Use the ESXi CLI in the host.
      [root@host:~] openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
      SHA256 Fingerprint=49:73:F9:A6:0B:EA:51:2A:15:57:90:DE:C0:89:CA:7F:46:8E:30:15:CA:4D:5C:95:28:0A:9E:A2:4E:3C:C4:F4
    2. To retrieve the SHA-256 thumbprint from a KVM hypervisor, run the following command in the KVM host.
      # awk '{print $2}' /etc/ssh/ssh_host_rsa_key.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64
  2. Select System > Fabric > Nodes > Host Transport Nodes.
  3. From the Managed by field, select Standalone Hosts and click + Add Host Node.
  4. On the Host Details page, enter details for the following fields.
    Name and Description: Enter a name to identify the standalone host. You can optionally add a description of the operating system used for the host.
    IP Addresses: Enter the host IP address.
    Operating System: Select the operating system from the drop-down menu. Depending on your host, you can select any of the supported operating systems. See System Requirements.
    Username and Password: Enter the host user name and password.
    SHA-256 Thumbprint: Enter the host thumbprint value for authentication. If you leave the thumbprint value empty, you are prompted to accept the server-provided value. It takes a few seconds for NSX-T Data Center to discover and authenticate the host.

  5. (Required) For a KVM host, select the N-VDS type.
    NSX Created: NSX Manager creates the N-VDS. This option is selected by default.
    Preconfigured: The N-VDS is already configured.
    For an ESXi host, the N-VDS type is always set to NSX Created.
  6. On the Configure NSX page, enter details for the following fields. You can configure multiple N-VDS switches on a single host.
    Name: Enter a name for the N-VDS host switch.
    Transport Zone: From the drop-down menu, select the transport zone that this transport node joins.
    Uplink Profile: Select an existing uplink profile from the drop-down menu or create a custom uplink profile. You can also use the default uplink profile. Multiple N-VDS host switches on a transport node can belong to the same VLAN segment or VTEP pool, or they can belong to different VLAN segments or VTEP IP pools. Configuring different transport VLAN segments for different N-VDS host switches also provides additional traffic isolation in the underlay.
    LLDP Profile: By default, NSX-T only receives LLDP packets from an LLDP neighbor. However, NSX-T can be set to both send LLDP packets to and receive LLDP packets from an LLDP neighbor.
    Uplinks-Physical NICs Mapping: Map uplinks to physical NICs. To see which physical NICs are available on the host, you can list them as shown after this table.
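
    To identify unused physical NICs that you can map to uplinks, you can first list the NICs on the host. A minimal sketch; the interface names in your environment will differ:

      On an ESXi host:
      [root@host:~] esxcli network nic list

      On a KVM host:
      # ip link show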

  7. For a preconfigured N-VDS, provide the following details.
    N-VDS External ID: Must be the same as the N-VDS name of the transport zone that this node belongs to.
    VTEP: Virtual tunnel endpoint name.
  8. View the connection status on the Host Transport Nodes page. During the configuration process, each transport node displays the percentage of progress of the installation. If the installation fails at any stage, you can restart the process by clicking the Resolve link next to the failed stage.
    After you add the host as a transport node, the connection to NSX Manager displays as UP only after the host is successfully created as a transport node.
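    You can also query the realized state of the transport node through the NSX Manager API. A minimal sketch, assuming curl is available on a workstation that can reach NSX Manager; the manager IP, credentials, and transport node UUID are placeholders:

      curl -k -u 'admin:<password>' "https://<nsx-manager-ip>/api/v1/transport-nodes/<transport-node-id>/state"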
  9. Alternatively, view the connection status using CLI commands.
    • For ESXi, enter the esxcli network ip connection list | grep 1234 command.
      # esxcli network ip connection list | grep 1234
      tcp   0   0  192.168.210.53:20514  192.168.110.34:1234   ESTABLISHED  1000144459  newreno  nsx-cfgagent
       
      
    • For KVM, enter the command netstat -anp --tcp | grep 1234.
      user@host:~$ netstat -anp --tcp | grep 1234
      tcp  0   0 192.168.210.54:57794  192.168.110.34:1234   ESTABLISHED -
    • For Windows, from a command prompt, enter netstat | find "1234" and netstat | find "1235".
  10. Verify that the NSX-T Data Center modules are installed on your host.
    As a result of adding a host to the NSX-T Data Center fabric, a collection of NSX-T Data Center modules is installed on the host.

    The modules on different hosts are packaged as follows:

    • KVM on RHEL, CentOS, Oracle Linux, or SUSE - RPMs.
    • KVM on Ubuntu - DEBs.
    • On ESXi, enter the command esxcli software vib list | grep nsx.

      In the command output, the date is the day you performed the installation.

    • On RHEL, CentOS, or Oracle Linux, enter the command yum list installed or rpm -qa.
    • On Ubuntu, enter the command dpkg --get-selections.
    • On SUSE, enter the command rpm -qa | grep nsx.
    • On Windows, open Task Manager. Or, from a command prompt, enter tasklist /V | findstr "nsx ovs".
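
    If you need to check many hosts, you can wrap the commands above in a small helper script and run it on each Linux or ESXi host. A minimal sketch; the script name and host-type argument are hypothetical:

      #!/bin/sh
      # check-nsx-modules.sh: list installed NSX-T modules for the given host type.
      HOST_TYPE=${1:?usage: check-nsx-modules.sh <esxi|rhel|ubuntu|suse>}
      case "${HOST_TYPE}" in
        esxi)   esxcli software vib list | grep nsx ;;
        rhel)   rpm -qa | grep nsx ;;
        ubuntu) dpkg --get-selections | grep nsx ;;
        suse)   rpm -qa | grep nsx ;;
      esac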
  11. (Optional) Change the polling intervals of certain processes, if you have 500 hypervisors or more.
    The NSX Manager might experience high CPU use and performance problems if there are more than 500 hypervisors.
    1. Use the NSX-T Data Center CLI command copy file or the API POST /api/v1/node/file-store/<file-name>?action=copy_to_remote_file to copy the aggsvc_change_intervals.py script to a host.
    2. Run the script, which is located in the NSX-T Data Center file store.
      python aggsvc_change_intervals.py -m '<NSX Manager IP address>' -u 'admin' -p '<password>' -i 900
    3. (Optional) Change the polling intervals back to their default values.
      python aggsvc_change_intervals.py -m '<NSX Manager IP address>' -u 'admin' -p '<password>' -r

Results

Note: For an N-VDS created by NSX-T Data Center, if you want to change the configuration after the transport node is created, such as the IP assignment of the tunnel endpoint, you must do it through the NSX Manager GUI and not through the CLI on the host.

What to do next

Migrate network interfaces from a vSphere Standard Switch to an N-VDS switch. See VMkernel Migration to an N-VDS Switch.