A fabric node is a node that has been registered with the NSX-T Data Center management plane and has NSX-T Data Center modules installed. For a hypervisor host or a bare metal server to be part of the NSX-T Data Center overlay, it must first be added to the NSX-T Data Center fabric.

You can skip this procedure if you installed the modules on the hosts manually and joined the hosts to the management plane using the CLI.


For a KVM host on RHEL, you can use sudo credentials to perform host preparation activities.


  • For each host that you plan to add to the NSX-T Data Center fabric, first gather the following host information:

    • Hostname

    • Management IP address

    • Username

    • Password

    • (Optional) (KVM) SHA-256 SSL thumbprint

    • (Optional) (ESXi) SHA-256 SSL thumbprint

  • For Ubuntu, verify that the required third-party packages are installed. See Install Third-Party Packages on a KVM Host or Bare Metal Server.


  1. (Optional) Retrieve the hypervisor thumbprint so that you can provide it when adding the host to the fabric.
    1. Gather the hypervisor thumbprint information.

      Use a Linux shell.

      # echo -n | openssl s_client -connect <esxi-ip-address>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha256

      Alternatively, use the ESXi CLI on the host itself.

      [root@host:~] openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
      SHA256 Fingerprint=49:73:F9:A6:0B:EA:51:2A:15:57:90:DE:C0:89:CA:7F:46:8E:30:15:CA:4D:5C:95:28:0A:9E:A2:4E:3C:C4:F4

    2. To retrieve the SHA-256 thumbprint from a KVM hypervisor, run the following command on the KVM host.
      # awk '{print $2}' /etc/ssh/ssh_host_rsa_key.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64
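The pipeline above re-implements OpenSSH's SHA-256 fingerprint: it decodes the base64 key body, hashes it, and re-encodes the digest. As a sanity check, the following sketch runs the same pipeline against a throwaway key and compares the result with the fingerprint that ssh-keygen reports (the file name demo_host_key is arbitrary; on a real KVM host you would read /etc/ssh/ssh_host_rsa_key.pub instead):

```shell
# Generate a throwaway RSA key pair for illustration only.
ssh-keygen -q -t rsa -b 2048 -N '' -f ./demo_host_key

# Same pipeline as in the procedure, pointed at the demo public key.
fp=$(awk '{print $2}' ./demo_host_key.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64)

# ssh-keygen prints the same digest, prefixed with SHA256: and without base64 padding.
ref=$(ssh-keygen -lf ./demo_host_key.pub | awk '{print $2}' | sed 's/^SHA256://')

echo "pipeline:   $fp"
echo "ssh-keygen: $ref"
```

The two values match once the trailing `=` padding is stripped from the pipeline's output.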
  2. In the NSX Manager CLI, verify that the install-upgrade service is running.
    nsx-manager-1> get service install-upgrade
    Service name: install-upgrade
    Service state: running
    Enabled: True
  3. From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
  4. Select Fabric > Nodes > Hosts and click Add.
  5. Enter the hostname, IP address, username, password, and the optional thumbprint.

    For a bare metal server, you can select RHEL Server, Ubuntu Server, or CentOS Server from the Operating System drop-down menu.

    If you do not enter the host thumbprint, the NSX-T Data Center UI prompts you to accept the default thumbprint retrieved from the host in plain-text format.

    When a host is successfully added to the NSX-T Data Center fabric, the NSX Manager Hosts page displays Deployment Status: Installation Successful and MPA Connectivity: Up.

    LCP Connectivity remains unavailable until you configure the fabric node as a transport node.

  6. Verify that the NSX-T Data Center modules are installed on your host or bare metal server.

    As a result of adding a host or bare metal server to the NSX-T Data Center fabric, a collection of NSX-T Data Center modules is installed on the host or bare metal server.

    On vSphere ESXi, the modules are packaged as VIBs. For KVM or a bare metal server on RHEL, they are packaged as RPMs. For KVM or a bare metal server on Ubuntu, they are packaged as DEBs.

    • On ESXi, type the command esxcli software vib list | grep nsx.

      In the output, the date listed for each VIB is the day that you performed the installation.

    • On RHEL, type the command yum list installed or rpm -qa.

    • On Ubuntu, type the command dpkg --get-selections.

  7. (Optional) View the fabric nodes with the GET https://<nsx-mgr>/api/v1/fabric/nodes/<node-id> API call.
  8. (Optional) Monitor the status in the API with the GET https://<nsx-mgr>/api/v1/fabric/nodes/<node-id>/status API call.
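The two API calls above can be scripted with curl. The following is a minimal sketch, assuming a manager reachable at the hypothetical address nsx-mgr.example.com, the admin password in the NSX_PASSWORD environment variable, and a placeholder node ID; the -k flag skips certificate verification and is acceptable only in a lab:

```shell
# Hypothetical values -- substitute your manager address and a real node ID.
NSX_MGR="nsx-mgr.example.com"
NODE_ID="4f9e4b0a-0000-0000-0000-000000000000"

# List all fabric nodes, then query one node's status.
# Outside a live environment the calls simply fail, hence the fallback message.
curl -ks -u "admin:${NSX_PASSWORD:-changeme}" \
  "https://${NSX_MGR}/api/v1/fabric/nodes" \
  || echo "NSX Manager not reachable"

curl -ks -u "admin:${NSX_PASSWORD:-changeme}" \
  "https://${NSX_MGR}/api/v1/fabric/nodes/${NODE_ID}/status" \
  || echo "NSX Manager not reachable"
```

The node ID for a given host can be taken from the `id` field of the entries returned by the first call.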
  9. (Optional) Change the polling intervals of certain processes if you have 500 or more hypervisors.

    The NSX Manager might experience high CPU usage and performance problems if there are more than 500 hypervisors.

    1. Use the NSX-T Data Center CLI command copy file or the API POST /api/v1/node/file-store/<file-name>?action=copy_to_remote_file to copy the aggsvc_change_intervals.py script to a host.
    2. Run the script, which is located in the NSX-T Data Center file store.
      python aggsvc_change_intervals.py -m '<NSX Manager IP address>' -u 'admin' -p '<password>' -i 900
    3. (Optional) Change the polling intervals back to their default values.
      python aggsvc_change_intervals.py -m '<NSX Manager IP address>' -u 'admin' -p '<password>' -r

What to do next

Create a transport zone. See About Transport Zones.