Joining the hypervisor hosts with the management plane ensures that the NSX Manager and the hosts can communicate with each other.


Prerequisites

The installation of NSX-T Data Center modules must be complete.


Procedure

  1. Open an SSH session to the NSX Manager appliance.
  2. Log in with the Administrator credentials.
  3. Open an SSH session to the hypervisor host.
  4. On the NSX Manager appliance, run the get certificate api thumbprint CLI command.

    The command output is the certificate thumbprint, a string of alphanumeric characters that is unique to this NSX Manager.

    For example:

    NSX-Manager1> get certificate api thumbprint
    <NSX-Manager1's-thumbprint>
  5. On the hypervisor host, run the nsxcli command to enter the NSX-T Data Center CLI.

    For KVM, run the command as a superuser (sudo).

    [user@host:~] nsxcli

    The prompt changes.

  6. On the hypervisor host, run the join management-plane command.

    Provide the following information:

    • Hostname or IP address of the NSX Manager with an optional port number

    • Username of the NSX Manager

    • Certificate thumbprint of the NSX Manager

    • Password of the NSX Manager

    host> join management-plane NSX-Manager1 username admin thumbprint <NSX-Manager1's-thumbprint>
    Password for API user: <NSX-Manager1's-password>
    Node successfully joined
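The thumbprint collected in step 4 is the SHA-256 digest of the NSX Manager's API certificate in its DER (binary) form. As an illustration only (the function name and sample input below are not part of NSX-T), the same kind of digest can be computed from a PEM-encoded certificate with the Python standard library:

```python
import hashlib
import ssl

def thumbprint_from_pem(pem_cert: str) -> str:
    """Compute the SHA-256 thumbprint of a PEM-encoded certificate.

    The digest is taken over the DER (binary) encoding of the
    certificate, which is how certificate thumbprints are defined.
    """
    der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
    return hashlib.sha256(der_cert).hexdigest()
```

Comparing an independently computed thumbprint against the value you pass to join management-plane guards against joining the wrong manager.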


Results

Verify the result by running the get managers command on your hosts.

host> get managers
-   Connected

In the NSX Manager UI, navigate to Fabric > Nodes > Hosts and verify that the host's MPA connectivity is Up.

You can also view the fabric node's state with the GET https://<nsx-mgr>/api/v1/fabric/nodes/<fabric-node-id>/state API call:

  {
    "details": [],
    "state": "success"
  }

The management plane sends the host certificates to the control plane, and the control plane pushes control plane information to the hosts.

On each ESXi host, you should see the NSX Controller addresses in /etc/vmware/nsx/controller-info.xml, or you can retrieve them from the NSX-T Data Center CLI with the get controllers command.

[root@host:~] cat /etc/vmware/nsx/controller-info.xml 
<?xml version="1.0" encoding="utf-8"?>
    <connection id="0">
        ...
        <pemKey>-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----</pemKey>
    </connection>
    <connection id="1">
        ...
        <pemKey>-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----</pemKey>
    </connection>
    <connection id="2">
        ...
        <pemKey>-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----</pemKey>
    </connection>
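If you want to inspect the controller entries programmatically, a small sketch is possible (assuming the file content parses as well-formed XML under a single root element; the function name is illustrative):

```python
import xml.etree.ElementTree as ET

def controller_connections(xml_text: str):
    """Return (connection id, has pemKey) pairs from controller-info.xml content."""
    root = ET.fromstring(xml_text)
    return [(conn.get("id"), conn.find("pemKey") is not None)
            for conn in root.iter("connection")]
```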

The host connection to NSX-T Data Center is initiated and sits in "CLOSE_WAIT" status until the host is promoted to a transport node. You can see this with the esxcli network ip connection list | grep 1234 command.

# esxcli network ip connection list | grep 1234
tcp         0       0  CLOSE_WAIT    37256  newreno  netcpa

For KVM, the command is netstat -anp --tcp | grep 1234.

user@host:~$ netstat -anp --tcp | grep 1234
tcp  0   0   CLOSE_WAIT -
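Either output can be checked in a script. This sketch mirrors the grep above: it reports whether any connection line mentioning port 1234 is in CLOSE_WAIT, which at this stage means the host has joined but is not yet a transport node (the function name is illustrative):

```python
def awaiting_promotion(connection_list: str) -> bool:
    """True if any connection line mentioning 1234 is in CLOSE_WAIT state."""
    return any(
        "CLOSE_WAIT" in line
        for line in connection_list.splitlines()
        if "1234" in line
    )
```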

What to do next

Create a transport zone. See About Transport Zones.