Joining the hypervisor hosts with the management plane ensures that the NSX Manager and the hosts can communicate with each other.

Prerequisites

The installation of NSX-T modules must be complete.

Procedure

  1. Open an SSH session to the NSX Manager appliance.
  2. Log in with the Administrator credentials.
  3. Open an SSH session to the hypervisor host.
  4. On the NSX Manager appliance, run the get certificate api thumbprint CLI command.

    The command output is a string of alphanumeric characters that is unique to this NSX Manager.

    For example:

    NSX-Manager1> get certificate api thumbprint
    ...
    
  5. On the hypervisor host, run the nsxcli command to enter the NSX-T CLI.
    Note:

    For KVM, run the command as a superuser (sudo).

    [user@host:~] nsxcli
    host> 
    

    The prompt changes to host>.

  6. On the hypervisor host, run the join management-plane command.

    Provide the following information:

    • Hostname or IP address of the NSX Manager with an optional port number

    • Username of the NSX Manager

    • Certificate thumbprint of the NSX Manager

    • Password of the NSX Manager

    host> join management-plane NSX-Manager1 username admin thumbprint <NSX-Manager1's-thumbprint>
    Password for API user: <NSX-Manager1's-password>
    Node successfully joined
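
    Note:

    To cross-check the thumbprint that you supply here against the value from step 4, you can compute it from the NSX Manager's API certificate on the host. The following sketch assumes that openssl is available on the host and that the NSX Manager API is reachable on port 443; sed reformats the fingerprint into colon-free, lowercase hex so that it can be compared with the get certificate api thumbprint output.

    # Print the SHA-256 fingerprint of the NSX Manager API certificate
    # as colon-free, lowercase hex (assumes openssl on the host).
    echo | openssl s_client -connect <nsx-mgr>:443 2>/dev/null \
      | openssl x509 -noout -fingerprint -sha256 \
      | sed -e 's/^.*=//' -e 's/://g' -e 'y/ABCDEF/abcdef/'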

Results

Verify the result by running the get managers command on your hosts.

host> get managers
- 192.168.110.47   Connected

In the NSX Manager UI in Fabric > Nodes > Hosts, verify that the host's MPA connectivity is Up.

You can view the fabric host's state with the GET https://<nsx-mgr>/api/v1/fabric/nodes/<fabric-node-id>/state API call:

{
  "details": [],
  "state": "success"
}
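
For example, you can make this call with curl from a management host. This sketch assumes admin credentials and a self-signed NSX Manager certificate (hence -k); <fabric-node-id> is the host's fabric node UUID, which you can look up by listing the fabric nodes first.

# Look up the host's fabric node ID, then query its state.
# -k skips certificate verification; curl prompts for the admin password.
curl -k -u admin 'https://<nsx-mgr>/api/v1/fabric/nodes'
curl -k -u admin 'https://<nsx-mgr>/api/v1/fabric/nodes/<fabric-node-id>/state'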

The management plane sends the host certificates to the control plane, and the management plane pushes control plane information to the hosts.

You should see NSX Controller addresses in /etc/vmware/nsx/controller-info.xml on each ESXi host.

[root@host:~] cat /etc/vmware/nsx/controller-info.xml 
<?xml version="1.0" encoding="utf-8"?>
<config>
  <connectionList>
    <connection id="0">
        <server>10.143.1.47</server>
        <port>1234</port>
        <sslEnabled>true</sslEnabled>
        <pemKey>-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----</pemKey>
    </connection>
    <connection id="1">
        <server>10.143.1.45</server>
        <port>1234</port>
        <sslEnabled>true</sslEnabled>
        <pemKey>-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----</pemKey>
    </connection>
    <connection id="2">
        <server>10.143.1.46</server>
        <port>1234</port>
        <sslEnabled>true</sslEnabled>
        <pemKey>-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----</pemKey>
    </connection>
  </connectionList>
</config>

The host's connection to the NSX-T Controllers is initiated and remains in CLOSE_WAIT status until the host is promoted to a transport node. You can see this with the esxcli network ip connection list | grep 1234 command.

# esxcli network ip connection list | grep 1234
tcp         0       0  192.168.210.53:45823        192.168.110.34:1234  CLOSE_WAIT    37256  newreno  netcpa
 

For KVM, the command is netstat -anp --tcp | grep 1234.

user@host:~$ netstat -anp --tcp | grep 1234
tcp  0   0 192.168.210.54:57794  192.168.110.34:1234   CLOSE_WAIT -

What to do next

Create a transport zone. See About Transport Zones.