A fabric node is a node that has been registered with the NSX-T management plane and has NSX-T modules installed. For a hypervisor host to be part of the NSX-T overlay, it must first be added to the NSX-T fabric.

About this task

Note:

You can skip this procedure if you installed the modules on the hosts manually and joined the hosts to the management plane using the CLI.

Prerequisites

  • For each host that you plan to add to the NSX-T fabric, first gather the following host information:

    • Hostname

    • Management IP address

    • Username

    • Password

    • (KVM or ESXi) SHA-256 SSL thumbprint

  • Optionally, retrieve the hypervisor thumbprint so that you can provide it when adding the host to the fabric.

    • One method to gather the information is to run the following command in a Linux shell:

      # echo -n | openssl s_client -connect <esxi-ip-address>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha256
      
    • Another method uses the ESXi CLI on the ESXi host:

      [root@host:~] openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
      SHA256 Fingerprint=49:73:F9:A6:0B:EA:51:2A:15:57:90:DE:C0:89:CA:7F:46:8E:30:15:CA:4D:5C:95:28:0A:9E:A2:4E:3C:C4:F4

    • To retrieve the SHA-256 thumbprint from a KVM hypervisor, run the following command on the KVM host:

      # awk '{print $2}' /etc/ssh/ssh_host_rsa_key.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64

  • For Ubuntu, verify that the required third-party packages are installed. See Install Third-Party Packages on a KVM Host.
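The KVM thumbprint pipeline above base64-decodes the host's public SSH key, hashes the raw key bytes with SHA-256, and re-encodes the digest in base64. As an illustration of that transformation only (not an official NSX-T tool), the same steps can be reproduced in Python:

```python
import base64
import hashlib

def kvm_thumbprint(pub_key_file: str) -> str:
    """Compute the base64-encoded SHA-256 thumbprint of an SSH host key,
    mirroring: awk '{print $2}' | base64 -d | sha256sum -b | xxd -r -p | base64
    """
    with open(pub_key_file) as f:
        # Field 2 of an OpenSSH public key line is the base64-encoded key blob.
        key_b64 = f.read().split()[1]
    digest = hashlib.sha256(base64.b64decode(key_b64)).digest()
    return base64.b64encode(digest).decode()
```

Running this against /etc/ssh/ssh_host_rsa_key.pub on the KVM host should produce the same string as the shell pipeline shown above.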

Procedure

  1. In the NSX Manager CLI, verify that the install-upgrade service is running.
    nsx-manager-1> get service install-upgrade
    
    Service name: install-upgrade
    Service state: running
    Enabled: True
  2. From a browser, log in to an NSX Manager at https://<nsx-manager-ip-address>.
  3. Select Fabric > Nodes > Hosts and click Add.
  4. Enter the hostname, IP address, username, password, and the optional thumbprint.

    If you do not enter the host thumbprint, the NSX-T UI prompts you to accept the default thumbprint retrieved from the host, displayed in plain text format.

    When a host is successfully added to the NSX-T fabric, the NSX Manager Fabric > Nodes > Hosts UI displays Deployment Status: Installation Successful and MPA Connectivity: Up. LCP Connectivity remains unavailable until you make the fabric node into a transport node.
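The same registration can also be driven through the API. The sketch below assembles a request body for POST https://<nsx-mgr>/api/v1/fabric/nodes; the field names follow the HostNode object shown in the Results section, but the os_type values other than ESXI and all credentials are illustrative placeholders:

```python
import json

def build_host_node_payload(ip, os_type, username, password, thumbprint=None):
    """Assemble a HostNode registration body for POST /api/v1/fabric/nodes.

    If thumbprint is omitted, NSX Manager responds with the host's default
    thumbprint for you to confirm, as in the UI flow described above.
    """
    body = {
        "resource_type": "HostNode",
        "display_name": ip,
        "ip_addresses": [ip],
        "os_type": os_type,  # "ESXI" per the sample node; KVM values are assumed
        "host_credential": {
            "username": username,
            "password": password,
        },
    }
    if thumbprint:
        body["host_credential"]["thumbprint"] = thumbprint
    return json.dumps(body, indent=2)
```

The returned JSON string can be sent with any HTTP client authenticated to NSX Manager.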

Results

As a result of adding a host to the NSX-T fabric, a collection of NSX-T modules is installed on the host. On ESXi, the modules are packaged as VIBs. For KVM on RHEL, they are packaged as RPMs. For KVM on Ubuntu, they are packaged as DEBs.

To verify on ESXi, run the esxcli software vib list | grep nsx command. The date on each NSX VIB is the day that you performed the installation.

To verify on RHEL, run the yum list installed or rpm -qa command.

To verify on Ubuntu, run the dpkg --get-selections command.
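When checking many hosts, the per-platform verification commands above can be selected by the host's os_type. A convenience sketch (the KVM os_type key names are assumptions, and the grep filters on the RHEL and Ubuntu commands are added here only to narrow the output to NSX packages):

```python
# Verification command per platform, as listed above.
VERIFY_COMMANDS = {
    "ESXI": "esxcli software vib list | grep nsx",
    "RHELKVM": "rpm -qa | grep nsx",              # or: yum list installed
    "UBUNTUKVM": "dpkg --get-selections | grep nsx",
}

def verify_command(os_type: str) -> str:
    """Return the shell command that lists the installed NSX-T modules."""
    try:
        return VERIFY_COMMANDS[os_type]
    except KeyError:
        raise ValueError("Unsupported os_type: " + os_type)
```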

You can view a fabric node with the GET https://<nsx-mgr>/api/v1/fabric/nodes/<node-id> API call:

{
  "resource_type" : "HostNode",
  "id" : "69b2a1d3-778d-4835-83c5-94cee99a213e",
  "display_name" : "10.143.1.218",
  "fqdn" : "w1-mvpcloud-218.eng.vmware.com",
  "ip_addresses" : [ "10.143.1.218" ],
  "external_id" : "69b2a1d3-778d-4835-83c5-94cee99a213e",
  "discovered_ip_addresses" : [ "10.143.1.218" ],
  "os_type" : "ESXI",
  "os_version" : "6.5.0",
  "managed_by_server" : "",
  "_create_user" : "admin",
  "_create_time" : 1498155416694,
  "_last_modified_user" : "admin",
  "_last_modified_time" : 1498155416694,
  "_protection" : "NOT_PROTECTED",
  "_revision" : 0
}

You can monitor the status with the GET https://<nsx-mgr>/api/v1/fabric/nodes/<node-id>/status API call:

{
  "lcp_connectivity_status" : "UP",
  "mpa_connectivity_status" : "UP",
  "last_sync_time" : 1480370899198,
  "mpa_connectivity_status_details" : "Client is responding to heartbeats",
  "lcp_connectivity_status_details" : [ {
    "control_node_ip" : "10.143.1.47",
    "status" : "UP"
  } ],
  "inventory_sync_paused" : false,
  "last_heartbeat_timestamp" : 1480369333415,
  "system_status" : {
    "mem_used" : 2577732,
    "system_time" : 1480370897000,
    "file_systems" : [ {
      "file_system" : "root",
      "total" : 32768,
      "used" : 5440,
      "type" : "ramdisk",
      "mount" : "/"
    }, {
      "file_system" : "etc",
      "total" : 28672,
      "used" : 264,
      "type" : "ramdisk",
      "mount" : "/etc"
    }, {
      "file_system" : "opt",
      "total" : 32768,
      "used" : 20,
      "type" : "ramdisk",
      "mount" : "/opt"
    }, {
      "file_system" : "var",
      "total" : 49152,
      "used" : 2812,
      "type" : "ramdisk",
      "mount" : "/var"
    }, {
      "file_system" : "tmp",
      "total" : 262144,
      "used" : 21728,
      "type" : "ramdisk",
      "mount" : "/tmp"
    }, {
      "file_system" : "iofilters",
      "total" : 32768,
      "used" : 0,
      "type" : "ramdisk",
      "mount" : "/var/run/iofilters"
    }, {
      "file_system" : "hostdstats",
      "total" : 116736,
      "used" : 2024,
      "type" : "ramdisk",
      "mount" : "/var/lib/vmware/hostd/stats"
    } ],
    "load_average" : [ 0.03999999910593033, 0.03999999910593033, 0.05000000074505806 ],
    "swap_total" : 0,
    "mem_cache" : 0,
    "cpu_cores" : 2,
    "source" : "cached",
    "mem_total" : 8386740,
    "swap_used" : 0,
    "uptime" : 3983605000
  },
  "software_version" : "2.0.0.0.0.4649755",
  "host_node_deployment_status" : "INSTALL_SUCCESSFUL"
}
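When polling this status API for many hosts, a response like the one above can be checked programmatically. A minimal sketch that tests the fields this procedure cares about (it parses an already-fetched response dict; fetching it over HTTPS is left to your API client):

```python
def install_ok(status: dict) -> bool:
    """True when installation succeeded and the host is fully connected,
    based on the fields shown in the sample status response above."""
    lcp_up = status.get("lcp_connectivity_status") == "UP"
    mpa_up = status.get("mpa_connectivity_status") == "UP"
    installed = status.get("host_node_deployment_status") == "INSTALL_SUCCESSFUL"
    return lcp_up and mpa_up and installed
```

Note that lcp_connectivity_status reports UP only after the fabric node has been made into a transport node.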

What to do next

If you have a large number of hypervisors (for example, 500 or more), NSX Manager might experience high CPU usage and performance problems. To avoid the problem, run the script aggsvc_change_intervals.py, which is located in the NSX file store and changes the polling intervals of certain processes. (To copy the script to a host, use the NSX CLI command copy file or the API POST /api/v1/node/file-store/<file-name>?action=copy_to_remote_file.) Run the script as follows:

python aggsvc_change_intervals.py -m '<NSX Manager IP address>' -u 'admin' -p '<password>' -i 900

To change the polling intervals back to their default values:

python aggsvc_change_intervals.py -m '<NSX Manager IP address>' -u 'admin' -p '<password>' -r

Create a transport zone. See About Transport Zones.