Forming an NSX Manager or Global Manager cluster provides high availability and reliability. Deploying nodes using the UI is supported only on ESXi hosts managed by vCenter Server.

For other environments, see Form an NSX Manager Cluster Using the CLI.

When you deploy a new node from the UI, the node connects to the first deployed node to form a cluster. All the repository details and the password of the first deployed node are synchronized with the newly deployed node. The first node is known as the orchestrator node because it contains the original copy of the VIBs and installation files required to prepare the hosts of the cluster. The orchestrator node also helps identify the node on which the Upgrade Coordinator is running. When new nodes are added to the cluster, NSX-T Data Center uses the repository IP to synchronize the repository of VIBs and installation files onto the new nodes of the cluster.
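As an illustrative sketch, cluster membership can also be inspected programmatically through the NSX REST API (GET /api/v1/cluster). The field names and the sample payload below are assumptions based on the NSX-T 3.x response shape; verify them against the API reference for your version.

```python
def list_cluster_nodes(cluster_config: dict) -> list:
    """Return (fqdn, status) pairs for each node in a GET /api/v1/cluster
    response. Field names are assumptions based on the NSX-T 3.x API."""
    return [
        (node.get("fqdn", "unknown"), node.get("status", "unknown"))
        for node in cluster_config.get("nodes", [])
    ]

# Hypothetical payload in the assumed shape (not real API output):
cluster_payload = {
    "cluster_id": "3e8f2a",
    "nodes": [
        {"fqdn": "nsx-mgr-01.corp.local", "status": "JOINED"},
        {"fqdn": "nsx-mgr-02.corp.local", "status": "JOINED"},
    ],
}
print(list_cluster_nodes(cluster_payload))
```

A helper like this can confirm that a newly deployed node has joined before you add the next one.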

To create an NSX Manager cluster, deploy two additional nodes to form a cluster of three nodes total.

To create a Global Manager cluster, deploy two additional nodes to form a cluster of three nodes total. However, if your Global Manager has NSX-T Data Center 3.0.0 installed, deploy only one node, and do not form a cluster. See Install the Active and Standby Global Manager.

Prerequisites

  • Verify that an NSX Manager or Global Manager node is installed. See Install NSX Manager and Available Appliances.
  • Verify that a compute manager is configured. See Add a Compute Manager.
  • Verify that the system requirements are met. See System Requirements.
  • Verify that the required ports are open. See Ports and Protocols.
  • Verify that a datastore is configured and accessible on the ESXi host.
  • Verify that you have the IP address and gateway, DNS server IP addresses, domain search list, and the NTP Server IP or FQDN list for the NSX Manager or Cloud Service Manager to use.
  • If you do not already have one, create the target VM port group network. Place the NSX-T Data Center appliances on a management VM network.

    If you have multiple management networks, you can add static routes to the other networks from the NSX-T Data Center appliance.

Procedure

  1. From a browser, log in with admin privileges to the NSX Manager or Global Manager at https://<manager-ip-address>.
  2. Deploy an appliance.
    • From NSX Manager, select System > Appliances > Add NSX Appliance.
    • From Global Manager, select System > Global Manager Appliances > Add NSX Appliance.
  3. Enter the appliance information details.
    • Host Name or FQDN: Enter a name for the node.
    • Management IP/Netmask: Enter an IP address to be assigned to the node.
    • Management Gateway: Enter a gateway IP address to be used by the node.
    • DNS Servers: Enter DNS server IP addresses to be used by the node.
    • NTP Server: Enter an NTP server IP address to be used by the node.
    • Node Size: Select the form factor to deploy the node from the following options:
      • Small (4 vCPU, 16 GB RAM, 300 GB storage)
      • Medium (6 vCPU, 24 GB RAM, 300 GB storage)
      • Large (12 vCPU, 48 GB RAM, 300 GB storage)
      For Global Manager, select a size as follows:
      • Medium GM appliance for deployments with up to four locations and 128 hypervisors across all locations
      • Large GM appliance for deployments with higher scale
      Do not use the Small GM appliance for scale deployments.
  4. Enter the configuration details.
    • Compute Manager: Select the vCenter Server to provision compute resources for deploying the node.
    • Compute Cluster: Select the cluster that the node is going to join.
    • Resource Pool: Select either a resource pool or a host for the node from the drop-down menu.
    • Host: If you did not select a resource pool, select a host for the node.
    • Datastore: Select a datastore for the node files from the drop-down menu.
    • Virtual Disk Format:
      • For NFS datastores, select a virtual disk format from the provisioning policies available on the underlying datastore.
        • With hardware acceleration, the Thin Provision, Thick Provision Lazy Zeroed, and Thick Provision Eager Zeroed formats are supported.
        • Without hardware acceleration, only the Thin Provision format is supported.
      • For VMFS datastores, the Thin Provision, Thick Provision Lazy Zeroed, and Thick Provision Eager Zeroed formats are supported.
      • For vSAN datastores, you cannot select a virtual disk format because the VM storage policy determines it. The default virtual disk format for vSAN is Thin Provision. You can change the vSAN storage policies to set a percentage of the virtual disk that must be thick-provisioned.

      By default, the virtual disk for an NSX Manager or Global Manager node is provisioned in the Thin Provision format.

      Note: You can provision each node with a different disk format, depending on which policies are provisioned on the datastore.
    • Network: Click Select Network to select the management network for the node.
  5. Enter the access and credentials details.
    • Enable SSH: Toggle the button to allow an SSH login to the new node.
    • Enable Root Access: Toggle the button to allow root access to the new node.
    • System Root Credentials: Set and confirm the root password for the new node.

      Your password must comply with the password strength restrictions:
      • At least 12 characters
      • At least one lowercase letter
      • At least one uppercase letter
      • At least one digit
      • At least one special character
      • At least five different characters
      The default password complexity rules are enforced by the following Linux PAM module arguments:
      • retry=3: The maximum number of times a new password can be entered (at most 3 with this setting) before the module returns an error.
      • minlen=12: The minimum acceptable length for the new password. In addition to the number of characters in the new password, a credit of +1 toward the length is given for each kind of character present (other, uppercase, lowercase, and digit).
      • difok=0: The minimum number of characters that must differ between the old and new password. With a value of 0, no characters are required to differ, and an exact match is allowed.
      • lcredit=1: The maximum credit for having lowercase letters in the new password. If the password has 1 or fewer lowercase letters, each one counts +1 toward meeting the minlen value.
      • ucredit=1: The maximum credit for having uppercase letters in the new password. If the password has 1 or fewer uppercase letters, each one counts +1 toward meeting the minlen value.
      • dcredit=1: The maximum credit for having digits in the new password. If the password has 1 or fewer digits, each one counts +1 toward meeting the minlen value.
      • ocredit=1: The maximum credit for having other (special) characters in the new password. If the password has 1 or fewer other characters, each one counts +1 toward meeting the minlen value.
      • enforce_for_root: The complexity rules are also enforced when the password is set for the root user.
      Note: For details on how the Linux PAM module checks the password against dictionary words, refer to the man page.

      For example, avoid simple and systematic passwords such as VMware123!123 or VMware12345. Passwords that meet the complexity standards are not simple and systematic but combine uppercase and lowercase letters, digits, and special characters, such as VMware123!45, VMware1!2345, or VMware@1az23x.

    • Admin CLI Credentials and Audit CLI Credentials: Select the Same as root password check box to use the same password that you configured for root, or deselect the check box and set a different password.
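As an illustration, the documented password rules can be approximated with a local check before submitting the form. This is only a sketch (the function name is hypothetical); the appliance's PAM configuration remains authoritative.

```python
def meets_complexity(pw: str) -> bool:
    """Approximate the documented NSX password rules: at least 12
    characters, at least one lowercase letter, one uppercase letter,
    one digit, one special character, and five different characters.
    A local approximation only; PAM on the appliance is authoritative."""
    return (
        len(pw) >= 12
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(not c.isalnum() for c in pw)
        and len(set(pw)) >= 5
    )

print(meets_complexity("VMware123!45"))  # True: meets all documented rules
print(meets_complexity("VMware12345"))   # False: no special character
```

Note that this sketch does not model the PAM credit mechanics or dictionary checks, only the rules listed above.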
  6. Click Install Appliance.
    The new node is deployed. You can track the deployment process on the System > Appliances page for NSX Manager, on the System > Global Manager Appliances page for Global Manager, or in vCenter Server for either. Do not add additional nodes until the installation is finished and the cluster is stable.
  7. Wait for the deployment, cluster formation, and repository synchronization to finish.

    The joining and cluster stabilization process might take 10 to 15 minutes. Run get cluster status to view the status. Verify that the status for every cluster service group is UP before making any other cluster changes.

    Note:
    • If the first node reboots when the deployment of a new node is in progress, the new node might fail to register with the cluster. It displays the Failed to Register message on the new node's thumbnail. To redeploy the node manually on the cluster, delete and redeploy the node.
    • If a node deployment fails, you cannot reuse the same IP address to deploy another node until the failed node is deleted.
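Instead of re-running get cluster status by hand, the same check can be scripted against the NSX REST API (GET /api/v1/cluster/status). The field names below reflect the NSX-T 3.x response shape and should be verified against the API reference for your version; the payload is hypothetical.

```python
def is_stable(status: dict) -> bool:
    """Return True when the overall cluster status is STABLE and every
    service group reports STABLE. Field names are assumptions based on
    the NSX-T 3.x GET /api/v1/cluster/status response."""
    detailed = status.get("detailed_cluster_status", {})
    groups = detailed.get("groups", [])
    return detailed.get("overall_status") == "STABLE" and all(
        g.get("group_status") == "STABLE" for g in groups
    )

# Hypothetical payload in the assumed response shape:
status_payload = {
    "detailed_cluster_status": {
        "overall_status": "STABLE",
        "groups": [
            {"group_type": "DATASTORE", "group_status": "STABLE"},
            {"group_type": "CLUSTER_BOOT_MANAGER", "group_status": "STABLE"},
        ],
    }
}
print(is_stable(status_payload))
```

A deployment script could poll such a check on an interval and proceed to the next node only after it returns True.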
  8. After the node boots, log in to the CLI as admin and run the get interface eth0 command to verify that the IP address was applied as expected.
  9. Verify that your NSX Manager, Cloud Service Manager, or Global Manager node has the required connectivity.
    Make sure that you can perform the following tasks.
    • Ping your node from another machine.
    • From the node, ping its default gateway.
    • From the node, ping the hypervisor hosts that are on the same network, using the management interface.
    • From the node, ping its DNS server and its NTP Server IP or FQDN list.
    • If you enabled SSH, make sure that you can SSH to your node.

    If connectivity is not established, make sure that the network adapter of the virtual appliance is in the proper network or VLAN.
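The connectivity checks above can also be scripted from a machine on the management network. This is a minimal sketch that assumes Linux ping syntax; the helper names are illustrative.

```python
import socket
import subprocess

def ping_cmd(host: str) -> list:
    """Build a single-probe ping command (Linux syntax assumed)."""
    return ["ping", "-c", "1", "-W", "2", host]

def can_ping(host: str) -> bool:
    """Return True if one ICMP probe to the host succeeds."""
    return subprocess.run(ping_cmd(host), capture_output=True).returncode == 0

def can_resolve(name: str) -> bool:
    """Return True if the configured DNS resolver can resolve the name."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

# Example: check reachability of a node and resolution of its FQDN
# (addresses are placeholders for your environment).
# print(can_ping("10.10.10.11"), can_resolve("nsx-mgr-01.corp.local"))
```

Running such checks from both sides (to the node and from the node) quickly isolates whether a problem is in routing, DNS, or the VLAN placement mentioned above.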

  10. If your cluster has only two nodes, add another appliance.
    • From NSX Manager, select System > Appliances > Add NSX Appliance and repeat the configuration steps.
    • From Global Manager, select System > Global Manager Appliances > Add NSX Appliance and repeat the configuration steps.
  11. If the orchestrator node goes down or is unreachable and the repository has not been replicated to the remaining nodes in the cluster, host preparation fails. To prepare the hosts of the cluster successfully, manually deploy the first node to seed the repository.

What to do next

Configure NSX Edge. See Install an NSX Edge on ESXi Using the vSphere GUI.