Forming an NSX Manager cluster provides high availability and reliability of the NSX management functions if one of the NSX Manager nodes goes down.

For other environments, see Form an NSX Manager Cluster Using the CLI.

To create an NSX Manager cluster, deploy two additional nodes to form a cluster of three nodes total.
Note: Data is replicated to all active NSX Manager nodes in the cluster, so when the NSX Manager cluster is stable, every NSX Manager node contains the same data.
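You can confirm that the cluster has converged by checking the cluster status over the NSX REST API (GET /api/v1/cluster/status). The following is a minimal sketch that evaluates a response body; the field names follow the NSX API reference but should be treated as assumptions to verify against your NSX version:

```python
# Sketch: decide whether an NSX Manager cluster is stable from the
# response body of GET /api/v1/cluster/status. The field names
# (mgmt_cluster_status, control_cluster_status) are assumptions based
# on the NSX REST API reference; verify them for your NSX version.

def cluster_is_stable(status_body: dict) -> bool:
    """Return True when both the management and control planes report STABLE."""
    mgmt = status_body.get("mgmt_cluster_status", {}).get("status")
    ctrl = status_body.get("control_cluster_status", {}).get("status")
    return mgmt == "STABLE" and ctrl == "STABLE"

# Abridged, hypothetical response body:
sample = {
    "mgmt_cluster_status": {"status": "STABLE"},
    "control_cluster_status": {"status": "STABLE"},
}
print(cluster_is_stable(sample))  # True
```

In a real deployment you would fetch the body with an authenticated GET against an NSX Manager node before trusting that replication has completed.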

Prerequisites

  • Verify that an NSX Manager node is installed. See Install NSX Manager from vSphere Client.
  • Verify that a compute manager is configured. See Add a Compute Manager.
  • Verify that the system requirements are met. See System Requirements.
  • Verify that the required ports are open. See Ports and Protocols.
  • Verify that a datastore is configured and accessible on the ESXi host.
  • Verify that you have the IP address and gateway, DNS server IP addresses, domain search list, and the NTP Server IP or FQDN for the NSX Manager to use.
  • Create a management VDS and target VM port group in vCenter. Place the NSX appliances onto this management VDS port group network. See Prepare a vSphere Distributed Switch for NSX.
    Multiple management networks can be used as long as the NSX Manager nodes have consistent connectivity and the recommended latency between them.
    Note: If you plan to use a Cluster VIP, all NSX Manager appliances must belong to the same subnet.
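The same-subnet requirement for a Cluster VIP can be sanity-checked before deployment. The following is a minimal sketch using Python's ipaddress module; the node addresses are hypothetical placeholders:

```python
import ipaddress

def same_subnet(node_ips, prefix_len):
    """Return True when all NSX Manager node IPs fall in one subnet.
    prefix_len is the prefix length of the management network (e.g. 24)."""
    networks = {
        ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
        for ip in node_ips
    }
    return len(networks) == 1

# Hypothetical management addresses for a three-node cluster:
print(same_subnet(["10.0.1.11", "10.0.1.12", "10.0.1.13"], 24))  # True
print(same_subnet(["10.0.1.11", "10.0.2.12", "10.0.1.13"], 24))  # False
```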

Procedure

  1. From a browser, log in with admin privileges to a vCenter Server at https://<vcenter-server-ip-address>.
  2. In the vSphere Web Client UI, open the vSphere Web Client menu and click NSX.
  3. Deploy an appliance. Go to System > Appliances > Add NSX Appliance.
  4. Enter the appliance information details.
    Host Name or FQDN: Enter a name for the node.
    IP Type: Select the IP type. The appliance can have an IPv4 address only, or both IPv4 and IPv6 addresses.
    Management IPv4/Netmask: Enter an IPv4 address to be assigned to the node.
    Management Gateway IPv4: Enter a gateway IPv4 address to be used by the node.
    Management IPv6/Netmask: Enter an IPv6 address to be assigned to the node. This option appears when IP Type is set to Both IPv4 and IPv6.
    Management Gateway IPv6: Enter a gateway IPv6 address to be used by the node. This option appears when IP Type is set to Both IPv4 and IPv6.
    DNS Servers: Enter DNS server IP addresses to be used by the node.
    NTP Server: Enter an NTP server IP address or FQDN to be used by the node.
    Node Size: Select the form factor to deploy the node from the following options:
    • Small (4 vCPU, 16 GB RAM, 300 GB storage)
    • Medium (6 vCPU, 24 GB RAM, 300 GB storage)
    • Large (12 vCPU, 48 GB RAM, 300 GB storage)
  5. Enter the configuration details.
    Compute Manager: Select the VMware vCenter to provision compute resources for deploying the node.
    Compute Cluster: Select the cluster that the node will join.
    Resource Pool: Select either a resource pool or a host for the node from the drop-down menu.
    Host: If you did not select a resource pool, select a host for the node.
    Datastore: Select a datastore for the node files from the drop-down menu.
    Virtual Disk Format:
    • For NFS datastores, select a virtual disk format from the available provisioned policies on the underlying datastore.
      • With hardware acceleration, the Thin Provision, Thick Provision Lazy Zeroed, and Thick Provision Eager Zeroed formats are supported.
      • Without hardware acceleration, only the Thin Provision format is supported.
    • For VMFS datastores, the Thin Provision, Thick Provision Lazy Zeroed, and Thick Provision Eager Zeroed formats are supported.
    • For vSAN datastores, you cannot select a virtual disk format because the VM storage policy defines the format.
      • The vSAN storage policies determine the disk format. The default virtual disk format for vSAN is Thin Provision. You can change the vSAN storage policies to set a percentage of the virtual disk that must be thick-provisioned.

    By default, the virtual disk for an NSX Manager node is prepared in the Thin Provision format.

    Note: You can provision each node with a different disk format based on which policies are provisioned on the datastore.
    Network: Click Select Network to select the management network for the node.
  6. Enter the access and credentials details.
    Enable SSH: Toggle the button to allow SSH logins to the new node.
    Enable Root Access: Toggle the button to allow root access to the new node.
    System Root Credentials: Set the root password and confirm it for the new node.

    Your password must comply with the password strength restrictions.
    • At least 12 characters
    • At least one lower-case letter
    • At least one upper-case letter
    • At least one digit
    • At least one special character
    • At least five different characters
    • Default password complexity rules are enforced by the following Linux PAM module arguments:
      • retry=3: The maximum number of times a new password can be entered (at most 3 times for this argument) before the module returns an error.
      • minlen=12: The minimum acceptable size for the new password. In addition to the number of characters in the new password, credit (of +1 in length) is given for each different kind of character (other, upper, lower, and digit).
      • difok=0: The minimum number of bytes that must be different between the old and new password. With a value of 0 assigned to difok, there is no requirement for any byte of the old and new password to be different; an exact match is allowed.
      • lcredit=1: The maximum credit for having lower-case letters in the new password. If the password has this many (1) or fewer lower-case letters, each such letter counts +1 toward meeting the current minlen value.
      • ucredit=1: The maximum credit for having upper-case letters in the new password. If the password has this many (1) or fewer upper-case letters, each such letter counts +1 toward meeting the current minlen value.
      • dcredit=1: The maximum credit for having digits in the new password. If the password has this many (1) or fewer digits, each digit counts +1 toward meeting the current minlen value.
      • ocredit=1: The maximum credit for having other (special) characters in the new password. If the password has this many (1) or fewer other characters, each such character counts +1 toward meeting the current minlen value.
      • enforce_for_root: The complexity checks are enforced even when the password is being set for the root user.
      Note: For details on the Linux PAM module that checks the password against dictionary words, refer to its man page.

      For example, avoid simple and systematic passwords such as VMware123!123 or VMware12345. Passwords that meet the complexity standards are not simple and systematic but combine letters, digits, and special characters, such as VMware123!45, VMware 1!2345, or VMware@1az23x.

    Admin CLI Credentials and Audit CLI Credentials: Select the Same as root password check box to use the same password that you configured for root, or deselect the check box and set a different password.
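The bulleted complexity rules can be checked before you submit the form. The following is a minimal sketch of those listed rules only (it does not implement the PAM credit arithmetic, which additionally grants length credits per character class):

```python
import string

def meets_complexity(password: str) -> bool:
    """Check the rules listed above: length >= 12, at least one lower-case
    letter, one upper-case letter, one digit, one special character,
    and at least five different characters."""
    checks = [
        len(password) >= 12,
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
        len(set(password)) >= 5,
    ]
    return all(checks)

print(meets_complexity("VMware123!45"))  # True
print(meets_complexity("vmware123456"))  # False: no upper case, no special character
```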
  7. Click Install Appliance.
    The new node is deployed. You can track the NSX Manager deployment progress on the System > Appliances page (NSX UI) in VMware vCenter. Do not add additional nodes until the installation is finished and the cluster is stable.
  8. Wait for the deployment, cluster formation, and repository synchronization to finish.
  9. Verify that the installed NSX Manager node has the required connectivity.
    Make sure that the following conditions are met.
    • The node can be pinged from another machine.
    • The node can ping its default gateway.
    • The node can ping the hypervisor hosts that are in the same network using the management interface.
    • The node can ping its DNS server and its NTP server IP or FQDN list.
    • If you enabled SSH, you can SSH to the node.

    If connectivity is not established, make sure that the network adapter of the virtual appliance is in the proper network or VLAN.
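The name-resolution and SSH checks above can be scripted. The following is a minimal sketch using only the Python standard library; the node address shown in the comments is a hypothetical placeholder for your deployment:

```python
import socket

def can_resolve(name: str) -> bool:
    """DNS check: the configured DNS server can resolve the name."""
    try:
        socket.gethostbyname(name)
        return True
    except OSError:
        return False

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """TCP reachability check, e.g. port 22 when SSH is enabled on the node."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical node address; adjust to your deployment:
# can_resolve("nsx-mgr-01.example.com")
# port_reachable("10.0.1.11", 22)
print(can_resolve("localhost"))  # True on a typical host
```

ICMP ping requires raw sockets (and usually elevated privileges), so this sketch uses DNS resolution and a TCP connect instead; run the ping checks from a shell as described above.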

  10. If your cluster has only two nodes, add another appliance.
    • From NSX Manager, select System > Appliances > Add NSX Appliance and repeat the configuration steps.

Results

After the cluster is formed, VMware vCenter displays the IP addresses of all three nodes on the NSX UI page.

What to do next

After the cluster is formed, you can optionally set a virtual IP address (VIP) for the cluster. See Configure a Virtual IP Address for a Cluster. Even after you configure a VIP for the cluster, the NSX plugin in VMware vCenter continues to access the NSX UI using the primary IP address of the current HTTPS leader node. During failover, the NSX plugin in VMware vCenter automatically starts using the primary IP address of the new leader node.
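If you do configure a VIP, it can also be set through the NSX REST API. The following sketch only builds the request; the endpoint and query parameters follow the NSX API reference but should be verified against your version, and the manager FQDN and addresses are hypothetical placeholders:

```python
# Sketch: build the NSX API request to set a cluster VIP.
# Endpoint per the NSX API reference (verify for your NSX version):
#   POST /api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=<ip>

def set_vip_url(manager: str, vip: str) -> str:
    """Return the URL for the set_virtual_ip action on the given manager."""
    return (
        f"https://{manager}/api/v1/cluster/api-virtual-ip"
        f"?action=set_virtual_ip&ip_address={vip}"
    )

# Hypothetical manager FQDN and VIP:
url = set_vip_url("nsx-mgr-01.example.com", "10.0.1.10")
print(url)
# You would POST to this URL with admin credentials, for example with
# requests.post(url, auth=("admin", password), verify=ca_bundle).
```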