Having a multi-node cluster of NSX Controllers helps ensure that at least one NSX Controller is always available.

Prerequisites

  • Install three NSX Controller appliances.

  • Make sure the NSX Controller nodes have joined the management plane. See Join NSX Controllers with the Management Plane.

  • Initialize the control cluster to create a control cluster master.

  • In the join control-cluster command, you must use an IP address, not a domain name.

  • If you are using vCenter and you are deploying NSX-T components to the same cluster, make sure to configure DRS anti-affinity rules. Anti-affinity rules keep the NSX Controller nodes on separate hosts, so that DRS never places more than one node on a single host.
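
For illustration only, an anti-affinity rule for the three controller VMs could be created with the open-source govc CLI. This is a sketch, not part of the NSX-T procedure; the cluster name, VM names, and a configured govc connection to vCenter are all assumptions. The same rule can be created in the vSphere Client under the cluster's VM/Host Rules settings.

```shell
# Sketch, assuming govc is already configured against your vCenter
# (GOVC_URL, GOVC_USERNAME, GOVC_PASSWORD) and the controller VMs are
# named NSX-Controller1..3. Cluster name "Compute-Cluster" is a placeholder.
govc cluster.rule.create -cluster=Compute-Cluster \
  -name=nsx-controller-anti-affinity -enable -anti-affinity \
  NSX-Controller1 NSX-Controller2 NSX-Controller3
```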

Procedure

  1. Open an SSH session for each of your NSX Controller appliances.

    For example, NSX-Controller1, NSX-Controller2, and NSX-Controller3. In this example, NSX-Controller1 has already initialized the control cluster and is the control cluster master.

  2. On the non-master NSX Controllers, run the set control-cluster security-model command with a shared-secret password. The shared-secret password entered on NSX-Controller2 and NSX-Controller3 must match the shared-secret password entered on NSX-Controller1.

    For example:

    NSX-Controller2> set control-cluster security-model shared-secret secret <NSX-Controller1's-shared-secret-password>
    
    Security secret successfully set on the node.
    
    NSX-Controller3> set control-cluster security-model shared-secret secret <NSX-Controller1's-shared-secret-password>
    
    Security secret successfully set on the node.
    
  3. On the non-master NSX Controllers, run the get control-cluster certificate thumbprint command.

    The command output is a string of hexadecimal characters that is unique to each NSX Controller.

    For example:

    NSX-Controller2> get control-cluster certificate thumbprint
    ...
    
    NSX-Controller3> get control-cluster certificate thumbprint
    ...
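
    The thumbprint is how the master authenticates the joining node: it is a hash of that node's certificate, and the value you paste into the join control-cluster command must match what the node presents. As a rough analogue (assuming a standard SHA-256 certificate fingerprint, which this procedure does not confirm), openssl can compute the same kind of value for any certificate:

```shell
# Generate a throwaway self-signed certificate, then print its SHA-256
# fingerprint -- the same kind of value a certificate thumbprint reports.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=controller-example" \
  -keyout /tmp/node.key -out /tmp/node.crt 2>/dev/null
openssl x509 -in /tmp/node.crt -noout -fingerprint -sha256
```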
    
  4. On the master NSX Controller, run the join control-cluster command.

    Provide the following information:

    • IP address (with an optional port number) of each non-master NSX Controller (NSX-Controller2 and NSX-Controller3 in the example)

    • Certificate thumbprint of the non-master NSX Controllers

    Do not run the join commands on multiple controllers in parallel. Make sure each join is complete before joining another controller.

    NSX-Controller1> join control-cluster <NSX-Controller2-IP> thumbprint <nsx-controller2's-thumbprint>
    Node 192.168.210.48 has successfully joined the control cluster.
    Please run 'activate control-cluster' command on the new node.
    

    Make sure that NSX-Controller2 has joined the cluster by running the get control-cluster status command.

    NSX-Controller1> join control-cluster <NSX-Controller3-IP> thumbprint <nsx-controller3's-thumbprint>
    Node 192.168.210.49 has successfully joined the control cluster.
    Please run 'activate control-cluster' command on the new node.
    

    Make sure that NSX-Controller3 has joined the cluster by running the get control-cluster status command.

  5. On the two non-master NSX Controller nodes that have joined the cluster, run the activate control-cluster command.
    Note:

    Do not run the activate commands on multiple controllers in parallel. Make sure each activation is complete before activating another controller.

    For example:

    NSX-Controller2> activate control-cluster
    Control cluster activation successful.
    

    On NSX-Controller2, run the get control-cluster status verbose command, and make sure that the Zookeeper Server IP is reported as reachable, ok.

    NSX-Controller3> activate control-cluster
    Control cluster activation successful.
    

    On NSX-Controller3, run the get control-cluster status verbose command, and make sure that the Zookeeper Server IP is reported as reachable, ok.

Results

Verify the result by running the get control-cluster status command.

NSX-Controller1> get control-cluster status
uuid: db4aa77a-4397-4d65-ad33-9fde79ac3c5c
is master: true
in majority: true
uuid                                 address              status
0cfe232e-6c28-4fea-8aa4-b3518baef00d 192.168.210.47       active
bd257108-b94e-4e6d-8b19-7fa6c012961d 192.168.210.48       active
538be554-1240-40e4-8e94-1497e963a2aa 192.168.210.49       active

The first UUID listed is for the control cluster as a whole. Each controller node has a UUID as well.
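
A healthy cluster shows in majority: true and an active status for every node. As an illustrative sketch (not an NSX-T tool), the sample output above can be checked with standard shell utilities; in practice you would capture the command output over SSH or from the console instead of a heredoc:

```shell
# Save the sample `get control-cluster status` output shown above.
cat > /tmp/status.txt <<'EOF'
uuid: db4aa77a-4397-4d65-ad33-9fde79ac3c5c
is master: true
in majority: true
uuid                                 address              status
0cfe232e-6c28-4fea-8aa4-b3518baef00d 192.168.210.47       active
bd257108-b94e-4e6d-8b19-7fa6c012961d 192.168.210.48       active
538be554-1240-40e4-8e94-1497e963a2aa 192.168.210.49       active
EOF

# The node must be in the majority partition.
grep -q '^in majority: true' /tmp/status.txt || echo "node not in majority"
# Member rows start after the 4-line header; the third column is the status.
awk 'NR>4 && $3 != "active" {bad++} END {exit bad ? 1 : 0}' /tmp/status.txt \
  && echo "all member nodes active"
```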

Note:

If you try to join a controller to a cluster and the set control-cluster security-model or join control-cluster command fails, the cluster configuration files might be in an inconsistent state. To resolve the issue, perform the following steps:

  • On the controller that you tried to join to the cluster, run the command deactivate control-cluster.

  • On the master controller, if the command get control-cluster status or get control-cluster status verbose displays information about the failed controller, run the command detach control-cluster <IP address of failed controller>.

What to do next

Add hypervisor hosts to the NSX-T fabric. See Host Preparation.