NSX Controller is an advanced distributed state management system that provides control plane functions for NSX logical switching and routing. It serves as the central control point for all logical switches within a network and maintains information about all hosts, logical switches (VXLANs), and distributed logical routers. Controllers are required if you plan to deploy (1) distributed logical routers or (2) VXLAN in unicast or hybrid mode.

About this task

No matter the size of the NSX deployment, VMware requires that each NSX Controller cluster contain three controller nodes. Having a different number of controller nodes is not supported.

Prerequisites

  • Before deploying NSX Controllers, you must deploy an NSX Manager appliance and register vCenter with NSX Manager.

  • Determine the IP pool settings for your controller cluster, including the gateway and IP address range. DNS settings are optional. The NSX Controller IP network must have connectivity to the NSX Manager and to the management interfaces on the ESXi hosts.
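
    For example, as a quick pre-check you can verify reachability from an ESXi host's management VMkernel interface to the NSX Manager and to the gateway you intend to use for the controller IP pool. This is only a rough sketch; the addresses are hypothetical and the commands are run from the ESXi Shell of a host in the management cluster:

        vmkping 192.168.110.15
        vmkping 192.168.110.1

    In this hypothetical example, 192.168.110.15 is the NSX Manager and 192.168.110.1 is the planned controller IP pool gateway.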

Procedure

  1. In vCenter, navigate to Home > Networking & Security > Installation and select the Management tab.

  2. In the NSX Controller nodes section, click the Add Node (add) icon.
  3. Enter the NSX Controller settings appropriate to your environment.

    NSX Controllers should be deployed to a vSphere Standard Switch or vSphere Distributed Switch port group that is not VXLAN based and that has IPv4 connectivity to the NSX Manager, to the other controllers, and to the hosts.
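
    For example, the first controller's deployment settings might resemble the following. The exact dialog fields vary slightly between NSX versions, and every name and address shown here is hypothetical:

        NSX Manager:            192.168.110.15
        Datacenter:             Datacenter-Site-A
        Cluster/Resource Pool:  Mgmt-Edge-Cluster
        Datastore:              ds-mgmt-01
        Connected To:           Mgmt-VM-PortGroup  (non-VXLAN distributed port group)
        IP Pool:                Controller-Pool
        Password:               set in the following steps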

  4. If you have not already configured an IP pool for your controller cluster, configure one now by clicking New IP Pool.

    Individual controllers can be in separate IP subnets, if necessary.
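
    For example, an IP pool for the controller cluster might be defined with values similar to the following. The names and addresses are hypothetical; the DNS settings are optional:

        Name:            Controller-Pool
        Gateway:         192.168.110.1
        Prefix Length:   24
        Primary DNS:     192.168.110.10
        Static IP Pool:  192.168.110.31-192.168.110.35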

  5. Type and re-type a password for the controller.
    Note:

    The password must not contain the username as a substring, and no character may repeat three or more times consecutively.

    The password must be at least 12 characters long and must satisfy at least three of the following four rules:

    • At least one upper case letter

    • At least one lower case letter

    • At least one number

    • At least one special character
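
    For example, a password such as the following (purely illustrative; choose your own unique value) is longer than 12 characters, contains upper case letters, lower case letters, numbers, and special characters, never repeats a character three times in a row, and does not contain the controller user name:

        Nsx-Ctrl_Passw0rd!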

  6. After the first controller is completely deployed, deploy two additional controllers.

    Having three controllers is mandatory. We recommend configuring a DRS anti-affinity rule to prevent the controllers from residing on the same host.
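
    For example, a "Separate Virtual Machines" DRS rule on the cluster that hosts the controllers might look like the following. The cluster, rule, and VM names are hypothetical; use the names of your deployed controller VMs:

        Cluster:    Mgmt-Edge-Cluster
        Rule name:  NSX-Controller-Anti-Affinity
        Rule type:  Separate Virtual Machines
        Members:    NSX_Controller_1, NSX_Controller_2, NSX_Controller_3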

Results

When successfully deployed, the controllers have a Normal status and display a green check mark.

SSH to each controller and make sure it can ping the host management interface IP addresses. If the ping fails, make sure all controllers have the correct default gateway. To view a controller's routing table, run the show network routes command. To change a controller's default gateway, run the clear network routes command followed by the add network default-route <IP-address> command.
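
For example, a session to inspect and correct a controller's default gateway might look like the following. The commands are run over SSH on the controller; the gateway address (192.168.110.1) and host management address (192.168.210.51) are hypothetical:

    show network routes
    clear network routes
    add network default-route 192.168.110.1
    show network routes
    ping 192.168.210.51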

Run the following commands to verify the control cluster is behaving as expected.

  • show control-cluster status

    Type                Status                                       Since
    --------------------------------------------------------------------------------
    Join status:        Join complete                                05/04 02:36:03
    Majority status:    Connected to cluster majority                05/19 23:57:23
    Restart status:     This controller can be safely restarted      05/19 23:57:12
    Cluster ID:         ff3ebaeb-de68-4455-a3ca-4824e31863a8
    Node UUID:          ff3ebaeb-de68-4455-a3ca-4824e31863a8
    
    Role                Configured status   Active status
    --------------------------------------------------------------------------------
    api_provider        enabled             activated
    persistence_server  enabled             activated
    switch_manager      enabled             activated
    logical_manager     enabled             activated
    directory_server    enabled             activated
    

    For Join status, verify the controller node is reporting Join Complete.

    For Majority status, verify the controller is connected to the cluster majority.

    For Cluster ID, all the controller nodes in a cluster should have the same cluster ID.

    For Configured status and Active status, verify that all the controller roles are enabled and activated.

  • show control-cluster roles

    
                              Listen-IP  Master?    Last-Changed  Count
    api_provider         Not configured      Yes  06/02 08:49:31      4
    persistence_server              N/A      Yes  06/02 08:49:31      4
    switch_manager            127.0.0.1      Yes  06/02 08:49:31      4
    logical_manager                 N/A      Yes  06/02 08:49:31      4
    directory_server                N/A      Yes  06/02 08:49:31      4
    
    

    One controller node will be the master for each role. In this example, a single node is the master for all roles.

    If a master NSX Controller instance for a role fails, the cluster elects a new master for that role from the available NSX Controller instances.

    NSX Controller instances are on the control plane, so an NSX Controller failure does not affect data plane traffic.

  • show control-cluster connections

    role                port            listening open conns
    --------------------------------------------------------
    api_provider        api/443         Y         2
    --------------------------------------------------------
    persistence_server  server/2878     Y         2
                        client/2888     Y         1
                        election/3888   Y         0
    --------------------------------------------------------
    switch_manager      ovsmgmt/6632    Y         0
                        openflow/6633   Y         0
    --------------------------------------------------------
    system              cluster/7777    Y         0
    
    

    This command shows the intra-cluster communication status.

    The controller cluster majority leader listens on port 2878 (as shown by the “Y” in the “listening” column). The other controller nodes will have a dash (-) in the “listening” column for port 2878.

    All other ports should be listening on all three controller nodes.

    The open conns column shows the number of open connections that the controller node has with other controller nodes. In a 3-node controller cluster, the controller node should have no more than two open connections.

What to do next

Caution:

While a controller status is Deploying, do not add or modify logical switches or distributed routing in your environment. Also, do not continue to the host preparation procedure. After a new controller is added to the controller cluster, all controllers are inactive for a short while (no more than 5 minutes). During this downtime, any operation related to controllers (for example, host preparation) might have unexpected results. Even though host preparation might seem to complete successfully, the SSL certificate might not be established correctly, causing issues in the VXLAN network.

If you need to delete a deployed controller, see Recover from an NSX Controller Failure in the NSX Administration Guide.

On the hosts where the NSX Controller nodes are first deployed, NSX enables automatic VM startup/shutdown. If the controller node VMs are later migrated to other hosts, the new hosts might not have automatic VM startup/shutdown enabled. For this reason, VMware recommends that you check all hosts in the cluster to make sure that automatic VM startup/shutdown is enabled. See http://pubs.vmware.com/vsphere-60/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc%2FGUID-5FE08AC7-4486-438E-AF88-80D6C7928810.html.