This section describes NSX Advanced Load Balancer Controller interface and route management.

The NSX Advanced Load Balancer Controller has a single interface that is used for various control plane tasks, such as:

  • Operator access to the Controller through the CLI, UI, and API.

  • Communication between the Controller and the Service Engines.

  • Communication between the Controller and third-party entities for automation, observability, and more.

  • Communication between the Controller and third-party Hardware Security Modules (HSMs).

A new interface is available on the Controller to isolate the communication for some of the above entities.

Further, any static routes to be added to the Controller interfaces must now be configured through the cluster configuration instead of the /etc/network/interfaces subsystem. These configurations persist across Controller reboots and upgrades.

Note:

This feature is supported only on Controllers deployed in vCenter, and the new interface can be used only for HSMs.

Using Labels for Classification

The following labels are available for classification:

MGMT:

This label classifies general management communication: access to the Controller, and communication initiated by the Controller, for instance, logging, third-party API calls, and so on.

SE_SECURE_CHANNEL:

This label is used to classify secure communication between the Service Engine and the Controller.

HSM:

This is used to classify communication between the Controller and an HSM device.

With this classification, traffic can be moved from the default (primary) interface to the new interface, if one is configured.

Note:
  • MGMT and SE_SECURE_CHANNEL traffic can be carried only by the primary (eth0) interface.

  • HSM traffic can be moved to the new interface.

Operating Model

The Controller is initially provisioned with one interface when it is deployed in vCenter (during installation).

The following are the steps to add a new interface:

  1. Shut down the Controller virtual machine and add the interface through vCenter UI.

  2. Power on the Controller virtual machine. NSX Advanced Load Balancer recognizes the new interface, and the new configuration can then be performed through the NSX Advanced Load Balancer CLI.

Note:

Hotplug of interfaces (addition to the virtual machine without powering off the virtual machine) is not supported.

For the interface to be recognized by the NSX Advanced Load Balancer Controller software and classified through labels, the NSX Advanced Load Balancer ‘cluster’ configuration model must be used.

Configuration for a Single Node Controller

To configure the new interface:

  1. Shut down the Controller and add the new interface through vCenter.

  2. Power on the Controller. The new interface will be visible as eth1, while the primary interface will always be visible as eth0 in the Cluster configuration:

[admin:controller]: > show cluster 
+-----------------+----------------------------------------------+ 
| Field           | Value                                        | 
+-----------------+----------------------------------------------+ 
| uuid            | cluster-83e1ebf5-2c63-4690-9aaf-b66e7a7b5f08 | 
| name            | cluster-0-1                                  | 
| nodes[1]        |                                              | 
|   name          | 10.102.64.201                                | 
|   ip            | 10.102.64.201                                | 
|   vm_uuid       | 00505681cb45                                 | 
|   vm_mor        | vm-16431                                     | 
|   vm_hostname   | node1.controller.local                       | 
|   interfaces[1] |                                              | 
|     if_name     | eth0                                         | 
|     mac_address | 00:50:56:81:cb:45                            | 
|     mode        | STATIC                                       | 
|     ip          | 10.102.64.201/22                             | 
|     gateway     | 10.102.67.254                                | 
|     labels[1]   | MGMT                                         | 
|     labels[2]   | SE_SECURE_CHANNEL                            | 
|     labels[3]   | HSM                                          | 
|   interfaces[2] |                                              | 
|     if_name     | eth1                                         | 
|     mac_address | 00:50:56:81:c0:89                            | 
+-----------------+----------------------------------------------+ 

In the output above, the second interface (eth1) has been discovered.

Configure the mode and IP details on the new interface:

[admin:controller]: > configure cluster 
[admin:controller]: cluster> nodes index 1 

[admin:controller]: cluster:nodes> interfaces index 2 
[admin:controller]: cluster:nodes:interfaces> mode static 
[admin:controller]: cluster:nodes:interfaces> ip 100.64.218.90/24 
[admin:controller]: cluster:nodes:interfaces> labels HSM 
[admin:controller]: cluster:nodes:interfaces> save 
[admin:controller]: cluster:nodes> interfaces index 1 
[admin:controller]: cluster:nodes:interfaces> no labels HSM 
[admin:controller]: cluster:nodes:interfaces> save

In the CLI configuration shown above,

  • For the second interface (index 2), the IP and label have been added.

  • The label HSM has been removed from the primary interface (index 1).

Note:

Nodes that are already configured with additional interfaces and routes can be added to a cluster.
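
To confirm the change, you can re-run the show cluster command used earlier. After the configuration above, the HSM label is expected to appear under the second interface, as in the following illustrative excerpt (unrelated fields are omitted):

[admin:controller]: > show cluster
+-----------------+----------------------------------------------+
| Field           | Value                                        |
+-----------------+----------------------------------------------+
| ...             | ...                                          |
|   interfaces[2] |                                              |
|     if_name     | eth1                                         |
|     mode        | STATIC                                       |
|     ip          | 100.64.218.90/24                             |
|     labels[1]   | HSM                                          |
+-----------------+----------------------------------------------+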

Unconfiguring the New Interface for a Single Node Controller

The following are the steps to revert the configuration to use the primary interface:

  1. Remove the configuration (mode, IP, labels) from the second interface (eth1).

  2. Add the HSM label to the primary interface (eth0).

[admin:controller]: > configure cluster     
[admin:controller]: cluster> nodes index 1     
[admin:controller]: cluster:nodes> interfaces index 2     
[admin:controller]: cluster:nodes:interfaces> no mode     
[admin:controller]: cluster:nodes:interfaces> no ip     
[admin:controller]: cluster:nodes:interfaces> no labels HSM     
[admin:controller]: cluster:nodes:interfaces> save     
[admin:controller]: cluster:nodes> interfaces index 1     
[admin:controller]: cluster:nodes:interfaces> labels HSM     
[admin:controller]: cluster:nodes:interfaces> save     
[admin:controller]: cluster:nodes> save     
[admin:controller]: cluster> save 

Configuring a Static Route

A static route can be configured for the primary and secondary interfaces through the cluster configuration.

Note:

Do not edit the /etc/network/interfaces file.

All configuration (IP addresses, static routes) must be done through the cluster configuration, as shown below:

[admin:controller]: > configure cluster 
[admin:controller]: cluster> nodes index 1 
[admin:controller]: cluster:nodes> static_routes 
New object being created 
[admin:controller]: cluster:nodes:static_routes> prefix 1.1.1.0/24 
[admin:controller]: cluster:nodes:static_routes> next_hop 100.64.218.20 
[admin:controller]: cluster:nodes:static_routes> route_id 1 
[admin:controller]: cluster:nodes:static_routes> if_name eth1 
[admin:controller]: cluster:nodes:static_routes> save 
[admin:controller]: cluster:nodes> save 
[admin:controller]: cluster> where 
Tenant: admin 
Cloud: Default-Cloud 
+--------------------+----------------------------------------------+ 
| Field              | Value                                        | 
+--------------------+----------------------------------------------+ 
| uuid               | cluster-83e1ebf5-2c63-4690-9aaf-b66e7a7b5f08 | 
| name               | cluster-0-1                                  | 
| nodes[1]           |                                              | 
|   name             | 10.102.64.201                                | 
|   ip               | 10.102.64.201                                | 
|   vm_uuid          | 00505681cb45                                 | 
|   vm_mor           | vm-16431                                     | 
|   vm_hostname      | node1.controller.local                       | 
|   interfaces[1]    |                                              | 
|     if_name        | eth0                                         | 
|     mac_address    | 00:50:56:81:cb:45                            | 
|     mode           | STATIC                                       | 
|     ip             | 10.102.64.201/22                             | 
|     gateway        | 10.102.67.254                                | 
|     labels[1]      | MGMT                                         | 
|     labels[2]      | SE_SECURE_CHANNEL                            | 
|   interfaces[2]    |                                              | 
|     if_name        | eth1                                         | 
|     mac_address    | 00:50:56:81:c0:89                            | 
|     mode           | STATIC                                       | 
|     ip             | 100.64.218.90/24                             | 
|     labels[1]      | HSM                                          | 
|   static_routes[1] |                                              | 
|     prefix         | 1.1.1.0/24                                   | 
|     next_hop       | 100.64.218.20                                | 
|     if_name        | eth1                                         | 
|     route_id       | 1                                            | 
+--------------------+----------------------------------------------+
[admin:controller]: cluster> save 
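
As an optional sanity check, the programmed route can be inspected from the Controller's Linux shell with standard iproute2 tooling, assuming shell access is available (output trimmed; the exact format may vary):

username@avi:~$ ip route show | grep 1.1.1.0
1.1.1.0/24 via 100.64.218.20 dev eth1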

Configuration for a 3-node Cluster

In the case of a 3-node Cluster, the following steps are required:

  • For the discovery of the secondary interface, the Controller nodes need to be stand-alone, and not part of a cluster. This is a one-time operation for NSX Advanced Load Balancer to discover the new interface.

  • Once the secondary interfaces have been discovered, the Leader node can be used to form the cluster, as detailed in the Deploying an NSX Advanced Load Balancer Controller Cluster topic in the VMware NSX Advanced Load Balancer Installation Guide.

  • After the cluster is fully formed, the secondary interface configuration for all the nodes can be performed.

[admin:controller]: > configure cluster 
[admin:controller]: cluster> nodes index 1 
[admin:controller]: cluster:nodes> interfaces index 2 
[admin:controller]: cluster:nodes:interfaces> mode static 
[admin:controller]: cluster:nodes:interfaces> ip 100.64.218.90/24 
[admin:controller]: cluster:nodes:interfaces> labels HSM 
[admin:controller]: cluster:nodes:interfaces> save 
[admin:controller]: cluster:nodes> interfaces index 1 
[admin:controller]: cluster:nodes:interfaces> no labels HSM 
[admin:controller]: cluster:nodes:interfaces> save 
[admin:controller]: cluster:nodes> save 
[admin:controller]: cluster> nodes index 2 
[admin:controller]: cluster:nodes> interfaces index 2 
[admin:controller]: cluster:nodes:interfaces> mode static 
[admin:controller]: cluster:nodes:interfaces> ip 100.64.218.100/24 
[admin:controller]: cluster:nodes:interfaces> labels HSM 
[admin:controller]: cluster:nodes:interfaces> save 
[admin:controller]: cluster:nodes> interfaces index 1
[admin:controller]: cluster:nodes:interfaces> no labels HSM 
[admin:controller]: cluster:nodes:interfaces> save 
[admin:controller]: cluster:nodes> save 
[admin:controller]: cluster> nodes index 3 
[admin:controller]: cluster:nodes> interfaces index 2 
[admin:controller]: cluster:nodes:interfaces> mode static 
[admin:controller]: cluster:nodes:interfaces> ip 100.64.218.110/24 
[admin:controller]: cluster:nodes:interfaces> labels HSM 
[admin:controller]: cluster:nodes:interfaces> save 
[admin:controller]: cluster:nodes> interfaces index 1 
[admin:controller]: cluster:nodes:interfaces> no labels HSM 
[admin:controller]: cluster:nodes:interfaces> save 
[admin:controller]: cluster:nodes> save 
[admin:controller]: cluster> save 
Note:
  • There is no requirement to log in to the node for the interface discovery to succeed. The only requirement is for the interface to be in a connected state in the virtual machine and for the Controller to have been powered on.

  • The cluster formation and the secondary interface configuration must be performed as separate steps.

Configuring IPv6 Addresses for Secondary Interface

In NSX Advanced Load Balancer, you can configure mode6, ip6, and gateway6 instead of mode, ip, and gateway for an IPv6 interface. The interface configuration does not support dual-stack mode in 22.1.3, so an interface can have either an IPv4 address or an IPv6 address, but not both.

The SE_SECURE_CHANNEL label can be moved to the secondary interface to enable communication with Service Engines. This secondary interface can be either IPv4 or IPv6. This allows users to have different interfaces for management and Service Engine communication.
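
The configuration flow mirrors the IPv4 example shown earlier, substituting the IPv6 fields. The following is an illustrative sketch only, using the interface index and addresses from the sample output below; verify the exact sequence against your deployment:

[admin:controller]: > configure cluster
[admin:controller]: cluster> nodes index 1
[admin:controller]: cluster:nodes> interfaces index 3
[admin:controller]: cluster:nodes:interfaces> mode6 static
[admin:controller]: cluster:nodes:interfaces> ip6 2402:740:0:40e::20:3/128
[admin:controller]: cluster:nodes:interfaces> labels SE_SECURE_CHANNEL
[admin:controller]: cluster:nodes:interfaces> save
[admin:controller]: cluster:nodes> interfaces index 1
[admin:controller]: cluster:nodes:interfaces> no labels SE_SECURE_CHANNEL
[admin:controller]: cluster:nodes:interfaces> save
[admin:controller]: cluster:nodes> save
[admin:controller]: cluster> save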

A sample configuration for an IPv6 interface, with the SE_SECURE_CHANNEL label attached to the IPv6 interface, is shown below:

+-----------------+----------------------------------------------+
| Field           | Value                                        |
+-----------------+----------------------------------------------+
| uuid            | cluster-f29ed7c8-0da5-4fb6-87f7-e792584643b3 |
| name            | cluster-0-1                                  |
| nodes[1]        |                                              |
|   name          | 100.65.9.203                                 |
|   ip            | 100.65.9.203                                 |
|   vm_uuid       | 000000675c79                                 |
|   vm_mor        | vm-22988                                     |
|   vm_hostname   | node1.controller.local                       |
|   interfaces[1] |                                              |
|     if_name     | eth0                                         |
|     mac_address | 00:00:00:67:5c:79                            |
|     mode        | STATIC                                       |
|     ip          | 100.65.9.203/20                              |
|     gateway     | 100.65.15.254                                |
|     labels[1]   | MGMT                                         |
|     labels[2]   | HSM                                          |
|   interfaces[2] |                                              |
|     if_name     | eth1                                         |
|     mac_address | 00:00:00:1a:ab:e8                            |
|     mode        | STATIC                                       |
|     ip          | 100.65.14.66/20                              |
|   interfaces[3] |                                              |
|     if_name     | eth2                                         |
|     mac_address | 00:00:00:3e:8b:ef                            |
|     labels[1]   | SE_SECURE_CHANNEL                            |
|     mode6       | STATIC                                       |
|     ip6         | 2402:740:0:40e::20:3/128                     |
+-----------------+----------------------------------------------+

Updating the Configuration Following the Controller IP Address Change

The management IP address of each Controller node must be static. This applies to both single-node and three-node deployments.

The cluster configuration and runtime configuration contain the IP information for the cluster. If the IP address of a leader or follower node changes (for instance, due to DHCP), the change_ip.py script must be run to update this information. The cluster will not function properly until the cluster configuration is updated.

To repair the cluster configuration after the IP address of a Controller node has changed, run the change_ip.py script. This applies to both single-node and cluster deployments.

The script is located at /opt/avi/python/bin/cluster_mgr/change_ip.py.

Note:
  • The change_ip.py script only changes the NSX Advanced Load Balancer cluster configuration. It does not change the IP address of the host or the virtual machine on which the Controller services are running. For instance, it does not update the /etc/network/interfaces file in a VMware-hosted Controller. You must change the IP address for the virtual machine in the vApp properties in VMware.

  • Special consideration is required when changing the IP addresses of Controllers in a bare-metal configuration.

Script Options

Note:

Before running the script, make sure the new IPs are up on all nodes and are reachable across nodes. If one or more IPs are not accessible, the script makes a best-effort update, but there is no guarantee that the cluster will be back in sync when connectivity is restored.

The script can be run on the Controller node whose management IP address changed, or on another Controller node within the same cluster. It must, however, be run on a node that is a member of the cluster; if it is run on a node that is not in the cluster, the script fails.

-i ipaddr: Specifies the new IP address of the node on which the script is run.

-o ipaddr: Specifies the IP address of another node in the cluster.

-m subnet-mask: If the subnet also changed, use this option to specify the new subnet. Specify the mask in the 255.255.255.0 format.

-g gateway-ipaddr: If the default gateway also changed, use this option to specify the new gateway.

Note:

The -m and -g options apply to all IP addresses in the cluster.
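
For instance, a hypothetical single-node invocation that also updates the subnet mask and default gateway (all addresses are for illustration only) would be:

username@avi:~$ change_ip.py -i 10.10.25.81 -m 255.255.255.0 -g 10.10.10.1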

Updating IP Information for a Single-node Deployment

To update the Controller IP information for a single-node deployment, use a command string such as the following:

change_ip.py -i ipaddr

The following example is run on node 10.10.25.81. Since no other nodes are specified, the node is assumed to be a single-node cluster (just this Controller).

username@avi:~$ change_ip.py -i 10.10.25.81

In the following example, the node’s default gateway has also changed.

username@avi:~$ change_ip.py -i 10.10.25.81 -g 10.10.10.1

Updating IP Information for a Controller Cluster

Note:

Before executing change_ip.py, ensure all new IPs are reachable from one another over the SSH port (22 for regular deployments, 5098 for container-based deployments).
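
A quick way to check this, assuming a netcat utility is available on each node, is to probe the SSH port of every other node from each node. The example below is illustrative, and the output varies with the netcat variant:

username@avi:~$ nc -zv 10.10.25.82 22
Connection to 10.10.25.82 22 port [tcp/ssh] succeeded!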

To update Controller IP information for a cluster, use a command string such as:

change_ip.py -i ipaddr -o ipaddr -o ipaddr

Example:

username@avi:~$ change_ip.py -i 10.10.25.81 -o 10.10.25.82 -o 10.10.25.83

This command is run on node 10.10.25.81, which is a member of a 3-node cluster that also contains nodes 10.10.25.82 and 10.10.25.83.

The script can be run on any of the nodes in the cluster. The following example is run on node 10.10.25.82:

username@avi:~$ change_ip.py -i 10.10.25.82 -o 10.10.25.81 -o 10.10.25.83
Note:

If change_ip.py fails, use recover.py to convert the nodes to single nodes and create the 3-node cluster again. For more information, see Recover a Non-Operational Controller Cluster in the VMware NSX Advanced Load Balancer Administration Guide.

To verify that the system is functioning properly, go to the Controller Nodes page and ensure that all nodes are in the CLUSTER_ACTIVE state.

Steps to Change Controller IPs on Nutanix Cluster

The following are the steps to change the Controller IPs on a Nutanix cluster:

  1. Change the IP address of each Controller node within the cluster to the new IP by manually editing the network scripts on the host and changing the interface configuration.

  2. For instance, the /etc/network/interfaces file on the Controller virtual machine must be modified as follows (if using a static IP):

    auto lo
    iface lo inet loopback
    
    auto eth0
    iface eth0 inet static
     address <ipv4 address>
     netmask 24
     gateway <ipv4 gw>
  3. Ensure that the new Controller IP addresses are reachable in the network from the other Controller nodes.

  4. Run the /opt/avi/python/bin/cluster_mgr/change_ip.py script on the Controller to reflect the IP address change.

  5. Reboot the Controller.

For a 3-node cluster deployment, change the IPs on all the Controllers and then run the following command from any Controller node to update the Controller IP information for the cluster.

username@avi:~$ change_ip.py -i ipaddr -o ipaddr -o ipaddr

where:

  • -i ipaddr: Specifies the new IP address of the node on which the script is run.

  • -o ipaddr: Specifies the IP address of another node in the cluster.

  • -m subnet-mask: If the subnet is also changed, use this option to specify the new subnet. Specify the mask in the following format: 255.255.255.0

  • -g gateway-ipaddr: If the default gateway is also changed, use this option to specify the new gateway.

    Note:

    The Controller cluster must come back up with the new IPs.

Considerations

Note the following considerations:

  • The interface names (eth0, eth1, and so on) and the discovered MAC addresses are static and cannot be modified.

  • The primary (eth0) interface cannot be modified, apart from the labels.

  • The default gateway cannot be configured for the new interfaces.

  • Every label must be assigned to some interface, and a label cannot be assigned to more than one interface.

  • For the new interface, only Static IP mode is supported. DHCP is not supported.

  • Access controls are applied only to the primary interface. Continue to use external firewall settings to restrict access to the new interface, for instance, inbound SSH.

  • Do not edit the /etc/network/interfaces file. All configuration, such as IP addresses and static routes, must be done through the cluster configuration.

  • The secondary interfaces must remain in a connected state within the virtual machine. Disconnecting them may lead to the interface being removed if the virtual machine is rebooted.