This topic answers basic questions about Avi Load Balancer Controller Clusters.
How many nodes are supported in an Avi Load Balancer Controller cluster?
An Avi Load Balancer Controller cluster can include either one Controller node (standalone mode) or three Controller nodes.
How many nodes are needed for the Avi Load Balancer Controller cluster to be operational?
A three-node cluster requires a quorum (majority) of the nodes to be present for the cluster to be operational. That is, at least two of the three nodes must be up.
Can you explain how the three nodes in an Avi Load Balancer Controller cluster are used during normal operation?
Among the three nodes, a leader node is elected; it orchestrates process startup and shutdown across all active members of the cluster. The configuration and metrics databases are active on the leader node and run in standby mode on the follower nodes.
Streaming replication synchronizes the active databases to the standby databases. Analytics functionality is shared among all the nodes of the cluster; a given virtual service streams its logs and metrics to one of the nodes of the cluster.
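As a hedged illustration only, the current leader and follower roles can typically be inspected through the Controller REST API. The /api/cluster/runtime endpoint shown here and the role names in its response are assumptions that can vary by release; the address and credentials are placeholders.
# query the cluster runtime from any Controller node (endpoint and response fields are assumptions; verify for your release)
curl -k -u admin:<password> https://<controller-ip>/api/cluster/runtime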
What happens if a follower node goes down?
The failed node is removed from the active member list. The work that was performed on this node is redistributed among the remaining nodes in the Avi Load Balancer Controller cluster.
What happens if the leader node goes down?
One of the follower nodes is promoted to leader. This triggers a warm restart of the processes on the remaining nodes in the cluster. The warm restart is required to promote the configuration and metrics databases from standby to active on the new leader.
Note: During the warm restart, the Controller REST API is unavailable for a period of 2-3 minutes.
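As an informal sketch (not an official procedure), the return of the REST API after a failover can be confirmed by polling it until it responds. The endpoint, address, and credentials below are placeholders and assumptions:
# poll until the Controller REST API responds again after the warm restart
until curl -k -s -f -u admin:<password> https://<controller-ip>/api/cluster/runtime > /dev/null; do
  sleep 10
done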
What happens if two nodes go down?
The entire cluster becomes non-operational until at least one of the down nodes comes back up. A quorum (two out of three) of the nodes must be up for the cluster to be operational.
Will there be a disruption to traffic during cluster convergence (single or multiple nodes go down and come back)?
While single or multiple nodes are down and the cluster is non-operational, the SEs continue to forward traffic for the configured virtual services. This is referred to as headless mode.
The analytics (logs and metrics) are buffered on the SEs. When the Controller cluster is once again operational, the cluster re-synchronizes with the SEs and initiates the collection of metrics and logs. Data plane traffic continues to flow normally throughout this time.
How do I recover a non-operational cluster where two of the three nodes are permanently down and not recoverable?
For detailed information, see Recover a Non-Operational Controller Cluster.
How do I recover the system if all three Avi Load Balancer Controller nodes are permanently down and not recoverable?
For detailed information, see Backup and Restore of NSX Advanced Load Balancer Configuration.
How do I re-configure the Controller cluster membership?
For detailed information, see Changing NSX Advanced Load Balancer Controller Cluster Configuration.
Can the Avi Load Balancer Controller nodes in a cluster have different resource allocations (memory, CPU, and drive space)?
It is recommended to allocate the same resources to each of the three nodes within the Controller cluster. Over time, if the Avi Load Balancer Controllers in the cluster need to be re-sized (resource allocations changed) to accommodate growth, change the resource allocations on one Controller node at a time. However, all nodes within the cluster are expected to eventually be resized to the same allocations.
Can the Avi Load Balancer Controllers in a cluster be in different subnets?
Yes. This configuration is supported and can even be preferred for certain topologies for improved fault tolerance. However, a limitation with placing the Controllers in separate subnets is that the cluster IP address is not supported. The cluster IP address requires all Controller nodes to be in the same subnet.
Can the cluster leader be changed manually?
No. Currently, this is not a supported operation. The leader is chosen during initial deployment of the cluster or when recovering a fully down cluster. However, the leader cannot be manually changed while the cluster is operational.
The Avi Load Balancer Controller nodes participate in the election of their leader and automatically decide which node becomes the new leader in case of a failure.
What timers are used during cluster failover? Are the cluster failover timers configurable?
- Leader failure: If the leader Controller of the cluster fails, an internal warm restart of the Controller processes is triggered. Typically, this takes around 2-3 minutes after it is detected that the leader has failed.
- Graceful reboot of leader: Failure detection is almost instantaneous in the case of a graceful reboot.
- Ungraceful power-off of leader: In the case of an ungraceful power-off of the leader, it can take up to 30 seconds to detect that the leader has failed.
These timers are tuned based on testing but are not configurable.
Can configuration changes be made directly on follower Controllers?
Yes. Nearly all configuration changes can be made on any of the Controller nodes (an illustration follows the exceptions below).
Exceptions:
- Configuration of the cluster itself (node IP addresses and the cluster IP address).
- Upgrade, which is performed only on the leader node.
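As a hedged illustration, follower nodes accept the same REST API calls as the leader. The /api/virtualservice endpoint is shown only as an example; the address and credentials are placeholders:
# list virtual services against a follower node's API; address and credentials are placeholders
curl -k -u admin:<password> https://<follower-ip>/api/virtualservice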
What is the recommended cluster deployment option for multiple data centers in different regions?
Avi Load Balancer recommends deploying the Controller cluster within a single region, with the member nodes deployed across multiple availability zones within that region. If there are multiple regions, it is recommended to deploy one Controller cluster per region. For more information, see Clustering NSX Advanced Load Balancer Controllers of Different Networks.
How do I export and import the Avi Load Balancer metrics database?
Prior to upgrading to version 22.1.2, it is recommended to export the metrics database. To export the metrics database, copy the /var/lib/postgresql/10/pg_metrics/ directory and store it outside the Controller VM. If you need to roll back to a previous release, the exported metrics can be imported, preventing the loss of data.
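For example, a minimal sketch of the export, assuming SSH and SCP access to the Controller; the archive name and backup destination are placeholders:
# on the Controller: archive the metrics database directory
tar -czf /tmp/pg_metrics_backup.tar.gz -C /var/lib/postgresql/10 pg_metrics
# copy the archive off the Controller VM (destination host and path are placeholders)
scp /tmp/pg_metrics_backup.tar.gz backup-host:/backups/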
To import the metrics database (a consolidated command sketch follows the note below):
Stop the process supervisor by running the following command first on the follower nodes and then on the leader.
systemctl stop process-supervisor
Move the copied pg_metrics directory to /var/lib/postgresql/10/pg_metrics/ on the leader.
Run the following command to start the process supervisor first on the leader and then on the followers, so that the imported pg_metrics data is replicated from the leader to the followers.
python3 /opt/avi/scripts/pg_rec_fix_cluster.py --nodelist leader_ip follower_1_ip follower_2_ip
Note: It can take some time for the cluster to become HA_ACTIVE, depending on the size of the metrics database.
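For reference, a consolidated and hedged sketch of the import sequence above; the IP addresses and archive location are placeholders, and each command must be run on the node indicated in the comment:
# on each follower, then on the leader: stop the process supervisor
systemctl stop process-supervisor
# on the leader: restore the previously exported pg_metrics directory (move any existing directory aside first if needed)
tar -xzf /tmp/pg_metrics_backup.tar.gz -C /var/lib/postgresql/10
# start the cluster again so the imported data is replicated from the leader to the followers
python3 /opt/avi/scripts/pg_rec_fix_cluster.py --nodelist leader_ip follower_1_ip follower_2_ip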