If an unlikely disaster destroys the NSX Advanced Load Balancer Controller (or the entire cluster), the device or VM hosting the Controllers must first be restored to its factory default state using flushdb.sh. Failure to do so can prevent the Controller from coming up. If there is a prev partition, rename or delete it. The prev partition can be either root1 or root2; rename it with, for example, mv root1 prev_bak or mv root2 prev_bak.

Steps to check the Partition Mapping are listed below:

  1. sudo cat /proc/cmdline

    The output shows either root1 or root2 as the current partition.

    For example, root1 is the current partition in the output below.

    root=UUID=f4a947e1-7efb-4345-9eac-1ff680fc50e0 subroot=/root1 net.ifnames=0 biosdevname=0 console=tty0 console=ttyS0,115200n8
  2. Go to the /host directory and rename the prev partition as shown below (a combined sketch of both steps follows this list).

    cd /host
    ls -lrth              # check whether the root1 and root2 directories exist
    mv root2 prev_bak     # root2 is the prev partition in this example
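
The two renaming steps above can be combined into a small shell sketch. This is only a sketch, assuming the standard root1/root2 layout shown in the example output above; verify the detected partition before renaming anything.

# Detect the current partition from the kernel command line
current=$(grep -o 'subroot=/root[12]' /proc/cmdline | cut -d/ -f2)
# The other partition is the prev partition
if [ "$current" = "root1" ]; then prev=root2; else prev=root1; fi
cd /host
mv "$prev" prev_bak   # for example, root2 -> prev_bak when root1 is current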

Thereafter, the following script can be used to automate the configuration recovery process.

/opt/avi/scripts/restore_config.py

Note:

The prev partition must be renamed or removed before running the script.

This script imports the backup configuration onto the NSX Advanced Load Balancer Controller. If restoring a Controller cluster, this script restores the configuration and re-adds the other two nodes to the cluster.
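
The exact set of options accepted by the script varies across releases (see the note later in this section). Assuming the script supports a standard help flag, which most releases do, you can list the options available on your Controller with:

/opt/avi/scripts/restore_config.py --help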

  1. Create three new Controllers with the same IP address as the original cluster members. (NSX Advanced Load Balancer currently supports only static IP addresses). At this point, other than having an IP address, each Controller node must be in its factory default state.

  2. Log in to one of the NSX Advanced Load Balancer Controller nodes using SSH (or connect with SCP to copy files). Use the default administrator credentials.

  3. Copy the backup file to the Controller using SCP.

      scp /var/backup/avi_config.json admin@<controller-ip>:/tmp/avi_config.json

  4. Run the following restore command on the Controller through the SSH session.

    /opt/avi/scripts/restore_config.py --config CONFIG --passphrase PASSPHRASE --do_not_form_cluster DO_NOT_FORM_CLUSTER --flushdb --vip VIP --followers FOLLOWER_IP [FOLLOWER_IP ...]

In the above command line:

  • CONFIG is the path of the configuration file.

  • PASSPHRASE is the export configuration passphrase.

  • DO_NOT_FORM_CLUSTER causes cluster formation to be skipped.

  • VIP is the virtual IP address of the Controller.

  • FOLLOWER_IP [FOLLOWER_IP ...] is a list of the IP addresses of the followers.

  • CLUSTER_UUID (used with the optional cluster_uuid argument described later) is the old cluster UUID to be restored.
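
As an illustration, a restore that also reforms the cluster (on releases earlier than 22.1.3, which still accept the cluster options) might look like the following. The file path, passphrase, VIP, and follower addresses are placeholders, not values from a real deployment.

/opt/avi/scripts/restore_config.py --config /tmp/avi_config.json --passphrase <passphrase> --flushdb --vip 10.10.10.10 --followers 10.10.10.2 10.10.10.3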

Note:

Starting with NSX Advanced Load Balancer 22.1.3, the following options are no longer supported.

  1. --do_not_form_cluster

  2. --vip

  3. --followers
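
On 22.1.3 and later, the restore is therefore run with only the remaining options, and the cluster is reformed afterwards as described below. For example, with placeholder values:

/opt/avi/scripts/restore_config.py --config /tmp/avi_config.json --passphrase <passphrase> --flushdb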

Follow the steps below to restore the configuration in a cluster setup.

  1. Restore the configuration on one of the nodes.

  2. Reform the cluster by inviting the two new nodes to the cluster.

Restore Configuration for a Three-Node Cluster

For a three-node cluster, before running the script ensure the following conditions are met:

  • The follower nodes must be in a factory default state.

  • The two new Controllers must already be spawned (do not make any configuration changes to them).

Execute the following commands to reset a node to the factory default state.

sudo systemctl stop process-supervisor.service    # stop the Controller services
rm /var/lib/avi/etc/flushdb.done                  # remove the flushdb completion marker
/opt/avi/scripts/flushdb.sh                       # reset the configuration database to factory default
sudo systemctl start process-supervisor.service   # restart the Controller services

Optional arguments for the restore_config script:

  • --vip <virtual_IP_of_Controller> (virtual IP of the Controller in the case of a cluster)

  • --followers <follower_IP_1> <follower_IP_2> (IP addresses of the follower nodes)

  • --cluster_uuid <cluster_uuid> (the old cluster UUID that will be restored)

Run the following command with the optional arguments described above to restore a three-node cluster.

/opt/avi/scripts/restore_config.py --config <config_file> --flushdb --passphrase <passphrase> --followers <follower_IP_1> <follower_IP_2>

If the --followers argument is not included in the above command, the cluster can be formed manually from the UI, CLI, or API after the configuration is restored successfully. The other Controllers must be in a factory default state to be accepted into the cluster.
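
If the cluster is reformed through the API, the cluster configuration on the restored node can be updated with the follower addresses. The following curl sketch is only an outline: it assumes basic authentication is permitted and uses placeholder credentials and addresses, and the exact fields and required headers can vary by release, so confirm against the API documentation for your version.

curl -k -u admin:<password> -X PUT https://<leader-ip>/api/cluster \
  -H "Content-Type: application/json" \
  -d '{
        "name": "cluster-0-1",
        "nodes": [
          {"ip": {"type": "V4", "addr": "<leader-ip>"}},
          {"ip": {"type": "V4", "addr": "<follower_IP_1>"}},
          {"ip": {"type": "V4", "addr": "<follower_IP_2>"}}
        ]
      }'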

During a restore, the Service Engine (SE) package is re-signed, and existing SE packages are deleted.

If a restore config fails due to a configuration import error, the logs can be found in /opt/avi/log/portal-webapp.log.
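
For a quick check of whether the import failed, that log can be searched for errors, for example:

grep -iE "error|traceback" /opt/avi/log/portal-webapp.log | tail -n 50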