VMware Avi Load Balancer introduces an improved system restore capability in version 30.2.1. The new restore capability provides visibility into the progress of the restore process.
System restore now runs a series of pre-checks to ensure that the new Controller is compatible with the Avi Load Balancer configuration being restored. The restore script used in previous versions of the Avi Load Balancer Controller has been replaced with a CLI/API method. The earlier restore process has been discontinued.
Restore must be performed on a single Controller; form the three-node cluster afterwards, if needed.
Make sure to use a configuration file that was exported with the full_system option.
For the SEs to reconnect to the Controllers after a disaster recovery operation, ensure that at least one of the new Controller IPs matches one of the previous Controller IPs.
During a disaster recovery activity, the Cloud Services-registered Controller must be deregistered first; only then can the configuration file be imported.
This restore process consists of importing the backup configuration onto the Avi Load Balancer Controller.
Before you start the system restore process, make sure the following are available on the Avi Load Balancer Controller:
Active Images in the System (Controller and SE Groups).
Follow the steps mentioned in Upgrading Avi Controller to upload the System Images.
Active Patches in the System (Controller and SE Groups).
Follow the steps mentioned in Patch Upgrades for Avi Controller to upload the Patch Images.
Identifying Images and Files required for Controller Restore:
Perform the prechecks_only operation for the Controller restore. It is safe to run this operation at any time; it only runs the pre-checks and does not start the restore operation.
Run the following command in the CLI:
restore configuration file <path/to/avi_config.json> prechecks_only
Use the show upgrade status command to monitor the pre-check status, and use show upgrade status detail filter pre_check_status to view the pre-check warnings and errors.
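For example, assuming the backup file has already been copied to /tmp/avi_config.json (a hypothetical path used only for illustration), a pre-check run and its monitoring could look like the following:
restore configuration file /tmp/avi_config.json prechecks_only
show upgrade status
show upgrade status detail filter pre_check_status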
Uploading the System Images and Patch Images is mandatory for performing the Controller restore operation; a quick verification example follows these notes.
Uploading file objects is optional. You can choose to upload them after the restore, if required.
In a GSLB deployment, re-upload the GSLB Geo-DB files after the restore operation is completed.
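As a sanity check before starting the restore, you can confirm that the required system image is already present on the Controller. This is a minimal sketch; it assumes the show image command is available in your CLI version and that the listed image name matches the version string expected by the pre-checks (for example, 30.2.1-9044-20240404.235217 in the outputs below):
show image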
Follow the steps below to start the restore process:
Copy the backup file to the Controller using SCP. For example, use the scp /var/backup/avi_config.json admin@<controller-ip>:/tmp/avi_config.json command to copy the file to the Controller.
Log in to the Avi Load Balancer Controller CLI using the administrator credentials.
Run the following restore command in the CLI. The passphrase is the same password that was used while creating the backup file.
restore configuration file <path/to/file> passphrase <passphrase>
[admin:ctrl-3node1]: > restore configuration file /home/admin/config.json Please enter the passphrase for the configuration: +-------------+------------------------------------------------------------------------------------+ | Field | Value | +-------------+------------------------------------------------------------------------------------+ | checks | | | | 'Checking Controller Cluster readiness for Restore operations.' | | | 'Check if upgrade operation is already in progress.' | | | 'Checking ServiceEngineGroup has an ongoing upgrade operation.' | | | 'Checking if backup config version can be restored on current controller version.' | | | 'Checking if backup config version has FIPS enabled.' | | | 'Checking if all images objects from the config are present on the controller.' | | | 'Checking if the configuation is valid.' | | | 'Checking if FIPS mode of backup configuration against controller environment.' | | | 'Checking Controller Cluster disk space for Restore operations.' | | | 'Checking if active versions post restore exceeds max allowed active versions.' | | | 'Checking if system package exists in disk.' | | | 'Checking if migration across versions are required for restore operation.' | | | 'Checking user consent prior to restore operations.' | | | 'Checking if all file objects from the config are present on the controller.' | | | 'Checking if the Service Engines are attached to the controller.' | | | 'Checking the system configuration.' | | | 'Checking for the patch applied as part of restore.' | | status | Checks preview for the requested operation. | | status_code | SYSERR_UPGRADE_OPS_PREVIEW_RESPONSE | +-------------+------------------------------------------------------------------------------------+ +--------+---------------------------------------------------------------------------------+ | Field | Value | +--------+---------------------------------------------------------------------------------+ | status | 'Restore of Controller started. Use 'show upgrade status' to check the status.' | +--------+---------------------------------------------------------------------------------+
Check the status of the restore step using the show upgrade status command, as shown below.
[admin:ctrl-3node1]: > show upgrade status +---------------+--------+---------------+--------------------------------------+--------------+-----------------------------+-------+--------+ | Name | Tenant | Cloud | State | Operation | Image | Patch | Reason | +---------------+--------+---------------+--------------------------------------+--------------+-----------------------------+-------+--------+ | cluster-0-1 | admin | - | UPGRADE_PRE_CHECK_IN_PROGRESS :(95)% | EVAL_RESTORE | 30.2.1-9044-20240404.235217 | - | - | | Default-Group | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | | 100.65.9.248 | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | | 100.65.9.249 | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | +---------------+--------+---------------+--------------------------------------+--------------+-----------------------------+-------+--------+ [admin:ctrl-3node1]: > show upgrade status +---------------+--------+---------------+-------------------------+--------------+-----------------------------+-------+----------------------------------------------------------+ | Name | Tenant | Cloud | State | Operation | Image | Patch | Reason | +---------------+--------+---------------+-------------------------+--------------+-----------------------------+-------+----------------------------------------------------------+ | cluster-0-1 | admin | - | UPGRADE_PRE_CHECK_ERROR | EVAL_RESTORE | 30.2.1-9044-20240404.235217 | - | Use 'show upgrade status detail filter pre_check_status' | | Default-Group | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | | 100.65.9.248 | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | | 100.65.9.249 | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | +---------------+--------+---------------+-------------------------+--------------+-----------------------------+-------+----------------------------------------------------------+ [admin:ctrl-3node1]: > show upgrade status detail filter pre_check_status +--------------------+----------------------------------------------------------------------------------+ | Field | Value | +--------------------+----------------------------------------------------------------------------------+ | uuid | cluster-d5d2c6a2-0437-49b8-9131-5599d139106f | | name | cluster-0-1 | | node_type | NODE_CONTROLLER_CLUSTER | | upgrade_ops | UPGRADE | | upgrade_readiness | | | checks[1] | | | check_code | SYSERR_MC_CONFIG_IMAGES_ERR | | description | Required images are missing in the controller. | | details[1] | Image(s): 30.2.1-9044-20240404.235217 are missing on the controller. Please uplo | | | ad them. | | state | UPGRADE_PRE_CHECK_ERROR | | start_time | 2024-04-05 04:48:34 | | end_time | 2024-04-05 04:48:34 | | duration | 0 sec | | checks[2] | | | check_code | SYSERR_MC_CONTROLLER_PACKAGE_ERR | | description | 'System package does not exist.' | | details[1] | Package image://30.2.1-9044-20240404.235217/controller.pkg does not exist in [no | | | de1.controller.local] | | state | UPGRADE_PRE_CHECK_ERROR | | start_time | 2024-04-05 04:48:34 | | end_time | 2024-04-05 04:48:34 | | duration | 0 sec | | checks[3] | | | check_code | SYSERR_MC_CONSENT_ERR | | description | Get User's consent prior to restore operations. | | details[1] | Restoring config will duplicate the environment of the config. 
If the environmen | | | t is active elsewhere there will be conflicts. | | state | UPGRADE_PRE_CHECK_WARNING | | start_time | 2024-04-05 04:48:34 | | end_time | 2024-04-05 04:48:34 | | duration | 0 sec | | checks[4] | | | check_code | SYSERR_CHECK_CONFIG_SE_ERR | | description | Service Engines are attached to the controller | | details[1] | The service engines(se-005056afbb3a, se-005056af170a) attached to the controller | | | might loose the connection post restore. | | state | UPGRADE_PRE_CHECK_WARNING | | start_time | 2024-04-05 04:48:34 | | end_time | 2024-04-05 04:48:34 | | duration | 0 sec | +--------------------+----------------------------------------------------------------------------------+
In the example above, the restore pre-checks failed with a status code. Check the reason and take the appropriate action. In the example below, the restore is run again with skip_warnings after uploading the image 30.2.1-9044-20240404.235217.
[admin:ctrl-3node1]: > restore configuration file /home/admin/config.json skip_warnings Please enter the passphrase for the configuration: +-------------+------------------------------------------------------------------------------------+ | Field | Value | +-------------+------------------------------------------------------------------------------------+ | checks | | | | 'Checking Controller Cluster readiness for Restore operations.' | | | 'Check if upgrade operation is already in progress.' | | | 'Checking ServiceEngineGroup has an ongoing upgrade operation.' | | | 'Checking if backup config version can be restored on current controller version.' | | | 'Checking if backup config version has FIPS enabled.' | | | 'Checking if all images objects from the config are present on the controller.' | | | 'Checking if the configuation is valid.' | | | 'Checking if FIPS mode of backup configuration against controller environment.' | | | 'Checking Controller Cluster disk space for Restore operations.' | | | 'Checking if active versions post restore exceeds max allowed active versions.' | | | 'Checking if system package exists in disk.' | | | 'Checking if migration across versions are required for restore operation.' | | | 'Checking user consent prior to restore operations.' | | | 'Checking if all file objects from the config are present on the controller.' | | | 'Checking if the Service Engines are attached to the controller.' | | | 'Checking the system configuration.' | | | 'Checking for the patch applied as part of restore.' | | status | Checks preview for the requested operation. | | status_code | SYSERR_UPGRADE_OPS_PREVIEW_RESPONSE | +-------------+------------------------------------------------------------------------------------+ +--------+---------------------------------------------------------------------------------+ | Field | Value | +--------+---------------------------------------------------------------------------------+ | status | 'Restore of Controller started. Use 'show upgrade status' to check the status.' | +--------+---------------------------------------------------------------------------------+ [admin:ctrl-3node1]: > show upgrade status +---------------+--------+---------------+----------------------------+-----------+-----------------------------+-------+--------+ | Name | Tenant | Cloud | State | Operation | Image | Patch | Reason | +---------------+--------+---------------+----------------------------+-----------+-----------------------------+-------+--------+ | cluster-0-1 | admin | - | UPGRADE_FSM_WAITING : (0)% | RESTORE | 30.2.1-9044-20240404.235217 | - | - | | Default-Group | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | | 100.65.9.248 | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | | 100.65.9.249 | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | +---------------+--------+---------------+----------------------------+-----------+-----------------------------+-------+--------+
Use the show upgrade status command and the show upgrade status detail filter controller command to check the restore progress.
[admin:ctrl-3node1]: > show upgrade status +---------------+--------+---------------+-----------------------+-----------+-----------------------------+-------+--------+ | Name | Tenant | Cloud | State | Operation | Image | Patch | Reason | +---------------+--------+---------------+-----------------------+-----------+-----------------------------+-------+--------+ | cluster-0-1 | admin | - | UPGRADE_FSM_COMPLETED | RESTORE | 30.2.1-9044-20240404.235217 | - | - | | Default-Group | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | | 100.65.9.249 | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | | 100.65.9.248 | admin | Default-Cloud | UPGRADE_FSM_INIT | None | 30.2.1-9044-20240404.235217 | - | - | +---------------+--------+---------------+-----------------------+-----------+-----------------------------+-------+--------+
[admin:ctrl-3node1]: > show upgrade status detail filter controller +-----------------------+----------------------------------------------------------------------------------+ | Field | Value | +-----------------------+----------------------------------------------------------------------------------+ | uuid | cluster-08ea38c6-5cec-4cf3-96c9-a40e08cfd633 | | name | cluster-0-1 | | node_type | NODE_CONTROLLER_CLUSTER | | upgrade_ops | RESTORE | | version | 30.2.1-9044-20240404.235217 | | image_ref | 30.2.1-9044-20240404.235217 | | state | | | state | UPGRADE_FSM_COMPLETED | | last_changed_time | Fri Apr 5 07:13:19 2024 ms(781038) UTC | | rebooted | True | | upgrade_events[1] | | | task_name | GenerateUpgradeEvent_RESTORE_CONTROLLER_STARTED | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:00:42 | | end_time | 2024-04-05 07:00:42 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:00:42 2024]Generated Event RESTORE_CONTROLLER_STARTED. | | upgrade_events[2] | | | task_name | MarkUpgradeState_UPGRADE_FSM_IN_PROGRESS | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:00:42 | | end_time | 2024-04-05 07:00:42 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:00:42 2024]Marked upgrade request to UPGRADE_FSM_IN_PROGRESS. | | upgrade_events[3] | | | task_name | WaitForAllNodesBeforeReboot | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:00:42 | | end_time | 2024-04-05 07:00:42 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:00:42 2024]Joined with all followers. | | upgrade_events[4] | | | task_name | InitiateControllerShutdown | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:00:42 | | end_time | 2024-04-05 07:00:49 | | status | True | | message | Success | | duration | 7 sec | | sub_tasks[1] | [Fri 05 Apr 2024 07:00:45 AM UTC] Cluster manager request to shutdown controll | | | er services. | | upgrade_events[5] | | | task_name | InstallController | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:00:49 | | end_time | 2024-04-05 07:05:18 | | status | True | | message | Success | | duration | 269 sec | | sub_tasks[1] | [Fri 05 Apr 2024 07:00:51 AM UTC] process-supervisor is stopped.::UC | | | systcemtl status: inactive | | | UC:: [Fri 05 Apr 2024 07:03:07 AM UTC] Wait for postgresql to stop service statu | | | s:inactive ::UC | | | UC:: [Fri 05 Apr 2024 07:03:07 AM UTC] Starting postgresql.service on Leader.::U | | | C | | | UC:: [Fri 05 Apr 2024 07:03:08 AM UTC] Export configuration on Leader.::UC | | | UC:: [Fri 05 Apr 2024 07:03:08 AM UTC] Take Backup on Leader.::UC | | | 2024/04/05 07:03:08.209 [D] init global config instance failed. If y | | | ou donot use this, just ignore it. open conf/app.conf: no such file or director | | | y | | | 2024/04/05 07:03:12.200 [D] init global config instance failed. If y | | | ou donot use this, just ignore it. open conf/app.conf: no such file or director | | | y | | | UC:: [Fri 05 Apr 2024 07:03:19 AM UTC] VM image install.:30.2.1-9044-20240404.23 | | | 5217::UC | | | UC:: [Fri 05 Apr 2024 07:05:18 AM UTC] Patch to-be-applied from None. 
| | upgrade_events[6] | | | task_name | RestorePreRebootOps | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:05:18 | | end_time | 2024-04-05 07:05:18 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:05:18 2024] Syncing data from DB to config is completed. | | upgrade_events[7] | | | task_name | SwitchAndReboot | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:05:18 | | end_time | 2024-04-05 07:05:18 | | status | True | | message | Success | | duration | 0 sec | | upgrade_events[8] | | | task_name | WaitForAllNodesAfterReboot | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:06:35 | | end_time | 2024-04-05 07:06:35 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:06:35 2024]Joined with all followers. | | upgrade_events[9] | | | task_name | ReadPatchLogs | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:06:35 | | end_time | 2024-04-05 07:06:37 | | status | True | | message | Success | | duration | 2 sec | | sub_tasks[1] | No patch applied. | | upgrade_events[10] | | | task_name | RestartNGINX | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:06:37 | | end_time | 2024-04-05 07:06:39 | | status | True | | message | Success | | duration | 2 sec | | sub_tasks[1] | [Fri 05 Apr 2024 07:06:39 AM UTC] Restart NGINX service. | | upgrade_events[11] | | | task_name | MigrateConfig | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:06:39 | | end_time | 2024-04-05 07:10:04 | | status | True | | message | Success | | duration | 205 sec | | upgrade_events[12] | | | task_name | StartControllerOnSelf | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:10:04 | | end_time | 2024-04-05 07:10:04 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:10:04 2024]Started process-supervisor on self node. | | upgrade_events[13] | | | task_name | WaitUntilClusterReadyLocally | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:10:04 | | end_time | 2024-04-05 07:12:34 | | status | True | | message | Success | | duration | 150 sec | | sub_tasks[1] | [Fri Apr 5 07:10:04 2024]Waiting for cluster to be ready.::UC | | | UC::[Fri Apr 5 07:12:34 2024]Cluster is ready. | | upgrade_events[14] | | | task_name | StartControllerOnFollowers | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:12:34 | | end_time | 2024-04-05 07:12:36 | | status | True | | message | Success | | duration | 2 sec | | sub_tasks[1] | [Fri 05 Apr 2024 07:12:36 AM UTC] Leader already started. Skip step. | | upgrade_events[15] | | | task_name | WaitUntilClusterReady | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:12:36 | | end_time | 2024-04-05 07:13:10 | | status | True | | message | Success | | duration | 34 sec | | sub_tasks[1] | [Fri Apr 5 07:12:36 2024]Waiting for cluster to be ready.::UC | | | UC::[Fri Apr 5 07:13:10 2024]Cluster is ready. 
| | upgrade_events[16] | | | task_name | PostRestoreOps | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:10 | | end_time | 2024-04-05 07:13:10 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:13:10 2024] Signed all se pkgs after config restore | | upgrade_events[17] | | | task_name | SyncClusterData | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:10 | | end_time | 2024-04-05 07:13:11 | | status | True | | message | Success | | duration | 1 sec | | sub_tasks[1] | [Fri Apr 5 07:13:10 2024]Saving cluster config on leader.::UC | | | UC::[Fri Apr 5 07:13:11 2024]Interfaces and routes data synced across cluster. | | upgrade_events[18] | | | task_name | UpdateRequest | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:11 | | end_time | 2024-04-05 07:13:11 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:13:11 2024]Updated request with missing data. | | upgrade_events[19] | | | task_name | SaveTaskJournals | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:11 | | end_time | 2024-04-05 07:13:11 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:13:11 2024] Saved Task Journals Uuids taskjournal-cad449c9-c50d-4 | | | f7d-8e7e-8a67f1b4bf28, taskjournal-a67316e0-65cd-4032-8371-2874f4393a40, taskjou | | | rnal-af9c1be5-6372-40fb-bf4b-b28994853758, taskjournal-fd60d9f5-9151-4a18-9dbc-4 | | | 3f63921b494, Errors | | upgrade_events[20] | | | task_name | BlockConfiguration | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:11 | | end_time | 2024-04-05 07:13:11 | | status | True | | message | Success | | duration | 0 sec | | upgrade_events[21] | | | task_name | PausePlacement | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:11 | | end_time | 2024-04-05 07:13:11 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:13:11 2024]Pause placement. 
| | upgrade_events[22] | | | task_name | SignalCMDToFollowersUpgradeCompleted | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:11 | | end_time | 2024-04-05 07:13:11 | | status | True | | message | Success | | duration | 0 sec | | upgrade_events[23] | | | task_name | RestartServiceLocally | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:11 | | end_time | 2024-04-05 07:13:13 | | status | True | | message | Success | | duration | 2 sec | | sub_tasks[1] | [Fri Apr 5 07:13:13 2024] Restarted services remote_task_manager.service | | upgrade_events[24] | | | task_name | CleanUp | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:13 | | end_time | 2024-04-05 07:13:16 | | status | True | | message | Success | | duration | 3 sec | | upgrade_events[25] | | | task_name | InvokeCloudDiscovery | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:16 | | end_time | 2024-04-05 07:13:16 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:13:16 2024]Invoked cloud discovery successfully | | upgrade_events[26] | | | task_name | PostRestoreValidation | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:16 | | end_time | 2024-04-05 07:13:16 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:13:16 2024] Running post restore validations CountValidation: | | | Successfully restored: | | | 1 Cloud(s) | | | 1 ServiceEngineGroup(s) | | upgrade_events[27] | | | task_name | UnBlockConfiguration | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:16 | | end_time | 2024-04-05 07:13:16 | | status | True | | message | Success | | duration | 0 sec | | upgrade_events[28] | | | task_name | ResumePlacement | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:16 | | end_time | 2024-04-05 07:13:16 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:13:16 2024]Resumed placement. | | upgrade_events[29] | | | task_name | SetFullUpgradeCompleted | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:16 | | end_time | 2024-04-05 07:13:19 | | status | True | | message | Success | | duration | 3 sec | | sub_tasks[1] | [Fri Apr 5 07:13:16 2024]Full upgrade completed successfully. | | upgrade_events[30] | | | task_name | GenerateUpgradeEvent_RESTORE_CONTROLLER_COMPLETED | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:19 | | end_time | 2024-04-05 07:13:19 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:13:19 2024]Generated Event RESTORE_CONTROLLER_COMPLETED. | | upgrade_events[31] | | | task_name | MarkUpgradeState_UPGRADE_FSM_COMPLETED | | sub_events[1] | | | ip | node1.controller.local | | start_time | 2024-04-05 07:13:19 | | end_time | 2024-04-05 07:13:19 | | status | True | | message | Success | | duration | 0 sec | | sub_tasks[1] | [Fri Apr 5 07:13:19 2024]Marked upgrade request to UPGRADE_FSM_COMPLETED. 
| | start_time | 2024-04-05 07:00:34 | | end_time | 2024-04-05 07:13:19 | | duration | 765 | | enable_rollback | False | | enable_patch_rollback | False | | total_tasks | 31 | | tasks_completed | 31 | | system | True | | progress | 100 percent | | image_path | /host/pkgs/30.2.1-9044-20240404.235217/controller.pkg | | upgrade_readiness | | | state | | | state | UPGRADE_PRE_CHECK_SUCCESS | | last_changed_time | Fri Apr 5 07:00:33 2024 ms(614788383) UTC | | checks[1] | | | check_code | SYSERR_CHECK_CONFIG_VERSION | | description | 'Checking if backup config version can be restored on current controller version | | | .' | | details[1] | Current controller version(30.2.1) supports restore from requested config versio | | | n 30.2.1. | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:22 | | duration | 0 sec | | checks[2] | | | check_code | SYSERR_CHECK_CLUSTER_STATE | | description | 'Checking Controller Cluster readiness for Restore operations.' | | details[1] | All Cluster nodes are in ACTIVE state. Cluster is ready for Upgrade operations. | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:22 | | duration | 0 sec | | checks[3] | | | check_code | SYSERR_CHECK_SE_GROUP_UPGRADE_OPS_INPROGRESS | | description | 'Checking ServiceEngineGroup has an ongoing upgrade operation.' | | details[1] | Upgrade operations like Upgrade/Patch/Rollback/RollbackPatch are not currently t | | | aking place for ServiceEngineGroups. | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:22 | | duration | 0 sec | | checks[4] | | | check_code | SYSERR_CHECK_CONFIG_FIPS | | description | 'Checking if backup config version has FIPS enabled.' | | details[1] | Requested config has FIPS as false and the current controller setup has FIPS as | | | false. | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:22 | | duration | 0 sec | | checks[5] | | | check_code | SYSERR_CHECK_CONFIG_IMAGES | | description | 'Checking if all images objects from the config are present on the controller.' | | details[1] | Validated images (30.2.1-9044-20240404.235217) for restore. | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:22 | | duration | 0 sec | | checks[6] | | | check_code | SYSERR_CHECK_CONFIG_ENV | | description | 'Checking if FIPS mode of backup configuration against controller environment.' | | details[1] | In the requested configuration, FIPS is set to false hence skipping this check | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:22 | | duration | 0 sec | | checks[7] | | | check_code | SYSERR_CHECK_CONFIG_ACTIVE_VERSIONS | | description | 'Checking if active versions post restore exceeds max allowed active versions.' | | details[1] | System Active versions:[30.2.1] are with in the limit of maximum allowable activ | | | e versions[2]. | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:22 | | duration | 0 sec | | checks[8] | | | check_code | SYSERR_CHECK_CONTROLLER_PACKAGE | | description | 'Checking if system package exists in disk.' | | details[1] | Validated controller package for the current version. 
| | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:22 | | duration | 0 sec | | checks[9] | | | check_code | SYSERR_CHECK_VERSION_MIGRATION | | description | 'Checking if migration across versions are required for restore operation.' | | details[1] | Configuration version 30.2.1 is same as controller version 30.2.1. | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:22 | | duration | 0 sec | | checks[10] | | | check_code | SYSERR_CHECK_CLUSTER_DISK_SPACE | | description | 'Checking Controller Cluster disk space for Restore operations.' | | details[1] | Node 100.65.9.244 has enough disk space to perform an upgrade operation, (Path:' | | | /', Available: 87GB, Required: 10GB). | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:23 | | duration | 0 sec | | checks[11] | | | check_code | SYSERR_UPGRADE_OPS_IN_PROGRESS | | description | 'Check if upgrade operation is already in progress.' | | details[1] | Upgrade operations like Upgrade/Patch/Rollback/RollbackPatch are not currently t | | | aking place for Controllers. | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:24 | | duration | 2 sec | | checks[12] | | | check_code | SYSERR_CHECK_CONFIG | | description | 'Checking if the configuation is valid.' | | details[1] | Validated requested config for restore. | | state | UPGRADE_PRE_CHECK_SUCCESS | | start_time | 2024-04-05 07:00:22 | | end_time | 2024-04-05 07:00:33 | | duration | 11 sec | | start_time | 2024-04-05 07:00:21 | | end_time | 2024-04-05 07:00:33 | | duration | 12 sec | | upgrade_ops | EVAL_RESTORE | | image_ref | 30.2.1-9044-20240404.235217 | | total_checks | 12 | | checks_completed | 12 | | system_report_refs[1] | restore_controller_dee1 | | tenant_ref | admin | | fips_mode | False | +-----------------------+----------------------------------------------------------------------------------+
Use the show serviceengine command to verify the status of the Service Engines after the restore:
[admin:ctrl-3node1]: > show serviceengine +---------------------------+-----------------+------------------+--------------+---------------+------------+ | Name | Uuid | SE Group | Mgmt IP | Cloud | Oper State | +---------------------------+-----------------+------------------+--------------+---------------+------------+ | 100.65.9.248 | se-005056af9f9e | Default-Group | 100.65.9.248 | Default-Cloud | OPER_UP | | 100.65.9.249 | se-005056afa8dd | Default-Group | 100.65.9.249 | Default-Cloud | OPER_UP | +---------------------------+-----------------+------------------+--------------+---------------+------------+ [admin:ctrl-3node1]
Follow the steps below to restore the configuration in a cluster setup.
Restore the configuration on one of the nodes.
Re-form the cluster by adding the two new nodes to the cluster, as sketched below.
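The following is a minimal sketch of the cluster re-formation step from the CLI. It assumes the configure cluster context is available in your version and uses <node2-ip> and <node3-ip> as placeholder management IPs for the two new nodes; the exact sub-command flow can vary between releases, and the cluster can also be re-formed from the UI:
configure cluster
nodes name node2 ip <node2-ip>
save
nodes name node3 ip <node3-ip>
save
save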
Restoring the Avi Load Balancer Controller Using the API Method
The API request accepts the exported configuration file and the passphrase with which the configuration file was exported. The following are the API specifications used to restore an Avi Load Balancer Controller.
API: /api/configuration/restore
Method: POST
Body: {"passphrase": "<passphrase>", "skip_warnings": true}
Content-Type: multipart/form-data.
A single configuration backup file must be attached to the request.
root@ctrl-3node1:/home/admin# curl -X POST https://100.65.9.244/api/configuration/restore -k --user admin:admin --form file='@/home/admin/config.json' -F passphrase=**** -F consent=true --header 'X-Avi-Version: 30.2.1' { "status": "'Restore of Controller started. Use 'show upgrade status' to check the status.'" }
Use the following curl command to check the restore progress using the API method.
curl -X GET https://100.65.9.244/api/upgradestatusinfo -k --user admin:****
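For a condensed view of the progress, the same call can be piped through jq, as in the sketch below. This assumes jq is installed and that the response follows the standard Avi collection format, with a results array whose entries carry the name, upgrade_ops, and nested state fields reflected in the show upgrade status output; adjust the filter to match the actual response in your environment:
curl -s -k -X GET https://100.65.9.244/api/upgradestatusinfo --user admin:**** | jq '.results[] | {name, operation: .upgrade_ops, state: .state.state}'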