Restoring a backup restores the state of the network at the time of the backup, including the NSX Manager or Global Manager appliance configuration. For NSX Manager, any changes that you made to the fabric after the backup was taken, such as adding or deleting nodes, are reconciled. After you federate NSX Managers with a Global Manager (GM), they are known as Local Managers (LM).

Note:

You cannot retain DNS entries (name servers and search domains) when you restore from a backup. To redeploy in a VMware Cloud Foundation (VCF) deployment using an OVF file, you must use FQDNs for the NSX Manager VM names.

You must restore the backup to a new NSX Manager or Global Manager appliance. Follow the instructions for your specific case.

  • If you had a cluster of NSX Manager appliances when the backup was taken, the restore process restores one node first and then prompts you to add the other nodes. You can add the other nodes during the restore process or after the first node is restored. See the detailed steps that follow.
  • If you had a cluster of Global Manager appliances, you can only restore one node using the restore process. You must create the cluster after the restore of the first node completes. For instructions on restoring a lost active Global Manager, a lost standby Global Manager, or a lost Local Manager, see Backup and Restore in NSX Federation.
Important: If any nodes in the appliance cluster are still available, you must power them off before you start the restore.

Prerequisites

  • Verify that you have the login credentials for the backup file server.
  • Verify that you have the SSH fingerprint of the backup file server. Starting in NSX-T Data Center 3.2.1, key sizes 256, 384, and 521 are supported. NSX-T Data Center 3.2.0 supports only key size 256. Use the same key size at restore time that was used at backup time.
  • Verify that you have the passphrase of the backup file.
  • Identify which backup you want to restore by following the procedure in Listing Available Backups. Take note of the IP address or FQDN of the NSX-T Data Center appliance that took the backup. You can also list backups through the API, as shown in the example after this list.
  • Verify that the environment where you are performing the restore has the same network connectivity as the system on which you performed the backup, for example, the same VIPs, DNS, and NTP communication. If network connectivity is not the same, fix the inconsistencies before adding a second or third node to the restored system.
  • Perform a federated restore only when both the active and standby Global Managers are down. Otherwise, see Backup and Restore in NSX Federation.
  • If you are restoring a backup during an upgrade, familiarize yourself with the Management Plane upgrade process. For details, see Backup and Restore During Upgrade in the NSX-T Data Center Upgrade Guide.
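
You can also list the available backups through the API. The following request is a minimal sketch; the endpoint is documented in the NSX-T Data Center API Guide, but verify it for your release.

Example request:

GET https://<nsx-mgr OR global-mgr>/api/v1/cluster/backups/overview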

Procedure

  1. If any nodes in the appliance cluster are still available, you must power them off before you start the restore.
  2. Install one new appliance node on which to restore the backup.
    • If the backup listing for the backup you are restoring contains an IP address, you must deploy the new NSX Manager or Global Manager node with the same IP address. Do not configure the node to publish its FQDN.

    • If the backup listing for the backup you are restoring contains an FQDN, you must configure the new appliance node with this FQDN and publish it. Only lowercase FQDNs are supported for backup and restore.

      Note: Until the FQDN is configured and published, the Restore button for the backup is disabled in the newly deployed NSX Manager or Global Manager UI.

      Use this API to publish the NSX Manager or Global Manager FQDN.

      Example request:

      PUT https://<nsx-mgr OR global-mgr>/api/v1/configs/management
      
      {
        "publish_fqdns": true,
        "_revision": 0
      }

      See the NSX-T Data Center API Guide for API details.
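
      The _revision value in the PUT body must match the current revision of the configuration. If you do not know it, you can first retrieve it with a GET request to the same endpoint; the response also shows the current publish_fqdns setting.

      Example request:

      GET https://<nsx-mgr OR global-mgr>/api/v1/configs/management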

      In addition, if the new manager node has a different IP address than the original one, you must update the DNS server's forward and reverse lookup entries for the manager node with the new IP address.
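
      For example, in a BIND-style zone file, the forward (A) and reverse (PTR) records might look like the following sketch. The FQDN and IP address shown are hypothetical placeholders.

      ; forward lookup zone: hypothetical manager FQDN and new IP address
      nsxmgr-01.corp.example.   IN  A    10.10.10.5
      ; reverse lookup zone (10.10.10.in-addr.arpa)
      5                         IN  PTR  nsxmgr-01.corp.example.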

    After the new manager node is running and online, you can proceed with the restore.

  3. From a browser, log in with admin privileges to the NSX Manager or Global Manager at https://<manager-ip-address>.
  4. Select System > Backup & Restore.
  5. To configure the backup file server, click Edit.
    Do not configure automatic backup if you are going to perform a restore.
  6. Enter the IP address or FQDN.
  7. Change the port number, if necessary.
    The default is 22.
  8. In the Directory Path text box, enter the absolute directory path where the backups are stored.
    The path to the backup directory can contain only the following characters: alphanumerics (a-z, A-Z, 0-9), underscore (_), plus and minus signs (+ -), tilde and percent sign (~ %), forward slash (/), and period (.).
    Drive letters and spaces in directory names are not supported. If the backup file server is a Windows machine, use forward slashes when you specify the destination directory. For example, if the backup directory on the Windows machine is c:\SFTP_Root\backup, specify /SFTP_Root/backup as the destination directory.
  9. To log in to the server, enter the user name and password.
  10. You can leave the SSH Fingerprint blank and accept or reject the fingerprint provided by the server after you click Save in a later step. If necessary, you can retrieve the SSH fingerprint by using this API: POST /api/v1/cluster/backups?action=retrieve_ssh_fingerprint.
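    The following example request is a sketch; the server and port values are placeholders, and you should verify the request body fields in the NSX-T Data Center API Guide for your release.

    Example request:

    POST https://<nsx-mgr OR global-mgr>/api/v1/cluster/backups?action=retrieve_ssh_fingerprint

    {
      "server": "sftp.corp.example",
      "port": 22
    }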
  11. Enter the passphrase that was used to encrypt the backup data.
  12. Click Save.
  13. Select a backup.
  14. Click Restore.
  15. The restore process prompts you to take action, if necessary, as it progresses.
    Note: If you are restoring a Global Manager appliance, the following steps do not appear. After restoring the first Global Manager node, you must manually join the other nodes to form the cluster. If you are restoring a multi-site network, see the "Limitations" section of the NSX-T Data Center Multisite topic.
    1. Confirm CM/VC Connectivity: If you want to restore existing compute managers, ensure that they are registered with the new NSX Manager node and available during the restore process.
    2. If you deleted or added fabric nodes or transport nodes after the backup, you are prompted to take certain actions, such as logging in to a node and running a script. If you created a logical switch or segment after the backup, it does not appear after the restore.
    3. If the backup has information about a manager cluster, you are prompted to add other nodes. If you decide not to add nodes, you can still proceed with the restore and manually add other nodes to form the cluster after the restore of this node completes.
    4. If there are fabric nodes that did not discover the new manager node, you are provided a list of them.
    5. NSX-T Data Center backups do not store libraries. If you are using VMware NSX® Application Platform, upload the Kubernetes tools if prompted.
    A progress bar displays the restore status and identifies the step that the restore process is on. During the restore, services on the manager appliance restart and the control plane is unavailable until the restore completes.

    After the restore is complete, the Restore Complete screen shows the result of the restore, the timestamp of the backup file, and the start and end times of the restore operation. Any segments that you created after the backup was taken are not restored.

    If the restore fails, the screen displays the step where the failure occurred, for example, Current Step: Restoring Cluster (DB) or Current Step: Restoring Node. If either the cluster restore or the node restore fails, the error might be transient. In that case, you can skip the Retry button, restart or reboot the manager, and the restore continues.

    You can also determine whether a cluster or node restore failure occurred by inspecting the log files. To view the system log file, run get log-file syslog and search for the strings Cluster restore failed and Node restore failed.
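
    For example, assuming your NSX CLI version supports the find output filter, you can search the syslog directly:

    get log-file syslog | find "Cluster restore failed"
    get log-file syslog | find "Node restore failed"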

    To restart the manager, run the restart service manager command.

    To reboot the manager, run the reboot command.
    Note:

    Starting with NSX-T Data Center 3.2.2, if you added a compute manager with support for multiple NSX-T Data Center instances and changed that setting after taking the backup, the compute manager connection status might be down after the restore because of a change in the vCenter Server extension. For troubleshooting details, see Troubleshoot Multi NSX Restore Issues.

  16. If you have only one node deployed, after the restored manager node is up and functional, you can deploy additional nodes to form a cluster.
    See the NSX-T Data Center Installation Guide for instructions.
  17. If you had other manager cluster VMs that you powered down in step 1, delete them after the new manager cluster is deployed.