The Terraform script saves the images for NSX Manager and CSM in the Resource Group it creates in the NSX Cloud Management VNet.
You can use these images to redeploy NSX Manager or CSM.
Redeploying from a saved image can help you recover a lost or unusable NSX Manager node. However, you cannot use this method to recover CSM: NSX Manager is deployed as a three-node cluster, and although CSM is joined with this cluster, CSM does not replicate NSX Manager data and the NSX Manager nodes do not replicate CSM data. To recover CSM, follow the steps described in "Restore CSM from Microsoft Azure Recovery Services Vault" in the NSX-T Data Center Administration Guide.
Redeploying One NSX Manager Node and Attaching It to the NSX Manager Cluster
- You have the following NSX Manager nodes:
- Deployment1-NSX-MGR0
- Deployment1-NSX-MGR1
- Deployment1-NSX-MGR2
- You lose NSX Manager node Deployment1-NSX-MGR0.
If one NSX Manager node is lost, you can detach the defunct node from the cluster, redeploy a new NSX Manager node from the image in your deployment's resource group, and then attach the newly deployed node to the NSX Manager cluster.
- To detach the defunct NSX Manager node from the NSX Manager cluster:
- Log in to either of the working nodes over SSH and run the following NSX CLI command (if you need to look up the UUID of the lost node, see the example after this procedure):
Deployment1-NSX-MGR1> detach node <UUID of Deployment1-NSX-MGR0>
- Check the status of the NSX Manager cluster; it should show the cluster as stable with two healthy nodes:
Deployment1-NSX-MGR1> get cluster status
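If you do not have the UUID of the lost node on hand, one way to look it up is from the cluster configuration on a working node; the output lists each cluster member together with its UUID (the exact layout varies by NSX-T version):
Deployment1-NSX-MGR1> get cluster config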
- To create a new NSX Manager node in your Microsoft Azure subscription:
- In the Microsoft Azure portal, navigate to the NSX Manager image saved in your deployment's resource group.
- Click Create VM and accept the pre-selected values for all fields other than the ones listed below:
  - Basics:
    - Virtual machine name: Any descriptive name.
    - Size: The minimum requirement is Standard_D4s_v3 (4 vCPUs, 16 GB memory).
    - Authentication type: SSH
    - Username: Enter the default NSX Manager username: nsxadmin.
    - SSH public key source: Select Use existing public key and copy-paste the public key of the NSX Manager node that you detached from the cluster; from the example, node Deployment1-NSX-MGR0.
  - Disks:
    - OS disk type: Standard HDD
    - Data disks: Click Create and attach a new disk, select Standard HDD for the Disk SKU, and set a custom size of 100 GiB. Note: Ensure that host caching for the data disk is set to Read/write.
  - Networking:
    - Public IP: Click Create new and select Static for the Assignment option.
    - NIC network security group: Select Advanced.
    - Configure network security group: Select the network security group created by the Terraform deployment for NSX Manager; from the example in this topic, Deployment1-nsx-mgr-sg.
  - Advanced:
    - Custom data: Copy-paste the following, ensuring that you use your deployment's username and password:
      #cloud-config
      hostname: ${hostname}
      bootcmd:
        - [cloud-init-per, instance, lvmdiskscan, lvmdiskscan]
        - [cloud-init-per, instance, secondary_partition, /opt/vmware/nsx-node-api/bin/set_secondary_partition.sh]
      chpasswd:
        expire: false
        list:
          - nsxadmin:<pwd>
          - root:<pwd>
- Click Review + create.
The new NSX Manager node is deployed.
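If you prefer to script this step rather than use the Azure portal, the following Azure CLI sketch creates an equivalent VM from the saved image. The NSG name comes from the example in this topic, but the other names (nsx-rg, nsx-mgr-image, Deployment1-mgmt-vnet, nsx-mgmt-subnet, the key and cloud-init file paths) are placeholders rather than values from your deployment; substitute your own and verify the options against your Azure CLI version.
# Create the replacement NSX Manager VM from the image saved by the Terraform deployment.
# Resource names below are illustrative placeholders.
az vm create \
  --resource-group nsx-rg \
  --name Deployment1-NSX-MGR0 \
  --image nsx-mgr-image \
  --size Standard_D4s_v3 \
  --admin-username nsxadmin \
  --authentication-type ssh \
  --ssh-key-values ~/.ssh/nsx-mgr0.pub \
  --vnet-name Deployment1-mgmt-vnet \
  --subnet nsx-mgmt-subnet \
  --nsg Deployment1-nsx-mgr-sg \
  --public-ip-address-allocation static \
  --storage-sku Standard_LRS \
  --data-disk-sizes-gb 100 \
  --data-disk-caching ReadWrite \
  --custom-data cloud-init.txt
Here Standard_LRS corresponds to the Standard HDD disk type selected in the portal, and cloud-init.txt is a local file containing the custom data shown above.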
- Go to the newly deployed NSX Manager VM in Microsoft Azure and set the assignment of its private IP address to Static.
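If you are scripting this step, a minimal Azure CLI sketch is shown below. It assumes the NIC is named Deployment1-NSX-MGR0-nic and its IP configuration is named ipconfig1, which are assumptions based on common Azure naming rather than values from your deployment, so confirm the actual names in your resource group. Specifying an explicit private IP address switches the allocation method to static.
# Pin the VM's NIC to a static private IP address.
# Use the address currently assigned to the VM, or the address it should keep (placeholder shown).
az network nic ip-config update \
  --resource-group nsx-rg \
  --nic-name Deployment1-NSX-MGR0-nic \
  --name ipconfig1 \
  --private-ip-address 10.1.0.10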
- Join the newly deployed NSX Manager with the existing NSX Manager cluster:
- Log in to the newly deployed NSX Manager node and run the following NSX CLI command to ensure it is up and running:
Deployment1-NSX-MGR0> get cluster status
- Join this NSX Manager node with the cluster. You need the cluster ID, the API thumbprint, and the admin credentials of one of the running NSX Manager nodes, for example Deployment1-NSX-MGR1 (see the lookup example at the end of this procedure):
Deployment1-NSX-MGR0> join <NSX-MGR1-IP> cluster-id <cluster-id> thumbprint <NSX-MGR1 API thumbprint> username <NSX-MGR1 username> password <NSX-MGR1 password>
- After the new NSX Manager node joins the cluster, run the following command to verify that the cluster is stable with all three nodes:
Deployment1-NSX-MGR0> get cluster status
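If you need to look up the values used in the join command, you can retrieve them from one of the working nodes: get cluster config reports the cluster ID, and get certificate api thumbprint reports that node's API certificate thumbprint (the exact output varies by NSX-T version). For example, on Deployment1-NSX-MGR1:
Deployment1-NSX-MGR1> get cluster config
Deployment1-NSX-MGR1> get certificate api thumbprint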