If your NSX Manager appliance becomes corrupted or outdated, you must redeploy it. Due to Azure restrictions, you must use the Azure CLI to boot the new VM.
To redeploy the NSX Manager nodes on Azure:
- Delete the non-functional NSX Manager VM and its associated NIC and disk resources from the Azure resource group.
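A minimal sketch of the cleanup, assuming a hypothetical resource group nsx-rg and a failed manager VM named nsx-mgr-2; the actual VM, NIC, and disk names come from your deployment:
# List what the failed manager left behind in the resource group.
az resource list --resource-group nsx-rg --output table
# Delete the failed manager VM, then its NIC and disks.
az vm delete --resource-group nsx-rg --name nsx-mgr-2 --yes
az network nic delete --resource-group nsx-rg --name nsx-mgr-2-nic
az disk delete --resource-group nsx-rg --name nsx-mgr-2-osdisk --yes
az disk delete --resource-group nsx-rg --name nsx-mgr-2-datadisk --yes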
- Get the cluster status by running the following command through nsxcli on any of the remaining NSX Manager nodes. Note the UUID of the non-functional node; you need it for the detach step.
get cluster status
- Detach the non-functional NSX Manager node from the NSX Manager cluster. For example:
detach node 8992e79f-219f-2c42-be57-c4d576792b78
Node has been detached. Detached node must be deleted permanently.
- Create the custom data file as described in step 7 of the https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/installation/GUID-71DDCE82-0F4F-4E75-A117-FB398A1FDFCB.html topic.
- Create a new manager node using the following command. The command adds a 100 GB data disk; change the data disk size and the release number as required. Store the public key at the path you pass to --ssh-key-values, relative to the directory where you run the command.
az vm create --name <MP instance name> --resource-group <RG for MP deployment> --admin-username nsxadmin --public-ip-address-allocation static --size Standard_D4_v4 --subnet <subnet_path> --nsg <mgr_nsg_path> --image vmware-inc:nsx-policy-manager:byol_release-3-1:3.5.0 --storage-sku Standard_LRS --data-disk-sizes-gb 100 --authentication-type ssh --ssh-key-values <publickey_path> --custom-data <userdata_txt_path>
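The --subnet and --nsg values are full Azure resource IDs, as the <subnet_path> and <mgr_nsg_path> placeholders suggest. A minimal sketch for looking them up, assuming hypothetical names nsx-rg, nsx-vnet, mgmt-subnet, and nsx-mgr-nsg:
# Resource ID of the management subnet (value for --subnet).
az network vnet subnet show --resource-group nsx-rg --vnet-name nsx-vnet --name mgmt-subnet --query id --output tsv
# Resource ID of the manager network security group (value for --nsg).
az network nsg show --resource-group nsx-rg --name nsx-mgr-nsg --query id --output tsv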
- Wait for around 15 minutes for the services and the cluster to be up on the new single-node cluster.
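To reach nsxcli on the new node for the next step, you need its address. A sketch for looking up the public IP, assuming the new VM was created with the hypothetical name nsx-mgr-2 in resource group nsx-rg:
# Public IP of the newly created manager VM.
az vm show --resource-group nsx-rg --name nsx-mgr-2 --show-details --query publicIps --output tsv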
- Join this new node to the existing cluster by running the following command through nsxcli on the new NSX Manager node. Here, 192.168.1.11 is the IP address of an existing manager node; the sketch after the example output shows how to retrieve the cluster ID and thumbprint.
join 192.168.1.11 cluster-id 95e888bf-d8fb-4974-8da7-13029d7be8f0 username nsxadmin password <password> thumbprint 32135bdbd14fe3cba11e1d91b106c2f1e28e0d464c23bbe3caf88fdf44b0eca2
Data on this node will be lost. Are you sure? (yes/no): yes
Join operation successful. Services are being restarted. Cluster may take some time to stabilize.
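The cluster ID and API thumbprint passed to the join command can be read from any existing manager node through nsxcli; a minimal sketch:
get cluster config
get certificate api thumbprint
The first command includes the cluster ID in its output; the second prints the SHA-256 API certificate thumbprint that the join command expects.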
Wait for around 15 minutes for the three-node cluster to be up and running.
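As a final check, verify from nsxcli on any of the three nodes; all three managers should be listed and the overall cluster status should be stable:
get cluster status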