Installation and uninstallation scenarios to consider when you work with vSphere Lifecycle Manager (vLCM) for NSX clusters.

Scenario: You try to enable vLCM on a cluster where a transport node profile is not applied, but some hosts are individually prepared as host transport nodes.

Result: vLCM cannot be enabled on the cluster because a transport node profile was not applied to the cluster.

Scenario: You try to enable vLCM on a cluster by using a transport node profile that is configured to apply an N-VDS host switch.

Result: vCenter Server checks whether the cluster is eligible to be converted to a vLCM cluster. Because the N-VDS host switch type is not supported, the check fails. Apply a transport node profile that is configured to use a VDS host switch instead.

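To confirm which host switch type a profile is configured with, you can read the profile back from the NSX Manager API. A minimal sketch, assuming admin credentials and placeholder values for the manager address and profile ID:

# Inspect the host switch type configured in a transport node profile
curl -k -u admin 'https://nsx-mgr.example.com/api/v1/transport-node-profiles/<profile-id>' | grep host_switch_type
# A profile usable with vLCM reports "host_switch_type": "VDS", not "NVDS".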

Scenario: You move an unprepared host from a non-vLCM cluster to a vLCM cluster.

Result: If the vLCM cluster is prepared with a transport node profile, the unprepared host is prepared as an NSX transport node by vLCM. If the vLCM cluster is not prepared with a transport node profile, the host remains in the unprepared state.

Scenario: You move a transport node from a vLCM cluster to a non-vLCM cluster that is not prepared for NSX.

Result: The NSX VIBs are deleted from the host, but the NSX Solution-related data (set by vLCM) is not deleted. If you then try to enable vLCM on the cluster, NSX Manager notifies you that the NSX Solution will be removed from the host. This notification is misleading because the NSX VIBs were already removed from the host.

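To confirm what actually remains on such a host, you can list the installed NSX VIBs directly in the ESXi shell, for example:

# On the ESXi host, list any NSX VIBs that are still installed
esxcli software vib list | grep -i nsx
# No output means the NSX VIBs are already gone, even though the Solution data set by vLCM remains.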

Scenario: You perform the Remove NSX operation on a vSphere Lifecycle Manager cluster, and vLCM is unable to delete NSX from the desired state.

Result: All nodes go into the Uninstall Failed state. If you then remove NSX on each individual transport node, the NSX VIBs are removed from the hosts, but the cluster continues to have NSX as the desired state in vLCM. This state shows up as drift in host compliance in vCenter Server. To remove NSX from the vLCM configuration, you must perform Remove NSX on the cluster.

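While troubleshooting the Uninstall Failed state, you can query the cluster-level realization state from the NSX Manager API. A minimal sketch, with placeholder values for the manager address and the transport node collection ID:

# Check the realization state of the cluster's transport node collection
curl -k -u admin 'https://nsx-mgr.example.com/api/v1/transport-node-collections/<tnc-id>/state'
# The "state" field in the response reports values such as IN_PROGRESS or FAILED_TO_REALIZE.
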
Scenario: You prepare a vLCM cluster consisting of a single host by applying a TNP in which the VDS host switch type is configured. You put the host into maintenance mode, move it out of the vLCM cluster into the data center, and finally move it back into the vLCM cluster.

Result: NSX installation fails with the following message:

Failed to install software on host. Solution apply failed on host: '192.196.178.156'. Deployment status of host bfedeb69-48d3-4f3b-9ebc-ce4eb177a968 is INSTALL_IN_PROGRESS with 0 errors. Expected status is INSTALL_SUCCESSFUL with no errors.

Workaround: On the host, click Resolve, and then reapply the TNP.

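Before you reapply the TNP, you can check whether the host is still stuck in INSTALL_IN_PROGRESS by reading the per-node state from the NSX Manager API. A minimal sketch with placeholder values:

# Check the deployment state of the transport node named in the error message
curl -k -u admin 'https://nsx-mgr.example.com/api/v1/transport-nodes/<node-id>/state'
# Inspect node_deployment_state in the response; after a successful reapply it should report INSTALL_SUCCESSFUL.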

Scenario: You move a host that failed installation to a non-vLCM cluster, with or without a TNP applied.
Result: NSX does not perform any operation.

Scenario: You move a host that failed installation to a vLCM cluster with a TNP applied.
Result: NSX installation begins automatically.

Scenario: You move a host that failed installation to a vLCM cluster without a TNP applied.
Result: NSX does not perform any operation.

Scenario: You move a host that failed installation to a data center.
Result: NSX does not perform any operation.

Scenario: VMware vCenter is added as a compute manager with the Multi NSX flag enabled, and you apply a TNP on another existing vLCM cluster.
Result: NSX allows preparation of the existing vLCM cluster by using the TNP.

Scenario: VMware vCenter is added as a compute manager with the Multi NSX flag enabled, and you try to change an already prepared cluster to a vLCM cluster.
Result: NSX does not allow preparation of the cluster.

Scenario: VMware vCenter is added as a compute manager with the Multi NSX flag enabled, and you try to create a new vLCM cluster.
Result: NSX allows preparation of the new vLCM cluster.

Scenario: VMware vCenter already contains a vLCM cluster, and you try to add that VMware vCenter as a compute manager with the Multi NSX flag enabled.
Result: NSX fails the operation because the VMware vCenter already contains a vLCM cluster.

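For reference, the Multi NSX flag is set when the compute manager is registered. A hedged sketch of the registration call, assuming the multi_nsx field of the ComputeManager API and placeholder names and credentials:

# Register vCenter as a compute manager with the Multi NSX flag enabled
curl -k -u admin -X POST 'https://nsx-mgr.example.com/api/v1/fabric/compute-managers' \
  -H 'Content-Type: application/json' \
  -d '{"server": "vcenter.example.com", "origin_type": "vCenter", "multi_nsx": true,
       "credential": {"credential_type": "UsernamePasswordLoginCredential",
                      "username": "administrator@vsphere.local",
                      "password": "<password>", "thumbprint": "<vcenter-thumbprint>"}}'
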
Scenario: You move a DPU-enabled host from a cluster with a TNP applied to a cluster without a TNP applied.
Result: The NSX VIBs are not deleted from the ESXi host and the DPU. You must remediate the host from vSphere Lifecycle Manager, which deletes the NSX VIBs from the ESXi host and the DPU, and then reboots the host.

Scenario: You remove the NSX VIBs from a DPU-enabled host by running the del nsx command in the NSX CLI.
Result: After running the del nsx command, you must reboot the ESXi host to complete the removal of the NSX VIBs. The VIBs are then removed from both the ESXi host and the DPU.

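As a minimal sketch, the sequence on the host is:

# Enter the NSX CLI from the ESXi shell, then remove NSX
nsxcli
> del nsx
# Reboot the host to complete VIB removal from both ESXi and the DPU
reboot
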
Scenario: The NSX VIBs are not deleted from a DPU-enabled host even though the host is shown as Not Configured in the NSX UI.

Result: When a DPU-enabled host is disconnected during the Remove NSX operation, the operation fails. After a while, the transport node deletion continues and the host is displayed as Not Configured, but the VIBs are not removed.

Workaround: Go to the vLCM UI and remediate the cluster.
Note: The DPU-enabled host reboots as part of the remediation.

Scenario: You enable vLCM on a cluster that is configured by using a transport node profile. The transition succeeds, but the vLCM remediation (the Apply NSX task in vLCM) fails.

Result: Host transport nodes retain the status they had before vLCM was enabled. Go to the vLCM UI to check whether the Apply NSX operation failed, and verify the cluster details for more information about the drift between the desired state and the host.

Scenario: You try to prepare a host that is already in the Uninstall Failed state.

Result: You cannot prepare the host. The host is in a dirty state, which means that the transport node record and files are not completely removed from the host.

Workaround: Before you prepare a failed host, delete the transport node entry by using the Force Delete option. This operation deletes the transport node record. Then remove NSX from the host by using the del nsx command. After this, try to prepare the host or move it into a cluster with a TNP applied.

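A minimal sketch of this recovery sequence, assuming admin credentials and placeholder values for the manager address and transport node ID (the force query parameter of the transport node DELETE API corresponds to the Force Delete option):

# 1. Force-delete the stale transport node record from NSX Manager
curl -k -u admin -X DELETE 'https://nsx-mgr.example.com/api/v1/transport-nodes/<node-id>?force=true'
# 2. On the ESXi host, remove the remaining NSX installation from the NSX CLI
nsxcli
> del nsx
# 3. Prepare the host again, or move it into a cluster with a TNP applied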