You can uninstall NSX from a single host that is managed by vCenter Server. The other hosts in the cluster are not affected.

Prerequisites

  • On an ESXi host that is in lockdown mode, ensure that the root user is added to the exception user list so that an SSH session can be established with the host.

Procedure

  1. From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.
  2. Select System > Fabric > Nodes.
  3. From the Managed by drop-down menu, select the vCenter Server.
  4. If the cluster has a transport node profile applied, select the cluster, and click Actions > Detach TN Profile.
    When a transport node profile is applied, the NSX Configuration column for the cluster displays the profile name.
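    If you work through the API instead of the UI, the following is a minimal sketch, assuming that detaching the profile corresponds to deleting the transport node collection that binds it to the cluster; <nsx-manager> and <tnc-id> are placeholders you must fill in.
      # List transport node collections to find the one bound to this cluster.
      curl -k -u admin "https://<nsx-manager>/api/v1/transport-node-collections"
      # Deleting the collection detaches the transport node profile from the cluster.
      curl -k -u admin -X DELETE "https://<nsx-manager>/api/v1/transport-node-collections/<tnc-id>"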
  5. Select the host and click Remove NSX.
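    A rough API counterpart of the Remove NSX action, sketched under the same assumptions, is deleting the host's transport node; <node-id> is the transport node UUID, which you can look up first.
      # Look up the host's transport node ID, then delete the transport node.
      curl -k -u admin "https://<nsx-manager>/api/v1/transport-nodes"
      curl -k -u admin -X DELETE "https://<nsx-manager>/api/v1/transport-nodes/<node-id>"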
  6. Verify that the NSX software is removed from the host.
    1. Log in to the host's command-line interface as root.
    2. Run the following command to check for NSX VIBs:
      esxcli software vib list | grep -E 'nsx|vsipfwlib'
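      A slightly expanded version of this check, as a sketch run in the ESXi shell as root; no grep output means no NSX VIBs remain:
      # Count remaining NSX VIBs; 0 indicates they were all removed.
      remaining=$(esxcli software vib list | grep -c -E 'nsx|vsipfwlib')
      if [ "$remaining" -eq 0 ]; then
        echo "No NSX VIBs remain on this host."
      else
        echo "$remaining NSX VIB(s) still installed."
      fi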
  7. If a transport node profile was applied to the cluster and you want to reapply it, select the cluster, click Configure NSX, and select the profile from the Select Deployment Profile drop-down menu.
  8. (Host on a VDS 7.0 switch) If the host goes into a failed state and the NSX VIBs cannot be removed, run the nsxcli -c del nsx command to remove NSX from the host.
    1. Before running the del nsx command, do the following:
      • If there are VMkernel adapters (vmks) on NSX port groups on the VDS switch, you must manually migrate or remove them from the NSX port groups to DV port groups on the VDS switch. If any vmks remain on the NSX port groups, the del nsx command fails.
      • Put the ESXi host in maintenance mode. vCenter Server does not allow the host to enter maintenance mode unless all running VMs on the host are powered off or moved to a different host. A command-line sketch of these preparation steps follows this list.
      • Permanently disconnect the ESXi host transport node from NSX Manager by stopping the nsx-proxy service running on the host. Log in to the ESXi CLI terminal and run /etc/init.d/nsx-proxy stop.
      • Refresh the NSX Manager UI.
      • Verify that the state of the ESXi host transport node is Disconnected from NSX Manager.
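      The preparation above, as a minimal command-line sketch run in the ESXi shell as root, assuming all VMs are already powered off or moved; otherwise the maintenance mode call fails:
      # Enter maintenance mode and confirm the state.
      esxcli system maintenanceMode set --enable true
      esxcli system maintenanceMode get
      # Stop nsx-proxy so the transport node disconnects from NSX Manager.
      /etc/init.d/nsx-proxy stop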
    2. Disable SNMP on the ESXi host.
      esxcli system snmp set --enable false
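      To confirm SNMP is off, query the agent configuration; the Enable field should read false:
      esxcli system snmp get | grep -i enable   # expect: Enable: false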
    3. Log in to the ESXi CLI terminal.
    4. Run nsxcli -c del nsx.
    5. Read the warning message. Enter yes if you want to proceed with the NSX uninstallation.
      Carefully read the requirements and limitations of this command:
      1. Read NSX documentation for 'Remove a Host from NSX or Uninstall NSX Completely'.
      2. Deletion of this Transport Node from the NSX UI or API failed, and this is the last resort.
      3. If this is an ESXi host:
         a. The host must be in maintenance mode.
         b. All resources attached to NSXPGs must be moved out.
         If the above conditions for ESXi hosts are not met, the command WILL fail.
      4. For command progress check /scratch/log/nsxcli.log on ESXi host or /var/log/nsxcli.log on non-ESXi host.
      Are you sure you want to remove NSX on this host? (yes/no)
      Important: After running the del nsx command, do not use the Resolve functionality in the NSX Manager UI to reprepare the host that is in Disconnected state. If you use the Resolve functionality, the host might go into Degraded state.
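      While del nsx runs, you can follow its progress from a second SSH session by tailing the log path named in the warning:
      tail -f /scratch/log/nsxcli.log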
    6. Select each host and click Remove NSX.
    7. In the pop-up window, select Force Delete and begin the uninstallation.
    8. On the ESXi host, verify that the system message displayed is Terminated. This message indicates that NSX is completely removed from the host.
      • All existing host switches are removed, the transport node is detached from NSX Manager, and the NSX VIBs are removed. If any NSX VIBs remain on the host, contact VMware support.
      • On a host that is part of a vSphere Lifecycle Manager cluster, after you run del nsx and Remove NSX from NSX Manager, the host status in vCenter Server is compliant with the cluster image. The system displays: All hosts in the cluster are compliant.
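      As a final sanity check, a short sketch run on the host; the VIB query should return nothing, and the NSX-managed host switch should no longer be listed:
      esxcli software vib list | grep -E 'nsx|vsipfwlib'   # expect no output
      esxcfg-vswitch -l                                    # NSX host switch should be gone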