Configure NSX Manager, host transport nodes to run workload VMs, and NSX Edge VMs on a single cluster. Each host in the cluster provides two physical NICs that are configured for NSX-T.

Important: Deploy the fully collapsed single vSphere cluster topology starting with the NSX-T 2.4.2 or 2.5 release.

The topology referenced in this procedure uses:

  • vSAN configured with the hosts in the cluster.
  • A minimum of two physical NICs per host.
  • vMotion and Management VMkernel interfaces.
Figure 1. Topology: Single N-VDS Switch Managing Host Communication with NSX Edge and Guest VMs
Note: Even if the host has four physical NICs, only two NICs can be used to deploy the fully collapsed topology. This procedure references physical NICs on the host as vmnic0 and vmnic1.


  • All the hosts must be part of a vSphere cluster.
  • Each host has two physical NICs enabled.
  • Register all hosts to a vCenter Server.
  • Verify on the vCenter Server that shared storage is available to be used by the hosts.
  • Ensure that the VLAN IDs used for the host TEP and the NSX Edge TEP are different.


  1. Prepare four ESXi hosts, each with vmnic0 connected to a vSS or vDS and vmnic1 free.
  2. On Host 1, install vCenter Server, configure a vSS/vDS port group, and install NSX Manager on the port group created on the host.
  3. Prepare ESXi hosts 1, 2, 3 and 4 to be transport nodes.
    1. Create VLAN transport zones with a named teaming policy. See Create Transport Zones.
    2. Create an IP pool or DHCP for tunnel endpoint IP addresses for the hosts. See Create an IP Pool for Tunnel Endpoint IP Addresses.
    3. Create an IP pool or DHCP for tunnel endpoint IP addresses for the Edge node. See Create an IP Pool for Tunnel Endpoint IP Addresses.
    4. Create an uplink profile with a named teaming policy. See Create an Uplink Profile.
    5. Configure the hosts as transport nodes by applying a transport node profile. In this step, the transport node profile migrates only vmnic1, the unused physical NIC, to the N-VDS switch. After the transport node profile is applied to the cluster hosts, the N-VDS switch is created and vmnic1 is connected to it. See Add a Transport Node Profile.
    vmnic1 on each host is added to the N-VDS switch, so one of the two physical NICs is migrated to the N-VDS switch. The vmnic0 interface is still connected to the vSS or vDS switch, which keeps the host reachable.
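The uplink profile from step 3.4 can also be sketched as an API payload. This is a minimal illustration, not the profile from this procedure: the uplink names, teaming policy name, transport VLAN 150, and the <nsx-manager> host are all placeholders to adapt to your environment.

```shell
# Illustrative uplink-profile payload with a named teaming policy.
# All names and the VLAN ID are placeholders, not values from this procedure.
cat <<'EOF' > /tmp/uplink-profile.json
{
  "resource_type": "UplinkHostSwitchProfile",
  "display_name": "collapsed-cluster-uplink-profile",
  "transport_vlan": 150,
  "teaming": {
    "policy": "LOADBALANCE_SRCID",
    "active_list": [
      { "uplink_name": "uplink-1", "uplink_type": "PNIC" },
      { "uplink_name": "uplink-2", "uplink_type": "PNIC" }
    ]
  },
  "named_teamings": [
    {
      "name": "vlan-uplink-1",
      "policy": "FAILOVER_ORDER",
      "active_list": [ { "uplink_name": "uplink-1", "uplink_type": "PNIC" } ]
    }
  ]
}
EOF
# Validate the payload locally before posting it:
python3 -m json.tool /tmp/uplink-profile.json > /dev/null && echo "payload OK"
# Post it to the NSX Manager (not run here):
# curl -k -X POST -u '<username>:<password>' -H 'Content-Type:application/json' \
#   -d @/tmp/uplink-profile.json https://<nsx-manager>/api/v1/host-switch-profiles
```

The named teaming policy is what the VLAN-backed segments created later reference, so its name must match the teaming policy selected for those segments.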
  4. In the NSX Manager UI, create VLAN-backed segments for NSX Manager, vCenter Server, and NSX Edge. Ensure that you select the correct teaming policy for each VLAN-backed segment.
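A VLAN-backed segment can also be created through the API. A minimal sketch, assuming VLAN 100 and a placeholder transport zone UUID; both are illustrative values, not values taken from this procedure:

```shell
# Illustrative VLAN-backed logical switch (segment) payload.
# The VLAN ID and transport zone UUID are placeholders.
cat <<'EOF' > /tmp/mgmt-vlan-segment.json
{
  "display_name": "MGMT-VLAN-Segment",
  "transport_zone_id": "<vlan-transport-zone-uuid>",
  "admin_state": "UP",
  "vlan": 100
}
EOF
# Validate the payload locally before posting it:
python3 -m json.tool /tmp/mgmt-vlan-segment.json > /dev/null && echo "payload OK"
# Create the segment (not run here):
# curl -k -X POST -u '<username>:<password>' -H 'Content-Type:application/json' \
#   -d @/tmp/mgmt-vlan-segment.json https://<nsx-manager>/api/v1/logical-switches
```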
  5. On Host 2, Host 3, and Host 4, migrate the vmk0 adapter and vmnic0 together from the vSS/vDS to the N-VDS switch, and update the NSX-T configuration on each host. While migrating, ensure that vmnic0 is mapped to an active uplink.
  6. In the vCenter Server, go to Host 2, Host 3, and Host 4, and verify that the vmk0 adapter is connected to the vmnic0 physical NIC on the N-VDS switch and is reachable.
  7. In the NSX Manager UI, go to Host 2, Host 3, and Host 4, and verify that both pNICs are on the N-VDS switch.
  8. Create a logical segment and attach the NSX Manager to the logical segment. Wait for approximately 10 minutes for the cluster to form and verify that the cluster has formed.
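Cluster formation can be checked from the API instead of waiting blindly. A hedged sketch: the GET /api/v1/cluster/status endpoint is real, but the sample response below is a hand-written stand-in used only to show how to read the status field.

```shell
# Hand-written sample of the relevant part of a cluster status response.
cat <<'EOF' > /tmp/sample-cluster-status.json
{ "mgmt_cluster_status": { "status": "STABLE" } }
EOF
# Extract the status field the same way you would from a live response:
python3 -c "import json; print(json.load(open('/tmp/sample-cluster-status.json'))['mgmt_cluster_status']['status'])"
# Live call against the NSX Manager (not run here):
# curl -k -u '<username>:<password>' https://<nsx-manager>/api/v1/cluster/status
```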
  9. On Host 2 and Host 3, install additional NSX Manager nodes from the NSX Manager UI.
  10. Power off the first NSX Manager node. Wait for approximately 10 minutes.
  11. With the first NSX Manager powered off, perform a cold vMotion to migrate the NSX Manager and vCenter Server from Host 1 to Host 4. Reattach the NSX Manager and vCenter Server to the previously created logical switch. On Host 4, power on the NSX Manager, and wait for approximately 10 minutes to verify that the cluster is in a stable state.

    For vMotion limitations, see

  12. From the NSX Manager UI, migrate vmk0 and vmnic0 on Host 1 together from the vSS to the N-VDS switch.
  13. In the Network Mapping for Install field, ensure that the vmk0 adapter is mapped to the management logical segment on the N-VDS switch.
  14. On Host 1, install the NSX Edge VM from the NSX Manager UI.
  15. Join the NSX Edge VM with the management plane.
  16. To establish the north-south traffic connectivity, configure NSX Edge VM with an external router.
  17. Verify the north-south traffic connectivity between the NSX Edge VM and the external router.
  18. Set up and verify the BFD connectivity between NSX Manager and NSX Edge VM.
  19. If a power failure reboots the whole cluster, the NSX-T management components might not come up and communicate with the N-VDS. To avoid this scenario, perform the following steps:
    Caution: Any API command that is incorrectly run results in a loss of connectivity with the NSX Manager.
    Note: In a single-cluster configuration, the management components are hosted as VMs on an N-VDS switch. For security reasons, the N-VDS port to which a management component connects is initialized in a blocked state by default. If a power failure forces all four hosts (the recommended minimum) to reboot, the management VM port therefore comes back up blocked. To avoid this circular dependency, create a port on the N-VDS in the unblocked state. An unblocked port ensures that when the cluster reboots, the NSX-T management component can communicate with the N-VDS and resume normal function.
    At the end of this subtask, the migration command takes:
    • The UUID of the host node where the NSX Manager resides.
    • The UUID of the NSX Manager VM.
    It then migrates the VM to the static logical port, which is in an unblocked state.
    If all the hosts are powered off and on again, or if an NSX Manager VM moves to another host, the NSX Manager reattaches to the unblocked port when it comes back up, which prevents loss of connectivity with the NSX-T management component.
    1. Go to Advanced Networking & Security > Switching, and select MGMT-VLAN-Segment. In the Overview tab, find and copy the UUID. The UUID used in this example is c3fd8e1b-5b89-478e-abb5-d55603f04452.
    2. To create logical ports that are initialized in the UNBLOCKED_VLAN state, create four JSON files: three for the NSX Managers and one for the vCenter Server Appliance (VCSA). Replace the value of logical_switch_id with the UUID of the previously created MGMT-VLAN-Segment.
      {
        "admin_state": "UP",
        "attachment": {
          "attachment_type": "VIF",
          "id": "nsxmgr-port-147"
        },
        "display_name": "NSX Manager Node 147 Port",
        "init_state": "UNBLOCKED_VLAN",
        "logical_switch_id": "c3fd8e1b-5b89-478e-abb5-d55603f04452"
      }
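The four files differ only in the attachment ID and display name, so they can be generated in one pass. A minimal sketch: the node names and attachment IDs below are illustrative placeholders, and the logical_switch_id is the MGMT-VLAN-Segment UUID copied in the previous substep.

```shell
# Generate one port-definition file per management VM.
# Node names and attachment IDs are illustrative placeholders.
for node in nsxmgr-147 nsxmgr-148 nsxmgr-149 vcsa; do
  cat <<EOF > /tmp/${node}-port.json
{
  "admin_state": "UP",
  "attachment": { "attachment_type": "VIF", "id": "${node}-port" },
  "display_name": "${node} unblocked port",
  "init_state": "UNBLOCKED_VLAN",
  "logical_switch_id": "c3fd8e1b-5b89-478e-abb5-d55603f04452"
}
EOF
done
# List the generated files:
ls /tmp/nsxmgr-147-port.json /tmp/nsxmgr-148-port.json \
   /tmp/nsxmgr-149-port.json /tmp/vcsa-port.json
```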
    3. Create the logical port for the NSX Manager with an API client or the curl command.
      root@nsx-mgr-147:/var/CollapsedCluster# curl -X POST -k -u 
      '<username>:<password>' -H 'Content-Type:application/json' -d @mgr.json https://localhost/api/v1/logical-ports
        {
          "logical_switch_id" : "c3fd8e1b-5b89-478e-abb5-d55603f04452",
          "attachment" : {
            "attachment_type" : "VIF",
            "id" : "nsxmgr-port-147"
          },
          "admin_state" : "UP",
          "address_bindings" : [ ],
          "switching_profile_ids" : [ {
            "key" : "SwitchSecuritySwitchingProfile",
            "value" : "fbc4fb17-83d9-4b53-a286-ccdf04301888"
          }, {
            "key" : "SpoofGuardSwitchingProfile",
            "value" : "fad98876-d7ff-11e4-b9d6-1681e6b88ec1"
          }, {
            "key" : "IpDiscoverySwitchingProfile",
            "value" : "0c403bc9-7773-4680-a5cc-847ed0f9f52e"
          }, {
            "key" : "MacManagementSwitchingProfile",
            "value" : "1e7101c8-cfef-415a-9c8c-ce3d8dd078fb"
          }, {
            "key" : "PortMirroringSwitchingProfile",
            "value" : "93b4b7e8-f116-415d-a50c-3364611b5d09"
          }, {
            "key" : "QosSwitchingProfile",
            "value" : "f313290b-eba8-4262-bd93-fab5026e9495"
          } ],
          "init_state" : "UNBLOCKED_VLAN",
          "ignore_address_bindings" : [ ],
          "resource_type" : "LogicalPort",
          "id" : "02e0d76f-83fa-4839-a525-855b47ecb647",
          "display_name" : "NSX Manager Node 147 Port",
          "_create_user" : "admin",
          "_create_time" : 1574716624192,
          "_last_modified_user" : "admin",
          "_last_modified_time" : 1574716624192,
          "_system_owned" : false,
          "_protection" : "NOT_PROTECTED",
          "_revision" : 0
        }
    4. Move the NSX Manager to the statically created logical port.
    5. To copy the NSX Manager VM instance ID, go to Advanced Networking & Security > Inventory > Virtual Machines. Select the NSX Manager VM. In the Overview tab, find and copy the ID. The ID used in this example is 5028d756-d36f-719e-3db5-7ae24aa1d6f3.
    6. To find the ID of the host where the NSX Manager is installed, go to System > Fabric > Nodes > Host Transport Node. Select the host and click the Overview tab. Find and copy the host ID. The ID used in this example is 11161331-11f8-45c7-8747-34e7218b687f.
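The same two IDs can also be read from the API instead of the UI. The GET /api/v1/fabric/virtual-machines and GET /api/v1/transport-nodes endpoints exist in the NSX-T v1 API, but the sample response below is a hand-written stand-in, and the external_id field name is an assumption worth verifying against your release.

```shell
# Hand-written sample of one entry from GET /api/v1/fabric/virtual-machines.
cat <<'EOF' > /tmp/sample-vm.json
{ "results": [ { "display_name": "nsx-mgr-147",
                 "external_id": "5028d756-d36f-719e-3db5-7ae24aa1d6f3" } ] }
EOF
python3 - <<'PY'
import json
# Pick out the instance ID of the manager VM by display name.
for vm in json.load(open('/tmp/sample-vm.json'))['results']:
    if vm['display_name'] == 'nsx-mgr-147':
        print(vm['external_id'])
PY
# Live calls (not run here):
# curl -k -u '<username>:<password>' https://<nsx-manager>/api/v1/fabric/virtual-machines
# curl -k -u '<username>:<password>' https://<nsx-manager>/api/v1/transport-nodes
```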
    7. Migrate the NSX Manager from the VM Network to the previously created logical port on the MGMT-VLAN-Segment. The vnic_migration_dest value is the attachment ID of the port created earlier for the NSX Manager.
      root@nsx-mgr-147:/var/CollapsedCluster# curl -k -X PUT -u '<username>:<password>' -H 
      'Content-Type:application/json' -d @mgrhost.json 
    8. In the NSX Manager UI, ensure that the statically created logical port is Up.
    9. Repeat the preceding steps on every NSX Manager in the cluster.