Configure NSX Manager, host transport nodes to run workload VMs, and NSX Edge VMs on a single cluster. Each host in the cluster provides two physical NICs that are configured for NSX-T.

Important: It is recommended that you deploy the fully collapsed single vSphere cluster topology starting with the NSX-T 2.4.2 or 2.5 release.

The topology referenced in this procedure uses:

  • vSAN configured with the hosts in the cluster.
  • A minimum of two physical NICs per host.
  • vMotion and Management VMkernel interfaces.
Figure 1. Topology: Single N-VDS Switch Managing Host Communication with NSX Edge and Guest VMs
Note: Even if the host has four physical NICs, only two NICs can be used to deploy the fully collapsed topology. This procedure references physical NICs on the host as vmnic0 and vmnic1.

Prerequisites

  • All the hosts must be part of a vSphere cluster.
  • Each host has two physical NICs enabled.
  • Register all hosts to a vCenter Server.
  • Verify on the vCenter Server that shared storage is available to be used by the hosts.
  • Ensure that the VLAN IDs used for the NSX Edge TEP and the host TEP are different.

Procedure

  1. Prepare four ESXi hosts, each with vmnic0 connected to a vSS or vDS and vmnic1 free.
  2. On Host 1, install vCenter Server, configure a vSS/vDS port group, and install NSX Manager on the vSS created on the host.
  3. Prepare ESXi hosts 1, 2, 3 and 4 to be transport nodes.
    1. Create transport zones. See Create Transport Zones.
    2. Create an IP pool for tunnel endpoint IP addresses. See Create an IP Pool for Tunnel Endpoint IP Addresses.
    3. Create an uplink profile. See Create an Uplink Profile.
    4. Configure the hosts as transport nodes by applying a transport node profile. See Add a Transport Node Profile. In this step, the transport node profile migrates only vmnic1, the unused physical NIC, to the N-VDS switch.
    Note: After the transport node profile is applied to the cluster hosts, the N-VDS switch is created and vmnic1 is connected to the N-VDS switch.
    vmnic1 on each host is added to the N-VDS switch, so out of the two physical NICs, one is migrated to the N-VDS switch. The vmnic0 interface remains connected to the vSS or vDS switch, which ensures that connectivity to the host stays available. A sketch of the corresponding pNIC mapping follows.
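    The fragment below is a minimal sketch of the host switch section of such a transport node profile. It shows only vmnic1 in the pNIC mapping; the switch name (nvds-1), the uplink name (uplink-1), and the profile, IP pool, and transport zone UUIDs are placeholders rather than values from this procedure, and the exact field layout can vary slightly between NSX-T releases.
      "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [ {
          "host_switch_name": "nvds-1",
          "host_switch_profile_ids": [ {
            "key": "UplinkHostSwitchProfile",
            "value": "<uplink-profile-uuid>"
          } ],
          "pnics": [ {
            "device_name": "vmnic1",
            "uplink_name": "uplink-1"
          } ],
          "ip_assignment_spec": {
            "resource_type": "StaticIpPoolSpec",
            "ip_pool_id": "<tep-ip-pool-uuid>"
          },
          "transport_zone_endpoints": [ {
            "transport_zone_id": "<overlay-tz-uuid>"
          } ]
        } ]
      }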
  4. In the NSX Manager UI, create VLAN-backed segments for NSX Manager, vCenter Server, and NSX Edge.
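    The UI is the documented path for this step. For reference only, a VLAN-backed logical switch can also be created through the Manager API, as sketched below for one segment; the VLAN transport zone UUID, the VLAN ID 100, and the file name mgmt-segment.json are placeholder values.
    mgmt-segment.json
    {
      "display_name": "MGMT-VLAN-Segment",
      "transport_zone_id": "<vlan-transport-zone-uuid>",
      "admin_state": "UP",
      "vlan": 100
    }
    curl -X POST -k -u '<username>:<password>' -H 'Content-Type:application/json' -d @mgmt-segment.json https://<nsx-manager>/api/v1/logical-switches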
  5. On Host 2, Host 3, and Host 4, migrate the vmk0 adapter and vmnic0 together from the vSS/vDS to the N-VDS switch by updating the NSX-T configuration on each host. While migrating, ensure that vmnic0 is mapped to an active uplink.
  6. In the vCenter Server, go to Host 2, Host 3, and Host 4, and verify that the vmk0 adapter is connected to the vmnic0 physical NIC on the N-VDS and is reachable.
  7. In the NSX Manager UI, go to Host 2, Host 3, and Host 4, and verify that both pNICs are on the N-VDS switch.
  8. From the NSX Manager UI, install an NSX Manager node on Host 2 and on Host 3.
  9. Create a logical segment and attach the NSX Manager nodes to it. Wait approximately 10 minutes for the cluster to form, and verify that the cluster has formed.
  10. Power off the first NSX Manager node. Wait for approximately 10 minutes.
  11. With the first NSX Manager powered off, perform cold vMotion to migrate the NSX Manager and vCenter Server from Host 1 to Host 4.

    For vMotion limitations, see https://kb.vmware.com/s/article/56991.

  12. Reattach the NSX Manager and vCenter Server to the previously created logical segment. On Host 4, power on the NSX Manager. Wait approximately 10 minutes and verify that the cluster is in a stable state.
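    In addition to the UI, the cluster state can be checked from the NSX Manager CLI; the commands below are a sketch (the nsx-manager> prompt is illustrative), and get cluster status is expected to report an overall status of STABLE once the cluster has settled.
    nsx-manager> get cluster status
    nsx-manager> get cluster config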
  13. From the NSX Manager UI, go to Host 1 and migrate vmk0 and vmnic0 together from the vSS to the N-VDS switch.
  14. In the Network Mapping for Install field, ensure that the vmk0 adapter is mapped to the management logical segment on the N-VDS switch.
  15. On Host 1, install the NSX Edge VM from the NSX Manager UI.
  16. Join the NSX Edge VM with the management plane.
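    If the NSX Edge VM does not join the management plane automatically during deployment, it can be joined from the NSX Edge CLI; the following is a sketch with placeholder values, and the exact argument order can vary by release. The API thumbprint can be read on the NSX Manager with get certificate api thumbprint.
    nsx-edge> join management-plane <nsx-manager-ip> username admin password <admin-password> thumbprint <manager-api-thumbprint>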
  17. Configure the NSX Edge VM with an external router to establish north-south traffic connectivity.
  18. Verify north-south traffic connectivity between the NSX Edge VM and the external router.
  19. Set up and verify the BFD connectivity between NSX Manager and NSX Edge VM.
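    As an optional check that is not part of the original procedure, the NSX Edge node CLI in recent releases provides a BFD session view; if it is available in your release, a command along these lines shows whether the BFD sessions are up (the nsx-edge> prompt is illustrative).
    nsx-edge> get bfd-sessions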
  20. In a power failure scenario where the whole cluster is rebooted, the NSX-T management components might not be able to come up and communicate with the N-VDS to resume normal function. To avoid this scenario, perform the following steps:
    Caution: Any API command run incorrectly will result in a loss of connectivity with the NSX Manager.
    Note: In a single cluster configuration, management components are hosted as VMs on the N-VDS switch. For security reasons, the N-VDS port to which a management component connects is initialized as a blocked port by default. In a power failure where all four hosts (the recommended minimum) must reboot, the management VM ports therefore come back in the blocked state. To avoid this circular dependency, it is recommended to create a port on the N-VDS in the unblocked state. An unblocked port ensures that when the cluster is rebooted, the NSX-T management components can communicate with the N-VDS and resume normal function.

    At the end of this subtask, the migration command takes the UUID of the host node where the NSX Manager resides and the UUID of the NSX Manager VM, and migrates the NSX Manager to the static logical port, which is configured to be in an unblocked state. Whether all the hosts are powered off and on again or the NSX Manager VM moves to another host, when an NSX Manager comes back up it attaches to an unblocked port, which prevents loss of connectivity with the NSX-T management component.

    1. Go to Advanced Networking & Security → Switching and select the MGMT-VLAN-Segment. In the Overview tab, find and copy the UUID. The UUID used in this example is c3fd8e1b-5b89-478e-abb5-d55603f04452.
    2. Create a logical port with init_state set to UNBLOCKED_VLAN.
    3. Create two JSON files, one for NSX Manager and one for vCenter Server Appliance (VCSA). Replace the value for logical_switch_id with the UUID of the previously created MGMT-VLAN-Segment.
      mgr.json
      {
        "admin_state": "UP",
        "attachment": {
          "attachment_type": "VIF",
          "id": "nsxmgr-port-147"
        },
        "display_name": "NSX Manager Node 147 Port",
        "init_state": "UNBLOCKED_VLAN",
        "logical_switch_id": "c3fd8e1b-5b89-478e-abb5-d55603f04452"
      }
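      The second file, for the vCenter Server Appliance (VCSA), follows the same pattern; only the attachment id and display name differ, while logical_switch_id remains the UUID of MGMT-VLAN-Segment. The sketch below uses the illustrative file name vcsa.json and attachment id vcsa-port-01, which are not values from this procedure.
      vcsa.json
      {
        "admin_state": "UP",
        "attachment": {
          "attachment_type": "VIF",
          "id": "vcsa-port-01"
        },
        "display_name": "vCenter Server Appliance Port",
        "init_state": "UNBLOCKED_VLAN",
        "logical_switch_id": "c3fd8e1b-5b89-478e-abb5-d55603f04452"
      }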
    4. Create the logical port for the NSX Manager with your API client or with curl from the NSX Manager root shell.
      root@nsx-mgr-147:/var/CollapsedCluster# curl -X POST -k -u '<username>:<password>' \
        -H 'Content-Type:application/json' -d @mgr.json https://localhost/api/v1/logical-ports
      {
        "logical_switch_id" : "c3fd8e1b-5b89-478e-abb5-d55603f04452",
        "attachment" : {
          "attachment_type" : "VIF",
          "id" : "nsxmgr-port-147"
        },
        "admin_state" : "UP",
        "address_bindings" : [ ],
        "switching_profile_ids" : [ {
          "key" : "SwitchSecuritySwitchingProfile",
          "value" : "fbc4fb17-83d9-4b53-a286-ccdf04301888"
        }, {
          "key" : "SpoofGuardSwitchingProfile",
          "value" : "fad98876-d7ff-11e4-b9d6-1681e6b88ec1"
        }, {
          "key" : "IpDiscoverySwitchingProfile",
          "value" : "0c403bc9-7773-4680-a5cc-847ed0f9f52e"
        }, {
          "key" : "MacManagementSwitchingProfile",
          "value" : "1e7101c8-cfef-415a-9c8c-ce3d8dd078fb"
        }, {
          "key" : "PortMirroringSwitchingProfile",
          "value" : "93b4b7e8-f116-415d-a50c-3364611b5d09"
        }, {
          "key" : "QosSwitchingProfile",
          "value" : "f313290b-eba8-4262-bd93-fab5026e9495"
        } ],
        "init_state" : "UNBLOCKED_VLAN",
        "ignore_address_bindings" : [ ],
        "resource_type" : "LogicalPort",
        "id" : "02e0d76f-83fa-4839-a525-855b47ecb647",
        "display_name" : "NSX Manager Node 147 Port",
        "_create_user" : "admin",
        "_create_time" : 1574716624192,
        "_last_modified_user" : "admin",
        "_last_modified_time" : 1574716624192,
        "_system_owned" : false,
        "_protection" : "NOT_PROTECTED",
        "_revision" : 0
    5. Move the NSX Manager to the statically created logical port.
    6. To copy the NSX Manager VM Instance ID, go to Advanced Networking & Security → Inventory → Virtual Machines. Select the NSX Manager VM. In the Overview tab, find and copy the ID. The ID used in this example is 5028d756-d36f-719e-3db5-7ae24aa1d6f3.
    7. To find the ID of the host where the NSX Manager is installed, go to System → Fabric → Nodes → Host Transport Node. Click the host. In the Overview tab, find and copy the host ID. The ID used in this example is 11161331-11f8-45c7-8747-34e7218b687f.
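      If you prefer the API to the UI for these lookups, the same inventories can be listed with curl from the NSX Manager root shell, as sketched below; cross-check the returned IDs against the values shown in the UI. The transport-nodes call lists the host transport nodes with their IDs, and the fabric virtual-machines call lists the VM inventory that includes the NSX Manager VM.
      root@nsx-mgr-147:/var/CollapsedCluster# curl -k -u '<username>:<password>' https://localhost/api/v1/transport-nodes
      root@nsx-mgr-147:/var/CollapsedCluster# curl -k -u '<username>:<password>' https://localhost/api/v1/fabric/virtual-machines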
    8. Use an API client or curl from within the NSX Manager root shell to retrieve the transport node details for the host containing the NSX Manager. Replace the transport node UUID in the API with the Node ID of the host where the VM is located.
      root@nsx-mgr-147:/var/CollapsedCluster# curl -k -u '<username>:<password>' https://localhost/api/v1/transport-nodes/11161331-11f8-45c7-8747-34e7218b687f > mgrhost.json
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100  3589    0  3589    0     0  22175      0 --:--:-- --:--:-- --:--:-- 22291
      
      root@nsx-mgr-147:/var/CollapsedCluster# ls
      mgrhost.json  mgr.json
    9. Migrate the NSX Manager, vmk0, and vmnic4 from the VM Network to the previously created logical port on MGMT-VLAN-Segment. The value for vnic is the NSX Manager VM's instance UUID suffixed with :4000, which represents the first NIC of that VM, followed by vmk0. The vnic_migration_dest value is the attachment ID of the port created earlier for the NSX Manager.
      root@nsx-mgr-147:/var/CollapsedCluster# curl -k -X PUT -u '<username>:<password>' \
        -H 'Content-Type:application/json' -d @mgrhost.json \
        'https://localhost/api/v1/transport-nodes/11161331-11f8-45c7-8747-34e7218b687f?vnic=5028d756-d36f-719e-3db5-7ae24aa1d6f3:4000&vnic_migration_dest=nsxmgr-port-147'
    10. In the NSX Manager UI, ensure that the statically created logical port is up.
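      Optionally, and assuming your release exposes the logical port status endpoint, the same check can be made through the API; the port UUID below is the id returned by the create call shown earlier.
      root@nsx-mgr-147:/var/CollapsedCluster# curl -k -u '<username>:<password>' https://localhost/api/v1/logical-ports/02e0d76f-83fa-4839-a525-855b47ecb647/status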
    11. Repeat the preceding steps for every NSX Manager node in the cluster.