Use this procedure if "Applied To" is configured in any of the DFW rules, that is, if "Applied To" is set to a value other than the default "DFW".

Note: For NSX-V to NSX-T migration, see the KB article https://kb.vmware.com/s/article/56991 for more information.

For NSX-T to NSX-V migration, migrating a workload VM back to NSX-V might not work because the version of the distributed firewall filter in NSX-T is always higher than the version in NSX-V. The workaround is to place the workload VM on the NSX-T exclusion list before starting the vMotion.

Prerequisites

  • Ensure that:
    • vSphere vMotion is enabled on the VMkernel adapter of each host in the cluster that is involved in this migration. For detailed steps about enabling vMotion on the VMkernel adapter, see the vSphere product documentation.
    • The destination host in NSX-T has sufficient resources to receive the migrated VMs.
    • The source and destination hosts are in an operational state. Resolve any problems with hosts including disconnected states.

For more information about vMotion, see Migration with vMotion in the vSphere product documentation.

Procedure

  1. Get the instance UUID of all the VMs that you plan to migrate.
    The instance UUIDs are needed when you make the API call in the next step. See the example at the end of this section for steps to obtain the instance UUID of a VM.
  2. Run the following POST API request:
    POST https://{nsxt-mgr-ip}/api/v1/migration/vmgroup?action=pre_migrate

    This API creates a logical segment port (VIF) corresponding to the VM instance UUID of each NSX-V workload VM in the VM group that you will be migrating through the Layer 2 bridge to the NSX-T overlay segment. For an example request body of this API, see the Lift and Shift Migration Process section of the NSX Tech Zone article.
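
    For instance, a minimal sketch of this call using the Python requests library. The manager address, credentials, and group ID are placeholders, and the request body fields are an assumption; confirm the exact body in the Tech Zone article.

        import requests

        NSX_MGR = "https://{nsxt-mgr-ip}"    # placeholder manager address
        AUTH = ("admin", "<password>")       # placeholder credentials

        # Assumed body: a group ID of your choosing plus the VM instance
        # UUIDs collected in step 1.
        body = {
            "group_id": "workload-group-1",
            "vm_instance_ids": [
                "52199e21-6aab-26e4-8c82-069a17d67667",
                "52630e5d-ce6f-fac0-424c-4aa4bdf6bd56"
            ]
        }
        resp = requests.post(NSX_MGR + "/api/v1/migration/vmgroup",
                             params={"action": "pre_migrate"},
                             json=body, auth=AUTH, verify=False)
        resp.raise_for_status()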

  3. Call the GetVmGroupExecutionDetails API to get the result of the pre_migrate API call. This API is available starting with NSX-T 3.2.2.
    Call the API with the same group_id (and federation_site_id for cross-VC migration) that you used in the pre_migrate call. The result includes a "logical_switch_id_to_vm_instance_id_and_vnics_map" list and an optional "failedVmInstanceIds" list, which contains the UUIDs of any VMs that were not found in the source VC. For example:
    GET /api/v1/migration/vmgroup/actions/get_vm_group_execution_details?group_id=<group-id>&federation_site_id=<site_id>
    Response:
    {
      "logical_switch_id_to_vm_instance_id_and_vnics_map":[
        {
          "ls_id":"36885723-7581-4696-a195-ef83851dc35f",
          "vm_and_vnics_mapping":[
            {
              "vm_instance_id":"52199e21-6aab-26e4-8c82-069a17d67667",
              "vnics":[
                "4001"
              ]
            },
            {
              "vm_instance_id":"52630e5d-ce6f-fac0-424c-4aa4bdf6bd56",
              "vnics":[
                "4001"
              ]
            }
          ]
        }
      ],
      "failedVmInstanceIds":[
        "501557f6-2197-1fe8-14e5-89898cee5fec"
      ]
    }
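    Continuing the requests-based sketch from step 2, the following retrieves the execution details and flags any VMs that the source VC could not find (the group_id and federation_site_id values are placeholders):

        details = requests.get(
            NSX_MGR + "/api/v1/migration/vmgroup/actions/get_vm_group_execution_details",
            params={"group_id": "workload-group-1", "federation_site_id": "site-1"},
            auth=AUTH, verify=False).json()

        # The switch-to-vNIC mapping feeds the vMotion script in the next step.
        ls_map = details["logical_switch_id_to_vm_instance_id_and_vnics_map"]
        for failed in details.get("failedVmInstanceIds", []):
            print("VM not found in the source VC:", failed)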
  4. Write a script to vMotion the VMs, using the following Python (pyVmomi) sample code as a starting point. The sample is a sketch: looking up the VM object and selecting a target host are left as comments.

    For an example, see the Python Example Scripts section of the NSX Tech Zone article.

        # Requires the pyVmomi library (pip install pyvmomi).
        from pyVim.task import WaitForTask
        from pyVmomi import vim

        def _get_nsx_networks_in_host(self, host):
            # Map each NSX logical switch ID on this host to the
            # [portgroup key, switch UUID] pair needed to build a vNIC backing.
            ls_id_to_nsx_pgs_map = {}
            for net in host.network:
                if isinstance(net, vim.dvs.DistributedVirtualPortgroup):
                    if hasattr(net.config, 'backingType'):
                        if net.config.backingType == 'nsx' and net.config.logicalSwitchUuid:
                            ls_id_to_nsx_pgs_map[net.config.logicalSwitchUuid] = \
                                [net.key, net.config.distributedVirtualSwitch.uuid]
                elif isinstance(net, vim.OpaqueNetwork):
                    if net.summary.opaqueNetworkType == 'nsx.LogicalSwitch':
                        ls_id_to_nsx_pgs_map[net.summary.opaqueNetworkId] = [None, net.summary.opaqueNetworkId]
            return ls_id_to_nsx_pgs_map

        def _get_vms_vnic_to_ls_id_map(self, logical_switch_id_to_vm_instance_id_and_vnics_map):
            # Invert the API response into {vm_instance_id: {vnic_key: ls_id}}.
            vm_uuid_2_vnics_map = {}
            for ls_id_2_vm_vnics in logical_switch_id_to_vm_instance_id_and_vnics_map:
                ls_id = ls_id_2_vm_vnics['ls_id']
                for vm_vnics in ls_id_2_vm_vnics['vm_and_vnics_mapping']:
                    vnic_2_ls_id = vm_uuid_2_vnics_map.get(vm_vnics['vm_instance_id'], {})
                    for vnic in vm_vnics['vnics']:
                        vnic_2_ls_id[vnic] = ls_id
                    vm_uuid_2_vnics_map[vm_vnics['vm_instance_id']] = vnic_2_ls_id
            return vm_uuid_2_vnics_map

        def _get_nsxt_vnic_spec(self, device, dvpg_key, switch_id, vif_id):
            if dvpg_key:
                # NSX distributed virtual port group backing (VDS).
                vdsPgConn = vim.dvs.PortConnection()
                vdsPgConn.portgroupKey = dvpg_key
                vdsPgConn.switchUuid = switch_id
                device.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
                device.backing.port = vdsPgConn
            else:
                # Opaque network backing (N-VDS logical switch).
                device.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo()
                device.backing.opaqueNetworkId = switch_id
                device.backing.opaqueNetworkType = 'nsx.LogicalSwitch'
            # Attach the vNIC to the VIF created by the pre_migrate API call.
            device.externalId = vif_id
            dev_spec = vim.vm.device.VirtualDeviceSpec()
            dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
            dev_spec.device = device
            return dev_spec

        def _migrate_vm(self, vmObject, vnic_2_ls_id_map, ls_id_to_nsx_pgs_map):
            devices = vmObject.config.hardware.device
            nic_devices = [device for device in devices
                           if isinstance(device, vim.vm.device.VirtualEthernetCard)]
            vnic_changes = []
            for device in nic_devices:
                ls_id = vnic_2_ls_id_map.get(str(device.key))
                if ls_id:
                    # The VIF ID has the form <vm-instance-uuid>:<device-key>.
                    vif_id = vmObject.config.instanceUuid + ":" + str(device.key)
                    nsx_pg = ls_id_to_nsx_pgs_map.get(ls_id)
                    vnic_spec = self._get_nsxt_vnic_spec(device, nsx_pg[0], nsx_pg[1], vif_id)
                    vnic_changes.append(vnic_spec)
            relocate_spec = vim.vm.RelocateSpec()
            relocate_spec.deviceChange = vnic_changes
            # set other fields in the relocate_spec, such as the target host and datastore
            vmotion_task = vmObject.Relocate(relocate_spec)
            WaitForTask(vmotion_task)


        vm_uuid_2_vnics_map = self._get_vms_vnic_to_ls_id_map(logical_switch_id_to_vm_instance_id_and_vnics_map)
        for vm_uuid, vnic_2_ls_id_map in vm_uuid_2_vnics_map.items():
            # get the vmObject by the vm_uuid
            # find a target host that has all the networks needed by this VM
            ls_id_to_nsx_pgs_map = self._get_nsx_networks_in_host(host)
            self._migrate_vm(vmObject, vnic_2_ls_id_map, ls_id_to_nsx_pgs_map)
  5. Apply the Security Tags and VM static membership to the migrated VMs.
    POST https://{nsxt-mgr-ip}/api/v1/migration/vmgroup?action=post_migrate
    The vmgroup API endpoint with the post_migrate action applies the NSX-V Security Tags to the migrated workload VMs on the NSX-T overlay segment.

    For an example request body of this API, see the Lift and Shift Migration Process section of the NSX Tech Zone article.
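
    Continuing the requests-based sketch from step 2, and assuming the post_migrate action reuses the same request body as pre_migrate (confirm against the Tech Zone article):

        resp = requests.post(NSX_MGR + "/api/v1/migration/vmgroup",
                             params={"action": "post_migrate"},
                             json=body, auth=AUTH, verify=False)
        resp.raise_for_status()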

  6. Finalize the infrastructure to finish the migration.
    POST https://{nsxt-mgr-ip}/api/v1/migration?action=finalize_infra
    This migration API deletes any temporary object configurations that were created during the migration, and ensures that the NSX-T infrastructure is in a clean state. For example, temporary IP Sets are removed from the Groups.

    This POST API does not have a request body.
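
    In the same sketch, this step is a bare POST with no body:

        resp = requests.post(NSX_MGR + "/api/v1/migration",
                             params={"action": "finalize_infra"},
                             auth=AUTH, verify=False)
        resp.raise_for_status()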

  7. Verify that the expected configuration items have been migrated to the NSX-T environment.
    For example, check whether the following configurations are migrated successfully:
    • User-defined Distributed Firewall rules.
    • All Grouping objects, such as IP Sets, Groups, Tags, and so on.
    • Effective members are displayed in the dynamic Groups.
    • Tags are applied to migrated workload VMs.
  8. On the Migrate Workloads page, click Finish.
    A dialog box appears, asking you to confirm that you want to finish the migration. If you finish the migration, all migration details are cleared, and you can no longer review the settings of this migration, such as the inputs you made on the Resolve Configuration page.

Example: Obtaining VM Instance UUID from the vCenter MOB

This example shows how to obtain or confirm a VM's instance UUID from the vCenter Server Managed Object Browser (MOB) at http://{vCenter-IP-Address}/mob. You can also obtain or confirm a VM's instance UUID by making an API call to vSphere; see the sketch after the steps below.

  1. In a web browser, go to the vCenter Managed Object Browser at http://{vCenter-IP-Address}/mob.
  2. Click content.
  3. Find rootFolder in the Name column, and click the corresponding link in the Value column. For example, group-d1.
  4. Find childEntity in the Name column, and click the corresponding link in the Value column. For example, datacenter-21.
  5. Find hostFolder in the Name column, and click the corresponding link in the Value column. For example, group-h23.
  6. Find childEntity in the Name column. The corresponding Value column contains links to host clusters. Click the appropriate host cluster link. For example, domain-c33.
  7. Find host in the Name column. The corresponding Value column lists the hosts in that cluster by vCenter MOID and hostname. Click the appropriate host link. For example, host-32.
  8. Find vm in the Name column. The corresponding Value column lists the virtual machines by vCenter MOID and VM name. For example, vm-216 (web-01a). Click the VM that you are interested in.
  9. Find config in the Name column. Click config in the Value column.
  10. Find instanceUuid in the Name column. The corresponding Value column lists the VM instance UUID. For example, 502e71fa-1a00-759b-e40f-ce778e915f16.
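
As an alternative to the MOB, the following minimal pyVmomi sketch prints the instance UUID of a VM found by name; the connection details and the VM name web-01a are placeholders:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab use only; validate certificates in production
    si = SmartConnect(host="{vCenter-IP-Address}", user="administrator@vsphere.local",
                      pwd="<password>", sslContext=ctx)
    try:
        # Walk the VM inventory and print the instance UUID of the matching VM.
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.name == "web-01a":
                print(vm.name, vm.config.instanceUuid)
        view.Destroy()
    finally:
        Disconnect(si)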

What to do next

After the migration of workload VMs, you can remove the Layer 2 bridge.