Use this procedure if "Applied To" is configured in any of the DFW rules (that is, "Applied To" is set to something other than the default value of "DFW").

Note: For NSX-V to NSX-T migration, see the KB article for more information.

For NSX-T to NSX-V migration, migrating a workload VM back to NSX-V might not work because the version of the distributed firewall filter in NSX-T is always higher than the version in NSX-V. As a workaround, place the workload VM in the NSX-T exclusion list before starting the vMotion.


  • Ensure that:
    • vSphere vMotion is enabled on the VMkernel adapter of each host in the cluster that is involved in this migration. For detailed steps about enabling vMotion on the VMkernel adapter, see the vSphere product documentation.
    • The destination host in NSX-T has sufficient resources to receive the migrated VMs.
    • The source and destination hosts are in an operational state. Resolve any problems with hosts including disconnected states.

For more information about vMotion, see Migration with vMotion in the vSphere product documentation.


  1. Run a script to specify an external ID for each vNIC in a VM and to vMotion the VM to the correct logical port.
    Here is Python code that specifies an external ID for each vNIC in a VM so that the vNICs connect to an NSX-T logical switch with ID "ls_id" at the correct ports:
    devices = vmObject.config.hardware.device
    nic_devices = [device for device in devices
                   if isinstance(device, vim.vm.device.VirtualEthernetCard)]
    vnic_changes = []
    for device in nic_devices:
        # "<VM instance UUID>:<device key>" serves as the unique VIF ID.
        vif_id = vmObject.config.instanceUuid + ":" + str(device.key)
        vnic_spec = self._get_nsxt_vnic_spec(device, ls_id, vif_id)
        vnic_changes.append(vnic_spec)
    relocate_spec = vim.vm.RelocateSpec()
    relocate_spec.deviceChange = vnic_changes
    # Set other fields in the relocate_spec (host, pool, datastore, and so on).
    vmotion_task = vmObject.Relocate(relocate_spec)

    def _get_nsxt_vnic_spec(self, device, ls_id, vif_id):
        # Back the vNIC with the NSX-T logical switch (an opaque network).
        nsxt_backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo()
        nsxt_backing.opaqueNetworkType = 'nsx.LogicalSwitch'
        nsxt_backing.opaqueNetworkId = ls_id
        device.backing = nsxt_backing
        device.externalId = vif_id
        dev_spec = vim.vm.device.VirtualDeviceSpec()
        dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
        dev_spec.device = device
        return dev_spec

    For an example of a complete script, see

  2. Apply the Security Tags and VM static membership to the migrated VMs.
    POST https://{nsxt-mgr-ip}/api/v1/migration/vmgroup?action=post_migrate
    The vmgroup API endpoint with the post_migrate action applies the NSX-V Security Tags to the migrated workload VMs on the NSX-T overlay segment.

    For an example request body of this API, see the Lift and Shift Migration Process section of the NSX Tech Zone article.
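    As a sketch only, the call in this step can be made with Python's standard library. The helper below merely builds the request: the helper name and basic-auth handling are illustrative assumptions, and the body placeholder must be populated from the example request body mentioned above.

    ```python
    import base64
    import json
    import urllib.request

    def build_post_migrate_request(manager_ip, username, password, body):
        """Build the POST request that applies Security Tags after migration.

        The endpoint path comes from this procedure; the credentials and
        request body are supplied by the caller.
        """
        url = f"https://{manager_ip}/api/v1/migration/vmgroup?action=post_migrate"
        token = base64.b64encode(f"{username}:{password}".encode()).decode()
        return urllib.request.Request(
            url,
            data=json.dumps(body).encode(),
            method="POST",
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Basic {token}",
            },
        )

    # Example (no request is sent here); fill in the real body before use.
    req = build_post_migrate_request("10.0.0.1", "admin", "secret", {})
    # urllib.request.urlopen(req) would submit the call in a live environment.
    ```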

  3. Finalize the infrastructure to finish the migration.
    POST https://{nsxt-mgr-ip}/api/v1/migration?action=finalize_infra
    This migration API deletes any temporary object configurations that were created during the migration, and ensures that the NSX-T infrastructure is in a clean state. For example, temporary IP Sets are removed from the Groups.

    This POST API does not have a request body.
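    Because this call takes no request body, a minimal sketch is shorter; the helper name and basic-auth handling are again illustrative assumptions.

    ```python
    import base64
    import urllib.request

    def build_finalize_infra_request(manager_ip, username, password):
        # POST with an empty body; only the action query parameter matters.
        url = f"https://{manager_ip}/api/v1/migration?action=finalize_infra"
        token = base64.b64encode(f"{username}:{password}".encode()).decode()
        return urllib.request.Request(
            url,
            data=b"",
            method="POST",
            headers={"Authorization": f"Basic {token}"},
        )

    # Example (no request is sent here):
    req = build_finalize_infra_request("10.0.0.1", "admin", "secret")
    # urllib.request.urlopen(req) would run the cleanup in a live environment.
    ```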

  4. Verify that the expected configuration items have been migrated to the NSX-T environment.
    For example, check whether the following configurations are migrated successfully:
    • User-defined Distributed Firewall rules.
    • All Grouping objects, such as IP Sets, Groups, Tags, and so on.
    • Effective members are displayed in the dynamic Groups.
    • Tags are applied to migrated workload VMs.
  5. On the Migrate Workloads page, click Finish.
    A dialog box appears to confirm finishing the migration. If you finish the migration, all migration details are cleared, and you can no longer review the settings of this migration, such as the inputs that were made on the Resolve Configuration page.

What to do next

After the migration of workload VMs and the DFW-only configuration is successful and thoroughly verified, remove the Layer 2 bridge to release the NSX-T Edge that you used for bridging.