In this step we will move the remaining VMs hosting the cluster nodes. Because the shared disks have already been migrated, only the non-shared disks will be migrated. Do not move a VM while pRDMs are still attached to it.
Step 1. Remove pRDMs
Step 2. Migrate VMs
Step 3. Edit the VM configuration after migration
Step 1. Edit the VM configuration and remove the pRDM disk(s). When removing them from the last node of the cluster, check the “Delete files from datastore” checkbox to delete the pRDM pointer file(s). The data on the pRDM LUNs will not be affected. Once the pRDMs have been removed from all VMs, they become available as raw LUNs and can be mounted to other on-premises VMs if required.
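The removal rule above can be sketched as a small helper. This is a hypothetical model of the decision only (the node names and function are illustrative, not a vSphere API call): the pRDM is detached from every node, but the pointer file is deleted from the datastore only on the last node.

```python
def pointer_file_removal_plan(cluster_nodes):
    """For each cluster node, decide whether to check 'Delete files from
    datastore' when removing a pRDM: the pointer file is deleted only when
    the disk is removed from the last node. The data on the underlying LUN
    is never touched by either operation."""
    plan = {}
    for index, node in enumerate(cluster_nodes):
        is_last = index == len(cluster_nodes) - 1
        plan[node] = {
            "remove_prdm": True,
            "delete_files_from_datastore": is_last,
        }
    return plan

# Illustrative node names only
plan = pointer_file_removal_plan(["SQL-DB1Node02", "SQL-DB1Node03"])
```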
You may leave the vSCSI controller(s) that hosted the shared disks in place; we will reuse them to re-attach the shared disks after the migration to VMware Cloud on AWS is complete. However, if HCX will be used to migrate the VMs, the vSCSI controllers must either be removed or have SCSI bus sharing set to “none”.
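The controller handling can be sketched as follows. This is a hypothetical dict-based model (not PowerCLI or pyVmomi); it shows the "set bus sharing to none" option of the two allowed for HCX:

```python
def prepare_scsi_controller(controller, use_hcx):
    """Decide what to do with a vSCSI controller that hosted shared disks.
    With a vSphere Client migration the controller may stay as-is, since it
    will be reused to re-attach the shared disks in the target SDDC. HCX
    cannot migrate a VM whose controller has SCSI bus sharing enabled, so
    for HCX either remove the controller or set bus sharing to 'none'."""
    if not use_hcx:
        return controller  # keep unchanged; reused after migration
    adjusted = dict(controller)
    adjusted["bus_sharing"] = "none"  # alternative: remove the controller
    return adjusted
```

For example, `prepare_scsi_controller({"bus_sharing": "physical"}, use_hcx=True)` yields a controller with bus sharing set to `"none"`, while the same call with `use_hcx=False` leaves it untouched.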
Step 2. Migrate the VMs to the target SDDC using the previously mentioned steps. You can use either the vSphere Client (CGA) or HCX (for the HCX method, the vSCSI controllers must either be removed or have the SCSI bus sharing property set to “none”).
NOTE: Migrating the remaining nodes will take less time than migrating the first node: no pRDMs are copied, only the non-shared local VM disk(s).
Step 3. Wait for the migration to complete. Multiple nodes can be migrated at the same time.
Step 4. Edit the configuration of the migrated VMs. Re-attach the shared disk resources by selecting Add New Device --> Existing Hard Disk and choosing the “shared” disk(s) of the first migrated node (“SQL-DB1Node01” in our example). The following requirements should be met:
NOTE: If SCSI bus sharing is not configured as physical, the VM cannot be powered on and the following error is reported: "File system specific implementation of OpenFile (file) failed". Verify that SCSI bus sharing is set to “physical” on every controller hosting shared disks.
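A pre-power-on check for this requirement might look like the sketch below. The VM structure and function name are hypothetical (a plain dict standing in for the VM configuration); only the error condition it tests comes from the note above:

```python
def check_bus_sharing(vm):
    """Return a list of problems. Every SCSI controller that hosts a shared
    disk must have bus sharing set to 'physical'; otherwise power-on fails
    with 'File system specific implementation of OpenFile (file) failed'."""
    problems = []
    # Controllers that host at least one shared disk
    shared_controllers = {d["controller"] for d in vm["disks"] if d["shared"]}
    for key in sorted(shared_controllers):
        if vm["controllers"][key] != "physical":
            problems.append(
                f"SCSI controller {key}: bus sharing must be 'physical', "
                f"found '{vm['controllers'][key]}'"
            )
    return problems
```

Running it against a VM whose shared-disk controller is still set to "none" returns a problem entry, while a correctly configured VM returns an empty list.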