Before using vSphere vMotion, you must configure your hosts correctly.
- Each host must be licensed for vSphere vMotion.
- Each host must meet shared storage requirements for vSphere vMotion. See vSphere vMotion Shared Storage Requirements.
- Each host must meet the networking requirements for vSphere vMotion. See What Are the vSphere vMotion Networking Requirements.
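You can spot-check these prerequisites through the vSphere API. The following is a minimal pyVmomi sketch, assuming a reachable vCenter Server; the connection details are placeholders, and it verifies only the networking item (license and storage checks are omitted):

```python
# Minimal pyVmomi sketch: list the VMkernel adapters enabled for vMotion
# on each host. Connection details are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",                  # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())  # lab use only
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], recursive=True)
    for host in view.view:
        # QueryNetConfig reports candidate and selected vmknics per traffic type.
        cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
        selected = set(cfg.selectedVnic or [])
        devices = [v.device for v in (cfg.candidateVnic or []) if v.key in selected]
        print(f"{host.name}: vMotion vmknics = {devices or 'none'}")
    view.DestroyView()
finally:
    Disconnect(si)
```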
vSphere vMotion Across Long Distances
You can perform reliable migrations between hosts and sites that are separated by high network round-trip latencies. vSphere vMotion across long distances is enabled when the appropriate license is installed; no user configuration is necessary.
For long-distance migration, verify the following network latency and licensing requirements:
- The round-trip time between the hosts must be 150 milliseconds or less.
- Your license must cover vSphere vMotion across long distances.
- You must place the traffic related to transfer of virtual machine files to the destination host on the provisioning TCP/IP stack. See How to Place Traffic for Cold Migration, Cloning, and Snapshots on the Provisioning TCP/IP Stack. A configuration sketch follows this list.
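As a rough illustration of the last item, the following pyVmomi sketch adds a VMkernel adapter bound to the provisioning TCP/IP stack, whose netstack key is vSphereProvisioning. The function name, port group name, and addresses are hypothetical; the host object comes from a connected session as in the earlier example:

```python
from pyVmomi import vim

def add_provisioning_vmknic(host, portgroup_name, ip, netmask):
    """Add a VMkernel NIC bound to the provisioning TCP/IP stack.

    host is a vim.HostSystem; portgroup_name must already exist on a
    standard switch on that host.
    """
    spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=netmask),
        netStackInstanceKey="vSphereProvisioning",  # provisioning TCP/IP stack
    )
    # Returns the new device name, for example "vmk2".
    return host.configManager.networkSystem.AddVirtualNic(portgroup_name, spec)
```

Adapters placed on the provisioning stack carry the provisioning traffic described in the linked topic without additional tagging.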
vSphere vMotion Shared Storage Requirements
Configure your hosts for vSphere vMotion with shared storage to ensure that virtual machines are accessible to both source and target hosts.
During a migration with vMotion, the migrating virtual machine must be on storage accessible to both the source and target hosts. Ensure that the hosts configured for vMotion use shared storage. Shared storage can be on a Fibre Channel storage area network (SAN), or can be implemented using iSCSI and NAS.
If you use vMotion to migrate virtual machines with raw device mapping (RDM) files, make sure to maintain consistent LUN IDs for RDMs across all participating hosts.
See the vSphere Storage documentation for information on SANs and RDMs.
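You can also verify shared-storage visibility programmatically before a migration. A small pyVmomi sketch, assuming host and VM objects obtained from a connected session as in the first example (the function names are illustrative):

```python
def shared_datastores(source_host, target_host):
    """Datastore names visible to both hosts (vim.HostSystem objects)."""
    src = {ds.name for ds in source_host.datastore}
    dst = {ds.name for ds in target_host.datastore}
    return sorted(src & dst)

def vm_storage_visible_on(vm, target_host):
    """True if every datastore backing the VM is visible to the target host."""
    target = {ds.name for ds in target_host.datastore}
    return all(ds.name in target for ds in vm.datastore)
```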
What Are the vSphere vMotion Networking Requirements
Migration with vMotion requires correctly configured network interfaces on source and target hosts.
Configure each host with at least one network interface for vMotion traffic. To ensure secure data transfer, the vMotion network must be a secure network, accessible only to trusted parties. Additional bandwidth significantly improves vMotion performance. When you migrate a virtual machine with vMotion without using shared storage, the contents of the virtual disk are transferred over the network as well.
vSphere 6.5 and later can encrypt vMotion network traffic. Encrypted vMotion depends on host configuration, or on compatibility between the source and destination hosts. A configuration sketch follows.
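Per-VM encrypted vMotion behavior can be set through the vSphere API. A minimal pyVmomi sketch, assuming a vim.VirtualMachine object from a connected session; disabled, opportunistic, and required are the values the migrateEncryption property accepts:

```python
from pyVmomi import vim

def set_encrypted_vmotion(vm, mode="required"):
    """Set the encrypted vMotion mode of a vim.VirtualMachine.

    Valid modes: "disabled", "opportunistic" (the default for new VMs),
    and "required".
    """
    spec = vim.vm.ConfigSpec(migrateEncryption=mode)
    return vm.ReconfigVM_Task(spec)  # returns a task to wait on
```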
Requirements for Concurrent vMotion Migrations
You must ensure that the vMotion network has at least 250 Mbps of dedicated bandwidth per concurrent vMotion session. Greater bandwidth lets migrations complete more quickly. Gains in throughput resulting from WAN optimization techniques do not count towards the 250-Mbps limit.
To determine the maximum number of concurrent vMotion operations possible, see vCenter Server Limits on Simultaneous Migrations. These limits vary with a host's link speed to the vMotion network.
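As a back-of-the-envelope check, you can relate link speed to the 250-Mbps figure. The sketch below gives only a bandwidth-based estimate; vCenter Server's own simultaneous-migration limits still apply and are usually the lower bound:

```python
def bandwidth_bound_sessions(link_mbps, per_session_mbps=250):
    """Concurrent vMotion sessions a dedicated link can sustain by bandwidth alone."""
    return link_mbps // per_session_mbps

# A 10 GbE uplink covers 40 sessions by bandwidth, but vCenter Server caps
# concurrency per host at a much lower number (see the limits topic above).
print(bandwidth_bound_sessions(10_000))
```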
Round-Trip Time for Long-Distance vMotion Migration
If you have the proper license applied to your environment, you can perform reliable migrations between hosts that are separated by high network round-trip latencies. The maximum supported network round-trip time for vMotion migrations is 150 milliseconds. This limit lets you migrate virtual machines to geographically distant locations.
Multiple-NIC vMotion
You can configure multiple NICs for vMotion by adding two or more NICs to the required standard or distributed switch. For details, see Knowledge Base article KB 2007467.
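Once the port groups and uplink failover order are set up as the Knowledge Base article describes, enabling vMotion on each additional VMkernel adapter can be scripted. A pyVmomi sketch, where vmk1 and vmk2 are hypothetical device names:

```python
def enable_multi_nic_vmotion(host, devices=("vmk1", "vmk2")):
    """Tag several existing VMkernel adapters for vMotion on a vim.HostSystem."""
    vnic_manager = host.configManager.virtualNicManager
    for device in devices:
        # Marks the vmknic as usable for vMotion traffic on the default stack.
        vnic_manager.SelectVnicForNicType("vmotion", device)
```

This tagging applies to adapters on the default TCP/IP stack; adapters placed on the dedicated vMotion stack carry vMotion traffic without explicit tagging.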
Network Configuration
Configure the virtual networks on vMotion enabled hosts as follows:
- On each host, configure a VMkernel port group for vMotion. To have the vMotion traffic routed across IP subnets, enable the vMotion TCP/IP stack on the host. See How to Place vSphere vMotion Traffic on the vMotion TCP/IP Stack of Your ESXi Host. (A configuration sketch follows this list.)
- If you are using standard switches for networking, ensure that the network labels used for the virtual machine port groups are consistent across hosts. During a migration with vMotion, vCenter Server assigns virtual machines to port groups based on matching network labels.
Note:
By default, you cannot use vMotion to migrate a virtual machine that is attached to a standard switch with no physical uplinks configured, even if the destination host also has a no-uplink standard switch with the same label.
To override the default behavior, set the config.migrate.test.CompatibleNetworks.VMOnVirtualIntranet advanced setting of vCenter Server to false. The change takes effect immediately. For details about the setting, see Knowledge Base article KB 1003832. For information about configuring advanced settings of vCenter Server, see vCenter Server Configuration. (An API sketch appears at the end of this section.)
- If you use Intrusion Detection Systems (IDS) and firewalls to protect your environment, make sure that you configure them to allow connections to the ports used for vMotion on the ESXi hosts. For the list of currently supported ports for vMotion, see the VMware Ports and Protocols Tool™ at https://ports.esp.vmware.com/home/vSphere.
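For the first item in the preceding list, here is a minimal pyVmomi sketch of adding a VMkernel adapter on the vMotion TCP/IP stack. The function name, port group name, and addresses are hypothetical, and the 9000-byte MTU is an example value that anticipates the jumbo-frames recommendation in the next topic:

```python
from pyVmomi import vim

def add_vmotion_stack_vmknic(host, portgroup_name, ip, netmask, mtu=9000):
    """Add a VMkernel NIC on the dedicated vMotion TCP/IP stack.

    host is a vim.HostSystem; portgroup_name must exist on a standard
    switch. MTU 9000 assumes jumbo frames end to end on the vMotion path.
    """
    spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=netmask),
        netStackInstanceKey="vmotion",  # the vMotion TCP/IP stack
        mtu=mtu,
    )
    return host.configManager.networkSystem.AddVirtualNic(portgroup_name, spec)
```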
For information about configuring the vMotion network resources, see Networking Best Practices for vSphere vMotion.
For more information about vMotion networking requirements, see Knowledge Base article KB 59232.
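The advanced setting from the note above can also be changed programmatically. A sketch using the vSphere API's OptionManager through pyVmomi, where si is a connected service instance as in the earlier example; review KB 1003832 before applying it:

```python
from pyVmomi import vim

def allow_vmotion_without_uplinks(si):
    """Set config.migrate.test.CompatibleNetworks.VMOnVirtualIntranet to false."""
    option_manager = si.RetrieveContent().setting  # vCenter Server's OptionManager
    option_manager.UpdateOptions(changedValue=[
        vim.option.OptionValue(
            key="config.migrate.test.CompatibleNetworks.VMOnVirtualIntranet",
            value="false",
        )
    ])
```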
Networking Best Practices for vSphere vMotion
Consider certain best practices for configuring the network resources for vMotion on an ESXi host.
- Provide the required bandwidth in one of the following ways:
  - Dedicate at least one adapter for vMotion. Use at least one 10 GbE adapter for workloads that have a small number of memory operations, or if you migrate workloads that have many memory operations. If only two Ethernet adapters are available, for best availability, combine both adapters into a team, and use VLANs to divide traffic into networks: one or more for virtual machine traffic and one for vMotion.
  - Direct vMotion traffic to one or more physical NICs that have high-bandwidth capacity and are shared with other types of traffic. To distribute and allocate more bandwidth to vMotion traffic across several physical NICs, use multiple-NIC vMotion.
- On a vSphere Distributed Switch 5.1 and later, use vSphere Network I/O Control shares to guarantee bandwidth to outgoing vMotion traffic. Defining shares also prevents contention that can result from excessive vMotion or other traffic.
- To avoid saturating the physical NIC link as a result of intense incoming vMotion traffic, use traffic shaping in egress direction on the vMotion port group on the destination host. By using traffic shaping you can limit the average and peak bandwidth available to vMotion traffic, and reserve resources for other traffic types. (A sketch follows this list.)
- In vSphere 7.0 Update 1 or earlier, vMotion saturates 1 GbE and 10 GbE physical NICs with a single vMotion VMkernel NIC. Starting with vSphere 7.0 Update 2, vMotion saturates high speed links such as 25 GbE, 40 GbE and 100 GbE with a single vMotion VMkernel NIC. If you do not have dedicated uplinks for vMotion, you can use Network I/O Control to limit vMotion bandwidth use.
- Use jumbo frames for best vMotion performance.
Ensure that jumbo frames are enabled on all network devices that are on the vMotion path including physical NICs, physical switches, and virtual switches.
- Place vMotion traffic on the vMotion TCP/IP stack for migration across IP subnets that have a dedicated default gateway that is different from the gateway on the management network. See How to Place vSphere vMotion Traffic on the vMotion TCP/IP Stack of Your ESXi Host.
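As an illustration of the traffic-shaping recommendation above, the following pyVmomi sketch enables egress shaping on a distributed port group. The bandwidth numbers are arbitrary examples, the API expects bits per second for bandwidth and bytes for burst size, and the type paths are those exposed by pyVmomi; verify them against your pyVmomi version:

```python
from pyVmomi import vim

def shape_vmotion_egress(portgroup, average_mbps=8_000, peak_mbps=9_500,
                         burst_mb=100):
    """Enable egress traffic shaping on a vim.dvs.DistributedVirtualPortgroup."""
    shaping = vim.dvs.DistributedVirtualPort.TrafficShapingPolicy(
        inherited=False,
        enabled=vim.BoolPolicy(inherited=False, value=True),
        averageBandwidth=vim.LongPolicy(inherited=False,
                                        value=average_mbps * 1_000_000),
        peakBandwidth=vim.LongPolicy(inherited=False,
                                     value=peak_mbps * 1_000_000),
        burstSize=vim.LongPolicy(inherited=False,
                                 value=burst_mb * 1024 * 1024),
    )
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=portgroup.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            outShapingPolicy=shaping),
    )
    return portgroup.ReconfigureDVPortgroup_Task(spec)  # task to wait on
```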
For information about configuring networking on an ESXi host, see the vSphere Networking documentation.