When you configure the network resources for vMotion on an ESXi host, consider the following best practices.
- Provide the required bandwidth in one of the following ways:

  | Physical Adapter Configuration | Best Practices |
  | --- | --- |
  | Dedicate at least one adapter for vMotion. | Use at least one 10 GbE adapter for workloads that have a small number of memory operations. Use at least two 10 GbE adapters if you migrate workloads that have many memory operations. |
  | Only two Ethernet adapters are available. | For best availability, combine both adapters into a team, and use VLANs to divide traffic into networks: one or more for virtual machine traffic and one for vMotion. |
  | Direct vMotion traffic to one or more physical NICs that have high-bandwidth capacity and are shared with other types of traffic. | To distribute and allocate more bandwidth to vMotion traffic across several physical NICs, use multiple-NIC vMotion. A configuration sketch follows this list. |

- On a vSphere Distributed Switch version 5.1 and later, use vSphere Network I/O Control shares to guarantee bandwidth to outgoing vMotion traffic. Defining shares also prevents contention as a result of excessive vMotion or other traffic. A sketch that sets the vMotion shares programmatically follows this list.
- To avoid saturation of the physical NIC link as a result of intense incoming vMotion traffic, use traffic shaping in the egress direction on the vMotion port group on the destination host. With traffic shaping you can limit the average and peak bandwidth available to vMotion traffic and reserve resources for other traffic types. A traffic shaping sketch follows this list.
- Use jumbo frames for best vMotion performance. Ensure that jumbo frames are enabled on all network devices on the vMotion path, including physical NICs, physical switches, and virtual switches. A sketch that raises the MTU on a virtual switch and a VMkernel adapter follows this list.
- Place vMotion traffic on the vMotion TCP/IP stack for migration across IP subnets that have a dedicated default gateway that is different from the gateway on the management network. See Place vMotion Traffic on the vMotion TCP/IP Stack of an ESXi Host, and the last sketch below.
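
The sketches below illustrate how these practices might be applied programmatically. They are minimal, assumption-laden examples written against the pyVmomi SDK, not part of the vSphere documentation: the vCenter Server address, credentials, host, switch, port group, and device names (esxi01.example.com, DSwitch01, vMotion-PG, vmk1, vmk2) are all placeholders. The first sketch marks two VMkernel adapters for the vMotion service, which is one way to set up multiple-NIC vMotion.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details; substitute your own environment's values.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")

    # Tag both VMkernel adapters for vMotion so that migrations can use
    # the combined bandwidth of the uplinks behind vmk1 and vmk2.
    vnic_mgr = host.configManager.virtualNicManager
    for device in ("vmk1", "vmk2"):
        vnic_mgr.SelectVnicForNicType("vmotion", device)
finally:
    Disconnect(si)
```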
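
Next, a sketch of setting Network I/O Control shares for the vMotion system traffic class. It assumes NIOC version 3 is in use on the distributed switch; the share value of 100 is illustrative only, not a recommendation.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details and switch name; substitute your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "DSwitch01")

    # Make sure Network I/O Control is enabled on the switch.
    dvs.EnableNetworkResourceManagement(enable=True)

    # Copy the current vMotion traffic class, raise its shares, and
    # write it back. Assumes NIOC version 3 on this switch.
    vmotion = next(r for r in dvs.config.infrastructureTrafficResourceConfig
                   if r.key == "vmotion")
    vmotion.allocationInfo.shares = vim.SharesInfo(
        level=vim.SharesInfo.Level.custom, shares=100)

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion
    spec.infrastructureTrafficResourceConfig = [vmotion]
    dvs.ReconfigureDvs_Task(spec)
finally:
    Disconnect(si)
```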
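
The following sketch applies egress traffic shaping to the default port configuration of a vMotion distributed port group. The bandwidth and burst values are placeholders; size them for your own links.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details and port group name; substitute your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == "vMotion-PG")

    # Shape traffic leaving the switch toward the vMotion VMkernel port.
    # averageBandwidth and peakBandwidth are in bits per second, burstSize
    # in bytes; the numbers below are illustrative only.
    shaping = vim.dvs.DistributedVirtualPort.TrafficShapingPolicy(inherited=False)
    shaping.enabled = vim.BoolPolicy(inherited=False, value=True)
    shaping.averageBandwidth = vim.LongPolicy(inherited=False, value=4 * 10**9)
    shaping.peakBandwidth = vim.LongPolicy(inherited=False, value=6 * 10**9)
    shaping.burstSize = vim.LongPolicy(inherited=False, value=512 * 1024 * 1024)

    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_cfg.outShapingPolicy = shaping

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = pg.config.configVersion
    spec.defaultPortConfig = port_cfg
    pg.ReconfigureDVPortgroup_Task(spec)
finally:
    Disconnect(si)
```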
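
For jumbo frames, a sketch that raises the MTU to 9000 on a standard switch and on the vMotion VMkernel adapter. The physical switch ports on the vMotion path must be configured for jumbo frames separately.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details and device names; substitute your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")
    net_sys = host.configManager.networkSystem

    # Raise the MTU of the standard switch that carries vMotion traffic.
    for vsw in net_sys.networkInfo.vswitch:
        if vsw.name == "vSwitch1":
            vsw.spec.mtu = 9000
            net_sys.UpdateVirtualSwitch(vsw.name, vsw.spec)

    # Raise the MTU of the vMotion VMkernel adapter as well.
    for vnic in net_sys.networkInfo.vnic:
        if vnic.device == "vmk1":
            vnic.spec.mtu = 9000
            net_sys.UpdateVirtualNic(vnic.device, vnic.spec)
finally:
    Disconnect(si)
```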
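
Finally, a sketch that creates a VMkernel adapter on the dedicated vMotion TCP/IP stack. The IP settings and port group name are placeholders; the stack's own default gateway is configured separately on the host.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details, port group, and addresses; substitute your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")
    net_sys = host.configManager.networkSystem

    # Create a VMkernel adapter on the dedicated vMotion TCP/IP stack.
    # Adapters on this stack carry vMotion traffic automatically.
    nic_spec = vim.host.VirtualNic.Specification()
    nic_spec.ip = vim.host.IpConfig(dhcp=False,
                                    ipAddress="192.168.50.11",
                                    subnetMask="255.255.255.0")
    nic_spec.netStackInstanceKey = "vmotion"
    net_sys.AddVirtualNic("vMotion-PG", nic_spec)
finally:
    Disconnect(si)
```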
For information about configuring networking on an ESXi host, see the vSphere Networking documentation.