When you update vSphere objects in a cluster with vSphere Distributed Resource Scheduler (DRS), vSphere High Availability (HA), and vSphere Fault Tolerance (FT) enabled, you can temporarily disable vSphere Distributed Power Management (DPM), HA admission control, and FT for the entire cluster. When the update completes, Update Manager restores these features.
Updates might require the host to enter maintenance mode during remediation. Virtual machines cannot run when a host is in maintenance mode. To ensure availability, vCenter Server can migrate virtual machines to other ESXi hosts within a cluster before the host is put into maintenance mode. vCenter Server migrates the virtual machines if the cluster is configured for vSphere vMotion, and if DRS is enabled.
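The same behavior can be reproduced outside Update Manager through the vSphere API. The following Python sketch uses the open-source pyVmomi bindings to place one host into maintenance mode; with DRS enabled in fully automated mode, running virtual machines are migrated with vSphere vMotion before the task completes. The vCenter address, credentials, and host name are placeholders.

```python
# Minimal sketch, assuming pyVmomi is installed and the placeholder
# vCenter address, credentials, and host name are replaced with real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the host to be remediated (the name is a placeholder).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
view.Destroy()

# With DRS enabled, running virtual machines are migrated with vMotion before
# this task completes; otherwise the task waits for manual evacuation.
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0, evacuatePoweredOffVms=True))
print(f"{host.name} is now in maintenance mode")

Disconnect(si)
```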
Enable Enhanced vMotion Compatibility (EVC) to help ensure vSphere vMotion compatibility between the hosts in the cluster. EVC ensures that all hosts in a cluster present the same CPU feature set to virtual machines, even if the actual CPUs on the hosts differ. Use of EVC prevents migrations with vSphere vMotion from failing because of incompatible CPUs. You can enable EVC only in a cluster where host CPUs meet the compatibility requirements. For more information about EVC and the requirements that the hosts in an EVC cluster must meet, see vCenter Server and Host Management.
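As an illustration, you can read the cluster's current EVC baseline and each host's highest supported EVC mode through the API before relying on vMotion during remediation. The sketch below assumes a pyVmomi connection ('si') as in the previous example and a hypothetical cluster name.

```python
# Minimal sketch: report the cluster EVC baseline and each host's maximum
# supported EVC mode. The cluster name and the connection 'si' are assumptions.
from pyVmomi import vim

def report_evc(si, cluster_name="Cluster01"):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == cluster_name)
    view.Destroy()

    # The cluster summary exposes the EVC mode currently enforced (None if EVC is off).
    print("Cluster EVC mode:", cluster.summary.currentEVCModeKey)

    # Each host reports the highest EVC mode its CPUs can support.
    for host in cluster.host:
        print(f"  {host.name}: max supported EVC mode = {host.summary.maxEVCModeKey}")
```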
If a host has no running virtual machines, DPM might put the host in standby mode and interrupt an Update Manager operation. To make sure that scanning and staging complete successfully, Update Manager disables DPM during these operations. To ensure a successful remediation, Update Manager also disables DPM and HA admission control before the remediation operation. After the operation completes, Update Manager restores DPM and HA admission control. Update Manager disables HA admission control before staging and remediation, but not before scanning.
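If you remediate hosts outside Update Manager, the equivalent change can be made with a single cluster reconfiguration call. The sketch below is illustrative only; the 'cluster' object is a vim.ClusterComputeResource obtained as in the EVC example, and Update Manager performs these steps automatically.

```python
# Minimal sketch: disable DPM and HA admission control on a cluster, then
# restore them afterwards. The 'cluster' object is an assumption.
from pyVim.task import WaitForTask
from pyVmomi import vim

def set_cluster_protection(cluster, dpm_enabled, admission_control_enabled):
    spec = vim.cluster.ConfigSpecEx()
    spec.dpmConfig = vim.cluster.DpmConfigInfo(enabled=dpm_enabled)
    spec.dasConfig = vim.cluster.DasConfigInfo(
        admissionControlEnabled=admission_control_enabled)
    # modify=True merges this partial spec into the existing cluster configuration.
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))

# Before remediation: switch DPM and HA admission control off.
# set_cluster_protection(cluster, dpm_enabled=False, admission_control_enabled=False)
# ... remediate the hosts ...
# After remediation: restore both settings.
# set_cluster_protection(cluster, dpm_enabled=True, admission_control_enabled=True)
```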
If DPM has already put hosts in standby mode, Update Manager powers on the hosts before scanning, staging, and remediation. After the scanning, staging, or remediation is complete, Update Manager restores DPM and HA admission control and lets DPM put hosts into standby mode, if needed. Update Manager does not remediate powered-off hosts.
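The following sketch shows how hosts that DPM has placed in standby mode could be powered back on through the API, which mirrors what Update Manager does before scanning, staging, or remediation. The 'cluster' object and the connection are assumed to exist as in the earlier examples.

```python
# Minimal sketch: wake any cluster hosts that DPM left in standby mode.
# The 'cluster' object is assumed to be a vim.ClusterComputeResource.
from pyVim.task import WaitForTask

def wake_standby_hosts(cluster, timeout_sec=600):
    for host in cluster.host:
        # powerState values are "poweredOn", "poweredOff", "standBy", or "unknown".
        if host.runtime.powerState == "standBy":
            print(f"Powering on {host.name} from standby ...")
            WaitForTask(host.PowerUpHostFromStandBy_Task(timeoutSec=timeout_sec))
```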
If hosts are in standby mode and DPM is manually disabled for any reason, Update Manager does not power on or remediate those hosts.
Within a cluster, temporarily disable HA admission control to let vSphere vMotion proceed and to prevent downtime of the virtual machines on the hosts that you remediate. After the remediation of the entire cluster completes, Update Manager restores the HA admission control settings.
If FT is turned on for any of the virtual machines on hosts within a cluster, temporarily turn off FT before performing any Update Manager operations on the cluster. If FT is turned on for any of the virtual machines on a host, Update Manager does not remediate that host. Remediate all hosts in a cluster with the same updates, so that FT can be reenabled after the remediation. A primary virtual machine and a secondary virtual machine cannot reside on hosts with different ESXi versions and patch levels.
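For reference, the API exposes the fault tolerance state of each virtual machine, so you can list (and, with care, turn off) FT for the VMs on a host before remediation. The 'host' object below is an assumption, and FT should be reenabled only after all hosts in the cluster run the same version and patch level.

```python
# Minimal sketch: find FT-protected VMs on a host and optionally turn FT off
# before remediation. The 'host' object is assumed to be a vim.HostSystem.
from pyVim.task import WaitForTask

def ft_protected_vms(host):
    # Any state other than "notConfigured" means FT is set up for the VM.
    return [vm for vm in host.vm
            if vm.runtime.faultToleranceState != "notConfigured"]

def disable_ft(vms):
    for vm in vms:
        print(f"Turning off Fault Tolerance for {vm.name} ...")
        WaitForTask(vm.TurnOffFaultToleranceForVM_Task())
```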
Remediating hosts that are part of a vSAN cluster has the following specifics:
- The host remediation process might take an extensive amount of time to complete.
- By design, only one host from a vSAN cluster can be in maintenance mode at any time.
- Update Manager remediates hosts that are part of a vSAN cluster sequentially, even if you set the option to remediate the hosts in parallel.
If a host is a member of a vSAN cluster, and any virtual machine on the host uses a VM storage policy with a setting for "Number of failures to tolerate=0", the host might experience unusual delays when entering maintenance mode. The delay occurs because vSAN has to migrate the virtual machine data from one disk to another in the vSAN datastore cluster. The delay might last for hours. You can work around this issue by setting "Number of failures to tolerate=1" in the VM storage policy, which results in creating two copies of the virtual machine files in the vSAN datastore.
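When you place a vSAN host into maintenance mode yourself, the API lets you choose how vSAN handles the data on the host, which is where the "Number of failures to tolerate" setting matters. The sketch below assumes a pyVmomi connection and a 'host' object as in the earlier examples.

```python
# Minimal sketch: enter maintenance mode on a vSAN cluster member with an
# explicit vSAN data-handling choice. The 'host' object is an assumption.
from pyVim.task import WaitForTask
from pyVmomi import vim

def enter_vsan_maintenance(host, action="ensureObjectAccessibility"):
    # Valid actions: "noAction", "ensureObjectAccessibility", "evacuateAllData".
    # "ensureObjectAccessibility" keeps objects accessible, but may still move
    # data for VMs whose policy sets Number of failures to tolerate=0.
    spec = vim.host.MaintenanceSpec(
        vsanMode=vim.vsan.host.DecommissionMode(objectAction=action))
    WaitForTask(host.EnterMaintenanceMode_Task(
        timeout=0, evacuatePoweredOffVms=True, maintenanceSpec=spec))
```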