When you update vSphere objects in a cluster with DRS, VMware High Availability (HA), and VMware Fault Tolerance (FT) enabled, you can choose to temporarily disable VMware Distributed Power Management (DPM), HA admission control, and FT for the entire cluster. When the update completes, Update Manager restores these features.

Updates might require that a host enter maintenance mode during remediation. Virtual machines cannot run while a host is in maintenance mode. To ensure availability, vCenter Server can migrate virtual machines to other ESX/ESXi hosts within the cluster before the host enters maintenance mode. vCenter Server migrates the virtual machines only if the cluster is configured for vMotion and DRS is enabled.
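The precondition for this evacuation can be modeled as a small decision function. This is an illustrative sketch of the documented rule, not the vCenter Server API; the field names are hypothetical:

```python
def evacuation_plan(host_vms: list, cluster: dict) -> dict:
    """
    Model of the behavior described above: before a host enters maintenance
    mode, vCenter Server migrates its running virtual machines to other hosts
    in the cluster, but only if the cluster is configured for vMotion and DRS
    is enabled. Field names are illustrative, not the actual API.
    """
    if cluster["vmotion"] and cluster["drs_enabled"]:
        # VMs are moved off the host, so they keep running during remediation.
        return {"migrate": list(host_vms), "stay_powered_off": []}
    # Without vMotion and DRS, the VMs cannot run while the host is
    # in maintenance mode.
    return {"migrate": [], "stay_powered_off": list(host_vms)}
```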

You should enable Enhanced vMotion Compatibility (EVC) to help ensure vMotion compatibility between the hosts in the cluster. EVC ensures that all hosts in a cluster present the same CPU feature set to virtual machines, even if the actual CPUs on the hosts differ. EVC prevents vMotion migrations from failing because of incompatible CPUs. You can enable EVC only in a cluster where the host CPUs meet the compatibility requirements. For more information about EVC and the requirements that the hosts in an EVC cluster must meet, see vCenter Server and Host Management.
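The guarantee EVC provides can be thought of as the intersection of the hosts' CPU feature sets: the cluster presents only the common baseline, so a virtual machine never depends on a feature some host lacks. The following is a simplified model of that idea; the feature names are illustrative, not actual CPUID flags:

```python
def evc_baseline(host_features: dict) -> set:
    """Return the common feature set the cluster presents to virtual machines."""
    sets = iter(host_features.values())
    baseline = set(next(sets))
    for features in sets:
        baseline &= features          # keep only features every host supplies
    return baseline

def can_vmotion(vm_features: set, target_features: set) -> bool:
    """A migration succeeds only if the target host supplies every feature the VM uses."""
    return vm_features <= target_features

hosts = {
    "esx01": {"sse4.2", "aes", "avx"},
    "esx02": {"sse4.2", "aes"},       # older CPU without AVX
}
baseline = evc_baseline(hosts)
# A VM constrained to the baseline can migrate to any host in the cluster.
assert all(can_vmotion(baseline, features) for features in hosts.values())
```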

If a host has no running virtual machines, VMware DPM might put the host in standby mode and interrupt an Update Manager operation. To make sure that scanning and staging complete successfully, Update Manager disables VMware DPM during these operations. To ensure successful remediation, allow Update Manager to disable VMware DPM and HA admission control before the remediation operation begins. After the operation completes, Update Manager restores VMware DPM and HA admission control. Update Manager disables HA admission control before staging and remediation, but not before scanning.
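This disable-and-restore pattern can be sketched as a wrapper that records the cluster settings, turns off the features the operation requires, and restores the original values afterwards. The cluster dictionary and feature names are hypothetical stand-ins for illustration, not the Update Manager API:

```python
from contextlib import contextmanager

# Features disabled per operation, per the documented behavior: DPM for
# scan, stage, and remediate; HA admission control only for stage and remediate.
FEATURES_TO_DISABLE = {
    "scan": ["dpm"],
    "stage": ["dpm", "ha_admission_control"],
    "remediate": ["dpm", "ha_admission_control"],
}

@contextmanager
def features_suspended(cluster_settings: dict, operation: str):
    """Disable the relevant features, then restore the saved values."""
    saved = dict(cluster_settings)
    for feature in FEATURES_TO_DISABLE[operation]:
        cluster_settings[feature] = False
    try:
        yield cluster_settings
    finally:
        cluster_settings.update(saved)   # restore after the operation completes

cluster = {"dpm": True, "ha_admission_control": True}
with features_suspended(cluster, "scan") as c:
    # During a scan only DPM is disabled; HA admission control stays on.
    assert c["dpm"] is False and c["ha_admission_control"] is True
assert cluster == {"dpm": True, "ha_admission_control": True}
```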

If VMware DPM has already put hosts in standby mode, Update Manager powers on the hosts before scanning, staging, and remediation. After the scanning, staging, or remediation completes, Update Manager re-enables VMware DPM and HA admission control and lets VMware DPM put hosts into standby mode, if needed. Update Manager does not remediate powered-off hosts.

If hosts are in standby mode and VMware DPM has been manually disabled, Update Manager does not remediate the hosts or power them on.
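The standby-mode handling from the two paragraphs above reduces to one rule: a standby host is powered on for the operation only if VMware DPM is the component that put it there and is still enabled. A minimal sketch, with illustrative state names:

```python
def should_power_on(host_state: str, dpm_enabled: bool) -> bool:
    """
    Model of the standby handling described above: Update Manager powers on
    a host that VMware DPM put into standby mode before scanning, staging,
    or remediation, but skips standby hosts whose DPM was manually disabled.
    State names are illustrative.
    """
    return host_state == "standby" and dpm_enabled

assert should_power_on("standby", dpm_enabled=True) is True
assert should_power_on("standby", dpm_enabled=False) is False   # host is skipped
assert should_power_on("poweredOn", dpm_enabled=True) is False  # already running
```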

Within a cluster, temporarily disable HA admission control so that vMotion can proceed and the virtual machines on the hosts you remediate do not incur downtime. After the remediation of the entire cluster completes, Update Manager restores the HA admission control settings.

If FT is turned on for any of the virtual machines on hosts within a cluster, temporarily turn off FT before performing any Update Manager operations on the cluster. If FT is turned on for any of the virtual machines on a host, Update Manager does not remediate that host. Remediate all hosts in a cluster with the same updates, so that FT can be re-enabled after the remediation. A primary virtual machine and a secondary virtual machine cannot reside on hosts of different ESX/ESXi versions and patch levels.
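The placement constraint behind this recommendation can be expressed as a simple check: an FT pair is valid only when both hosts match in version and patch level, which is exactly what remediating the whole cluster with the same updates preserves. The dictionary layout here is illustrative:

```python
def ft_placement_allowed(primary_host: dict, secondary_host: dict) -> bool:
    """
    Model of the FT constraint described above: a primary and its secondary
    virtual machine must run on hosts with the same ESX/ESXi version and
    patch level. Field names are illustrative.
    """
    return (primary_host["version"] == secondary_host["version"]
            and primary_host["patch_level"] == secondary_host["patch_level"])

a = {"version": "5.5", "patch_level": "update2"}
b = {"version": "5.5", "patch_level": "update1"}   # remediated differently
assert ft_placement_allowed(a, a) is True
assert ft_placement_allowed(a, b) is False         # FT cannot be re-enabled
```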

Remediating hosts that are part of a Virtual SAN cluster has the following specifics:

  • The host remediation process might take an extensive amount of time to complete.

  • By design, only one host from a Virtual SAN cluster can be in maintenance mode at any time.

  • Update Manager remediates hosts that are part of a Virtual SAN cluster sequentially even if you select the option to remediate them in parallel.

  • If a host is a member of a Virtual SAN cluster, and any virtual machine on the host uses a VM storage policy with a setting for "Number of failures to tolerate=0", the host might experience unusual delays when entering maintenance mode. The delay occurs because Virtual SAN must migrate the virtual machine data from one disk to another within the Virtual SAN datastore, and it might last up to several hours. You can work around this by setting "Number of failures to tolerate=1" in the VM storage policy, which results in two copies of the virtual machine files in the Virtual SAN datastore.
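The ordering rule from the list above, that a Virtual SAN cluster is always remediated one host at a time regardless of the parallel remediation option, can be sketched as a batching function. This is a model of the documented behavior, not Update Manager code:

```python
def remediation_batches(hosts: list, is_vsan: bool, parallel_requested: bool) -> list:
    """
    Model of the rule above: only one host in a Virtual SAN cluster may be
    in maintenance mode at a time, so such hosts are remediated sequentially
    even when parallel remediation is selected. Returns the batches of hosts
    remediated together. Illustrative only.
    """
    if is_vsan or not parallel_requested:
        return [[h] for h in hosts]      # one host in maintenance mode at a time
    return [list(hosts)]                 # non-vSAN cluster: one parallel batch

batches = remediation_batches(["esx01", "esx02", "esx03"],
                              is_vsan=True, parallel_requested=True)
assert batches == [["esx01"], ["esx02"], ["esx03"]]
assert max(len(b) for b in batches) == 1   # never more than one at a time
```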