vSphere Cluster Services (vCLS) is activated by default and runs in all vSphere clusters. vCLS ensures that if vCenter Server becomes unavailable, cluster services remain available to maintain the resources and health of the workloads that run in the clusters. vCenter Server is still required to configure and run DRS and HA.
In vSphere 8.0 U3, Embedded vCLS is introduced with new features and functionality. Originally, the vCLS component was a full virtual machine running Photon OS. As of vSphere 8.0 U3, vCLS is based on vSphere Pod technology, sometimes referred to as PodCRX. In this document, the PodCRX-based vCLS components are referred to as vCLS VMs. The API interfaces for vCLS have not changed, but the deployment and management of the vCLS components have, which warranted a name change. Following the precedent set when Platform Services Controllers went from External to Embedded, vCLS in releases earlier than vSphere 8.0 U3 is now referred to as External vCLS, and vCLS starting with vSphere 8.0 U3 is referred to as Embedded vCLS.
vCLS is upgraded as part of the vCenter Server upgrade. For Embedded vCLS, the upgrade is tied to the upgrade of the ESXi hosts in the cluster. Embedded vCLS is activated once vCenter Server is at 8.0 U3 and only in clusters that contain one or more 8.0 U3 ESXi hosts. An 8.0 U3 vCenter Server can therefore run External vCLS and Embedded vCLS side by side, depending on cluster composition.
vCenter deploys Embedded vCLS on clusters whose hosts run ESX 8.0 U3 or later. For clusters of hosts on earlier releases, vCenter deploys External vCLS. The vCenter inventory allows multiple versions of ESX to coexist, which happens during a rolling ESX upgrade. For mixed-version clusters, vCenter uses Embedded vCLS whenever any available host in the cluster supports it. This creates a point at which a cluster is upgraded from External vCLS to Embedded vCLS: when a host exits Maintenance Mode after upgrading to a supported version. vCenter makes this upgrade seamless by waiting until the first Embedded vCLS VM becomes available before it deactivates External vCLS and destroys those VMs. Conversely, if all supported hosts become unavailable and only unsupported hosts remain, vCenter can revert to External vCLS. Because this reversion is triggered by a host becoming unavailable, it can lead to a period of DRS unavailability between Embedded vCLS teardown and External vCLS deployment.
vCLS uses agent virtual machines to maintain cluster services health. The vCLS agent virtual machines (vCLS VMs) are created when you add hosts to clusters. In External vCLS, up to three vCLS VMs are required to run in each vSphere cluster, distributed across its hosts. External vCLS is also activated on clusters that contain only one or two hosts; in these clusters the number of vCLS VMs is one and two, respectively. In Embedded vCLS, up to two vCLS VMs are required to run in each vSphere cluster.
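To make these counts concrete, here is a minimal Python sketch; the helper name is ours, and the assumption that small Embedded vCLS clusters run one vCLS VM per host mirrors the External behavior described above.

```python
def expected_vcls_vm_count(num_hosts: int, embedded: bool) -> int:
    """Return the number of vCLS VMs vCenter deploys for a cluster.

    External vCLS caps the count at 3, Embedded vCLS at 2; clusters
    with fewer hosts than the cap get one vCLS VM per host.
    """
    if num_hosts <= 0:
        return 0
    cap = 2 if embedded else 3
    return min(cap, num_hosts)

# External vCLS: 1, 2, and 3 hosts -> 1, 2, 3 vCLS VMs; larger clusters -> 3
assert [expected_vcls_vm_count(n, embedded=False) for n in (1, 2, 3, 8)] == [1, 2, 3, 3]
# Embedded vCLS caps at 2
assert [expected_vcls_vm_count(n, embedded=True) for n in (1, 2, 8)] == [1, 2, 2]
```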
New anti-affinity rules are applied automatically. In External vCLS, a check is performed every three minutes; if multiple vCLS VMs are located on a single host, they are automatically redistributed to different hosts. In Embedded vCLS, this check is performed every minute.
vCLS VMs run in every cluster even if cluster services like vSphere DRS or vSphere HA are not activated on the cluster. The life cycle operations of vCLS VMs are managed by vCenter Server services like ESX Agent Manager and Workload Control Plane. vCLS VMs do not support NICs.
A cluster activated with vCLS can contain ESXi hosts of different versions if the ESXi versions are compatible with vCenter Server. vCLS works with vSphere Lifecycle Manager clusters.
Embedded vCLS
Embedded vCLS, introduced in vSphere 8.0 U3, brings new features and improvements.
Differences between External vCLS and Embedded vCLS
In External vCLS, the term quorum signifies the minimum number of powered-on vCLS VMs necessary for DRS to function, which is 1. Embedded vCLS uses the term minimum vCLS VM count instead; its purpose and value of 1 are unchanged. To improve the availability of DRS, both types of vCLS provide redundancy for these VMs: if a host with one vCLS VM fails, other vCLS VMs are available to preserve the functionality of DRS. The total number of vCLS VMs deployed in a cluster, assuming there are enough available hosts and resources to deploy redundant vCLS VMs, is called the desired redundancy count. For External vCLS the desired redundancy count is 3; for Embedded vCLS it is reduced to 2.
What is a vCLS VM?
The vCLS VM introduced in vSphere 8.0 Update 3 is based on vSphere Pod technology. The vCLS service is deployed as a virtual machine on an ESXi host in the cluster. However, this is a special VM with a very minimal operating system that uses a container runtime, which is what allows the vCLS VM to run on vSphere. The new vCLS VMs deliver a secure, high-performance runtime.
vCLS VM power off
It is possible to power off Embedded vCLS VMs. When such a VM is powered off, vSphere interprets this condition as a failure, restarts the VM, and re-registers the vCLS VM with hostd.
After a vCLS VM is powered off, the vCLS service waits for vSphere to restart it. If vSphere does not restart the VM within a timeout, vCLS VM reconfiguration is initiated.
Whenever an Embedded vCLS VM is evacuated by the system, for example when its host enters Maintenance Mode, the VM is destroyed and replaced on another host instead of being moved with vMotion.
vCLS VMs
Visibility and access to vCLS VMs in Embedded vCLS has parity with External vCLS.
- vCLS VMs are visible in VC.
- In the VC inventory hierarchy, vCLS VMs reside in a dedicated VM folder named vCLS.
- Some operations on vCLS VMs are unavailable.
- Query requests to vCLS VMs are available.
- The naming of an Embedded vCLS VM is the same as for External vCLS VMs: vCLS-{UUID}, where UUID is the UUID of the ESX host (summary.hardware.uuid); see the sketch after this list.
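Because the name is derived from the host hardware UUID, the expected Embedded vCLS VM name for each host can be computed with pyVmomi. A minimal sketch, assuming placeholder credentials and an unverified TLS context for a lab setup:

```python
# Sketch: derive the expected Embedded vCLS VM name for every host,
# using the vCLS-{UUID} naming convention described above.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        uuid = host.summary.hardware.uuid
        print(f"{host.name}: expected vCLS VM name vCLS-{uuid}")
    view.DestroyView()
finally:
    Disconnect(si)
```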
VM ExtraConfig
To identify vCLS VMs, Embedded vCLS continues to use a specific option in VM ExtraConfig.
The ExtraConfig option identifying an Embedded vCLS VM is vCLSCRX.agent, with a value of true.
Embedded vCLS changes the extension that manages these VMs to be the internal/system vpxd extension.
- The property managedBy in the VM configuration of an Embedded vCLS VM is set to the key of that VC extension. This property helps identify Embedded vCLS VMs and differentiate them from External vCLS VMs, because it is visible in vSphere.
- The property config.managedBy.extensionKey is set to VirtualCenter.
- The property config.managedBy.type is set to vcls-entity; the sketch after this list combines these markers to classify vCLS VMs.
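Putting the ExtraConfig key and the managedBy properties together, the following pyVmomi sketch classifies vCLS VMs in the inventory. It reuses the connected ServiceInstance si from the previous example; the name-prefix filter is a convenience assumption, not an official marker.

```python
# Sketch: classify vCLS VMs as Embedded or External using the markers
# described above (the vCLSCRX.agent ExtraConfig key and the
# VirtualCenter/vcls-entity managedBy values).
from pyVmomi import vim

def is_embedded_vcls(vm):
    """True if the VM carries the Embedded vCLS markers."""
    if vm.config is None:  # config may be unavailable for inaccessible VMs
        return False
    extra = {opt.key: str(opt.value).lower() for opt in (vm.config.extraConfig or [])}
    if extra.get("vCLSCRX.agent") == "true":
        return True
    mb = vm.config.managedBy
    return bool(mb and mb.extensionKey == "VirtualCenter" and mb.type == "vcls-entity")

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.name.startswith("vCLS"):
        print(f"{vm.name}: {'Embedded' if is_embedded_vcls(vm) else 'External'} vCLS VM")
view.DestroyView()
```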
Disconnected vCLS VMs
A vCLS VM may become disconnected in VC. There are various scenarios where this can happen; for example:
- Start with an ESX host running a vCLS VM.
- Simulate a connection problem by stopping vpxa on the host; the connection state transitions to notResponding.
- Stop the vCLS VM on the host.
- Restart vpxa on the host; the connection state transitions to connected.
After the above steps, the vCLS VM is in the VC inventory but not on the ESX host, so the VM is marked as orphaned. In Embedded vCLS, HdcsManager is responsible for detecting and deleting disconnected vCLS VMs. In External vCLS, vCLS itself detects and deletes disconnected vCLS VMs.
Cluster virtual property vclsVmType
vSphere provides information about whether a cluster is running vCLS VMs managed by External vCLS or by Embedded vCLS. The value of this property is Embedded for an Embedded vCLS VM cluster and external for an External vCLS VM cluster.
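Reading this property from a script might look like the following pyVmomi sketch. How (and whether) vclsVmType surfaces in the pyVmomi bindings is an assumption here, so the sketch probes the cluster object and degrades gracefully:

```python
# Sketch: read the cluster's vclsVmType virtual property. The binding path
# is an assumption; getattr falls back to None if the property is not
# exposed by your pyVmomi version.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    vcls_type = getattr(cluster, "vclsVmType", None)
    print(f"{cluster.name}: vclsVmType = {vcls_type!r}")
view.DestroyView()
```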
Cleaning up vCLS VM
If a cluster ESX host running vCLS VMs loses connectivity to VC and is removed from the VC inventory, the vCLS VMs continue to run on that ESX host. In other words, an unmanaged ESX host may end up running vCLS VMs.
When an unmanaged ESX host with running vCLS VMs is added as a standalone host, vpxd detects the unexpected presence of the vCLS VMs and stops them.
When an unmanaged ESX host with running vCLS VMs is added to a VC cluster, vCLS does not stop the vCLS VMs before the host is added. Adding the host triggers the vCLS reconfiguration workflow, which stops the extraneous vCLS VMs.
Deactivate vCLS
External vCLS implements a facility that allows you to deactivate vCLS VM deployment on a per-cluster basis. In a sense, this facility is a stop switch for deactivating External vCLS functionality in case of unforeseen circumstances. Several sample scenarios where this functionality can be useful:
- The cluster has both DRS and HA deactivated. In this case, vCLS VMs provide no benefit, and you may not want system VMs present in the VC inventory.
- The cluster has DRS deactivated and HA activated. In this case vCLS VMs do provide some help with optimally failing over VMs, but you might not want system VMs in the VC inventory for that marginal benefit.
- The cluster has DRS activated. In this case you may want to temporarily deactivate External vCLS deployment to resolve a transient configuration or run-time issue. For example, you might want to deactivate External vCLS until a vSAN datastore becomes available, to ensure that the vCLS VMs are deployed on that vSAN datastore.
Embedded vCLS preserves the same functionality to activate or deactivate vCLS for a cluster.
vSphere DRS and vCLS VMs
vSphere DRS is a critical feature of vSphere, required to maintain the health of the workloads running inside a vSphere cluster. DRS depends on the availability of vCLS VMs.
DRS being non-functional does not mean that DRS is deactivated. vCLS health turns Unhealthy only in a DRS-activated cluster when vCLS VMs are not running and the first instance of DRS is skipped as a result. vCLS health stays Degraded on a non-DRS-activated cluster when at least one vCLS VM is not running. vCenter has measures to avoid entering a degraded state by accident. For example, cluster Maintenance Mode recommendations do not include options that put all vCLS-capable hosts into Maintenance Mode at once.
Datastore selection for External vCLS
The datastore for vCLS VMs is automatically selected based on ranking all the datastores connected to the hosts inside the cluster.
A datastore is more likely to be selected if there are hosts in the cluster with free reserved DRS slots connected to it. The algorithm tries to place vCLS VMs on a shared datastore, if possible, before selecting a local datastore. A datastore with more free space is preferred, and the algorithm tries not to place more than one vCLS VM on the same datastore. You can change the datastore of vCLS VMs only after they are deployed and powered on.
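The exact weighting is internal to vCenter, but the stated preferences can be sketched as a sort key. Everything below, including the field names and the tuple ordering, is an illustrative assumption rather than the actual implementation:

```python
from dataclasses import dataclass

@dataclass
class DatastoreInfo:
    name: str
    shared: bool                # connected to more than one host in the cluster
    hosts_with_free_slots: int  # connected hosts with free reserved DRS slots
    free_space_gb: float
    vcls_vms_placed: int        # vCLS VMs already placed on this datastore

def rank_datastores(candidates: list[DatastoreInfo]) -> list[DatastoreInfo]:
    """Order datastores by the documented preferences: shared first, then
    more hosts with free slots, more free space, and fewer vCLS VMs
    already placed (hence the negation under reverse sorting)."""
    return sorted(
        candidates,
        key=lambda d: (d.shared, d.hosts_with_free_slots,
                       d.free_space_gb, -d.vcls_vms_placed),
        reverse=True,
    )
```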
If you want to move the VMDKs for vCLS VMs to a different datastore or attach a different storage policy, you can reconfigure vCLS VMs. A warning message is displayed when you perform this operation.
You can perform a storage vMotion to migrate vCLS VMs to a different datastore. You can tag vCLS VMs or attach custom attributes to them if you want to group them separately from workload VMs, for instance if you have a specific metadata strategy for all VMs that run in a data center.
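Such a storage vMotion can be scripted with the standard relocate API. A minimal pyVmomi sketch, where the VM and datastore names are placeholders and si is a connected ServiceInstance as in the earlier examples:

```python
# Sketch: storage vMotion one vCLS VM to another datastore with the
# standard relocate API.
from pyVmomi import vim
from pyVim.task import WaitForTask

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with that name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next((obj for obj in view.view if obj.name == name), None)
    finally:
        view.DestroyView()

content = si.RetrieveContent()
vm = find_by_name(content, vim.VirtualMachine, "vCLS-42121234-abcd-...")  # placeholder
target_ds = find_by_name(content, vim.Datastore, "shared-ds-01")          # placeholder

# Relocate only the storage; the host and resource pool stay unchanged.
spec = vim.vm.RelocateSpec(datastore=target_ds)
WaitForTask(vm.RelocateVM_Task(spec))
```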
If you try to put a datastore that hosts vCLS VMs into maintenance mode, warnings similar to the following are displayed:

- The enter maintenance mode task will start but cannot finish because there is 1 virtual machine residing on the datastore. You can always cancel the task in your Recent Tasks if you decide to continue.
- The selected datastore might be storing vSphere Cluster Services VMs which cannot be powered off. To ensure the health of vSphere Cluster Services, these VMs have to be manually vMotioned to a different datastore within the cluster prior to taking this datastore down for maintenance. Refer to this KB article: KB 79892.

Select the check box Let me migrate storage for all virtual machines and continue entering maintenance mode after migration to proceed.
External vCLS Datastore Placement
You can override default vCLS VM datastore placement.
vSphere Cluster Services (vCLS) VM datastore location is chosen by a default datastore selection logic. To override the default vCLS VM datastore placement for a cluster, you can specify a set of allowed datastores by browsing to the cluster and clicking ADD under the cluster's vCLS datastore settings. Some datastores cannot be selected for vCLS because they are blocked by solutions such as SRM, or by vSAN maintenance mode, where vCLS cannot be configured. Users cannot add or remove solution-blocked datastores for vCLS VMs.
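For scripting the same override, the cluster reconfigure API exposes a system-VMs section. The following pyVmomi sketch is hedged: treat the systemVMsConfig field names as an assumption to verify against your vSphere API version.

```python
# Sketch: add one datastore to a cluster's allowed vCLS datastores via
# the cluster reconfigure API. Field names under systemVMsConfig are an
# assumption to verify against your vSphere API reference.
from pyVmomi import vim
from pyVim.task import WaitForTask

def allow_vcls_datastore(cluster, ds):
    """Append `ds` to the cluster's allowed vCLS datastore list."""
    update = vim.cluster.DatastoreUpdateSpec(operation="add", datastore=ds)
    spec = vim.cluster.ConfigSpecEx(
        systemVMsConfig=vim.cluster.SystemVMsConfigSpec(allowedDatastores=[update]))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
```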
Monitoring vSphere Cluster Services
You can monitor the resources consumed by vCLS VMs and their health status.
vCLS VMs are not displayed in the inventory tree in the Hosts and Clusters tab. vCLS VMs from all clusters within a data center are placed inside a separate VMs and templates folder named vCLS. This folder and the vCLS VMs are visible only in the VMs and Templates tab of the vSphere Client. These VMs are identified by a different icon than regular workload VMs. You can view information about the purpose of the vCLS VMs in the Summary tab of the vCLS VMs.
You can monitor the resources consumed by vCLS VMs in the Monitor tab.
You can monitor the health status of vCLS in the Cluster Services portlet displayed in the Summary tab of the cluster.
Status | Color Coding | Summary
---|---|---
Healthy | Green | If there is at least one vCLS VM running, the status remains healthy, regardless of the number of hosts in the cluster.
Degraded | Yellow | If there is no vCLS VM running for less than 3 minutes (180 seconds), the status is degraded.
Unhealthy | Red | If there is no vCLS VM running for 3 minutes or more, the status is unhealthy in a DRS enabled cluster.
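The thresholds in this table can be approximated from outside vCenter by watching vCLS VM power states. In the sketch below, the 180-second bookkeeping and the name-based discovery are our own scaffolding, not vCenter's internal health computation:

```python
import time
from typing import Optional
from pyVmomi import vim

def vcls_health(cluster: vim.ClusterComputeResource,
                none_running_since: Optional[float]) -> str:
    """Approximate the portlet's states from vCLS VM power state.

    `none_running_since` is the timestamp when we first observed zero
    powered-on vCLS VMs in the cluster (None if one was running).
    """
    vcls_vms = [vm for vm in cluster.resourcePool.vm if vm.name.startswith("vCLS")]
    if any(vm.runtime.powerState == vim.VirtualMachine.PowerState.poweredOn
           for vm in vcls_vms):
        return "Healthy"
    if none_running_since is None or time.time() - none_running_since < 180:
        return "Degraded"
    return "Unhealthy"  # applies to a DRS enabled cluster
```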
Maintaining Health of vSphere Cluster Services
vCLS VMs are always powered-on because vSphere DRS depends on the availability of these VMs. These VMs should be treated as system VMs. Only administrators can perform selective operations on vCLS VMs. To avoid failure of cluster services, avoid performing any configuration or operations on the vCLS VMs.
vCLS VMs are protected from accidental deletion. Cluster VMs and folders are protected from modification by users, including administrators.
Only users who are part of the Administrators SSO group can perform the following operations:
- ReadOnly access for vCLS VMs
- Use tags and custom attributes for vCLS VMs
Operations that might disrupt the healthy functioning of vCLS VMs:
- Changing the power state of the vCLS VMs
- Resource reconfiguration of the vCLS VMs such as changing CPU, Memory, Disk size, Disk placement
- VM encryption
- Triggering vMotion of the vCLS VMs
- Changing the BIOS
- Removing the vCLS VMs from the inventory
- Deleting the vCLS VMs from disk
- Enabling FT of vCLS VMs
- Cloning vCLS VMs
- Configuring PMem
- Moving vCLS VM to a different folder
- Renaming the vCLS VMs
- Renaming the vCLS folders
- Enabling DRS rules and overrides on vCLS VMs
- Enabling HA admission control policy on vCLS VMs
- Enabling HA overrides on vCLS VMs
- Moving vCLS VMs to a resource pool
- Recovering vCLS VMs from a snapshot
When you perform any disruptive operation on the vCLS VMs, a warning dialog box appears.
Troubleshooting
The health of vCLS VMs, including power state, is managed by VMware ESX Agent Manager and Workload Control Plane services. In case of power on failure of vCLS VMs, or if the first instance of DRS for a cluster is skipped due to lack of quorum of vCLS VMs, a banner appears in the cluster summary page along with a link to a Knowledge Base article to help troubleshoot the error state.
Because vCLS VMs are treated as system VMs, you do not need to back up or snapshot these VMs. The health state of these VMs is managed by vCenter Server services.
Putting a Cluster in Retreat Mode
When a datastore that hosts vCLS VMs is placed in maintenance mode, you must manually storage vMotion the vCLS VMs to a new location or put the cluster in Retreat Mode.
Procedure
- Log in to the vSphere Client.
- Navigate to the cluster on which vCLS must be deactivated.
- Navigate to the vCenter Server Configure tab.
- Under Configuration, select General.
- Select either the default System Managed option or Retreat Mode, which deactivates vCLS.
- Click OK.
Results
vSphere HA does not perform optimal placement during a host failure scenario. HA depends on DRS for placement recommendations. HA will still power on the VMs, but they might be placed on a less optimal host.
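Retreat Mode can also be toggled programmatically through the per-cluster vCenter advanced setting that KB 80472 documents for earlier releases. A pyVmomi sketch, assuming si and cluster objects obtained as in the earlier examples:

```python
# Sketch: toggle Retreat Mode for one cluster through the vCenter advanced
# setting documented in KB 80472 (config.vcls.clusters.<domain-id>.enabled).
from pyVmomi import vim

def set_retreat_mode(si, cluster, enable_retreat):
    """Setting the per-cluster key to "false" puts the cluster in Retreat
    Mode; "true" returns it to the System Managed behavior."""
    key = f"config.vcls.clusters.{cluster._moId}.enabled"
    option = vim.option.OptionValue(key=key, value="false" if enable_retreat else "true")
    si.RetrieveContent().setting.UpdateOptions(changedValue=[option])
```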
Retrieving Password for External vCLS
You can retrieve the password to log in to the vCLS VMs.
To ensure cluster services health, avoid accessing the vCLS VMs. This procedure is intended for explicit diagnostics on vCLS VMs.
Procedure
Results
With the retrieved password, you can log into the vCLS VMs.
vCLS VM Anti-Affinity Policies
vSphere supports anti-affinity between vCLS VMs and another group of workload VMs.
Compute policies provide a way to specify how the vSphere Distributed Resource Scheduler (DRS) should place VMs on hosts in a resource pool. Use the vSphere Compute Policies editor to create and delete compute policies. You can create or delete, but not modify, a compute policy. If you delete a category tag used in the definition of a policy, the policy is also deleted. Open the VM Summary page in vSphere to view the compute policies that apply to a VM and its compliance status with each policy.

You can create a compute policy for a group of workload VMs that is anti-affine to the group of vCLS VMs. A vCLS anti-affinity policy has a single user-visible tag that identifies the group of workload VMs; the group of vCLS VMs is recognized internally.
Create or Delete a vCLS VM Anti-Affinity Policy
A vCLS VM anti-affinity policy describes a relationship between a category of VMs and vCLS system VMs.
A vCLS VM anti-affinity policy discourages placement of vCLS VMs and application VMs on the same host. This kind of policy can be useful when you do not want vCLS VMs and virtual machines running critical workloads on the same host. Some best practices for running critical workloads such as SAP HANA require dedicated hosts. After the policy is created, the placement engine attempts to place vCLS VMs on hosts where the policy VMs are not running.
- If the policy applies to multiple VMs on different hosts and there are not enough hosts to distribute the vCLS VMs, the vCLS VMs are consolidated onto the hosts without policy VMs.
- If a provisioning operation specifies a destination host, that specification is always honored even if it violates the policy. DRS will try to move the vCLS VMs to a compliant host in a subsequent remediation cycle.
Procedure
- Create a category and tag for each group of VMs that you want to include in a vCLS VM anti-affinity policy.
- Tag the VMs that you want to include.
- Create a vCLS VM anti-affinity policy.
- From the vSphere Client, open the Compute Policies editor.
- Click Add to open the New Compute Policy Wizard.
- Fill in the policy Name and choose vCLS VM anti affinity from the Policy type drop-down control.
The policy Name must be unique.
- Provide a Description of the policy, then use VM tag to choose the Category and Tag to which the policy applies.
Unless you have multiple VM tags associated with a category, the wizard fills in the VM tag after you select the tag Category.
- Click Create to create the policy.
- (Optional) To delete a compute policy, open the Compute Policies editor, where each policy is shown as a card, and click DELETE on the policy you want to remove.