The VMware Cloud Director placement engine determines on which resources, including resource pools, datastores, and networks or network pools, to place the virtual machines (VMs) in a vApp. The engine makes the placement decision independently for each VM in a vApp, based on the requirements of each VM.

The placement engine runs in the following scenarios.
Note: VMware Cloud Director places the VMs inside a vApp independently based on the requirements for each VM.
  • When you create a VM, the placement engine determines on which resource pool, datastore, and network pool to place it.
  • When you start a VM, if the VM fails to power on, VMware Cloud Director can selectively move the VM to another resource pool, datastore, or network pool.
  • When you edit a VM and change its datastore, resource, or network configuration, VMware Cloud Director might move the VM to a different datastore and resource pool that support the new VM settings. VMware Cloud Director moves a VM only when the current resources cannot support the new requirements.
  • When you migrate VMs to different resource pools.
  • When an organization virtual data center (VDC) discovers VMs created in any vCenter resource pool that backs the VDC, and the system constructs a simplified vApp to contain each discovered VM.
The placement engine uses the following criteria to select candidate resource pools for a VM.
  • CPU capacity
  • Memory capacity
  • Number of virtual CPUs
  • Hardware version supported by the host and allowed by the provider VDC
  • Affinity rules

The placement engine filters out deactivated resource pools from the candidate list. When possible, VMware Cloud Director places VMs on the same host cluster as other VMs in the organization VDC.
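The candidate selection described above can be sketched as a simple filter over resource pools. This is a minimal illustration under assumed, simplified inputs; the class and field names are hypothetical, not the VMware Cloud Director data model, and affinity rules are omitted.

```python
from dataclasses import dataclass

# Hypothetical, simplified model; these field names are illustrative
# assumptions, not the VMware Cloud Director data model.
@dataclass
class ResourcePool:
    name: str
    enabled: bool
    free_cpu_mhz: int
    free_memory_mb: int
    max_hardware_version: int

@dataclass
class VmRequirements:
    cpu_mhz: int
    memory_mb: int
    hardware_version: int

def candidate_resource_pools(pools, vm):
    """Keep only activated pools that satisfy the CPU capacity, memory
    capacity, and hardware-version requirements of the VM."""
    return [
        p for p in pools
        if p.enabled
        and p.free_cpu_mhz >= vm.cpu_mhz
        and p.free_memory_mb >= vm.memory_mb
        and p.max_hardware_version >= vm.hardware_version
    ]
```

A deactivated pool is dropped regardless of its capacity, mirroring how the engine filters out deactivated resource pools before evaluating any other criteria.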

Important: VM-Host affinity rules must have a condition that the rule must run on hosts in group. VM-Host anti-affinity rules must have a condition that the rule must not run on hosts in group.
The placement engine uses the following criteria to select candidate datastores for VMs.
  • Storage capacity and thresholds
  • Storage policies
  • Affinity requirements between VMs
  • If IOPS is activated, IOPS capacity and VM disk IOPS
There are two datastore thresholds in VMware Cloud Director.
  • Red threshold - the amount of free space on a datastore below which VMware Cloud Director filters out the datastore during the placement of any entity, such as a VM, a template, or a disk.

    When a datastore reaches its red threshold, the workload placement engine stops placing new VMs on the datastore except while importing VMs from vCenter. In the case of VM import, if the vCenter VM already resides on a datastore that has breached the red threshold, the placement engine prefers the existing datastore.

    The workload placement engine uses the red threshold for all workflows. When making a request for any new placement, the placement engine first filters out any datastores or storage pods that have breached the red threshold. When making a placement request for an existing entity, if the disks reside on datastores that have breached the red threshold, VMware Cloud Director relocates the disks to other available datastores. Then, the engine selects a datastore out of the remaining datastores or storage pods, either through the selector logic of VMware Cloud Director or from the vSphere Storage DRS recommendations.

  • Yellow threshold - the amount of free space on the datastore, below which VMware Cloud Director filters out the datastore during the placement of shadow VMs from which VMware Cloud Director creates fast-provisioned VMs. For more information on shadow VMs, see Fast Provisioning of Virtual Machines.

    The yellow threshold does not apply to the linked clones that VMware Cloud Director uses for fast provisioning of VMs. When the placement engine selects a datastore for a linked clone, if the selected datastore is missing a shadow VM, VMware Cloud Director creates a shadow VM on the datastore. The threshold does not apply to the shadow VM in this case.

    The yellow threshold applies only to the periodic background job that creates shadow VMs. If activated, the job runs every 24 hours and eagerly creates shadow VMs on each datastore for a given hub and storage policy pair. To activate the job for eager provisioning of shadow VMs, you must set the following property to true.
    vcloud.catalog.fastProvisioning=true
    Note: The periodic background job creates shadow VMs on all datastores for all templates. The job increases the storage consumption even when you are not using the datastores or shadow VMs.

In most cases, the placement engine filters out deactivated and red threshold datastores from the candidate list. The engine does not filter out these datastores when you import a VM from vCenter.

When implementing the threshold logic, VMware Cloud Director does not evaluate the requirements of the current placement subject. For the workload placement engine to place a subject on a datastore, the available space in bytes must be more than the threshold in bytes. For example, for a datastore with available capacity of 5 GB with a red threshold set at 4 GB, the placement engine can place a VM with a requirement for 2 GB. If the VM creation breaches the threshold, the placement engine filters out the datastore for further placements.
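The threshold comparison described above can be sketched as follows. This is a minimal illustration of the stated byte-level check under assumed inputs; the function name is hypothetical.

```python
GB = 1024 ** 3

def passes_red_threshold(free_bytes, red_threshold_bytes):
    """The engine compares the available space directly against the
    threshold; the size of the current placement subject is NOT part
    of the check."""
    return free_bytes > red_threshold_bytes

# 5 GB free with a 4 GB red threshold: the datastore is a valid
# candidate, even for a 2 GB VM whose creation will breach the threshold.
assert passes_red_threshold(5 * GB, 4 * GB)

# After the 2 GB VM is placed, 3 GB remain, so the datastore is
# filtered out for further placements.
assert not passes_red_threshold(3 * GB, 4 * GB)
```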

The placement engine uses the network name to select candidate network pools for a vApp and its VMs.

After the placement engine selects a set of candidate resources, it ranks the resources and picks the best location for each VM based on the CPU, virtual RAM, and storage configuration of each VM.

While ranking resources, the placement engine examines the current and estimated future resource use. Estimated future use is calculated based on powered-off VMs currently placed on a given resource pool and their expected use after they are powered on. For CPU and memory, the placement engine considers the current unreserved capacity, the maximum use, and the estimated future unreserved capacity. For storage, the engine considers the aggregate provisioned capacity provided by the cluster that each resource pool belongs to. The placement engine then considers the weighted metrics of the current and future suitability of each resource pool.

When a move is necessary, the placement engine favors resource pools with the most unreserved capacity for CPU, memory, and storage. It also gives lower preference to yellow clusters so that yellow clusters are only selected if no healthy cluster is available that satisfies the placement criteria. When importing a VM from vCenter, if the VM placement is satisfactory, to minimize movement, the engine ignores the thresholds.

When you power on a VM, VMware Cloud Director tries to power it on in its current location. If vCenter reports an error with the host CPU and memory capacity, VMware Cloud Director tries the resource pool twice before trying to move the VM to other compatible resource pools in the organization VDC. While rerunning the placement engine to find a compatible resource pool, VMware Cloud Director excludes previously tried and failed resource pools. If no suitable resource pools are connected to the datastore on which the VMDKs are located, moving a VM to another resource pool can cause the migration of the VMDKs of the VM to a different datastore.

If the VM placement fails in all locations that satisfy the requirements of the VM, VMware Cloud Director returns an error message that the placement is not feasible. If there is an affinity to the current datastore and the datastore is unavailable, the placement engine returns the same error. This is a normal state of the system when it operates near full capacity and the proposed solution does not meet all the requirements at the current time. To remediate the error, you can add or free up resources and retry the operation. When no specific datastore is required, the placement engine selects a datastore in the candidate host cluster or resource pool that fulfills the other requirements, such as storage policy, storage capacity, and IOPS capacity.

During concurrent deployment situations when a resource pool is close to capacity, the validation of that resource pool might succeed even though the resource pool lacks the resources to support the VM. In these cases, the VM cannot power on. If a VM fails to power on in this situation and there is more than one resource pool backing the VDC, to prompt VMware Cloud Director to migrate the VM to a different resource pool, start the power on operation again.

When the cluster that a resource pool belongs to is close to capacity, a VM on that resource pool might not be able to power on if no individual host has the capacity to power on the VM. This happens as a result of capacity fragmentation at the cluster level. In such cases, a system administrator must migrate a few VMs out of the cluster so that the cluster maintains sufficient available capacity.

VM Placement Engine Algorithm

The placement algorithm picks a host cluster from the list of host clusters that have the required storage profiles available and that satisfy any existing VM-VM affinity, VM-Host affinity, or anti-affinity rules. VMware Cloud Director calculates the placement solution through various scores. To change the behavior of the engine, you can use the cell management tool to modify the configurable parameters, which appear in the formulas with a leading underscore (_).
  1. For each host cluster the workload placement engine calculates a capacityScore, futureCapacityScore, and reservationScore. The placement engine calculates each score separately for CPU, memory, and storage.
    capacityScore: (not available in some cases)
    
    CPU = (cpuUnreservedCapacityMHz - (cpuBurstMHz * _cpuBurstRatio)) / cpuRequirementMHz 
    Memory = (memoryUnreservedCapacityMB - (memBurstMB * _memoryBurstRatio)) / memRequirementMB 
    Storage = storageFreeCapacityMB / stgRequirementMB 
    
    futureCapacityScore: (not available in some cases) 
    
    CPU = (cpuUnreservedCapacityMHz - (cpuUndeployedReservationMHz * _futureDeployRatio)) / cpuRequirementMHz 
    Memory = (memoryUnreservedCapacityMB - (memUndeployedReservationMB * _futureDeployRatio)) / memRequirementMB 
    Storage = storageFreeCapacityMB / stgRequirementMB 
    
    reservationScore: (used for capacityScore and futureCapacityScore when those scores are unavailable) 
    
    CPU = cpuUnreservedCapacityMHz / cpuRequirementMHz 
    Memory = memoryUnreservedCapacityMB / memRequirementMB 
    Storage = storageFreeCapacityMB / stgRequirementMB 
  2. For each host cluster, the placement engine calculates a weightedCapacityScore for CPU, memory, and storage.
    weightedCapacityScore = capacityScore * _currentScoreWeight + futureCapacityScore * (1 - _currentScoreWeight)

    Each weightedCapacityScore is a ratio from 0 through 1, with higher values representing more available resources in the host cluster. Because weightedCapacityScore values are a unit-less measure of availability, they can be compared across different resource types, for example, CPU, memory, and storage.

  3. The placement engine verifies that there are enough resources for CPU, memory, and storage.
    totalAvailable * _[memory|cpu|storage]HeadRoom < free/unreserved capacity
  4. The placement engine sorts the list of host clusters on the weightedCapacityScore so that the least constrained host cluster is first and the most constrained host cluster is last.
  5. The placement engine processes each host cluster in the list.
    • If the host cluster must be avoided, for example, because of an anti-affinity rule, the engine adds it to avoidHubList.
    • If the host cluster does not have enough additional resources, the engine adds it to noHeadRoomHubList.
    • If the host cluster is preferred, for example, because of a strong affinity rule or because it is the current host cluster, the engine adds it to preferredHubList.
    • All other host clusters go to acceptableHubList.

    Within each list, the most preferred host cluster is first and the least preferred host cluster is last.

  6. The engine integrates the four lists.

    preferredHubList + acceptableHubList + noHeadRoomHubList + avoidHubList

    The engine orders the resulting list from the most preferable to the least preferable host cluster.

  7. The placement engine picks the top host cluster from the list as the target hub.
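The scoring and ordering steps above can be sketched as follows. This is a minimal illustration under assumed, simplified inputs; the function signatures and the dictionary-based hub model are hypothetical, not the VMware Cloud Director implementation.

```python
def capacity_score(unreserved, burst, burst_ratio, requirement):
    # capacityScore: discount expected bursting beyond reservations.
    return (unreserved - burst * burst_ratio) / requirement

def future_capacity_score(unreserved, undeployed_reservation,
                          future_deploy_ratio, requirement):
    # futureCapacityScore: discount reservations of undeployed VMs
    # that are expected to be deployed.
    return (unreserved
            - undeployed_reservation * future_deploy_ratio) / requirement

def weighted_capacity_score(current, future, current_score_weight=0.5):
    # Step 2: blend the current and future scores.
    return (current * current_score_weight
            + future * (1 - current_score_weight))

def order_hubs(hubs):
    """Steps 4-6: sort host clusters by weighted score so the least
    constrained comes first, bucket them into the four lists, and
    concatenate the lists from most to least preferable. Each hub is
    a dict with keys 'score', 'avoid', 'no_headroom', 'preferred'."""
    hubs = sorted(hubs, key=lambda h: h["score"], reverse=True)
    preferred = [h for h in hubs if h["preferred"] and not h["avoid"]]
    avoid = [h for h in hubs if h["avoid"]]
    no_headroom = [h for h in hubs
                   if h["no_headroom"]
                   and not h["avoid"] and not h["preferred"]]
    acceptable = [h for h in hubs
                  if not (h["avoid"] or h["no_headroom"] or h["preferred"])]
    return preferred + acceptable + no_headroom + avoid

# Step 7: the target hub is the head of the ordered list.
```

The head of the list returned by `order_hubs` corresponds to the target hub that the engine picks in the final step.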

Adjustable Parameters

To influence various selection algorithm thresholds, you can customize several parameters. However, only service provider administrators with advanced knowledge of VMware Cloud Director operations should attempt to change these parameters from their default values, because changes might produce undesirable and unexpected results. Test any parameter changes in a non-production environment first.

You can customize the following parameters by using the Cell Management Tool.

Parameter Description
vcloud.placement.ranking.currentScoreWeight The relative importance of the current component of the host cluster score. The value must be from 0 through 1. When the value is 0, the engine ranks the host cluster only based on the future score. When the value is 1, the engine ranks the host cluster only based on the current score. The default is 0.5.

vcloud.placement.ranking.memoryBurstRatio

vcloud.placement.ranking.cpuBurstRatio

The percentage of allocation beyond reservation of a VM that the ranker uses to estimate the load of the cluster. The value is from 0 through 1. A value of 0 means a VM uses only its reservation. A value of 1 means the VM is fully busy. The default is 0.67.
vcloud.placement.ranking.futureDeployRatio The percentage of VMs on this host cluster that are expected to be deployed and to consume memory and CPU. The value is from 0 through 1. The default is 0.5.

vcloud.placement.ranking.memoryHeadRoom

vcloud.placement.ranking.cpuHeadRoom

vcloud.placement.ranking.storageHeadRoom

These parameters control how much additional resource capacity to leave for growth on a host cluster. The engine defines the headroom as a ratio of unreserved resources. For example, if vcloud.placement.ranking.memoryHeadRoom is 0.2, when a host cluster has less than 20% of its memory unreserved, the engine considers it a cluster with insufficient memory headroom and ranks it lower than other host clusters. The value must be from 0 through 1. The default is 0.2.

Example:
./cell-management-tool manage-config -n vcloud.placement.ranking.memoryHeadRoom -v 0.3 

Datastore Filters and Storage Placement Algorithm

The VMware Cloud Director datastore filters and the storage placement algorithm determine the placement of VM files and disks on the underlying storage resources. Storage containers, either datastores or datastore clusters, represent the storage resources.

The datastore filters are part of the storage filter chain, which helps narrow down the eligible storage containers, based on the requirements of the placement subject. The filters use the available storage containers in a provider VDC as an input list. The filters run in a predefined sequence, and each filter passes the refined list of storage containers to the next filter. VMware Cloud Director skips the inapplicable filters. For example, for VMs without an IOPS setting, VMware Cloud Director does not run the IOPS filter.
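The filter chain described above can be sketched as a pipeline of callables. This is a minimal illustration under assumed inputs; the two sample filters loosely mirror DisabledDatastoreFilter and MinFreeSpaceFilter from Table 1, but their implementations and the dictionary-based container model are illustrative stubs, not the VMware Cloud Director code.

```python
def disabled_datastore_filter(containers, subject):
    # Drop deactivated storage containers from the input list.
    return [c for c in containers if c.get("enabled", True)]

def min_free_space_filter(containers, subject):
    # Drop containers without enough free space for the subject;
    # if none fit, keep the container with the most free space.
    fits = [c for c in containers if c["free_mb"] >= subject["size_mb"]]
    return fits or [max(containers, key=lambda c: c["free_mb"])]

def run_filter_chain(containers, subject, chain):
    """Run the filters in their predefined order; each filter passes
    its refined list of storage containers to the next. Inapplicable
    filters would simply be left out of the chain."""
    for f in chain:
        containers = f(containers, subject)
    return containers
```

A filter that does not apply to the subject, such as the IOPS filter for a VM without an IOPS setting, would be omitted from `chain` rather than invoked.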

Table 1. Datastore Filters
Filter Description
AffinityDatastoreFilter Filters the storage containers based on the affinity and anti-affinity rules defined for the datastores. VMware Cloud Director sets datastore affinity rules in cases such as VM import from vCenter, tenant migration, and so on.
AlreadyOnValidDatastoreFilter If a placement subject is already placed on a valid datastore, this filter rejects all other storage containers and retains only the valid datastore.
BadHostsFilter Filters out the storage containers that do not have at least one connected, operational, and powered on host. Filters out storage containers that have been deleted from the inventory.
DatastoreClusterFilter

Filters out all datastore clusters and the datastores that are part of a datastore cluster. The algorithm uses this filter when the placement subject does not require datastore clusters, for example, if shared named disks are not allowed on datastore clusters, based on the vcloud.disk.shared.allowOnSpod configuration property.

DatastoreFsFilter Filters out all storage containers that are part of the specified file system. For example, if shared named disks are not allowed on NFS datastores, based on the vcloud.disk.shared.allowOnNfs configuration property, the placement algorithm adds this filter to the chain.
DisabledDatastoreFilter Filters out all deactivated storage containers from the input list.
IopsCapacityDatastoreFilter

Filters out all datastores that do not have enough IOPS capacity for the VM.

The filter runs if the following prerequisites are met.

  • The VM has an IOPS setting.
  • There are no storage pods.
  • All datastores have iopsCapacities set.
  • The VCD/IOPS capability of the storage policy of the VM is activated, and the Impact Placement option from the storage policy settings is activated.
LeastProvisionedFilter Singles out the least provisioned storage container.
LinkedCloneFilter Filters out all datastores that do not have the source VM or a corresponding shadow VM. Also filters out any datastores whose source VMs exceed the maximum allowed chain length for the virtual disks of the VMs.
MinFreeSpaceFilter Filters out any storage containers that do not contain enough free space for the placement subject. For storage pods, the maximum free space of the child datastores determines the free space, not the total free space of the child datastores or the free space of the storage pod. For example, if a storage pod has two datastores with 3 GB and 5 GB free respectively, VMware Cloud Director considers the free space of the storage pod to be 5 GB. If no containers have enough free space, the filter keeps the container with the most free space.
MostFreeSpaceFilter Singles out the storage container with most free space.
StorageClassFilter Filters out any storage containers or storage pods that do not match the storage policy defined in the requirement of the placement subject.
ThresholdFilter Filters out the storage containers that have reached the specified threshold of available capacity. There are yellow and red thresholds in VMware Cloud Director. See Configure Low Disk Space Thresholds for a Provider Virtual Data Center Storage Container in Your VMware Cloud Director.
VirtualMachineFilter Filters out all storage containers that the specified VM does not have access to.
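The storage pod free-space rule that MinFreeSpaceFilter applies can be sketched as follows; the function name is illustrative.

```python
def storage_pod_free_space_mb(child_free_mb):
    """For a storage pod, the usable free space is the maximum free
    space of the child datastores, not their sum, because a single
    placement subject must fit on one child datastore."""
    return max(child_free_mb)

# A pod whose children have 3 GB and 5 GB free is considered
# to have 5 GB of free space.
assert storage_pod_free_space_mb([3 * 1024, 5 * 1024]) == 5 * 1024
```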

VMware Cloud Director passes the final list of filtered storage containers to the storage placement algorithm, which tries to place the placement subject into the selected storage containers.

To place the set of placement subjects in the best possible datastore for each, the storage placement algorithm uses configurable criteria and the list of valid datastores. The storage placement algorithm uses the following workflow.
  1. Receives from the placement engine a set of placement subjects and a target resource pool that backs the provider VDC. There are two alternatives.

    • Usually, for regular, non-fast-provisioned VM placement, the algorithm receives a VM home file, containing metadata and configuration information, and a set of disks.
    • The algorithm might use an aggregate disk approach for named disks and fast provisioned VMs. In other words, for each fast provisioned VM, the algorithm receives one requirement for the VM and all of its disks.
  2. Receives the set of storage containers eligible for that hub, which includes both datastores and storage pods.
  3. To filter out the storage containers that cannot fit a subject, for each placement subject, the algorithm runs through the static chain of placement filters. For example, if a VM disk cannot fit on a datastore, the algorithm marks the datastore as not eligible for the VM disk.
  4. Ranks the storage containers for each subject in order of preference, considering the following criteria consecutively.
    1. Whether the container is a storage pod or a datastore
    2. How many other placement subjects include the storage pod as a valid container
    3. The size of the container
  5. If any of the containers is a storage pod, to reduce the storage pods down to datastores, the algorithm runs the vSphere Storage DRS invocation algorithm.
  6. Using the placement subjects and their sets of valid datastores as input, determines where each placement subject must reside.
  7. Returns a final placement result that determines the datastores on which each placement subject must reside, or returns an error that the VMware Cloud Director algorithm cannot find a suitable placement.
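The ranking in step 4 can be sketched as a single sort key. This is a minimal illustration of the stated order of considerations under an assumed, dictionary-based container model; it is not the exact VMware Cloud Director implementation.

```python
def rank_containers(containers, valid_counts):
    """Order containers by preference: storage pods before plain
    datastores, then by how many placement subjects list the pod as
    a valid container, then by capacity (larger first)."""
    return sorted(
        containers,
        key=lambda c: (
            c["is_storage_pod"],                 # pods preferred
            valid_counts.get(c["name"], 0),      # shared validity count
            c["capacity_mb"],                    # container size
        ),
        reverse=True,
    )
```

In this sketch, a storage pod that is valid for more placement subjects ranks ahead of an equally sized pod that is valid for fewer, matching the second consideration in step 4.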