The VMware Cloud Director placement engine determines the resources, including resource pools, datastores, and networks or network pools, on which to place the virtual machines (VMs) in a vApp. The engine makes the placement decision independently for each VM in a vApp, based on that VM's requirements.

The placement engine runs in the following scenarios.
  • When you create a VM, the placement engine determines on which resource pool, datastore, and network pool to place it.
  • When you start a VM, if the VM fails to power on, VMware Cloud Director can selectively move the VM to another resource pool, datastore, or network pool.
  • When you edit a VM and change its datastore, resource, or network configuration, VMware Cloud Director might move the VM to a different datastore and resource pool that support the new VM settings. VMware Cloud Director moves a VM only when the current resources cannot support the new requirements.
  • When you migrate VMs to different resource pools.
  • When an organization virtual data center (VDC) discovers VMs created in any vCenter Server resource pool that backs the VDC, and the system constructs a simplified vApp to contain each discovered VM.
The placement engine uses the following criteria to select candidate resource pools for a VM.
  • CPU capacity
  • Memory capacity
  • Number of virtual CPUs
  • Hardware version supported by the host and allowed by the provider VDC
  • Affinity rules

The placement engine filters out deactivated resource pools from the candidate list. When possible, VMware Cloud Director places VMs on the same host cluster as other VMs in the organization VDC.
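The resource pool candidate selection described above can be sketched as a simple filter. This is an illustrative simplification, not the actual VMware Cloud Director implementation; all type and field names are hypothetical.

```python
# Hypothetical sketch of candidate resource pool filtering. Each check
# mirrors one of the criteria listed above; names are illustrative.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    name: str
    enabled: bool          # deactivated pools are filtered out
    free_cpu_mhz: int
    free_memory_mb: int
    max_vcpus: int
    max_hw_version: int    # highest hardware version the hosts support

@dataclass
class VmRequirements:
    cpu_mhz: int
    memory_mb: int
    vcpus: int
    hw_version: int

def candidate_pools(pools, vm):
    """Return the resource pools that satisfy the VM's basic requirements."""
    return [
        p for p in pools
        if p.enabled                           # deactivated pools excluded
        and p.free_cpu_mhz >= vm.cpu_mhz       # CPU capacity
        and p.free_memory_mb >= vm.memory_mb   # memory capacity
        and p.max_vcpus >= vm.vcpus            # number of virtual CPUs
        and p.max_hw_version >= vm.hw_version  # hardware version support
    ]
```

Affinity rules, which the real engine also evaluates, are omitted here for brevity.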

The placement engine uses the following criteria to select candidate datastores for VMs.
  • Storage capacity and thresholds
  • Storage policies
  • Affinity requirements between VMs
  • If IOPS is activated, IOPS capacity and the IOPS demand of the VM disks
There are two datastore thresholds in VMware Cloud Director.
  • Red threshold - the amount of free space on a datastore below which VMware Cloud Director filters out the datastore during the placement of any entity, such as a VM, a template, or a disk.

    When a datastore reaches its red threshold, the workload placement engine stops placing new VMs on the datastore except while importing VMs from vCenter Server. In the case of VM import, if the vCenter Server VM is already present on the red threshold datastore, the placement engine prefers the existing datastore.

    The workload placement engine uses the red threshold for all workflows. When making a request for any new placement, the placement engine first filters out any datastores or storage pods that have breached the red threshold. When making a placement request for an existing entity whose disks reside on datastores that breach the red threshold, VMware Cloud Director relocates the disks to other available datastores. The engine then selects a datastore from the remaining datastores or storage pods, either through the selector logic of VMware Cloud Director or from the vSphere Storage DRS recommendations.

  • Yellow threshold - the amount of free space on the datastore, below which VMware Cloud Director filters out the datastore during the placement of shadow VMs from which VMware Cloud Director creates fast-provisioned VMs. For more information on shadow VMs, see Fast Provisioning of Virtual Machines.

    The yellow threshold does not apply to the linked clones that VMware Cloud Director uses for fast provisioning of VMs. When the placement engine selects a datastore for a linked clone, if the selected datastore is missing a shadow VM, VMware Cloud Director creates a shadow VM on the datastore. The threshold does not apply to the shadow VM in this case.

    The yellow threshold applies only to the periodic background job creating shadow VMs. If activated, the job runs every 24 hours and uses eager VM creation on each datastore for a given hub and storage policy pair. To activate the job for eager provisioning of shadow VMs, you must set the following property to true.
    Note: The periodic background job creates shadow VMs on all datastores for all templates. The job increases the storage consumption even when you are not using the datastores or shadow VMs.
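The two ways the yellow threshold does and does not apply can be sketched as follows. This is an illustrative simplification, not the VMware Cloud Director implementation; the function names and data structures are hypothetical.

```python
# Illustrative sketch: the yellow threshold filters datastores only for
# the periodic background job that eagerly creates shadow VMs; it is not
# checked when a linked clone's selected datastore lacks a shadow VM.
def eligible_for_eager_shadow_vm(free_bytes, yellow_threshold_bytes):
    """Used by the periodic background job that eagerly creates shadow VMs."""
    return free_bytes > yellow_threshold_bytes

def ensure_shadow_vm(datastore, template, shadow_vms):
    """Called after a datastore is already selected for a linked clone."""
    key = (datastore, template)
    if key not in shadow_vms:
        # No threshold check here: the shadow VM is created on the
        # datastore that the placement engine selected for the clone.
        shadow_vms[key] = f"shadow-{template}@{datastore}"
    return shadow_vms[key]
```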

In most cases, the placement engine filters out deactivated and red threshold datastores from the candidate list. The engine does not filter out these datastores when you import a VM from vCenter Server.

When implementing the threshold logic, VMware Cloud Director does not evaluate the requirements of the current placement subject. For the workload placement engine to place a subject on a datastore, the available space in bytes must be greater than the threshold in bytes. For example, for a datastore with 5 GB of available capacity and a red threshold of 4 GB, the placement engine can place a VM that requires 2 GB. If the VM creation breaches the threshold, the placement engine filters out the datastore for further placements.
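The threshold comparison and the worked example above can be sketched as follows; the function name is hypothetical, and the point is that the check ignores the size of the VM being placed.

```python
# Illustrative sketch of the red-threshold check. The engine compares
# free space against the threshold without subtracting the requirements
# of the VM being placed.
GB = 1024 ** 3

def passes_red_threshold(free_bytes, red_threshold_bytes):
    """A datastore is a candidate only while free space exceeds the threshold."""
    return free_bytes > red_threshold_bytes

# Datastore with 5 GB free and a 4 GB red threshold: a 2 GB VM is
# placed, because 5 GB > 4 GB -- the 2 GB requirement is not evaluated.
free = 5 * GB
assert passes_red_threshold(free, 4 * GB)

# After placement, free space drops to 3 GB; the datastore is now
# filtered out for any further placements.
free -= 2 * GB
assert not passes_red_threshold(free, 4 * GB)
```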

The placement engine uses the network name to select candidate network pools for a vApp and its VMs.

After the placement engine selects a set of candidate resources, it ranks the resources and picks the best location for each VM based on the CPU, virtual RAM, and storage configuration of each VM.

While ranking resources, the placement engine examines the current and estimated future resource use. Estimated future use is calculated based on powered-off VMs currently placed on a given resource pool and their expected use after they are powered on. For CPU and memory, the placement engine considers the current unreserved capacity, the maximum use, and the estimated future unreserved capacity. For storage, the engine considers the aggregate provisioned capacity provided by the cluster that each resource pool belongs to. The placement engine then considers the weighted metrics of the current and future suitability of each resource pool.

When a move is necessary, the placement engine favors resource pools with the most unreserved capacity for CPU, memory, and storage. It also gives lower preference to yellow clusters, selecting them only when no healthy cluster satisfies the placement criteria. When importing a VM from vCenter Server, if the VM placement is satisfactory, the engine ignores the thresholds to minimize movement.

When you power on a VM, VMware Cloud Director tries to power it on in its current location. If vCenter Server reports an error with the host CPU or memory capacity, VMware Cloud Director tries the resource pool twice before attempting to move the VM to other compatible resource pools in the organization VDC. When rerunning the placement engine to find a compatible resource pool, VMware Cloud Director excludes the resource pools that it already tried and that failed. If no suitable resource pools are connected to the datastore on which the VMDKs are located, moving the VM to another resource pool can cause the migration of the VM's VMDKs to a different datastore.

If the VM placement fails in all locations that satisfy the requirements of the VM, VMware Cloud Director returns an error message that the placement is not feasible. If there is an affinity to the current datastore and the datastore is unavailable, the placement engine also returns an error that the placement is not feasible. This is a normal state of the system when it operates near full capacity and no proposed solution meets all the requirements at the current time. To remediate the error, you can add or free up resources and retry. When no specific datastore is required, the placement engine selects a datastore in the candidate host cluster or resource pool that fulfills the other requirements, such as storage policy, storage capacity, and IOPS capacity.

During concurrent deployment situations when a resource pool is close to capacity, the validation of that resource pool might succeed even though the resource pool lacks the resources to support the VM. In these cases, the VM cannot power on. If a VM fails to power on in this situation and there is more than one resource pool backing the VDC, to prompt VMware Cloud Director to migrate the VM to a different resource pool, start the power on operation again.

When the cluster that a resource pool belongs to is close to capacity, a VM on that resource pool might not be able to power on if no individual host has the capacity to power on the VM. This happens as a result of capacity fragmentation at the cluster level. In such cases, a system administrator must migrate a few VMs out of the cluster so that the cluster maintains sufficient available capacity.

VM Placement Engine Algorithm

The placement algorithm picks a host cluster from the list of host clusters that have required storage profiles available and satisfy any existing VM-VM, VM-host affinity, or anti-affinity rules. VMware Cloud Director calculates the placement solution through various scores. To change the behavior of the engine, you can use the cell management tool to modify the configurable parameters that begin with the underscore (_) symbol.
  1. For each host cluster, the workload placement engine calculates a capacityScore, futureCapacityScore, and reservationScore. The placement engine calculates each score separately for CPU, memory, and storage.
    capacityScore (not available in some cases):
    CPU = (cpuUnreservedCapacityMHz - (cpuBurstMHz * _cpuBurstRatio)) / cpuRequirementMHz
    Memory = (memoryUnreservedCapacityMB - (memBurstMB * _memoryBurstRatio)) / memRequirementMB
    Storage = storageFreeCapacityMB / stgRequirementMB
    futureCapacityScore (not available in some cases):
    CPU = (cpuUnreservedCapacityMHz - (cpuUndeployedReservationMHz * _futureDeployRatio)) / cpuRequirementMHz
    Memory = (memoryUnreservedCapacityMB - (memUndeployedReservationMB * _futureDeployRatio)) / memRequirementMB
    Storage = storageFreeCapacityMB / stgRequirementMB
    reservationScore (used in place of capacityScore and futureCapacityScore when those scores are unavailable):
    CPU = cpuUnreservedCapacityMHz / cpuRequirementMHz
    Memory = memoryUnreservedCapacityMB / memRequirementMB
    Storage = storageFreeCapacityMB / stgRequirementMB
  2. For each host cluster, the placement engine calculates a weightedCapacityScore for CPU, memory, and storage.
    weightedCapacityScore = capacityScore * _currentScoreWeight + futureCapacityScore * (1 - _currentScoreWeight)

    Each weightedCapacityScore is a unit-less ratio from 0 through 1, with higher values representing more available resources in the host cluster. Because the scores are unit-less, you can compare weightedCapacityScore values across different resource types, for example, CPU, memory, and storage.

  3. The placement engine verifies that there are enough resources for CPU, memory, and storage.
    totalCapacity * _[cpu|memory|storage]HeadRoom < free/unreserved capacity
  4. The placement engine sorts the list of host clusters on the weightedCapacityScore so that the least constrained host cluster is first and the most constrained host cluster is last.
  5. The placement engine processes each host cluster in the list.
    • If the host cluster must be avoided, for example, because of an anti-affinity rule, the engine adds it to avoidHubList.
    • If the host cluster does not have enough additional resources, the engine adds it to noHeadRoomHubList.
    • If the host cluster is preferred, for example, because of a strong affinity rule or because it is the current host cluster, the engine adds it to preferredHubList.
    • All other host clusters go to acceptableHubList.

    Within each list, the most preferred host cluster is first and the least preferred host cluster is last.

  6. The engine integrates the four lists.

    preferredHubList + acceptableHubList + noHeadRoomHubList + avoidHubList

    The engine orders the resulting list from the most preferable to the least preferable host cluster.

  7. The placement engine picks the top host cluster from the list as the target hub.
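As an illustration of steps 1, 2, and 4, the score computations for the CPU dimension can be sketched in Python as follows. The function and variable names mirror the formulas above; the structure of the code and the sample numbers in the usage note are assumptions for illustration, not the actual VMware Cloud Director implementation.

```python
# Illustrative sketch of the CPU scores from step 1, the weighted score
# from step 2, and the sort from step 4. Memory and storage scores are
# computed analogously with their own inputs.
def capacity_score(unreserved_mhz, burst_mhz, burst_ratio, required_mhz):
    """capacityScore for CPU, per step 1."""
    return (unreserved_mhz - burst_mhz * burst_ratio) / required_mhz

def future_capacity_score(unreserved_mhz, undeployed_reservation_mhz,
                          future_deploy_ratio, required_mhz):
    """futureCapacityScore for CPU, per step 1."""
    return (unreserved_mhz
            - undeployed_reservation_mhz * future_deploy_ratio) / required_mhz

def weighted_capacity_score(current, future, current_score_weight=0.5):
    """weightedCapacityScore, per step 2 (default _currentScoreWeight is 0.5)."""
    return current * current_score_weight + future * (1 - current_score_weight)

def rank_clusters(clusters):
    """Step 4: least constrained cluster (highest score) first."""
    return sorted(clusters, key=lambda c: c["score"], reverse=True)
```

For example, with 10,000 MHz unreserved capacity, 2,000 MHz of burst at the default _cpuBurstRatio of 0.67, and a 2,000 MHz requirement, capacity_score returns 4.33; a cluster with a higher weighted score sorts earlier in the ranked list.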

Adjustable Parameters

To influence various selection algorithm thresholds, you can customize several parameters. Only service provider administrators with advanced knowledge of VMware Cloud Director operations should attempt changing these parameters from their default values, because changes might produce undesirable and unexpected results. Test any parameter changes in a non-production environment first.

You can customize the following parameters by using the Cell Management Tool.

  • vcloud.placement.ranking.currentScoreWeight - The relative importance of the current component of the host cluster score. The value must be from 0 through 1. When the value is 0, the engine ranks the host cluster based only on the future score. When the value is 1, the engine ranks the host cluster based only on the current score. The default is 0.5.
  • vcloud.placement.ranking.cpuBurstRatio and vcloud.placement.ranking.memoryBurstRatio - The percentage of allocation beyond the reservation of a VM that the ranker uses to estimate the load of the cluster. The value is from 0 through 1. 0 means a VM uses only its reservation. 1 means the VM is fully busy. The default is 0.67.
  • vcloud.placement.ranking.futureDeployRatio - The percentage of VMs on this host cluster that are expected to be deployed and to consume memory and CPU. The value is from 0 through 1. The default is 0.5.
  • vcloud.placement.ranking.cpuHeadRoom, vcloud.placement.ranking.memoryHeadRoom, and vcloud.placement.ranking.storageHeadRoom - These parameters control how much additional room for growth to leave on a host cluster. The engine defines the headroom as a ratio of unreserved resources. For example, if vcloud.placement.ranking.memoryHeadRoom is 0.2, after a host cluster has less than 20% of its memory available, the engine considers it a cluster with insufficient memory headroom and ranks it lower than other host clusters. The value must be from 0 through 1. The default is 0.2.

For example, to set the memory headroom threshold to 0.3:

./cell-management-tool manage-config -n vcloud.placement.ranking.memoryHeadRoom -v 0.3