Depending on your needs, you can choose from different configurations of your VMware Cloud Director appliance-based server group and from different sizes of the VMware Cloud Director virtual appliance instances.

To ensure that the cluster can support an automated failover if a primary cell fails, the minimal VMware Cloud Director deployment must consist of one primary and two standby cells. The environment remains available in any failure scenario where a single cell goes offline for any reason. If a standby cell fails, the cluster continues to operate in a fully functional state, with some performance degradation, until you redeploy the failed cell. See Appliance Deployments and Database High Availability Configuration.

The VMware Cloud Director appliance has four sizes that you can select during deployment: Small, Medium, Large, and Extra Large (VVD). The Small appliance size is suitable only for lab evaluation, and this document does not provide guidance on its configuration. The sizing options table provides the specifications for the remaining options and the most suitable use cases for a production environment. The Extra Large configuration matches the VMware Validated Designs (VVD) for Cloud Providers scale profile.

To create larger custom sizes, system administrators can adjust the size of the deployed cells.

The smallest recommended configuration for production deployments is a three-node deployment of Medium size virtual appliances.

Important: VMware does not provide support for VMware Cloud Director appliance deployments without database HA.

VMware Cloud Director Appliance Sizing Options

You can use the following decision guide to estimate the appliance size for your environment.

|   | Medium | Large | Extra Large (VVD) |
|---|--------|-------|-------------------|
| Recommended use cases | Lab or small production environments | Production environment | Production with API integrations and monitoring |
| vRealize Operations Management Pack deployment in the VMware Cloud Director environment | No | No | Yes |
| Cassandra VM metrics enablement in VMware Cloud Director | No | No | Yes |
| Approximate number of concurrent users or clients accessing the API over a peak 30-minute period | < 50 | < 100 | < 100 |
| Managed VMs | 5000 | 5000 | 15000 |
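As a sketch of how to apply the decision guide above, the following snippet picks an appliance size from a managed-VM count and a peak concurrent API client count. The thresholds come from the table; the function name and inputs are illustrative only, and factors such as vRealize Operations or Cassandra metrics can independently require the Extra Large profile.

```shell
#!/bin/sh
# Illustrative sizing helper; thresholds are taken from the table above.
size_for() {
  clients=$1   # peak concurrent API users or clients over a 30-minute period
  vms=$2       # number of managed VMs
  if [ "$vms" -le 5000 ] && [ "$clients" -lt 50 ]; then
    echo "Medium"
  elif [ "$vms" -le 5000 ] && [ "$clients" -lt 100 ]; then
    echo "Large"
  else
    echo "Extra Large (VVD)"
  fi
}

size_for 40 3000    # prints: Medium
size_for 80 5000    # prints: Large
size_for 80 12000   # prints: Extra Large (VVD)
```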

Configuration Definitions

|   | Medium | Large | Extra Large (VVD) |
|---|--------|-------|-------------------|
| HA cluster configuration | 1 primary + 2 standby cells | 1 primary + 2 standby + 1 application cell | 1 primary + 2 standby + 2 application cells |
| Primary or standby cell vCPUs | 8 | 16 | 24 |
| Application cell vCPUs | N/A | 8 | 8 |
| Primary or standby cell RAM | 16 GB | 24 GB | 32 GB |
| Application cell RAM | N/A | 8 GB | 8 GB |
| vCPU to physical core ratio | 1:1 | 1:1 | 1:1 |
| Minimum disk space for each appliance in the cluster | 112 GB | 112 GB | 112 GB |
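For capacity planning, the per-cell figures in the configuration table can be rolled up into a total cluster footprint. A small sketch for the Extra Large (VVD) profile, using the cell counts and sizes from the table above:

```shell
#!/bin/sh
# Extra Large (VVD): 1 primary + 2 standby cells (24 vCPU / 32 GB RAM each)
# plus 2 application cells (8 vCPU / 8 GB RAM each); values from the table above.
db_cells=3;  db_vcpu=24; db_ram=32
app_cells=2; app_vcpu=8; app_ram=8

total_vcpu=$(( db_cells * db_vcpu + app_cells * app_vcpu ))
total_ram=$(( db_cells * db_ram + app_cells * app_ram ))
echo "Total cluster vCPUs: $total_vcpu"    # 3*24 + 2*8 = 88
echo "Total cluster RAM:   $total_ram GB"  # 3*32 + 2*8 = 112 GB
```

Because the recommended vCPU to physical core ratio is 1:1, the vCPU total also gives the number of physical cores the cluster requires.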

How to Detect If Your System Is Undersized

In a VMware Cloud Director cell, the CPU or memory use grows to and plateaus at a level near capacity.

How to Detect If Your Number of Cells Is Insufficient

In the vcloud-container-debug.log and cell-runtime.log files of any of the VMware Cloud Director cells, you see entries similar to org.apache.tomcat.jdbc.pool.PoolExhaustedException: [pool-jetty-XXXXX] Timeout: Pool empty. Unable to fetch a connection in 20 seconds, none available. The VMware Cloud Director cell might also lose the connection to the database.
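To check for this symptom, you can count the pool-exhaustion entries in the cell logs. The log directory below is the appliance default location (an assumption worth verifying on your deployment); the sample line only demonstrates the match:

```shell
#!/bin/sh
# Count pool-exhaustion entries in the cell logs (default appliance log
# directory; verify the path on your deployment).
LOG_DIR=/opt/vmware/vcloud-director/logs
grep -c "PoolExhaustedException" \
  "$LOG_DIR/vcloud-container-debug.log" "$LOG_DIR/cell-runtime.log" 2>/dev/null

# Demonstration of the match against a sample log line:
printf '%s\n' \
  'org.apache.tomcat.jdbc.pool.PoolExhaustedException: [pool-jetty-12345] Timeout: Pool empty.' \
  | grep -c 'PoolExhaustedException'   # prints: 1
```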

With the default database connection configuration, all configurations are limited to a maximum of six cells in total, counting cells of primary, standby, and application type.

How to Customize the Appliance Sizing

After deploying the VMware Cloud Director appliance, there are two ways to customize its sizing to a custom configuration.

  • Customize the appliance sizing by using the vpostgres-reconfigure service.
  • Customize the appliance sizing by manually updating the postgresql.auto.conf file.

To customize the VMware Cloud Director appliance sizing by using the vpostgres-reconfigure service, edit the VM hardware settings in the vSphere Client. Every time the appliance starts, the vpostgres-reconfigure service runs and modifies the PostgreSQL settings to match the new VM size.

Note: The vpostgres-reconfigure service does not modify any previous manual customizations.

If you want to make a manual customization, you can edit the postgresql.auto.conf file by using ALTER SYSTEM commands. Manual customizations take precedence over the vpostgres-reconfigure service settings. To manually customize the appliance sizing, perform the following procedure.

  1. Log in directly or by using an SSH client to the OS of the primary appliance as root.
  2. To view and take note of the vCPU information, run the following command.
    grep -c processor /proc/cpuinfo
  3. To view and take note of the RAM information, run the following command.

    The MemTotal value in /proc/meminfo is reported in kB; the command converts it to an approximate value in GB.

    awk '/MemTotal/ {print int($2/1024000)}' /proc/meminfo
  4. Calculate the shared_buffers value to be one-fourth of the total RAM minus 4 GB.

    shared_buffers = 0.25 * (total RAM - 4 GB)

  5. Calculate the effective_cache_size value to be three-fourths of the total RAM minus 4 GB.

    effective_cache_size = 0.75 * (total RAM - 4 GB)

  6. Calculate the max_worker_processes value to be the number of vCPUs.

    The default and minimum value is 8.

  7. Change the user to postgres.
    sudo -i -u postgres
  8. Update the configuration file by running the following commands and substituting the calculated values.
    psql -c "ALTER SYSTEM SET shared_buffers = 'shared_buffers value';"
    psql -c "ALTER SYSTEM SET effective_cache_size = 'effective_cache_size value';"
    psql -c "ALTER SYSTEM SET work_mem = '8MB';"
    psql -c "ALTER SYSTEM SET maintenance_work_mem = '1GB';"
    psql -c "ALTER SYSTEM SET max_worker_processes = 'max_worker_processes value';"
  9. Return to the root user by running the exit command.
  10. Restart the vpostgres process.
    systemctl restart vpostgres
  11. Change the user to postgres again.
    sudo -i -u postgres
  12. For each standby node, copy the postgresql.auto.conf file to the node and restart the vpostgres process on that node.
    1. Copy from the primary node to the standby node.
      scp /var/vmware/vpostgres/current/pgdata/postgresql.auto.conf postgres@standby-node-address:/var/vmware/vpostgres/current/pgdata/
    2. Restart the vpostgres process.
      systemctl restart vpostgres
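Putting steps 2 through 6 together, the calculation can be sketched as a small script. The vCPU and RAM figures are hard-coded here to the Large profile (16 vCPU, 24 GB) for illustration; on a real cell you would read them from /proc as shown in the procedure.

```shell
#!/bin/sh
# Sketch of steps 2-6 with example inputs (Large profile: 16 vCPU, 24 GB RAM).
vcpus=16
ram_gb=24

# shared_buffers = 0.25 * (total RAM - 4 GB)
shared_buffers=$(( (ram_gb - 4) / 4 ))
# effective_cache_size = 0.75 * (total RAM - 4 GB)
effective_cache_size=$(( 3 * (ram_gb - 4) / 4 ))
# max_worker_processes = number of vCPUs, with a minimum of 8
max_worker_processes=$vcpus
if [ "$max_worker_processes" -lt 8 ]; then max_worker_processes=8; fi

echo "shared_buffers = ${shared_buffers}GB"              # (24-4)/4 = 5GB
echo "effective_cache_size = ${effective_cache_size}GB"  # 3*(24-4)/4 = 15GB
echo "max_worker_processes = ${max_worker_processes}"    # 16
```

You would then substitute these results for the placeholder values in the ALTER SYSTEM commands of step 8.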
To remove any manual customizations and continue using the vpostgres-reconfigure service, change the user to postgres and run the following commands.
    psql -c "ALTER SYSTEM RESET shared_buffers;"
    psql -c "ALTER SYSTEM RESET effective_cache_size;"
    psql -c "ALTER SYSTEM RESET work_mem;"
    psql -c "ALTER SYSTEM RESET maintenance_work_mem;"
    psql -c "ALTER SYSTEM RESET max_worker_processes;"