To support a large number of clients connecting to a public listening port, Service Providers running vCloud Availability for vCloud Director can deploy multiple instances of Cloud Proxy and a load balancer to distribute requests to a public port across Cloud Proxy instances.

To-the-cloud traffic setup has the following requirements:

  • All Cloud Proxy instances must use the same persistence database as the rest of the vCloud Director instances

  • All Cloud Proxy instances must use the same transfer share as the rest of the vCloud Director instances

  • All Cloud Proxy instances must use the same NTP time source as the rest of the components (an internal NTP source is recommended)

  • The load balancer exposes a single public port

  • Public ports on the Cloud Proxy instances are not exposed to the Internet

  • Cloud Proxy instances do not need to be aware of the load balancer

The Cloud Proxy load balancer can be the same load balancer that is used to distribute REST API requests among vCloud Director instances. Under high load, a best practice is to use a dedicated load balancer for each replication direction, because to-the-cloud traffic arrives from the Internet while from-the-cloud traffic originates inside the cloud.
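As a quick sanity check of this topology, the following sketch probes the single public port exposed by the load balancer and the public ports of the individual Cloud Proxy instances. All host names, IP addresses, and ports in it are placeholder assumptions for illustration only; when the script is run from an external network, only the load-balancer endpoint should answer.

```
#!/usr/bin/env python3
"""Connectivity sketch for a Cloud Proxy load-balancing deployment.

Host names and ports below are placeholders; substitute the values used in
your environment. The check only confirms that a TLS listener answers on the
public load-balancer endpoint and that the Cloud Proxy public ports do not
answer from an external vantage point.
"""
import socket
import ssl

# Placeholder endpoints -- not taken from the product documentation.
PUBLIC_LB_ENDPOINT = ("cloudproxy.provider.example.com", 443)       # single public port on the LB
CLOUD_PROXY_INSTANCES = [("10.0.10.11", 443), ("10.0.10.12", 443)]  # internal Cloud Proxy addresses


def tls_listener_answers(host, port, timeout=5):
    """Return True if a TLS handshake completes against host:port."""
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE  # reachability check only, not a trust check
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except OSError:
        return False


if __name__ == "__main__":
    host, port = PUBLIC_LB_ENDPOINT
    print(f"Public LB endpoint {host}:{port} reachable: {tls_listener_answers(host, port)}")

    # Run this part from an external network: each Cloud Proxy public port
    # should NOT answer, because only the load balancer is Internet-facing.
    for host, port in CLOUD_PROXY_INSTANCES:
        print(f"Cloud Proxy {host}:{port} reachable: {tls_listener_answers(host, port)}")
```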

Figure 1. Cloud Proxy Load Balancing Deployment Model

The public Cloud Proxy endpoint (URI) for to-the-cloud tunnel termination and the internal IP address for from-the-cloud traffic (used by ESXi host-based replication) must be explicitly configured in vCloud Director by using a vCloud Director API call. For more information about Cloud Proxy configuration, see Create Cloud Proxy.
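As a rough illustration of scripting that configuration, the sketch below authenticates against the vCloud Director REST API and submits a configuration request carrying the public endpoint and the internal address. The session call and the x-vcloud-authorization header are standard vCloud Director API behavior, but the Cloud Proxy resource path, XML body, host names, and credentials are placeholder assumptions; take the exact request from the Create Cloud Proxy topic.

```
#!/usr/bin/env python3
"""Sketch of driving the vCloud Director REST API from Python.

The Cloud Proxy settings path and payload below are PLACEHOLDERS; the
authoritative resource and body are documented in "Create Cloud Proxy".
"""
import requests

VCD = "https://vcd.provider.example.com"          # placeholder address
API_VERSION = "application/*+xml;version=29.0"    # adjust to your vCloud Director version

session = requests.Session()
session.verify = "/path/to/vcd-ca.pem"            # CA bundle for your environment

# 1. Authenticate: vCloud Director returns the token in the x-vcloud-authorization header.
resp = session.post(f"{VCD}/api/sessions",
                    auth=("administrator@system", "password"),   # placeholder credentials
                    headers={"Accept": API_VERSION})
resp.raise_for_status()
session.headers.update({
    "Accept": API_VERSION,
    "x-vcloud-authorization": resp.headers["x-vcloud-authorization"],
})

# 2. Configure the Cloud Proxy endpoints. Resource path and XML body are
#    placeholders; use the request documented in "Create Cloud Proxy".
cloud_proxy_xml = """<?xml version="1.0" encoding="UTF-8"?>
<CloudProxyConfiguration>
    <PublicEndpoint>wss://cloudproxy.provider.example.com:443</PublicEndpoint>
    <InternalFromTheCloudAddress>10.0.10.10</InternalFromTheCloudAddress>
</CloudProxyConfiguration>"""

resp = session.post(f"{VCD}/api/admin/extension/cloudProxy",      # placeholder path
                    data=cloud_proxy_xml,
                    headers={"Content-Type": "application/xml"})
resp.raise_for_status()
print("Cloud Proxy configuration accepted:", resp.status_code)
```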