TAS for VMs uses a blobstore to store the source code of the apps that you push, stage, and run.
This topic references staging and treats all blobstores as generic object stores.
For more information about staging, see How Apps Are Staged.
For more information about how specific third-party blobstores can be configured, see Configuring File Storage for TAS for VMs.
This section describes how staging buildpack apps uses the blobstore.
The following diagram illustrates how the staging process uses the blobstore. To walk through the same diagram in an app staging context, see How Diego Stages Buildpack Apps.
The staging process uses the blobstore as follows:
cf push: A developer runs cf push.
Create app: The Cloud Foundry Command Line Interface (cf CLI) gathers local source code files and computes a checksum of each.
Store app metadata: Cloud Controller stores the app metadata in its database.
Check file existence: The cf CLI makes a resource_matches request to Cloud Controller. The request lists file names and their checksums, and Cloud Controller responds with the subset of files that already exist in the blobstore. For more information and an example API request, see Resource Matches in the TAS for VMs API documentation.
Upload unmatched files: The cf CLI compresses and uploads the unmatched files to Cloud Controller.
Download cached files: Cloud Controller downloads, to its local disk, the matched files that are cached in the blobstore.
Upload complete package: Cloud Controller combines the newly uploaded files with the cached files into a complete app package and uploads it to the blobstore.
Download package & buildpack(s): A Diego Cell downloads the package and its buildpacks into a container and stages the app.
Upload droplet: After staging completes, the Diego Cell uploads the resulting droplet to the blobstore through Cloud Controller.
Download droplet: A Diego Cell downloads the droplet from the blobstore into a container to run the app.
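The resource-matching idea behind these steps can be sketched in a few lines of Python. This is an illustrative sketch only, not the cf CLI or Cloud Controller implementation: the names checksum, match_resources, and blob_cache are invented, and the real cache lives in the blobstore rather than in an in-memory set.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return the SHA-1 hex digest of a file's contents."""
    return hashlib.sha1(data).hexdigest()

def match_resources(local: dict[str, str], cache: set[str]) -> tuple[list[str], list[str]]:
    """Split local files into (matched, unmatched) by checksum."""
    matched = [name for name, sha in local.items() if sha in cache]
    unmatched = [name for name, sha in local.items() if sha not in cache]
    return matched, unmatched

# Simulated app files and a simulated blobstore resource cache.
files = {"app.py": checksum(b"print('hi')"), "lib.py": checksum(b"x = 1")}
blob_cache = {checksum(b"print('hi')")}  # the store already holds app.py's bytes

matched, unmatched = match_resources(files, blob_cache)
print(matched)    # files the CLI can skip uploading
print(unmatched)  # files the CLI compresses and uploads
```

The payoff is that repeated pushes of a mostly unchanged app upload only the files whose contents actually changed.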
The load that Cloud Controller generates on its blobstore is unique due to resource matching. Many blobstores that perform well under normal read, write, and delete loads are overwhelmed by Cloud Controller's heavy use of HEAD requests during resource matching.
Pushing an app with a large number of files causes Cloud Controller to check the blobstore for the existence of each file.
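To illustrate why this access pattern is HEAD-heavy, the following Python sketch counts one existence probe per file checksum. FakeBlobstore is an invented stand-in for a real object store, which would answer each probe with an HTTP HEAD request on the object key:

```python
class FakeBlobstore:
    """Stand-in object store that counts existence probes."""
    def __init__(self, keys):
        self.keys = set(keys)
        self.head_requests = 0

    def head(self, key: str) -> bool:
        # A real store answers this with an HTTP HEAD on the object key.
        self.head_requests += 1
        return key in self.keys

store = FakeBlobstore(keys={"sha-1", "sha-7"})
checksums = [f"sha-{i}" for i in range(1000)]  # one checksum per app file
matched = [c for c in checksums if store.head(c)]

print(store.head_requests)  # 1,000 probes for a 1,000-file app
print(len(matched))         # only 2 of them hit cached blobs
```

A thousand-file app therefore generates a thousand small metadata requests in a single push, regardless of how few files are actually cached.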
Parallel BOSH deployments of Diego Cells can also generate significant read load on the Cloud Controller blobstore as the cells perform evacuation. For more information, see the Evacuation section of the App Container Lifecycle topic.
As new droplets and packages are created, the oldest ones associated with an app are marked as EXPIRED if they exceed the configured limits for packages and droplets stored per app.
Each night, starting at midnight, Cloud Controller runs a series of jobs to delete the data associated with expired packages, droplets, and buildpacks.
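The retention behavior described above can be sketched as follows. This is a hedged illustration of the keep-newest-N idea; the Package data model, field names, and functions here are invented for the example and do not reflect the actual Cloud Controller database schema or job code:

```python
from dataclasses import dataclass

@dataclass
class Package:
    id: int          # higher id means created later
    state: str = "READY"

def expire_old(packages: list[Package], keep: int) -> None:
    """Mark all but the newest `keep` packages as EXPIRED."""
    for pkg in sorted(packages, key=lambda p: p.id, reverse=True)[keep:]:
        pkg.state = "EXPIRED"

def nightly_cleanup(packages: list[Package]) -> list[Package]:
    """Delete expired packages, as the scheduled jobs do each night."""
    return [p for p in packages if p.state != "EXPIRED"]

app_packages = [Package(i) for i in range(1, 6)]  # five pushes of one app
expire_old(app_packages, keep=2)                  # retention limit of 2 per app
remaining = nightly_cleanup(app_packages)
print([p.id for p in remaining])  # only the two newest survive
```

Separating marking from deletion mirrors the document's two phases: expiration happens as new resources are created, while the actual blobstore deletes run in the nightly jobs.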
Enabling the native versioning feature on your blobstore increases both the number of resources stored and your storage costs. For more information, see Using Versioning in the AWS documentation.
Cloud Controller inherits its default blobstore operation timeouts from Excon. Excon defaults to 60-second read, write, and connect timeouts.
For more information, see the excon repository on GitHub.
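Excon is a Ruby HTTP client, so its timeout settings live in Cloud Controller's Ruby code. As an analogy only, the same idea expressed with Python's standard library looks like this; the hostname is a placeholder, and no connection is opened here:

```python
import http.client

# A client-side timeout bounds how long a connect or read may block
# before the operation is abandoned, mirroring Excon's 60-second defaults.
conn = http.client.HTTPSConnection("blobstore.example.com", timeout=60)
print(conn.timeout)  # seconds before a connect or read is abandoned
```

If a blobstore operation regularly takes longer than this bound, the request fails even though the store might eventually have succeeded, which is why these defaults matter for slow or distant blobstores.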