This topic tells you about the components that form and interact with the Diego system in VMware Tanzu Application Service for VMs (TAS for VMs).

TAS for VMs uses the Diego system to manage app containers. Diego components assume app scheduling and management responsibility from the Cloud Controller.

Diego is a self-healing container management system that works to keep the correct number of app instances running on Diego Cells, even in the face of network failures and crashes. Diego schedules and runs Tasks and Long-Running Processes (LRPs). For more information about Tasks and LRPs, see How the Diego Auction Allocates Jobs.

You can submit, update, and retrieve the desired number of Tasks and LRPs using the Bulletin Board System (BBS) API. For more information, see the BBS Server repository on GitHub.
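Clients of the BBS describe work declaratively: they state what should be running, and Diego converges toward it. The following sketch shows, in simplified Python form, the kind of desired-state record a client might submit. The field names are illustrative approximations only; the authoritative protobuf definitions are in the BBS Server repository.

```python
# Illustrative sketch of a desired-state (DesiredLRP-style) record.
# Field names are simplified approximations, not the actual BBS schema.
desired_lrp = {
    "process_guid": "my-app-guid",  # hypothetical app identifier
    "instances": 3,                 # desired number of running instances
    "memory_mb": 256,
    "disk_mb": 1024,
    "rootfs": "preloaded:cflinuxfs4",
}

def scale(lrp, instances):
    """Updating desired state is just changing the declared count;
    Diego then converges actual state toward it."""
    updated = dict(lrp)
    updated["instances"] = instances
    return updated

print(scale(desired_lrp, 5)["instances"])  # 5
```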

Learning how Diego runs an app

The following sections describe how Diego handles a request to run an app. This is only one of the processes that happen in Diego. For example, running an app assumes the app has already been staged.

For more information about the staging process, see How Apps are Staged.

The following illustrations and descriptions do not include all of the components of Diego. For information about each Diego component, see Diego Components.

The architecture discussed in the following steps includes the following high level blocks:

  • api - cloud_controller_ng
  • scheduler - auctioneer
  • diego-api - bbs
  • pxc-mysql - bbs db
  • diego-cell - rep/executor, garden, loggregator-agent, route-emitter
  • singleton-blobstore - droplets
  • doppler - doppler
  • log-api - traffic-controller
  • gorouter - gorouter

Note: The images below are based on the VM names in an open-source deployment of Cloud Foundry Application Runtime. In TAS for VMs, the processes interact in the same way, but are on different VMs. Correct VM names for each process are in the components sections of this topic.

Step 1: Receiving the request to run an app

Cloud Controller passes requests to run apps to the Diego BBS, which stores information about the request in its database.

Several boxes represent the VMs involved in running an app with Diego. They have additional boxes within them to represent the components running on each VM. The boxes are as follows: the api VM includes the cloud_controller_ng process, the scheduler VM includes the auctioneer process, the diego-api VM includes the bbs process, the pxc-mysql VM includes the bbs db, the diego-cell VM includes the rep, loggregator-agent, garden, and route-emitter processes, the singleton-blobstore VM includes the droplets store, the doppler VM includes the doppler process, the log-api VM includes the traffic-controller process, and the gorouter VM includes the gorouter process. An arrow points from cloud_controller_ng to bbs to indicate that the cloud_controller_ng process is sending a request to run an app to the bbs process. Another arrow points from bbs to bbs db to indicate that the request gets stored in a database.

Step 2: Passing the request to the auctioneer process

Referencing the information stored in its database, the BBS contacts the Auctioneer to create an auction based on the desired resources for the app.

An arrow points from a box labeled bbs to a cylinder labeled bbs db to indicate the bbs process is receiving information from its database. Another arrow points from bbs to a box labeled auctioneer to indicate the bbs process is contacting the auctioneer process to create an auction.

Step 3: Performing the auction

Through an auction, the Auctioneer finds a Diego Cell to run the app on. The Rep job on the Diego Cell accepts the auction request.

An arrow points from a box labeled auctioneer to a box labeled rep to indicate that the auctioneer finds a Diego Cell VM to satisfy the request to run the app.
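The auction in this step can be pictured as a bidding process: each Cell Rep reports its free capacity, and the Auctioneer picks the best fit. The following is a minimal sketch with hypothetical cell names and a deliberately simplified scoring rule; the real algorithm is described in How the Diego Auction Allocates Jobs.

```python
# Hypothetical cells with their remaining capacity.
cells = [
    {"name": "diego-cell-0", "free_memory_mb": 1024, "free_disk_mb": 4096},
    {"name": "diego-cell-1", "free_memory_mb": 4096, "free_disk_mb": 8192},
]

def score(cell, app):
    """Simplified bid: more remaining headroom after placement = higher score.
    Returns None if the cell cannot satisfy the request."""
    if (cell["free_memory_mb"] < app["memory_mb"]
            or cell["free_disk_mb"] < app["disk_mb"]):
        return None
    return ((cell["free_memory_mb"] - app["memory_mb"])
            + (cell["free_disk_mb"] - app["disk_mb"]))

def run_auction(cells, app):
    """The auctioneer collects bids and picks the best one."""
    bids = [(score(c, app), c["name"]) for c in cells]
    bids = [b for b in bids if b[0] is not None]
    return max(bids)[1] if bids else None

app = {"memory_mb": 512, "disk_mb": 1024}
print(run_auction(cells, app))  # diego-cell-1
```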

Step 4: Creating the container and running the app

The Rep's in-process Executor creates a Garden container on the Diego Cell. Garden downloads the droplet that resulted from the staging process and runs the app in the container.

An arrow points from a box labeled executor to an unlabeled box inside of a box labeled garden. This indicates that the executor sub-process of the rep process on the diego-cell VM communicates to the garden process on the diego-cell VM to create a new container. Another arrow points from garden to a cylinder labeled droplets to indicate that the garden process also communicates with the droplets bucket on the singleton-blobstore VM to download the droplet.

Step 5: Emitting a route for the app

The route-emitter process emits a route registration message to Gorouter for the new app running on the Diego Cell.

An arrow points from a box labeled route-emitter to a box labeled gorouter to indicate that the route-emitter process sends a route registration message to Gorouter.
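A route registration message essentially ties an externally visible URI to a host and port on the Diego Cell. The sketch below shows an illustrative message shape with hypothetical values; consult the routing release documentation for the authoritative schema the route-emitter actually publishes.

```python
import json

# Illustrative shape of a route registration message sent to Gorouter.
# All values are hypothetical.
registration = {
    "host": "10.0.16.5",             # Diego Cell address
    "port": 61001,                   # host port mapped to the app container
    "uris": ["my-app.example.com"],  # external routes for the app
    "private_instance_id": "instance-1",
}

message = json.dumps(registration)
print(json.loads(message)["uris"][0])  # my-app.example.com
```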

Step 6: Sending logs to the Loggregator

The Loggregator agent forwards app logs, errors, and metrics to the TAS for VMs Loggregator.

For more information, see App Logging in TAS for VMs.

An arrow from an unlabeled box inside a box labeled garden points to a box labeled loggregator-agent. This indicates that the container running the app inside the garden process of the diego-cell VM emits logs to the loggregator-agent process of the diego-cell VM. An arrow from loggregator-agent points to a box labeled doppler, and another from doppler to a box labeled traffic-controller, to indicate where logs are sent.

Diego components

The following table describes the jobs that are part of the TAS for VMs Diego BOSH release.

Job: auctioneer
VM: diego_brain
Function:
  • Distributes work through auction to Cell Reps over SSL/TLS. For more information, see How the Diego Auction Allocates Jobs.
  • Maintains a lock in Locket to ensure that only one Auctioneer handles auctions at a time.

Job: bbs
VM: diego_database
Function:
  • Maintains a real-time representation of the state of the Diego cluster, including desired LRPs, running LRPs, and in-flight Tasks.
  • Provides an RPC-style API over HTTP to both internal Diego Core components and external clients, including the SSH Proxy and Route Emitter.
  • Ensures consistency and fault tolerance for Tasks and LRPs by comparing desired state with actual state.
  • Keeps DesiredLRP and ActualLRP counts synchronized. If the DesiredLRP count exceeds the ActualLRP count, requests a start auction from the Auctioneer. If the ActualLRP count exceeds the DesiredLRP count, sends a stop message to the Rep on the Diego Cell hosting an instance.

Job: file_server
VM: diego_brain
Function:
  • Serves static assets that can include general-purpose App Lifecycle binaries.

Job: locket
VM: diego_database
Function:
  • Provides a consistent key-value store for maintenance of distributed locks and component presence.

Job: rep
VM: diego_cell
Function:
  • Represents a Diego Cell in Diego auctions for Tasks and LRPs.
  • Runs Tasks and LRPs by creating a container and then running actions in it.
  • Periodically ensures that its set of Tasks and ActualLRPs in the BBS is in sync with the containers actually present on the Diego Cell.
  • Manages container allocations against resource constraints on the Diego Cell, such as memory and disk space.
  • Streams stdout and stderr from container processes to the metron-agent running on the Diego Cell, which in turn forwards them to the Loggregator system.
  • Periodically collects container metrics and emits them to Loggregator.
  • Mediates all communication between the Diego Cell and the BBS.
  • Maintains a presence record for the Diego Cell in Locket.

Job: route_emitter
VM: diego_cell
Function:
  • Monitors DesiredLRP and ActualLRP states, emitting route registration and unregistration messages to Gorouter when it detects changes.
  • Periodically emits the entire routing table to the TAS for VMs Gorouter.

Job: ssh_proxy
VM: diego_brain
Function:
  • Brokers connections between SSH clients and the SSH servers running inside instance containers.
  • Authorizes access to app instances based on Cloud Controller roles.
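The count-synchronization behavior described for the bbs job can be sketched as a simple reconciliation step: compare desired and actual instance counts, then request starts or stops accordingly. This is a simplified sketch; the real BBS converges per-instance-index state and considers more than raw counts.

```python
def converge(desired_instances, actual_instances):
    """Return the actions the BBS would request to reconcile counts.
    Simplified model of BBS convergence."""
    if desired_instances > actual_instances:
        # Request start auctions from the Auctioneer for missing instances.
        return [("start-auction", i)
                for i in range(actual_instances, desired_instances)]
    if actual_instances > desired_instances:
        # Send stop messages to the Rep on the hosting Cell for extras.
        return [("stop", i)
                for i in range(desired_instances, actual_instances)]
    return []

print(converge(3, 1))  # [('start-auction', 1), ('start-auction', 2)]
print(converge(1, 3))  # [('stop', 1), ('stop', 2)]
```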

Maximum recommended Diego Cells

The maximum recommended number of Diego Cells for each TAS for VMs deployment is 250. By default, deployments that use Silk for networking have a hard limit of 256 Diego Cells. This hard limit is described in the Silk Release documentation on GitHub.

The default CIDR address block for the overlay network is 10.255.0.0/16. Each Diego Cell requires its own subnet, and only 256 such subnets (numbered 0 through 255) can be allocated out of this network.
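The subnet arithmetic behind this limit can be checked with Python's standard `ipaddress` module, assuming the default /24 per-Cell subnet size:

```python
import ipaddress

# Default Silk overlay network for the deployment.
overlay = ipaddress.ip_network("10.255.0.0/16")

# Each Diego Cell is allocated one /24 subnet out of the overlay,
# which yields at most 256 Cells.
cell_subnets = list(overlay.subnets(new_prefix=24))

print(len(cell_subnets))   # 256
print(cell_subnets[0])     # 10.255.0.0/24
print(cell_subnets[-1])    # 10.255.255.0/24
```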

TAS for VMs deployments that do not use Silk for networking do not have a hard limit. However, operating a foundation with more than 250 Diego Cells is not recommended for the following reasons:

  • Changes to the foundation can take a long time, potentially days or weeks depending on the max-in-flight value. For example, if there is a certificate expiring in a week, there might not be enough time to rotate the certificates before expiry. For more information, see Basic Advice in Configuring TAS for VMs for Upgrades.
  • A single foundation still has single points of failure, such as the certificates on the platform. The RAM that 250 Diego Cells provides is enough to host many business-critical apps.

Components from other releases

The following table describes jobs that interact closely with Diego but are not part of the Diego TAS for VMs BOSH release.

Job: bosh-dns-aliases
VM: all
Function:
  • Provides service discovery through colocated DNS servers on all BOSH-deployed VMs.
  • Provides client-side load balancing by randomly selecting a healthy VM when multiple VMs are available.

Job: cc_uploader
VM: diego_brain
Function:
  • Mediates uploads from the Executor to the Cloud Controller.
  • Translates simple HTTP POST requests from the Executor into complex multipart-form uploads for the Cloud Controller.

Job: database
VM: mysql
Function:
  • Provides a consistent key-value data store to Diego.

Job: loggregator-agent
VM: all
Function:
  • Forwards app logs, errors, and app and Diego metrics to the Loggregator Doppler component.

Job: cloud_controller_clock
VM: clock_global
Function:
  • Runs a Diego sync process to ensure that desired app data in Diego is in sync with the Cloud Controller.

App lifecycle binaries

The following platform-specific binaries deploy apps and govern their lifecycle:

  • The Builder, which stages a TAS for VMs app. The Builder runs as a Task on every staging request. It performs static analysis on the app code and does any necessary pre-processing before the app is first run.

  • The Launcher, which runs a TAS for VMs app. The Launcher is set as the Action on the DesiredLRP for the app. It executes the start command with the correct system context, including working directory and environment variables.

  • The Healthcheck, which performs a status check on a running TAS for VMs app from inside the container. The Healthcheck is set as the Monitor action on the DesiredLRP for the app.

Current implementations

  • The buildpack app lifecycle implements the TAS for VMs buildpack-based deployment strategy. For more information, see the buildpackapplifecycle repository on GitHub.

  • The Docker app lifecycle implements a Docker deployment strategy. For more information, see the dockerapplifecycle repository on GitHub.
