Concourse for VMware Tanzu has traditionally been distributed as a BOSH release, allowing an operator to deploy Concourse directly from a BOSH Director to virtual machines (VMs). Concourse now also provides a Helm chart release, which instead targets Kubernetes clusters using Helm's templating and release management. As of v6.7.2, the chart uses Helm v3.


Helm is the package manager for Kubernetes, a tool that streamlines installing and managing Kubernetes applications. It creates Kubernetes objects that can be submitted to Kubernetes clusters, and materialized into a Concourse deployment using Kubernetes constructs (Deployments, StatefulSets, PersistentVolumeClaims, etc).

A Helm Chart is a versioned package of pre-configured Kubernetes resources. Deploying Concourse via Helm Chart makes it simpler to deploy, scale, maintain, and upgrade your installation in the future. This guide walks an operator step by step through deploying with Helm.

If you have not read the Prerequisites and Background Information page, please do so before continuing. It contains important information about required tools and settings to make this process work with VMware Tanzu Kubernetes Grid Integrated Edition (TKGi).

Privileged containers

Because Concourse manages its own Linux containers, the worker processes must have superuser privileges, and your cluster must be configured to allow this.
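For context, "privileged" here means the pod's container runs with `privileged: true` in its security context. The following is a minimal illustrative sketch of what such a spec looks like, not the chart's actual worker template (the pod and image names are hypothetical):

```yaml
# Sketch only: illustrates a privileged container spec; not Concourse's chart template.
apiVersion: v1
kind: Pod
metadata:
  name: example-privileged-pod
spec:
  containers:
    - name: worker
      image: EXAMPLE-IMAGE   # hypothetical image reference
      securityContext:
        privileged: true     # requires the cluster (and its policies) to allow privileged pods
```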

The presence of privileged pods can be a security concern for a Kubernetes cluster. VMware's recommendation is to run a Helm-installed Concourse in its own dedicated cluster in order to avoid any interference from the worker pods to other Kubernetes workloads.

Managing Linux containers without superuser privileges is a subject of active discussion in the Kubernetes community. Research on this topic is scheduled on the Concourse roadmap, so it is possible this requirement may be dropped in a future release.

Cluster Creation Guide

If you have not already created your cluster, you can begin the process now so that it completes in the background while you proceed with the other steps below.

While the process of creating a cluster can vary depending on your needs, with TKGi you can get started by following these commands:

tkgi login -k -u USERNAME -p PASSWORD -a API-HOSTNAME
tkgi create-cluster CLUSTER-NAME --external-hostname CLUSTER-DOMAIN-NAME --plan PLAN-NAME

Where:

  • USERNAME is your TKGi username
  • PASSWORD is your TKGi password
  • API-HOSTNAME is your TKGi API host name
  • CLUSTER-NAME is a name you choose for your cluster
  • CLUSTER-DOMAIN-NAME is a domain name you choose for your cluster.
  • PLAN-NAME is one of the plans returned if you run tkgi plans

See the Creating Clusters guide for the version of TKGi that you're working with for more information.

Prerequisites for Deploying on Kubernetes with Helm

The VMware Concourse team has tested deploying with Helm using the prerequisites listed on the Prerequisites and Background Information page.

Enabling privileged containers on TKGi

To enable privileged containers on TKGi, you must update the plan used by your cluster.

Head to the plan configuration in Ops Manager, and mark the Allow Privileged checkbox near the end of the page.

You can verify if it worked by inspecting the pod security policy, which should indicate that privileged mode is enabled.

$ kubectl describe psp pks-privileged
Name:  pks-privileged
...
  Allow Privileged:            true
  Allow Privilege Escalation:  true

Download, Tag, and Push Images to Internal Registry

Download Concourse Helm Chart and load images into Docker

  1. If you have not already done so, visit VMware Tanzu Network and download the Concourse Helm Chart.

  2. Unarchive the Helm Chart tarball to a local directory. For example, with version v7.4.0, the tarball will be called concourse-7.4.0.tgz.

    mkdir concourse-helm
    tar xvzf ./concourse-7.4.0.tgz -C ./concourse-helm
    cd ./concourse-helm
  3. Load the container images into a local Docker client by running the following docker load commands one at a time:

    docker load -i ./images/concourse.tar
    docker load -i ./images/postgres.tar

    If you are using Helm v2 (i.e. Concourse versions < v6.7.x) you'll also need to load the helm image:

    docker load -i ./images/helm.tar

    These images are quite large, and there will be no output until Docker is done loading.

    Success Once the loading finishes, you'll see:

    Loaded image: IMAGE-NAME

Tag and push the loaded images to internal registry

Registry Authentication This step assumes that the current `docker` client has already authenticated against the internal registry through a regular `docker login`.

In addition to logging in, if you are using a registry with self-signed certificates, make sure your registry has been added to the 'Insecure Registries' section of the Daemon tab in the Docker settings UI on your current workstation.

For more information about certificates and secure registry concerns, see this article: Test an insecure registry
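On hosts without the Docker settings UI, the same setting lives in the Docker daemon configuration file (`daemon.json`). A minimal sketch, where INTERNAL-REGISTRY stands for your registry's host (and port, if any):

```json
{
  "insecure-registries": ["INTERNAL-REGISTRY"]
}
```

Restart the Docker daemon after changing this file for the setting to take effect.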

  1. Begin by exporting a pair of variables to your shell to be reused throughout this process. In your terminal, run the following commands:

    Using Harbor (internal registry)

    export INTERNAL_REGISTRY=INTERNAL-REGISTRY
    export PROJECT=PROJECT-NAME

    • INTERNAL-REGISTRY is the domain of your internal registry - if you are using Harbor, this must correspond to the URL (without scheme)
    • PROJECT-NAME is the name of the project in your registry. If the project does not already exist, you will need to create it.

    Using Docker Hub

    export USERNAME=DOCKERHUB-USERNAME

    • DOCKERHUB-USERNAME is your username on Docker Hub
  2. The .tgz file you downloaded contains a directory called images. You need to extract the tag of each image so that you can appropriately tag the images with the internal registry and project name details from the last step.

    To do this, run the following commands:

    export CONCOURSE_IMAGE_TAG=$(cat ./images/ | cut -d ':' -f 2)
    export POSTGRES_IMAGE_TAG=$(cat ./images/ | cut -d ':' -f 2)

    If you are using Helm v2 (i.e. Concourse versions < v6.7.x) you'll also need to tag the helm image:

    export HELM_IMAGE_TAG=$(cat ./images/ | cut -d ':' -f 2)
  3. Tag the images so their names include the internal registry address:

    Using Harbor (internal registry)

    docker tag concourse/concourse:$CONCOURSE_IMAGE_TAG $INTERNAL_REGISTRY/$PROJECT/concourse:$CONCOURSE_IMAGE_TAG

    Using Docker Hub

    docker tag concourse/concourse:$CONCOURSE_IMAGE_TAG $USERNAME/concourse:$CONCOURSE_IMAGE_TAG

    If you are using Helm v2 (i.e. Concourse versions < v6.7.x) you'll also need to tag the helm image:

    Using Harbor (internal registry)


    Using Docker Hub

  4. Push the images to the internal registry by running the following commands in your terminal:

    Using Harbor (internal registry)

    docker push $INTERNAL_REGISTRY/$PROJECT/concourse:$CONCOURSE_IMAGE_TAG
    docker push $INTERNAL_REGISTRY/$PROJECT/postgres:$POSTGRES_IMAGE_TAG
    Using Docker Hub

    docker push $USERNAME/concourse:$CONCOURSE_IMAGE_TAG
    docker push $USERNAME/postgres:$POSTGRES_IMAGE_TAG

    If you are using Helm v2 (i.e. Concourse versions < v6.7.x) you'll also need to push the helm image:

    Using Harbor (internal registry)

    docker push $INTERNAL_REGISTRY/$PROJECT/helm:$HELM_IMAGE_TAG
    Using Docker Hub

    docker push $USERNAME/helm:$HELM_IMAGE_TAG

    You must have the necessary credentials (and authorization) to push to the targeted project.
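The tag-extraction step earlier relies on `cut` splitting an image reference on `:` and keeping the second field (the tag). As an illustration with a made-up reference (not one of the real files under ./images/):

```shell
# Hypothetical image reference, for illustration only
IMAGE_REF='concourse/concourse:6.7.2'

# cut splits on ':' and -f 2 keeps the second field, i.e. the tag
echo "$IMAGE_REF" | cut -d ':' -f 2   # prints: 6.7.2
```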

Prepare the Kubernetes Cluster

  1. Log in to TKGi and get the cluster credentials:

    tkgi login -k -u USERNAME -p PASSWORD -a API-HOSTNAME
    tkgi get-credentials CLUSTER-NAME
    kubectl config use-context CLUSTER-NAME

    Next, you need to create a default StorageClass if one does not already exist.

    Tip You can check if you already have a StorageClass by running kubectl get sc.

  2. Create a file called storage-class.yml. For example, with vim, run:

    vim storage-class.yml
  3. Populate the file with the following:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: concourse-storage-class
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      datastore: DATASTORE-FROM-VSPHERE


    • DATASTORE-FROM-VSPHERE is a valid vSphere datastore
  4. Use the following kubectl command to create the storage class on your cluster:

    kubectl create -f storage-class.yml

Success This command should return a response like:

storageclass.storage.k8s.io/concourse-storage-class created

Installing Concourse via Helm Chart

  1. Create a deployment configuration file named deployment-values.yml

    vim deployment-values.yml

    Insert the following snippet:

    Using Harbor (internal registry)

    image: INTERNAL_REGISTRY/PROJECT/concourse
    imageTag: "CONCOURSE_IMAGE_TAG"
    imagePullSecrets: ["regcred"] # Remove if registry is public
    postgresql:
      image:
        registry: INTERNAL_REGISTRY
        repository: PROJECT/postgres
        tag: "POSTGRES_IMAGE_TAG"
        pullSecrets: ["regcred"] # Remove if registry is public


    • INTERNAL_REGISTRY/PROJECT is your registry address and project.

    • CONCOURSE_IMAGE_TAG is the output of

      cat ./images/ | cut -d ':' -f 2
    • POSTGRES_IMAGE_TAG is the output of

      cat ./images/ | cut -d ':' -f 2

    Using Docker Hub

    image: USERNAME/concourse
    imageTag: "CONCOURSE_IMAGE_TAG"
    imagePullSecrets: ["regcred"] # Remove if registry is public
    postgresql:
      image:
        repository: USERNAME/postgres
        tag: "POSTGRES_IMAGE_TAG"
        pullSecrets: ["regcred"] # Remove if registry is public


    • USERNAME is your Docker Hub username.

    • CONCOURSE_IMAGE_TAG is the output of

      cat ./images/ | cut -d ':' -f 2
    • POSTGRES_IMAGE_TAG is the output of

      cat ./images/ | cut -d ':' -f 2
  2. Save and close deployment-values.yml

  3. Deploy with helm

    helm install DEPLOYMENT-NAME \
        --create-namespace \
        --values ./deployment-values.yml \
        ./charts


    • DEPLOYMENT-NAME is the name of your choosing for your Concourse Deployment

    Successful Deployment If the helm install command is successful, you will see the following response followed by more information about your release:

    NAMESPACE: default


Aside from the typical recommendations for any Concourse installation (see Running a web node and Running a worker node), VMware recommends a few Kubernetes-specific tweaks to Concourse deployments.

For Concourse's workers:

  • Give each Concourse worker pod an entire machine for itself
    • As each worker is responsible for its own set of Garden containers, running multiple Concourse workers on the same Kubernetes node would lead to too many Linux containers on the same machine, degrading overall performance
    • This can be achieved in Kubernetes by a combination of affinity rules (by configuring the worker.affinity key and/or the worker.hardAntiAffinity helper key in the values.yaml file) and taints.

For Concourse's web instances:

  • Have Concourse web pods allocated in different nodes
    • Concourse web instances at times have to stream volumes between workers, and thus need a lot of dedicated bandwidth, depending on the workload
    • This can be achieved with Kubernetes affinity rules, configurable through the web.affinity key in values.yaml.
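As a sketch, both recommendations above might look like this in values.yaml. The worker.hardAntiAffinity key is the chart helper named above; the web.affinity stanza uses standard Kubernetes affinity syntax, and the app label shown is hypothetical (verify it against the labels your chart release actually generates):

```yaml
worker:
  # Chart helper mentioned above: schedule at most one worker pod per node
  hardAntiAffinity: true

web:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: DEPLOYMENT-NAME-web   # hypothetical label; check your release's labels
```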