Concourse for VMware Tanzu has traditionally been distributed as a BOSH release that allows an operator to deploy Concourse directly from a BOSH Director to virtual machines (VMs). Concourse now also provides a Helm Chart release, which instead targets Kubernetes clusters using Helm's templating and release management. As of v6.7.2, the chart uses Helm v3.
Helm is the package manager for Kubernetes, a tool that streamlines installing and managing Kubernetes applications. The chart renders Kubernetes objects that can be submitted to a Kubernetes cluster and materialized into a Concourse deployment using Kubernetes constructs (Deployments, StatefulSets, PersistentVolumeClaims, etc.).
A Helm Chart is a versioned package of pre-configured Kubernetes resources. Deploying Concourse via the Helm Chart makes it simpler to deploy, scale, maintain, and upgrade your installation in the future. This guide walks an operator through the step-by-step process of deploying with Helm.
If you have not read the Prerequisites and Background Information page, please do so before continuing. It contains important information about required tools and settings to make this process work with VMware Tanzu Kubernetes Grid Integrated Edition (TKGi).
Privileged containers Because Concourse manages its own Linux containers, the worker processes must have superuser privileges, and your cluster must be configured to allow this.
The presence of privileged pods can be a security concern for a Kubernetes cluster. VMware's recommendation is to run a Helm-installed Concourse in its own dedicated cluster to prevent the worker pods from interfering with other Kubernetes workloads.
Managing Linux containers without superuser privileges is a subject of active discussion in the Kubernetes community. Research on this topic is scheduled on the Concourse roadmap, so it is possible this requirement may be dropped in a future release.
Cluster Creation Guide If you have not already created your cluster, you can begin the process now so that it completes in the background while you proceed with the other steps below.
While the process of creating a cluster can vary depending on your needs, with TKGi you can get started by running these commands:
tkgi login -k -u USERNAME -p PASSWORD -a API-HOSTNAME
tkgi create-cluster CLUSTER-NAME -e CLUSTER-DOMAIN-NAME -p PLAN-NAME
Where:
- USERNAME and PASSWORD are your TKGi credentials.
- API-HOSTNAME is the hostname of the TKGi API.
- CLUSTER-NAME is a name of your choosing for the cluster.
- CLUSTER-DOMAIN-NAME is the external hostname for accessing the cluster.
- PLAN-NAME is the name of the TKGi plan to use for the cluster. You can list the available plans with:

tkgi plans
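For illustration only, a hypothetical invocation with example values (your credentials, hostnames, and plan names will differ) might look like:

```bash
# Hypothetical values for illustration; substitute your own.
tkgi login -k -u alana -p 'example-password' -a api.tkgi.example.com
tkgi create-cluster concourse-cluster -e concourse.example.com -p small
```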
See the Creating Clusters guide for the version of TKGi that you're working with for more information.
The VMware Concourse team has tested deploying with Helm using the following prerequisites:
To enable privileged containers on TKGi, you must modify the plan used by the cluster. Go to the plan configuration in Ops Manager and select the Allow Privileged checkbox near the end of the page:
You can verify that it worked by inspecting the pod security policy, which should indicate that privileged mode is enabled:
$ kubectl describe psp pks-privileged
Name: pks-privileged
...
Spec:
  Allow Privilege Escalation: true
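If you are unsure which pod security policy applies to your cluster (pks-privileged, shown above, is the TKGi default), you can list all policies first:

```bash
kubectl get psp
```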
If you have not already done so, visit VMware Tanzu Network and download the Concourse Helm Chart.
Unarchive the Helm Chart tarball to a local directory. For example, with version v7.4.0, the tarball will be called concourse-7.4.0.tgz.
mkdir concourse-helm
tar xvzf ./concourse-7.4.0.tgz -C ./concourse-helm
cd ./concourse-helm
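At this point you can sanity-check the extracted contents. The listing below assumes the chart layout used in the rest of this guide, where each image tarball ships alongside a .tar.name file recording its image tag:

```bash
ls ./images
# Expect files such as concourse.tar, concourse.tar.name,
# postgres.tar, postgres.tar.name (and helm.tar / helm.tar.name
# for Helm v2 releases).
```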
Load the container images into a local Docker client by running the following docker load commands one at a time:
docker load -i ./images/concourse.tar
docker load -i ./images/postgres.tar
If you are using Helm v2 (i.e. Concourse versions < v6.7.x) you'll also need to load the helm image:
docker load -i ./images/helm.tar
These images are quite large, and there will be no output until Docker is done loading.
Success Once the loading finishes, you'll see:
Loaded image: IMAGE-NAME
Registry Authentication This step assumes that the current `docker` client has already authenticated against the internal registry through a regular `docker login`.
In addition to logging in, if you are using a registry with self-signed certificates, make sure your registry has been added to the 'Insecure Registries' section of the Daemon tab in the Docker settings UI on your current workstation.
For more information about certificates and secure registry concerns, see this article: Test an insecure registry
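If you have not yet logged in, a typical registry login looks like the sketch below; Docker prompts for the password when it is not supplied on the command line:

```bash
# Substitute your registry's address; for Docker Hub, run
# `docker login` with no server argument.
docker login INTERNAL-REGISTRY -u REGISTRY-USERNAME
```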
Begin by exporting a pair of variables to your shell to be reused throughout this process. In your terminal, run the following commands:
Using Harbor (internal registry)
export INTERNAL_REGISTRY=INTERNAL-REGISTRY
export PROJECT=PROJECT-NAME
Where:
- INTERNAL-REGISTRY is the domain of your internal registry; if you are using Harbor, this must correspond to the URL (without the scheme).
- PROJECT-NAME is the name of the project in your registry. If the project does not already exist, you will need to create it.

Using Docker Hub
export USERNAME=DOCKERHUB-USERNAME
Where:
- DOCKERHUB-USERNAME is your username on hub.docker.com.

The chart tarball you downloaded contains a directory called images. You need to extract the tag of each image so that you can tag the images appropriately with the internal registry and project name details from the last step.
To do this, run the following commands:
export CONCOURSE_IMAGE_TAG=$(cat ./images/concourse.tar.name | cut -d ':' -f 2)
export POSTGRES_IMAGE_TAG=$(cat ./images/postgres.tar.name | cut -d ':' -f 2)
If you are using Helm v2 (i.e. Concourse versions < v6.7.x) you'll also need to capture the helm image tag:
export HELM_IMAGE_TAG=$(cat ./images/helm.tar.name | cut -d ':' -f 2)
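As an illustration of what these commands do: if ./images/concourse.tar.name contained the hypothetical value concourse/concourse:7.4.0, cut splits the string on : and keeps the second field, so the variable ends up holding just the tag:

```bash
echo 'concourse/concourse:7.4.0' | cut -d ':' -f 2   # prints: 7.4.0
```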
Tag the images so their names include the internal registry address:
Using Harbor (internal registry)
docker tag concourse/concourse:$CONCOURSE_IMAGE_TAG $INTERNAL_REGISTRY/$PROJECT/concourse:$CONCOURSE_IMAGE_TAG
docker tag dev.registry.pivotal.io/concourse/postgres:$POSTGRES_IMAGE_TAG $INTERNAL_REGISTRY/$PROJECT/postgres:$POSTGRES_IMAGE_TAG
Using Docker Hub
docker tag concourse/concourse:$CONCOURSE_IMAGE_TAG $USERNAME/concourse:$CONCOURSE_IMAGE_TAG
docker tag dev.registry.pivotal.io/concourse/postgres:$POSTGRES_IMAGE_TAG $USERNAME/postgres:$POSTGRES_IMAGE_TAG
If you are using Helm v2 (i.e. Concourse versions < v6.7.x) you'll also need to tag the helm image:
Using Harbor (internal registry)
docker tag dev.registry.pivotal.io/concourse/helm:$HELM_IMAGE_TAG $INTERNAL_REGISTRY/$PROJECT/helm:$HELM_IMAGE_TAG
Using Docker Hub
docker tag dev.registry.pivotal.io/concourse/helm:$HELM_IMAGE_TAG $USERNAME/helm:$HELM_IMAGE_TAG
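Before pushing, you can confirm that the re-tagged images exist locally. A quick check using Docker's reference filter (Harbor-style names shown; for Docker Hub, filter on $USERNAME/* instead):

```bash
docker image ls --filter=reference="$INTERNAL_REGISTRY/$PROJECT/*"
```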
Push the images to the internal registry by running the following commands in your terminal:
Using Harbor (internal registry)
docker push $INTERNAL_REGISTRY/$PROJECT/concourse:$CONCOURSE_IMAGE_TAG
docker push $INTERNAL_REGISTRY/$PROJECT/postgres:$POSTGRES_IMAGE_TAG
Using Docker Hub
docker push $USERNAME/concourse:$CONCOURSE_IMAGE_TAG
docker push $USERNAME/postgres:$POSTGRES_IMAGE_TAG
If you are using Helm v2 (i.e. Concourse versions < v6.7.x) you'll also need to push the helm image:
Using Harbor (internal registry)
docker push $INTERNAL_REGISTRY/$PROJECT/helm:$HELM_IMAGE_TAG
Using Docker Hub
docker push $USERNAME/helm:$HELM_IMAGE_TAG
You must have the necessary credentials (and authorization) to push to the targeted project.
Log in to TKGi and get the cluster credentials:
tkgi login -k -u USER -p PASSWORD -a API-HOSTNAME
tkgi get-credentials CLUSTER-NAME
kubectl config use-context CLUSTER-NAME
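Before continuing, it can help to confirm that kubectl is pointed at the new cluster and that its nodes are ready:

```bash
kubectl config current-context   # should print CLUSTER-NAME
kubectl get nodes                # all nodes should report STATUS Ready
```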
Next, you need to create a default StorageClass if one does not already exist.
Tip You can check whether you already have a StorageClass by running kubectl get sc.
Create a file called storage-class.yml. For example, with vim, run:
vim storage-class.yml
Populate the file with the following:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: concourse-storage-class
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: DATASTORE-FROM-VSPHERE
Where:
- DATASTORE-FROM-VSPHERE is a valid vSphere datastore.

Use the following kubectl command to create the storage class on your cluster:
kubectl create -f storage-class.yml
Success This command should return a response like:
```bash
storageclass.storage.k8s.io/concourse-storage-class created
```

## Installing Concourse via Helm Chart

Create a deployment configuration file named deployment-values.yml. For example, with vim, run:
vim deployment-values.yml
Insert the following snippet:
Using Harbor (internal registry)
---
image: INTERNAL_REGISTRY/PROJECT/concourse
imageTag: CONCOURSE_IMAGE_TAG
imagePullSecrets: ["regcred"] # Remove if registry is public
postgresql:
  image:
    registry: INTERNAL_REGISTRY
    repository: PROJECT/postgres
    tag: POSTGRES_IMAGE_TAG
    pullSecrets: ["regcred"] # Remove if registry is public
Where:
- INTERNAL_REGISTRY/PROJECT is your registry address and project.
- CONCOURSE_IMAGE_TAG is the output of cat ./images/concourse.tar.name | cut -d ':' -f 2
- POSTGRES_IMAGE_TAG is the output of cat ./images/postgres.tar.name | cut -d ':' -f 2
Using Docker Hub
---
image: USERNAME/concourse
imageTag: CONCOURSE_IMAGE_TAG
imagePullSecrets: ["regcred"] # Remove if registry is public
postgresql:
  image:
    registry: docker.io
    repository: USERNAME/postgres
    tag: POSTGRES_IMAGE_TAG
    pullSecrets: ["regcred"] # Remove if registry is public
Where:
- USERNAME is your Docker Hub username.
- CONCOURSE_IMAGE_TAG is the output of cat ./images/concourse.tar.name | cut -d ':' -f 2
- POSTGRES_IMAGE_TAG is the output of cat ./images/postgres.tar.name | cut -d ':' -f 2
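Note that both snippets above reference an image pull secret named regcred, which the chart does not create for you. If your registry requires authentication and the secret does not already exist in the target namespace, you can create it with kubectl; a sketch, substituting your own registry address and credentials:

```bash
# For Docker Hub, use --docker-server=https://index.docker.io/v1/
kubectl create secret docker-registry regcred \
  --docker-server="$INTERNAL_REGISTRY" \
  --docker-username=REGISTRY-USERNAME \
  --docker-password=REGISTRY-PASSWORD
```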
Save and close deployment-values.yml.

Deploy with Helm:
helm install \
DEPLOYMENT-NAME \
--create-namespace \
--values ./deployment-values.yml \
./charts
Where:
- DEPLOYMENT-NAME is a name of your choosing for your Concourse deployment.

Successful Deployment If the helm install command is successful, you will see the following response, followed by more information about your cluster:
NAME: DEPLOYMENT-NAME
LAST DEPLOYED: DEPLOYMENT-DATE
NAMESPACE: default
STATUS: DEPLOYED
...
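You can check on the release and its pods at any time afterwards; a minimal sketch, assuming the default namespace shown in the output above:

```bash
helm status DEPLOYMENT-NAME
kubectl get pods --namespace default
```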
Aside from the typical recommendations for any Concourse installation (see Running a web node and Running a worker node), and given the peculiarities of Kubernetes, VMware recommends a few tweaks to Concourse deployments on Kubernetes.
For Concourse's workers:
- Spread worker pods across Kubernetes nodes by using pod anti-affinity rules (the worker.affinity key and/or the worker.hardAntiAffinity helper key in the values.yaml file) and taints.

For Concourse's web instances:
- Spread web pods across nodes using the web.affinity key in values.yaml, as sketched below.
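As a sketch of what such overrides might look like in deployment-values.yml, assuming the chart's worker.hardAntiAffinity helper is a boolean and web.affinity accepts a standard Kubernetes affinity stanza (check the chart's values.yaml for the exact shape and the labels your release applies):

```yaml
worker:
  # Hypothetical override: require at most one worker pod per node.
  hardAntiAffinity: true
web:
  # Hypothetical override: prefer spreading web pods across nodes.
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: DEPLOYMENT-NAME-web  # illustrative label; verify against your release
```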