This topic describes how to use Services Toolkit to have a claim resolve to a different backing service resource depending on the environment it is in. This removes the need for application operators to change their ClassClaim and Workload as they are promoted through environments, whilst also enabling service operators to change the backing service implementation without further configuration.
A broad overview of what this looks like is the following:

- There are three clusters: iterate, run-test, and run-production.
- Each cluster has a ClusterInstanceClass called postgres.
  - In the iterate cluster, this points at in-cluster Bitnami Helm instances of Postgres.
  - In the run-test cluster, this points at in-cluster VMware Tanzu Postgres instances.
  - In the run-production cluster, this points at resources representing instances running in Amazon AWS RDS.
- A ClassClaim gets applied along with a consuming Workload.
  - In the iterate cluster, it resolves to a Helm chart instance.
  - In the run-test cluster, it resolves to a VMware Tanzu Postgres instance.
  - In the run-production cluster, it resolves to an Amazon AWS RDS instance.
- The ClassClaim remains identical across the clusters, so there is less work for the Application Operator.

Note: The backing service implementations and environment layouts used in this use case are arbitrary and should not be taken as recommendations or requirements.
This use case requires three separate clusters, with TAP 1.4.0 or later installed in each.
iterate cluster

Select an arbitrary cluster; we will call this the iterate cluster. For this cluster, we will create an instance of the Bitnami Helm chart for Postgres.
Apply the RBAC necessary for the Services Toolkit operator to read the Secrets:
# iterate-stk-secret-reader.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: stk-secret-reader
  labels:
    servicebinding.io/controller: "true"
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
Add the Bitnami chart repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
Create an instance of the helm chart:
# Make sure to set the database name and user
helm install postgres bitnami/postgresql \
--set auth.username=test \
--set auth.database=test
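Before continuing, it can help to wait until the chart's workload is ready. With the release name postgres used above, the Bitnami chart creates a StatefulSet named postgres-postgresql (consistent with the postgres-postgresql-0 pod referenced later in this topic):

```shell
# Wait for the Bitnami Postgres StatefulSet to finish rolling out.
# The name assumes the release is called "postgres", as installed above.
kubectl rollout status statefulset/postgres-postgresql --timeout=120s
```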
Apply the following SecretTemplate resource and the necessary RBAC permissions. This creates a Secret with the Postgres credentials, which our workload application can consume. It reads these credentials from the helm-created resources and labels the resulting Secret with services.apps.tanzu.vmware.com/class: bitnami-postgres. Later on, this label will be used by the postgres class to match instances of the class.
# helm-secret-template.yaml
---
apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretTemplate
metadata:
  name: helm-postgres
spec:
  serviceAccountName: helm-reader
  inputResources:
  - name: pod
    ref:
      apiVersion: v1
      kind: Pod
      name: postgres-postgresql-0
  - name: service
    ref:
      apiVersion: v1
      kind: Service
      name: postgres-postgresql
  - name: secret
    ref:
      apiVersion: v1
      kind: Secret
      name: $(.pod.spec.containers[?(@.name=="postgresql")].env[?(@.name=="POSTGRES_PASSWORD")].valueFrom.secretKeyRef.name)
  template:
    metadata:
      labels:
        services.apps.tanzu.vmware.com/class: bitnami-postgres
    stringData:
      type: postgresql
      port: $(.service.spec.ports[0].port)
      database: $(.pod.spec.containers[0].env[?(@.name=="POSTGRES_DB")].value)
      host: $(.service.spec.clusterIP)
      username: $(.pod.spec.containers[0].env[?(@.name=="POSTGRES_USER")].value)
    data:
      password: $(.secret.data.password)
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm-reader
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-reader
rules:
- apiGroups: [""]
  resources: ["services", "secrets", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-rb-helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: helm-reader
subjects:
- kind: ServiceAccount
  name: helm-reader
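As a quick sanity check: secretgen-controller writes the templated Secret under the SecretTemplate's own name, so once the SecretTemplate has reconciled you should see a helm-postgres Secret carrying the class label (commands assume the default namespace):

```shell
# Confirm the SecretTemplate reconciled successfully
kubectl get secrettemplate helm-postgres

# The templated Secret is created under the same name; confirm the
# services.apps.tanzu.vmware.com/class label is present
kubectl get secret helm-postgres --show-labels
```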
Note: If you use this YAML to create a postgres instance in a namespace other than default, then a ResourceClaimPolicy must be created that allows Secrets with the same labels in that namespace to be claimed from default.
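As a sketch of what such a policy might look like, assuming the instance lives in a hypothetical postgres-instances namespace (the policy is created in the namespace that owns the resources and lists the namespaces allowed to claim from it; field names may vary between Services Toolkit versions):

# hypothetical-resourceclaimpolicy.yaml
---
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ResourceClaimPolicy
metadata:
  # hypothetical name and namespace, for illustration only
  name: allow-default-to-claim-bitnami-postgres
  namespace: postgres-instances
spec:
  consumingNamespaces:
  - default
  subject:
    group: ""
    kind: Secret
    selector:
      matchLabels:
        services.apps.tanzu.vmware.com/class: bitnami-postgres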
run-test cluster

Select a different arbitrary cluster; we will call this the run-test cluster. For this cluster, we will create an instance of VMware Tanzu SQL with Postgres for Kubernetes.

Follow the instructions at Installing a Tanzu Postgres Operator to install the operator.

Follow the instructions at Deploying a Postgres Instance to deploy an instance.
Note: If the instances deployed are in a namespace other than default, a ResourceClaimPolicy must be created that allows Postgres resources in that namespace to be claimed from default.
run-production cluster

Select a different arbitrary cluster; we will call this the run-production cluster. For this cluster, we will create an instance of Amazon AWS RDS.

In this scenario, we can either:

- provision the RDS instance by following the ACK use case, or
- provision it by following the Crossplane use case.
Note: If the instances deployed are in a namespace other than default, a ResourceClaimPolicy must be created that allows Secrets with the same labels in that namespace to be claimed from default.
ClusterInstanceClasses

The ClusterInstanceClass is the discovery interface that our ClassClaim uses to claim a database resource for our application workload. For the ClassClaim to work in every cluster we apply it to, the ClusterInstanceClass must have the same name in each cluster, as the name is what identifies it to the ClassClaim.
iterate cluster

On the iterate cluster, we have Helm chart instances of Postgres. To create a ClusterInstanceClass through which service instances can be claimed and consumed, we need to define what the claimable resource of these Helm chart instances is. Since nothing in the chart follows the ProvisionedService duck type of the ServiceBinding spec, we will use the Secret produced by the SecretTemplate we applied.

This results in the following ClusterInstanceClass, which we apply to the iterate cluster:
# iterate-clusterinstanceclass.yaml
---
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ClusterInstanceClass
metadata:
  name: postgres
spec:
  description:
    short: Postgres instances
  pool:
    kind: Secret
    labelSelector:
      matchLabels:
        services.apps.tanzu.vmware.com/class: bitnami-postgres
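To confirm the class landed on the cluster, you can inspect it with kubectl; the same check applies on the other clusters, since the class carries the same name everywhere:

```shell
# ClusterInstanceClass is cluster-scoped; confirm it exists
kubectl get clusterinstanceclass postgres

# Inspect the pool definition the class will match against
kubectl describe clusterinstanceclass postgres
```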
run-test cluster

On the run-test cluster, we have instances of VMware Tanzu Postgres. Again, we need to configure the class to match against the claimable resources for these instances. The Postgres resource provided by this operator follows the ProvisionedService duck type of the ServiceBinding spec, so we will define it as the claimable resource.

This results in the following ClusterInstanceClass, which we apply to the run-test cluster:
# run-test-clusterinstanceclass.yaml
---
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ClusterInstanceClass
metadata:
  name: postgres
spec:
  description:
    short: Postgres instances
  pool:
    group: sql.tanzu.vmware.com
    kind: Postgres
run-production cluster

On the run-production cluster, we have instances of Amazon AWS RDS. Again, we need to configure the class to match against the claimable resources for these instances. Since nothing we created follows the ProvisionedService duck type of the ServiceBinding spec, we will use the Secret produced by either:

- the SecretTemplate in the ACK use case, or
- the PostgreSQLInstance in the Crossplane use case.

Both have the same label on their Secret, so we can use the following ClusterInstanceClass and apply it to the run-production cluster:
# run-production-clusterinstanceclass.yaml
---
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ClusterInstanceClass
metadata:
  name: postgres
spec:
  description:
    short: Postgres instances
  pool:
    kind: Secret
    labelSelector:
      matchLabels:
        services.apps.tanzu.vmware.com/class: rds-postgres
Workload and ClassClaim

First, we create our Cartographer Workload:
# workload.yaml
---
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: pet-clinic
  namespace: default
  labels:
    apps.tanzu.vmware.com/workload-type: web
    app.kubernetes.io/part-of: pet-clinic
spec:
  params:
  - name: annotations
    value:
      autoscaling.knative.dev/minScale: "1"
  env:
  - name: SPRING_PROFILES_ACTIVE
    value: postgres
  serviceClaims:
  - name: db
    ref:
      apiVersion: services.apps.tanzu.vmware.com/v1alpha1
      kind: ClassClaim
      name: postgres
  source:
    git:
      url: https://github.com/sample-accelerators/spring-petclinic
      ref:
        branch: main
        tag: tap-1.2
This file can be generated and applied with the following tanzu command:
tanzu apps workload create pet-clinic \
  --git-repo https://github.com/sample-accelerators/spring-petclinic \
  --git-branch main \
  --git-tag tap-1.2 \
  --type web \
  --label app.kubernetes.io/part-of=pet-clinic \
  --annotation autoscaling.knative.dev/minScale=1 \
  --env SPRING_PROFILES_ACTIVE=postgres \
  --service-ref db=services.apps.tanzu.vmware.com/v1alpha1:ClassClaim:postgres
We then create the ClassClaim it references:
# classclaim.yaml
---
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ClassClaim
metadata:
  name: postgres
  namespace: default
spec:
  classRef:
    name: postgres
This file can be generated and applied with the following tanzu command:
tanzu services class-claim create postgres --class postgres
For more details on this command, see Create ClassClaims.
Apply both of these files to the iterate cluster, and we should find that our application is running and using the helm-created Postgres instance. As we apply the exact same files to the run-test and run-production clusters, we will find that the Workload uses the VMware Tanzu Postgres and Amazon AWS RDS Postgres instances respectively.
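One way to see the environment-specific resolution at work is to inspect the ClassClaim's status on each cluster and compare which backing resource it bound to (a sketch; the exact status field names may vary between Services Toolkit versions, so the YAML output is shown in full rather than filtered):

```shell
# Run on each cluster: the claim's status references the resource it
# resolved to, which differs per cluster even though the claim is identical
kubectl get classclaim postgres -n default -o yaml
```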