Introducing Different Service Implementations in Different Environments

This topic describes how to use Services Toolkit to have a claim resolve to a different backing service resource depending on the environment it is in. This removes the need for application operators to change their ClassClaim and Workload as they are promoted through environments, while also enabling service operators to change the backing service implementation without further configuration.

A broad overview of what this looks like is the following:

Postgres class across three clusters

  • There are three clusters: iterate, run-test, and run-production.
  • In each cluster, the Services Operator has created a ClusterInstanceClass called postgres.
    • In the iterate cluster, this points at an in-cluster Bitnami Helm instance of Postgres.
    • In the run-test cluster, this points at an in-cluster VMware Tanzu Postgres instance.
    • In the run-production cluster, this points at resources representing instances running in Amazon RDS.
  • The App Operator creates a ClassClaim, which is applied along with a consuming Workload.
    • When it is applied in iterate, it resolves to a Helm chart instance.
    • When it is promoted to run-test, it resolves to a VMware Tanzu Postgres instance.
    • When it is promoted to run-production, it resolves to an Amazon RDS instance.
    • Note that the definition of the ClassClaim remains identical across the clusters, so there is less work for the Application Operator.

Note: The backing service implementations and environment layouts used in this use case are arbitrary and should not be taken as recommendations or requirements.

Prerequisites

This use case requires three separate clusters, each with TAP 1.4.0 or later installed.

Set up the Bitnami Postgres Helm chart on the iterate cluster

Select an arbitrary cluster; we will call this the iterate cluster. On this cluster, we will create an instance of the Bitnami Helm chart for Postgres.

  1. Apply the RBAC necessary for the Services Toolkit operator to read the Secrets:

    # iterate-stk-secret-reader.yaml
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: stk-secret-reader
      labels:
        servicebinding.io/controller: "true"
    rules:
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get", "list", "watch"]
    
  2. Add the Bitnami chart repository:

    helm repo add bitnami https://charts.bitnami.com/bitnami
    
  3. Create an instance of the helm chart:

    # Make sure to set the database name and user
    helm install postgres bitnami/postgresql \
      --set auth.username=test \
      --set auth.database=test
    
  4. Apply the following SecretTemplate resource and the necessary RBAC permissions. This creates a Secret containing the Postgres credentials that our workload application can consume. The template reads these credentials from the Helm-created resources and labels the resulting Secret with services.apps.tanzu.vmware.com/class: bitnami-postgres. Later on, the postgres class uses this label to match instances of the class.

    # helm-secret-template.yaml
    ---
    apiVersion: secretgen.carvel.dev/v1alpha1
    kind: SecretTemplate
    metadata:
      name: helm-postgres
    spec:
      serviceAccountName: helm-reader
      inputResources:
      - name: pod
        ref:
          apiVersion: v1
          kind: Pod
          name: postgres-postgresql-0
      - name: service
        ref:
          apiVersion: v1
          kind: Service
          name: postgres-postgresql
      - name: secret
        ref:
          apiVersion: v1
          kind: Secret
          name: $(.pod.spec.containers[?(@.name=="postgresql")].env[?(@.name=="POSTGRES_PASSWORD")].valueFrom.secretKeyRef.name)
      template:
        metadata:
          labels:
            services.apps.tanzu.vmware.com/class: bitnami-postgres
        stringData:
          type: postgresql
          port: $(.service.spec.ports[0].port)
          database: $(.pod.spec.containers[0].env[?(@.name=="POSTGRES_DB")].value)
          host: $(.service.spec.clusterIP)
          username: $(.pod.spec.containers[0].env[?(@.name=="POSTGRES_USER")].value)
        data:
          password: $(.secret.data.password)
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: helm-reader
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: helm-reader
    rules:
    - apiGroups: [""]
      resources: ["services", "secrets", "pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: sa-rb-helm
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: helm-reader
    subjects:
    - kind: ServiceAccount
      name: helm-reader
    
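With the four steps above complete, you can apply the manifests and check that the chart and the templated Secret are in place. This is a sketch that assumes the file names used in the steps and the default namespace; secretgen-controller creates a Secret with the same name as the SecretTemplate.

```shell
# Apply the manifests from steps 1 and 4
kubectl apply -f iterate-stk-secret-reader.yaml
kubectl apply -f helm-secret-template.yaml

# Wait for the chart's Postgres pod to become ready
kubectl rollout status statefulset/postgres-postgresql

# The SecretTemplate produces a Secret named after itself;
# it should carry the bitnami-postgres class label
kubectl get secret helm-postgres --show-labels
```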

Note: If you use this YAML to create a postgres instance in a namespace other than default, then a ResourceClaimPolicy must be created that allows Secrets with the same labels in that namespace to be claimed from default.
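As a sketch of such a policy, the following ResourceClaimPolicy allows the Secrets to be claimed from the default namespace. The `service-instances` namespace name and the policy name are assumptions for illustration; adjust them to wherever your Helm instance and its templated Secret actually live.

```yaml
# resource-claim-policy.yaml (sketch; names are hypothetical)
---
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ResourceClaimPolicy
metadata:
  name: default-can-claim-postgres-secrets
  namespace: service-instances  # the namespace the Secrets live in
spec:
  consumingNamespaces:
  - default
  subject:
    kind: Secret
```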

Set up VMware Tanzu SQL with Postgres on the run-test cluster

Select a different arbitrary cluster; we will call this the run-test cluster. On this cluster, we will create an instance of VMware Tanzu SQL with Postgres for Kubernetes.

  1. Follow the instructions at Installing a Tanzu Postgres Operator to install the operator.

  2. Follow the instructions at Deploying a Postgres Instance to deploy an instance.

Note: If the instances deployed are in a namespace other than default, a ResourceClaimPolicy must be created that allows Postgres resources in that namespace to be claimed from default.

Set up Amazon RDS on the run-production cluster

Select a different arbitrary cluster; we will call this the run-production cluster. On this cluster, we will create an instance of Amazon RDS.

In this scenario, we can either:

  • follow the ACK use case to create an Amazon RDS instance managed by AWS Controllers for Kubernetes, together with a SecretTemplate for its credentials,

or

  • follow the Crossplane use case to create a PostgreSQLInstance backed by Amazon RDS.
Note: If the instances deployed are in a namespace other than default, a ResourceClaimPolicy must be created that allows Secrets with the same labels in that namespace to be claimed from default.

Create the ClusterInstanceClasses

The ClusterInstanceClass is the discovery interface that our ClassClaim uses to claim a database resource for our application workload. For the ClassClaim to work in every cluster we apply it to, the ClusterInstanceClass must have the same name in each cluster, because the name is what identifies it to the ClassClaim.

The iterate cluster

On the iterate cluster, we have Helm chart instances of Postgres. To create a ClusterInstanceClass through which service instances can be claimed and consumed, we need to define what the claimable resource of these Helm chart instances is. Because nothing in the chart follows the ProvisionedService duck type of the ServiceBinding spec, we use the Secret produced by the SecretTemplate we applied.

This results in the following ClusterInstanceClass that we should apply to the iterate cluster:

# iterate-clusterinstanceclass.yaml
---
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ClusterInstanceClass
metadata:
  name: postgres
spec:
  description:
    short: Postgres instances
  pool:
    kind: Secret
    labelSelector:
      matchLabels:
        services.apps.tanzu.vmware.com/class: bitnami-postgres

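After applying the class, you can check that it is discoverable. This is a sketch that assumes the tanzu services CLI plug-in is installed alongside kubectl:

```shell
# The class should exist cluster-wide
kubectl get clusterinstanceclass postgres

# The tanzu CLI can also list discoverable classes
tanzu services classes list
```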
The run-test cluster

On the run-test cluster, we have instances of VMware Tanzu Postgres. Again, we need to configure the class to match the claimable resources for these instances. The Postgres resource provided by this operator follows the ProvisionedService duck type of the ServiceBinding spec, so we define it as the claimable resource.

This results in the following ClusterInstanceClass that we should apply to the run-test cluster:

# run-test-clusterinstanceclass.yaml
---
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ClusterInstanceClass
metadata:
  name: postgres
spec:
  description:
    short: Postgres instances
  pool:
    kind: Postgres
    group: sql.tanzu.vmware.com

The run-production cluster

On the run-production cluster, we have instances of Amazon RDS. Again, we need to configure the class to match the claimable resources for these instances. Nothing we created follows the ProvisionedService duck type of the ServiceBinding spec, so we use the Secret produced by either:

  • the SecretTemplate in the ACK use case.
  • the PostgreSQLInstance in the Crossplane use case.

Both put the same label on their Secret, so we can use the following ClusterInstanceClass and apply it to the run-production cluster:

# run-production-clusterinstanceclass.yaml
---
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ClusterInstanceClass
metadata:
  name: postgres
spec:
  description:
    short: Postgres instances
  pool:
    kind: Secret
    labelSelector:
      matchLabels:
        services.apps.tanzu.vmware.com/class: rds-postgres

Create and Promote the Workload and ClassClaim

First, we create our Cartographer Workload:

# workload.yaml
---
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: pet-clinic
  namespace: default
  labels:
    apps.tanzu.vmware.com/workload-type: web
    app.kubernetes.io/part-of: pet-clinic
spec:
  params:
  - name: annotations
    value:
      autoscaling.knative.dev/minScale: "1"
  env:
  - name: SPRING_PROFILES_ACTIVE
    value: postgres
  serviceClaims:
  - name: db
    ref:
      apiVersion: services.apps.tanzu.vmware.com/v1alpha1
      kind: ClassClaim
      name: postgres
  source:
    git:
      url: https://github.com/sample-accelerators/spring-petclinic
      ref:
        branch: main
        tag: tap-1.2

This file can be generated and applied with the following tanzu command:

tanzu apps workload create pet-clinic \
   --git-repo https://github.com/sample-accelerators/spring-petclinic \
   --git-branch main \
   --git-tag tap-1.2 \
   --type web \
   --label app.kubernetes.io/part-of=pet-clinic \
   --annotation autoscaling.knative.dev/minScale=1 \
   --env SPRING_PROFILES_ACTIVE=postgres \
   --service-ref db=services.apps.tanzu.vmware.com/v1alpha1:ClassClaim:postgres

We then create the ClassClaim it references:

# classclaim.yaml
---
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ClassClaim
metadata:
  name: postgres
  namespace: default
spec:
  classRef:
    name: postgres

This file can be generated and applied with the following tanzu command:

tanzu services class-claim create postgres --class postgres

For more details on this command, see Create ClassClaims.

Apply both of these files to the iterate cluster, and we should find that our application is running and using the Helm-created Postgres instance.

When we apply the exact same files to the run-test and run-production clusters, we find that the Workload uses the VMware Tanzu Postgres and Amazon RDS instances respectively.
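To confirm which backing service the claim resolved to on a given cluster, one option is to inspect the ClassClaim status; the exact output shape varies by Services Toolkit version, so this is a sketch:

```shell
# The status reports the resolved backing resource, which differs per cluster
kubectl get classclaim postgres -n default -o yaml

# The tanzu CLI offers a summarized view of the same information
tanzu services class-claims get postgres --namespace default
```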
