Dedicated Service Clusters (using experimental Projection and Replication APIs)

Caution: This use case leverages experimental APIs. Do not use it in a production environment.

This use case leverages the experimental API Projection and Resource Replication APIs to separate application workloads and service instances onto different Kubernetes clusters. Reasons for doing so include:

  • Dedicated cluster requirements for workload or service clusters: Service clusters, for example, might need access to more powerful SSDs.
  • Different cluster life cycle management: Upgrades to service clusters can occur more cautiously.
  • Unique compliance requirements: Data is stored on a service cluster, which might have different compliance needs.
  • Separation of permissions and access: Application teams can only access the clusters where their applications are running.

The benefits of implementing this use case include:

  • The experience for application developers and application operators working on their Tanzu Application Platform cluster is unaltered.
  • All complexity in the setup and management of backing infrastructure is abstracted away from application developers, which gives them more time to focus on developing their applications.

Note: This use case does not currently support the federation of core Kubernetes APIs such as Secret. It requires an API that implements the ProvisionedService contract, that is, one whose resources reference a Secret, in order to work. This means that other use cases, such as Direct Service References or cloud service provider integrations such as Consuming AWS RDS on Tanzu Application Platform, do not work when combined with this use case.
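
For reference, a compatible service API follows the ProvisionedService contract: its resources expose a status.binding.name field that names a Secret in the same namespace. The following sketch illustrates that shape only; the example.com/v1alpha1 API group, the ExampleService kind, and the Secret name are hypothetical placeholders, not real resources:

    ---
    apiVersion: example.com/v1alpha1  # hypothetical API group, for illustration only
    kind: ExampleService
    metadata:
      name: example-service
    status:
      binding:
        name: example-service-credentials  # Secret in the same namespace that holds the credentials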

For information about network requirements and possible topology setups, see Topology.

Prerequisites

Meet the following prerequisites before completing this use case walkthrough:

  • You have access to a cluster with Tanzu Application Platform installed, henceforth called the application workload cluster.
  • You have access to a second, separate cluster with the Services Toolkit package installed, henceforth called the service cluster.
  • You downloaded and installed the tanzu CLI and the corresponding plug-ins.
  • You downloaded and installed the experimental kubectl-scp plug-in. For instructions, see Install the kubectl-scp plug-in.
  • You set up the default namespace on the application workload cluster as your developer namespace to use installed packages. For more information, see Set up developer namespaces to use installed packages.
  • The application workload cluster can pull source code from GitHub.
  • The service cluster can pull the images required by the RabbitMQ Cluster Kubernetes Operator.
  • The service cluster can create LoadBalancer services.
  • If you previously installed the RabbitMQ cluster operator on the application workload cluster as part of Getting started with Tanzu Application Platform, uninstall it from that cluster. This is necessary because of a limitation in the experimental API Projection APIs. To delete the operator, run:

    kapp delete -a rmq-operator -y
    

Walkthrough

Follow these steps to bind an application to a service instance running on a different Kubernetes cluster:

  1. As the service operator, link the workload cluster and service cluster together by using the kubectl-scp plug-in. To do so, run:

    kubectl scp link --workload-kubeconfig-context=WORKLOAD-CONTEXT --service-kubeconfig-context=SERVICE-CONTEXT
    

    Where WORKLOAD-CONTEXT is your workload context and SERVICE-CONTEXT is your service context.

    Note: You might need to specify the service cluster Kubernetes API address with --service-server-address=CLUSTER-EXAMPLE.com:6443.

    This is necessary if running kubectl get --raw /api results in an address that is not reachable from the workload cluster or results in an address that doesn’t match the CA certificate in the specified service kubeconfig entry.
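
    For example, assuming the service cluster API server is reachable at service-cluster.example.com:6443 (a placeholder address), the link command might look like this:

    kubectl scp link \
      --workload-kubeconfig-context=WORKLOAD-CONTEXT \
      --service-kubeconfig-context=SERVICE-CONTEXT \
      --service-server-address=service-cluster.example.com:6443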

  2. Install the RabbitMQ Cluster Kubernetes Operator in the service cluster by running:

    kapp -y deploy --app rmq-operator \
     --file https://raw.githubusercontent.com/rabbitmq/cluster-operator/lb-binding/hack/deploy.yml \
     --kubeconfig-context SERVICE-CONTEXT
    

    Where SERVICE-CONTEXT is your service context.

    This operator is installed in the service cluster, but RabbitmqCluster service instance life cycles (CRUD) can still be managed from the workload cluster. Use the exact deploy.yml specified in the command because this RabbitMQ operator deployment includes specific changes to enable cross-cluster service binding.

  3. Verify that you installed the operator by running:

    kubectl --context SERVICE-CONTEXT get crds rabbitmqclusters.rabbitmq.com
    

    Where SERVICE-CONTEXT is your service context.

    The rabbitmq.com/v1beta1 API group is now available in the service cluster. The following steps federate the rabbitmq.com/v1beta1 API group into the workload cluster. This occurs in two parts: projection and replication.

    • Projection applies to custom API groups.
    • Replication applies to core Kubernetes resources, such as secrets.
  4. Create a namespace for service instances in both clusters. API projection occurs between namespaces that share the same name in each cluster; such namespaces are said to have namespace sameness.

    For example:

    kubectl --context WORKLOAD-CONTEXT create namespace service-instances
    kubectl --context SERVICE-CONTEXT create namespace service-instances
    

    Where WORKLOAD-CONTEXT is your workload context and SERVICE-CONTEXT is your service context.

  5. Use the kubectl-scp plug-in to federate the rabbitmq.com/v1beta1 API group by running:

    kubectl scp federate \
      --workload-kubeconfig-context=WORKLOAD-CONTEXT \
      --service-kubeconfig-context=SERVICE-CONTEXT \
      --namespace=service-instances \
      --api-group=rabbitmq.com \
      --api-version=v1beta1 \
      --api-resource=rabbitmqclusters
    

    Where WORKLOAD-CONTEXT is your workload context and SERVICE-CONTEXT is your service context.

    Note: You might need to specify the service cluster Kubernetes API address with --service-server-address=CLUSTER-EXAMPLE.com:6443.

    This is necessary if running kubectl get --raw /api results in an address that is not reachable from the workload cluster or an address that doesn’t match the CA certificate in the specified service kubeconfig entry.

  6. After federation, verify that the rabbitmq.com/v1beta1 API is also available in the workload cluster by running:

    kubectl --context WORKLOAD-CONTEXT api-resources
    

    Where WORKLOAD-CONTEXT is your workload context.
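
    To narrow the output to the federated API group, you can filter by group. The listing below is an illustrative sketch of what to expect; exact columns and short names depend on your kubectl and operator versions:

    kubectl --context WORKLOAD-CONTEXT api-resources --api-group=rabbitmq.com

    NAME              SHORTNAMES   APIVERSION             NAMESPACED   KIND
    rabbitmqclusters  rmq          rabbitmq.com/v1beta1   true         RabbitmqCluster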

  7. Advertise that the RabbitmqCluster API is available to developers by applying the following YAML to your workload cluster. Ensure the Tanzu CLI is configured to target the workload cluster for the rest of the steps.

    ---
    apiVersion: services.apps.tanzu.vmware.com/v1alpha1
    kind: ClusterInstanceClass
    metadata:
      name: rabbitmq
    spec:
      description:
        short: It's a RabbitMQ cluster!
      pool:
        kind: RabbitmqCluster
        group: rabbitmq.com
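
    Assuming you saved the YAML above to a file named clusterinstanceclass.yaml (an illustrative file name), apply it by running:

    kubectl --context WORKLOAD-CONTEXT apply -f clusterinstanceclass.yaml

    Where WORKLOAD-CONTEXT is your workload context.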
    
  8. Discover the new service class from the workload cluster by running:

    tanzu services classes list
    

    The following output appears:

    NAME      DESCRIPTION
    rabbitmq  It's a RabbitMQ cluster!
    
  9. Provision a service instance on the Tanzu Application Platform cluster.

    For example:

    # rabbitmq-cluster.yaml
    ---
    apiVersion: rabbitmq.com/v1beta1
    kind: RabbitmqCluster
    metadata:
      name: projected-rmq
    spec:
      service:
        type: LoadBalancer
    
  10. Apply the YAML file by running:

    kubectl --context WORKLOAD-CONTEXT -n service-instances apply -f rabbitmq-cluster.yaml
    

    Where WORKLOAD-CONTEXT is your workload context.

  11. Confirm that the RabbitmqCluster resource reconciles successfully from the workload cluster by running:

    kubectl --context WORKLOAD-CONTEXT -n service-instances get -f rabbitmq-cluster.yaml
    

    Where WORKLOAD-CONTEXT is your workload context.
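
    If the operator's default printer columns are in place, the output is similar to the following; the values shown are illustrative:

    NAME            ALLREPLICASREADY   RECONCILESUCCESS   AGE
    projected-rmq   True               True               2m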

  12. Verify that RabbitMQ pods are running in the service cluster, but not in the workload cluster, by running:

    kubectl --context WORKLOAD-CONTEXT -n service-instances get pods
    kubectl --context SERVICE-CONTEXT -n service-instances get pods
    

    Where WORKLOAD-CONTEXT is your workload context and SERVICE-CONTEXT is your service context.
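
    The first command returns no pods, while the second lists the RabbitMQ server pods created by the operator, for example (names and ages are illustrative):

    No resources found in service-instances namespace.

    NAME                     READY   STATUS    RESTARTS   AGE
    projected-rmq-server-0   1/1     Running   0          2m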

  13. The claim in this walkthrough is cross-namespace: the service instance runs in the service-instances namespace, while the application workload runs in the default namespace. Enable cross-namespace claims by creating a ResourceClaimPolicy on your workload cluster:

    # rabbitmq-cluster-policy.yaml
    ---
    apiVersion: services.apps.tanzu.vmware.com/v1alpha1
    kind: ResourceClaimPolicy
    metadata:
      name: rabbitmq-cluster-policy
      namespace: service-instances
    spec:
      consumingNamespaces:
      - default
      subject:
        group: rabbitmq.com
        kind: RabbitmqCluster
    
  14. Apply the YAML file by running:

    kubectl --context WORKLOAD-CONTEXT apply -f rabbitmq-cluster-policy.yaml
    

    Where WORKLOAD-CONTEXT is your workload context.

  15. Create a claim for the projected service instance by running:

    tanzu service resource-claim create projected-rmq-claim \
      --resource-name projected-rmq \
      --resource-kind RabbitmqCluster \
      --resource-api-version rabbitmq.com/v1beta1 \
      --resource-namespace service-instances \
      --namespace default
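
    Optionally, confirm that the claim was created and reconciles by inspecting the ResourceClaim resource with kubectl. This is a sketch that assumes the ResourceClaim CRD is registered under the services.apps.tanzu.vmware.com group shown in the policy above:

    kubectl --context WORKLOAD-CONTEXT -n default get resourceclaims.services.apps.tanzu.vmware.com projected-rmq-claim

    Where WORKLOAD-CONTEXT is your workload context.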
    
  16. Create the application workload by running:

    tanzu apps workload create multi-cluster-binding-sample \
      --namespace default \
      --git-repo https://github.com/sample-accelerators/rabbitmq-sample \
      --git-branch main \
      --git-tag 0.0.1 \
      --type web \
      --label app.kubernetes.io/part-of=rabbitmq-sample \
      --annotation autoscaling.knative.dev/minScale=1 \
      --service-ref "rmq=services.apps.tanzu.vmware.com/v1alpha1:ResourceClaim:projected-rmq-claim"
    
  17. Get the web-app URL by running:

    tanzu apps workload get multi-cluster-binding-sample -n default
    
  18. Visit the URL and refresh the page to confirm that the app is running; each refresh displays a new message ID.
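
    Alternatively, you can request the URL from the command line and confirm that each response contains a new message ID. For example, where APP-URL is the URL returned in the previous step:

    curl APP-URL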
