Use canary deployment with Contour and Carvel packages for Supply Chain Choreographer

This topic tells you how to automate canary releases using the Contour ingress controller and Flagger with Carvel Packages and PackageInstalls.

Canary deployment is an incremental approach to releasing an application where traffic is divided between the existing deployed version and a new version. It introduces the new version to a smaller group of users before extending it to the entire user base.

Prerequisites

To use canary deployment, you must complete the following prerequisites:

- Install the Contour ingress controller on your Kubernetes cluster.
- Deploy your application as a Tanzu Application Platform Workload whose supply chain produces Carvel Packages and PackageInstalls.
- Set up a GitOps tool, such as Flux CD or Argo CD, to deploy those Packages to your cluster.

How to use Contour ingress controller and Flagger to create a canary release

Flagger is a tool that facilitates a controlled process of shifting traffic to a canary deployment while monitoring essential performance metrics, such as the success rate of HTTP requests, average request duration, and the health of the application’s pods. By analyzing these key indicators, Flagger decides whether to promote the canary deployment to a wider audience or abort it if any issues arise.

For information about creating canary releases, see Contour Canary Deployments in the Flagger documentation.

Using Contour ingress controller and Flagger to create a canary release involves the following steps:

  1. Install Flagger on your Kubernetes cluster.
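
    For example, the Flagger project publishes Kustomize manifests for Contour in the fluxcd/flagger repository. Assuming that overlay suits your environment, one way to install Flagger preconfigured for Contour is:

    kubectl apply -k https://github.com/fluxcd/flagger//kustomize/contour?ref=main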

  2. Add the label app.kubernetes.io/name to your Workload.

    The target deployment must have a single label selector in the format app: <DEPLOYMENT-NAME>. In addition to app, Flagger supports name and app.kubernetes.io/name selectors.

    Note

    You can use tanzu apps workload apply <WORKLOAD-NAME> --label "app.kubernetes.io/name=<WORKLOAD-NAME>" to edit an existing workload and add the required label.
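
    For reference, a Workload carrying the required label might look like the following minimal sketch. Only the app.kubernetes.io/name label is required for Flagger; the workload-type label and the Git source are assumptions about your supply chain:

    apiVersion: carto.run/v1alpha1
    kind: Workload
    metadata:
      name: tanzu-java-web-app
      namespace: dev-namespace
      labels:
        # propagated to the generated Deployment and matched by Flagger's selector
        app.kubernetes.io/name: tanzu-java-web-app
        # typical Tanzu Application Platform workload type (assumption)
        apps.tanzu.vmware.com/workload-type: web
    spec:
      source:
        git:
          # hypothetical repository URL
          url: https://github.com/my-org/tanzu-java-web-app
          ref:
            branch: main
    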

  3. Create a canary resource that defines a canary release with progressive traffic shifting.

    To configure the canary resource, you must specify the Kubernetes Deployment that corresponds to your Workload. Ensure that you set spec.targetRef.name to match the name of your Tanzu Application Platform Workload, which is the same as its Kubernetes Deployment name.

    Flagger generates several Kubernetes objects. The primary deployment represents the stable release of your application; it receives all incoming traffic while the target deployment is scaled down to zero. Flagger monitors changes to the target deployment, including its secrets and ConfigMaps. Before promoting the new version as the primary release, Flagger conducts a thorough canary analysis to ensure that it is stable.

    In the canary resource, you define the canary analysis. Flagger uses this definition to drive the canary phase, which runs periodically until the canary reaches the maximum traffic weight or the specified number of iterations.

    During each iteration, Flagger executes the webhooks, evaluates the metrics, and counts failed checks. If the number of failed checks exceeds the threshold, Flagger halts the analysis and rolls back the canary. If no issues are found, Flagger promotes the new version as the primary release.

    The following example shows a canary resource created for a workload named tanzu-java-web-app and deployed in the namespace dev-namespace. Replace myapps.tanzu.biz with your own domain:

    apiVersion: flagger.app/v1beta1
    kind: Canary
    metadata:
      name: tanzu-java-web-app
      namespace: dev-namespace
    spec:
      # deployment reference
      targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: tanzu-java-web-app
      service:
        # name of the Kubernetes service generated by Flagger. Defaults to spec.targetRef.name
        name: tanzu-java-web-app-flagger-service
        # service port
        port: 80
        # container port
        targetPort: 8080
        # Contour request timeout
        timeout: 15s
        # Contour retry policy
        retries:
          attempts: 3
          perTryTimeout: 5s
          # supported values for retryOn - https://projectcontour.io/docs/main/config/api/#projectcontour.io/v1.RetryOn
          retryOn: "5xx"
      # define the canary analysis timing and KPIs
      analysis:
        # schedule interval (default 60s)
        interval: 30s
        # max number of failed metric checks before rollback
        threshold: 5
        # max traffic percentage routed to canary
        # percentage (0-100)
        maxWeight: 50
        # canary increment step
        # percentage (0-100)
        stepWeight: 5
        # Contour Prometheus checks
        metrics:
          - name: request-success-rate
            # minimum req success rate (non 5xx responses)
            # percentage (0-100)
            thresholdRange:
              min: 99
            interval: 1m
          - name: request-duration
            # maximum req duration P99 in milliseconds
            thresholdRange:
              max: 500
            interval: 30s
        # testing
        webhooks:
          - name: acceptance-test
            type: pre-rollout
            url: http://flagger-loadtester.test/
            timeout: 30s
            metadata:
              type: bash
              cmd: "curl -s http://tanzu-java-web-app-flagger-service-canary.dev-namespace | grep Greetings"
          - name: load-test
            url: http://flagger-loadtester.test/
            type: rollout
            timeout: 5s
            metadata:
              cmd: "hey -z 1m -q 10 -c 2 -host tanzu-java-web-app.myapps.tanzu.biz http://envoy.tanzu-system-ingress"
    
  4. Save the resource you created as canary.yaml and apply it to your cluster:

    export WORKLOAD_NAMESPACE=dev-namespace
    kubectl apply -n $WORKLOAD_NAMESPACE -f canary.yaml
    

    The earlier example uses the load testing service flagger-loadtester to generate traffic during the canary analysis.

  5. Install the load testing service by running:

    kubectl create ns test
    kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main
    
  6. Add an HTTPProxy that includes the HTTPProxy generated by Flagger.

    The following example HTTPProxy resource references tanzu-java-web-app-flagger-service and is deployed in the namespace dev-namespace:

    apiVersion: projectcontour.io/v1
    kind: HTTPProxy
    metadata:
      name: tanzu-java-web-app-ingress
      namespace: dev-namespace
    spec:
      virtualhost:
        fqdn: tanzu-java-web-app.myapps.tanzu.biz
      includes:
      - name: tanzu-java-web-app-flagger-service
        namespace: dev-namespace
        conditions:
          - prefix: /
    
  7. Save the resource you created as httpproxy.yaml and then apply it to your cluster by running:

    export WORKLOAD_NAMESPACE=dev-namespace
    kubectl apply -n $WORKLOAD_NAMESPACE -f httpproxy.yaml
    
  8. Confirm that the HTTPProxy resources have a valid status by running:

    kubectl get httpproxies -n $WORKLOAD_NAMESPACE
    
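    To inspect a specific HTTPProxy in more detail, you can also read its status fields directly. A sketch, using the ingress resource from the earlier example:

    kubectl get httpproxy tanzu-java-web-app-ingress -n $WORKLOAD_NAMESPACE -o jsonpath='{.status.currentStatus}{"\n"}{.status.description}{"\n"}'
    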
  9. Make changes to your GitOps repository and observe the progressive delivery in action. As you make changes to your GitOps repository, the GitOps tools in your environment, such as Flux CD or Argo CD, deploy the new Packages onto your clusters.

    Flagger detects any changes to the target deployment, including secrets and configmaps, and starts a new rollout. The new version is either promoted or rolled back.
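
    For example, if your GitOps repository manages the application through a Carvel PackageInstall, bumping the version constraint is one way a new version reaches the cluster and triggers a canary analysis. This is a minimal sketch; the package reference name, version, and service account are placeholders:

    apiVersion: packaging.carvel.dev/v1alpha1
    kind: PackageInstall
    metadata:
      name: tanzu-java-web-app
      namespace: dev-namespace
    spec:
      # service account with permission to create the package's resources (placeholder)
      serviceAccountName: default
      packageRef:
        # hypothetical package name published by your supply chain
        refName: tanzu-java-web-app.example.com
        versionSelection:
          # bump this constraint, for example from 1.0.0 to 1.0.1, to roll out the new version
          constraints: 1.0.1
    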

    You can monitor the traffic shifting by running:

    watch kubectl get canary -n $WORKLOAD_NAMESPACE
    kubectl describe canary CANARY-RESOURCE-NAME -n $WORKLOAD_NAMESPACE
    

    Where CANARY-RESOURCE-NAME is the name of the canary resource you want to use.
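
    You can also follow promotion and rollback decisions through the Kubernetes events that Flagger records on the canary resource. A sketch, assuming your cluster still retains the recent events:

    kubectl get events -n $WORKLOAD_NAMESPACE --field-selector involvedObject.kind=Canary --sort-by=.lastTimestamp
    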

References and further reading

For more information, see Contour Canary Deployments in the Flagger documentation and the Contour documentation.
