Upgrade compatibility and recommendations for Tanzu Application Platform

This topic gives you general information to help you with upgrading Tanzu Application Platform (commonly known as TAP).

Supported upgrade paths

Tanzu Application Platform v1.7 supports upgrading from v1.6.x.

Compatibility

This section describes the compatibility considerations for Tanzu Application Platform v1.7.

Kubernetes compatibility

For supported Kubernetes versions, see Kubernetes version support for Tanzu Application Platform.

You must use a kubectl version that is within one minor version difference of your cluster. For example, a v1.30 client can communicate with v1.29, v1.30, and v1.31 control planes. Using the latest compatible version of kubectl helps avoid unforeseen issues.
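For example, you can compare the client and control plane versions that kubectl reports. This is a minimal sketch; the JSON field names reflect current kubectl releases and might differ in older versions:

kubectl version

kubectl version -o json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'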

Tanzu CLI compatibility

Tanzu Application Platform v1.7 is compatible with Tanzu CLI v1.0.0 and later and Tanzu CLI plug-ins for Tanzu Application Platform v1.7.x. The Tanzu CLI core v1.2 that is distributed with Tanzu Application Platform is forward and backward compatible with all supported Tanzu Application Platform versions.

There is a group of Tanzu CLI plug-ins that extend the Tanzu CLI core with Tanzu Application Platform specific features. You can install the plug-ins as a group with a single command. Versioned releases of the Tanzu Application Platform specific plug-in group align to each supported Tanzu Application Platform version. This makes it easy to switch between environments running different versions of Tanzu Application Platform.
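For example, a minimal sketch of discovering and installing the plug-in group, assuming the vmware-tap/default group name used in the installation documentation and a group version that matches your target Tanzu Application Platform version:

tanzu plugin group search --name vmware-tap/default --show-details
tanzu plugin install --group vmware-tap/default:v1.7.0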

Recommendations

  • Test the upgrade in a staging or development environment: Use an environment that mirrors your production setup. This allows you to identify potential problems or compatibility issues with your workloads before performing the upgrade in production.

  • Consider your tolerance for downtime per environment: For example, a sandbox environment might not need to scale out replicas of workloads or Tanzu Application Platform components. Use PodDisruptionBudgets only in environments that require high availability, such as production or higher-level test environments. Only configure PodDisruptionBudgets if minScale or replicas are greater than 1.

  • Consider your tolerance for downtime per Tanzu Application Platform profile: You might have different downtime tolerances for each Tanzu Application Platform profile. Take time to understand these as one size does not fit all.

  • Consider the resource requirements for the upgraded Kubernetes cluster: This includes CPU, memory, and storage. Ensure that your underlying infrastructure can support the new version and accommodate any changes in resource use from your workloads. You might want to preemptively scale out your nodes instead of letting autoscaling start.

  • Perform health checks on your cluster and workloads before starting the upgrade: This helps to identify any issues that might impact the upgrade or the stability of your workloads post-upgrade. For more information, see the pre-upgrade cluster health checks later in this topic.

  • Implement robust monitoring and observability tools: Use these tools to track the health and performance of your Kubernetes cluster and workloads before, during, and after the upgrade. This helps you to detect any anomalies or performance degradation post-upgrade.

  • Consider the availability needs for workloads: Tune them with the proper number of replicas and a corresponding PodDisruptionBudget. Consider customizing the Tanzu Application Platform SupplyChain to stamp out a PodDisruptionBudget for workloads if needed. Knative workloads come with their own PodDisruptionBudget components. Production workloads running a server workload type must have a replica count of 2 or more to avoid downtime during scheduling of resources. Knative resources must have minScale set to 2 or more to avoid downtime. See the example after this list.

  • Review compatibility of custom components: Review any custom components you’ve added to the cluster for compatibility with Kubernetes.

  • Review the upgrade preparation checklist: During your upgrade, follow the Upgrade preparation checklist.
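
As a sketch of the minScale recommendation above, assuming a Knative-based web workload and the OOTB supply chain's annotations param (the workload name and repository are placeholders), you can set the autoscaling annotation on the Workload:

  apiVersion: carto.run/v1alpha1
  kind: Workload
  metadata:
    name: my-app
    labels:
      apps.tanzu.vmware.com/workload-type: web
  spec:
    params:
      - name: annotations
        value:
          autoscaling.knative.dev/minScale: "2"
    source:
      git:
        url: https://github.com/my-org/my-app
        ref:
          branch: main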

Recommendations for apps

  • Ensure that all apps have readiness and liveness probes: This helps to avoid disruption during upgrades of Kubernetes and Tanzu Application Platform. See the sketch after this list.

  • Consider using pod anti-affinity rules or topology spread constraints: These help to spread workloads across nodes and availability zones.

  • Use a pod disruption budget for workloads: This helps to avoid downtime during upgrades, especially during Kubernetes upgrades.

  • Ensure that your app is protected against failure: For example, the app follows twelve-factor app design principles and handles graceful shutdown.

  • Ensure that you have copies of business critical apps in a non-production cluster: This cluster is where the upgrade can be tested before upgrading the cluster that contains the production instance of the app.
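
As a minimal sketch of the probe and spreading recommendations for a plain Kubernetes Deployment (the probe paths, port, image, and labels are placeholders; adjust for your app):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: my-app
    template:
      metadata:
        labels:
          app: my-app
      spec:
        topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone
            whenUnsatisfiable: ScheduleAnyway
            labelSelector:
              matchLabels:
                app: my-app
        containers:
          - name: my-app
            image: REGISTRY/my-app:latest
            readinessProbe:
              httpGet:
                path: /readyz
                port: 8080
            livenessProbe:
              httpGet:
                path: /livez
                port: 8080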

Upgrade preparation checklist

Complete the tasks in the following table before you upgrade Tanzu Application Platform.

Component Description
Upgrade Tanzu CLI Upgrade to the latest Tanzu CLI and plug-in group by following the instructions in Install the Tanzu CLI.
Upgrade Cluster Essentials Upgrade Cluster Essentials. If your cluster is maintained by Tanzu Mission Control, you do not need to install or update Cluster Essentials.
Relocate the Tanzu Application Platform package images Relocate images from tanzu.packages.broadcom.com to your internal registry by following the instructions in Relocate images to a registry.
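
For example, a typical relocation command looks like the following sketch. The bundle path follows the installation documentation, and the version and registry variables are placeholders for your environment:

imgpkg copy -b tanzu.packages.broadcom.com/tanzu-application-platform/tap-packages:${TAP_VERSION} \
  --to-repo ${MY_REGISTRY_HOSTNAME}/${MY_REGISTRY_REPO}/tap-packages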
Relocate the full Tanzu Build Service dependencies (Optional) Relocate the full Tanzu Build Service dependencies to your internal registry by following the instructions in Install the Tanzu Build Service dependencies.

This is not required if you only want the lite dependencies.
Review Kubernetes cluster capacity Verify that you have enough spare CPU and memory capacity on your Run and Build cluster nodes for the upgrade. You need to schedule capacity for the new pods that are created during an upgrade.
Review any custom overlays Check if there were any changes between versions for your custom overlays. For example, for the Supply Chain components. To pull a package after upgrading to view its contents and run a comparison, run:

imgpkg pull -b $(kubectl get $PACKAGE_NAME -n tap-install \
  -ojson | jq -r '.spec.fetch[0].imgpkgBundle.image') \
  -o /tmp/$PACKAGE_NAME && code /tmp/$PACKAGE_NAME
If you’re using an overlay for Tanzu Developer Portal to surface additional Backstage plug-ins, re-run the steps to rebuild your customized Tanzu Developer Portal image by following the instructions in Build your customized Tanzu Developer Portal with Configurator. The rebuilt image is referenced in the Tanzu Developer Portal overlay secret. Rebuilding the image is necessary because Tanzu Developer Portal includes pinned versions of Backstage and Node.js.

Take into account updates to Backstage versions. Review overlays, especially updates to API versions of Kubernetes resources targeted by each overlay.
Review breaking changes for the Tanzu Application Platform release To see the breaking changes, see the Release notes for the version you are upgrading to.
Plan your order of upgrade by cluster and then environment Recommended order to upgrade clusters:
  1. View cluster
  2. Build clusters
  3. Run clusters


Upgrade a sandbox or test environment first before upgrading subsequent environments.

You cannot skip versions (an n+2 jump) when upgrading to Tanzu Application Platform v1.7.x. For the list of versions from which you can upgrade to Tanzu Application Platform v1.7, see Supported upgrade paths earlier in this topic.
Check that your container registry, such as Harbor or Artifactory, has sufficient storage capacity New container images are created during an upgrade. This can increase the need for storage capacity.
If you are using GitOps RI, tag your GitOps repository to capture a point-in-time record of your Tanzu Application Platform configuration Example tag commands:

git tag -a v1.x.x
git push origin v1.x.x

View cluster checklists

Pre-upgrade cluster health check - View cluster

Complete the tasks in the following table before you upgrade the Tanzu Application Platform View cluster.

Item Description
Platform health monitoring Ensure that you have some form of monitoring in place to allow observability into your cluster during an upgrade.
Check the health of each packageRepository and PackageInstall on the cluster Run:
tanzu package repository list -A
tanzu package installed list -A
Check the health of all kapp apps on the cluster Run:
kapp list -A
Check capacity of nodes on the cluster.

You might want to preemptively scale out your nodes if they are already at high utilization instead of letting autoscaling start during an upgrade.
Run:
kubectl top nodes
Verify basic Tanzu Application Platform features on the cluster
  • Test that Tanzu Developer Portal is working.
  • Test that you can generate code from Accelerators.
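
For example, assuming the Application Accelerator server is reachable at its ingress (the URL and accelerator name are placeholders), run:

tanzu accelerator list --server-url https://accelerator.DOMAIN
tanzu accelerator generate tanzu-java-web-app --server-url https://accelerator.DOMAIN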
Check the health of pods on the cluster Run:
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded

Upgrade considerations for Kubernetes - View Cluster

The following table lists information about upgrading Kubernetes specific to the View cluster.

Item Description
Scale out Metadata Store pods for high availability Consider scaling out Metadata Store to have more than one replica to limit downtime. For example:

  metadata_store:
    app_replicas: 3
  
Create a PodDisruptionBudget for the Metadata Store app for high availability For example:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: metadata-store-pdb
    namespace: metadata-store
  spec:
    minAvailable: 60%
    selector:
      matchLabels:
        app: metadata-store-app
  
Configure Contour to be a DaemonSet to avoid downtime. If downtime is acceptable, set the type to deployment.

This is only applicable if you are upgrading from Tanzu Application Platform v1.6 or earlier. Contour is a deployment by default as of Tanzu Application Platform v1.7.
For example:

  contour:
    envoy:
      workload:
        type: daemonset
  
Scale out Tanzu Developer Portal pods for high availability For example:

  tap_gui:
    deployment:
      replicas: 3
  
When Tanzu Developer Portal pods are scaled out, there is a known issue with generating Accelerators. You must apply the following overlay to the Tanzu Developer Portal package:

  apiVersion: v1
  kind: Secret
  metadata:
    name: tap-gui-session-mgmt-affinity-secret
    namespace: tap-install
  stringData:
    tap-gui-session-mgmt-affinity.yml: |
      #@ load("@ytt:overlay", "overlay")

      #@overlay/match by=overlay.subset({"kind":"HTTPProxy","metadata":{"name":"tap-gui"}})
      ---
      spec:
        routes:
          #@overlay/match by=overlay.subset({"services": [{"name": "server"}]})
          - services: []
            #@overlay/match missing_ok=True
            loadBalancerPolicy:
              strategy: Cookie
  
And add the following to the tap-values.yaml file:

  package_overlays:
    - name: "tap-gui"
      secrets:
        - name: "tap-gui-session-mgmt-affinity-secret"
  
Create a PodDisruptionBudget for Tanzu Developer Portal for high availability For example:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: tap-gui-pdb
    namespace: tap-gui
  spec:
    minAvailable: 60%
    selector:
      matchLabels:
        app: backstage
  

Upgrade considerations for Tanzu Application Platform - View Cluster

The following table lists information about upgrading Tanzu Application Platform specific to the View cluster.

Item Description
If upgrading from Tanzu Application Platform v1.6 or earlier, set up Artifact Metadata Repository (AMR) There are two new endpoints created on the View cluster with AMR. You might need to add them to your allowlist or configure custom certificates for them in Tanzu Application Platform v1.7 and later.
  • amr-cloudevent-handler.DOMAIN
  • amr-graphql.DOMAIN
What to expect during the upgrade on the View cluster
  • Tanzu Developer Portal might show a gray or missing status on the Supply Chain page.
  • You might have issues with viewing pages. Consider doing a hard refresh in your browser.
If using GitOps RI, update the version.yaml file with the new Tanzu Application Platform version You can find the file at the following location: /clusters/CLUSTER-NAME/cluster-config/config/tap-install/.tanzu-managed/version.yaml
If using GitOps RI, run the deploy.sh script to pull in latest External Secret Operator version with Tanzu Application Platform You can find the script at the following location: /clusters/CLUSTER-NAME/tanzu-sync/scripts/deploy.sh
If using GitOps RI, make adjustments to your tap-values.yaml file if needed, and push the updated version.yaml and tap-values.yaml files to the Git repository After pushing the changes to Git, run these commands to sync the changes to the cluster and start the upgrade:

kctrl app kick -a sync -n tanzu-sync
kctrl app kick -a tap -n tap-install

Post upgrade checks - View cluster

Complete the tasks in the following table after you upgrade the Tanzu Application Platform View cluster.

Item Description
Check the health of each packageRepository and PackageInstall on the cluster.

They must now display the newest version of Tanzu Application Platform.
Run:
tanzu package repository list -A
tanzu package installed list -A
Check the health of all kapp apps on the cluster Run:
kapp list -A
Check the health of pods on the cluster Run:
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
Verify that Tanzu Developer Portal starts Log into Tanzu Developer Portal and verify that the basic functionality still works as expected.

Build cluster checklists

Pre-upgrade cluster health check - Build cluster

Complete the tasks in the following table before you upgrade the Tanzu Application Platform Build cluster.

Item Description
Platform health monitoring Ensure that you have some form of monitoring in place to allow observability into your cluster during an upgrade.
Check the health of each packageRepository and PackageInstall on the cluster.

Fix any broken packages or repositories.
Run:
tanzu package repository list -A
tanzu package installed list -A
Check the health of all kapp apps on the cluster.

Fix any kapp apps that are in a bad state before upgrading.
Run:
kapp list -A
Check capacity of nodes on the cluster.

You might want to preemptively scale out your nodes if they are already at high utilization instead of letting autoscaling start during an upgrade.
Run:
kubectl top nodes
Check the health of workloads on the cluster.

Consider fixing any workloads that are in a bad state before upgrading.
Run:
tanzu apps workload list -A
Check the health of pods on the cluster.

Consider fixing any pods that are in a bad state before upgrading.
Run:
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded

Upgrade considerations for Kubernetes - Build Cluster

The following table lists information about upgrading Kubernetes specific to the Build cluster.

Item Description
Image cache on nodes will be reset As new nodes are spun up on a cluster, it might take longer to pull images as the cache has not been filled. This is only noticeable on the first pull of images.
Consider pre-scaling the number of nodes Upgrading Kubernetes can trigger new pods, which take resources on the node. Node autoscaling will start up, which can take time and impact the window of the upgrade. Consider scaling capacity before an upgrade.
Expect to lose logs on completed pods for workloads Expect to lose pod logs for the tests, builds and scans for a workload. Tanzu Developer Portal will continue to show the build history, but logs will be missing.

Upgrade considerations for Tanzu Application Platform - Build Cluster

The following table lists information about upgrading Tanzu Application Platform specific to the Build cluster.

Item Description
Contour package is removed as of Tanzu Application Platform v1.7 The contour package is deleted from the Build Cluster as of Tanzu Application Platform v1.7.
New versions of Buildpacks from the Tanzu Application Platform upgrade trigger new Tanzu Build Service builds for all workloads When upgrading Tanzu Application Platform, new buildpacks are introduced. This triggers new Tanzu Build Service builds. Expect workloads affected by updated buildpacks to rebuild. Node autoscaling is likely to occur.

Your container image registry might have bursts of traffic and activity with new builds. Plan accordingly.
Set an overlay for resource limits on Kpack to optimize resources used by builds When new buildpacks are applied during an upgrade, expect all workloads to trigger a new build. This might flood your cluster with build requests. Ensure that you have scaled out enough nodes with appropriate CPU and memory to avoid resource starvation.
If upgrading from Tanzu Application Platform v1.6 or earlier, set up Artifact Metadata Repository (AMR) Observer on the Build cluster Set up the AMR Observer on the Build cluster by following the instructions in Multicluster setup for Supply Chain Security Tools - Store. This communicates with the AMR CloudEvent Handler on the View cluster. For example:

Add the following to the Build cluster tap-values.yaml file:

  amr:
    observer:
      auth:
        kubernetes_service_accounts:
          autoconfigure: false
          enable: true
      cloudevent_handler:
        endpoint: https://amr-cloudevent-handler.DOMAIN
      ca_cert_data: |
        -----BEGIN CERT------
  
Run the following commands on the View cluster to retrieve the AMR CA certificate and access token. Apply the CA certificate and access token on the Build cluster. The CA Cert goes into the tap-values.yaml file. Add the edit token as a Kubernetes secret on the Build cluster.

CEH_CA_CERT_DATA=$(kubectl get secret amr-cloudevent-handler-ingress-cert \
  -n metadata-store -o json | jq -r ".data.\"tls.crt\"" | base64 -d)

CEH_EDIT_TOKEN=$(kubectl get secrets amr-cloudevent-handler-edit-token \
  -n metadata-store -o jsonpath="{.data.token}" | base64 -d)
Example external secret setup:

  ---
  apiVersion: external-secrets.io/v1beta1
  kind: ExternalSecret
  metadata:
    name: amr-observer-edit-token
    namespace: tap-install
  spec:
    secretStoreRef:
      name: tap-install-secrets
      kind: SecretStore
    refreshInterval: "1m"
    target:
      template:
        data:
          token: "{{ .amr_token }}"
    data:
      - secretKey: amr_token
        remoteRef:
          key: PATH-TO-EXTERNAL-SECRET-STORE
          property: amr_token

  ---
  apiVersion: secretgen.carvel.dev/v1alpha1
  kind: SecretExport
  metadata:
    name: amr-observer-edit-token
    namespace: tap-install
  spec:
    toNamespaces:
      - amr-observer-system
  ---
  apiVersion: secretgen.carvel.dev/v1alpha1
  kind: SecretImport
  metadata:
    name: amr-observer-edit-token
    namespace: amr-observer-system
  spec:
    fromNamespace: tap-install
  
Update SupplyChain customizations for any changes ClusterTasks were removed as of Tanzu Application Platform v1.7. For example, if you wrote your own ClusterTemplate that uses the git-writer task, you must reference it as follows:

  taskRef:
    resolver: cluster
    params:
      - name: kind
        value: task
      - name: namespace
        value: tap-tasks
      - name: name
        value: git-writer
  
If you’ve customized the config-writer-template in the ClusterTemplate, ensure that the configPath property refers to .spec.params instead of .spec.input.params.
Upgrading to Supply Chain Security Tools (SCST) - Scan 2.0 (SCST - Scan 1.0 is deprecated as of Tanzu Application Platform v1.10).

Trivy is the new default scanner.
Upgrading from SCST - Scan 1.0 to SCST - Scan 2.0 in Tanzu Application Platform requires you to coordinate multiple steps across the Build and View clusters. There will be new token or certificate values to pull from the View cluster and configuration on the supply chain to move to Scan 2.0.

The scanning section on the Build cluster now needs the CA certificate and auth token from the View cluster Metadata Store. If using GitOps RI, consider storing these sensitive values in an external secret.

  scanning:
    metadataStore:
      exports:
        ca:
          pem: |
            CONTENTS-OF-MDS-CA-CERT
        auth:
          token: CONTENTS-OF-MDS-AUTH-TOKEN
  
Where CONTENTS-OF-MDS-CA-CERT is the contents of $MDS_CA_CERT and CONTENTS-OF-MDS-AUTH-TOKEN is the contents of $MDS_AUTH_TOKEN.
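
As a sketch of how these values are typically retrieved on the View cluster (the secret names follow the SCST - Store multicluster documentation; verify them for your installation):

MDS_CA_CERT=$(kubectl get secret ingress-cert -n metadata-store \
  -o json | jq -r ".data.\"ca.crt\"" | base64 -d)

MDS_AUTH_TOKEN=$(kubectl get secrets metadata-store-read-write-client \
  -n metadata-store -o jsonpath="{.data.token}" | base64 -d)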

You can remove the Grype section from your tap-values.yaml file for the Build cluster. Trivy is now the default scanner for Tanzu Application Platform.

Ensure that you delete any configured secrets in the metadata-store-secrets namespace on the build cluster. These are configured automatically, so removing them helps to avoid issues like kapp ownership conflicts.

Configure Trivy Scanner in your tap-values.yaml file on the Build cluster.

  ootb_supply_chain_testing_scanning:
    image_scanner_template_name: image-vulnerability-scan-trivy
    image_scanning_cli:
      image: PATH-TO-RELOCATED-IMAGE
  
After you apply the changes to the Build cluster and reconciliation succeeds, run the Namespace Provisioner on the Build cluster to reconcile the changes in each namespace.
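
A minimal way to trigger that reconciliation, assuming the default Namespace Provisioner app name and namespace (these might differ in your environment), is:

kctrl app kick -a provisioner -n tap-namespace-provisioning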

Note: If you are moving away from Grype, set the following flag to deactivate Grype for all namespaces:

  namespace_provisioner:
    default_parameters:
      skip_grype: true
  
Note: If you use or switch between different scanners in Tanzu Application Platform v1.9 and earlier, Metadata Store shows duplicate results in the Tanzu Developer Portal Security Analysis UI.
If using GitOps RI, update the version.yaml file with the new Tanzu Application Platform version You can find the file at the following location: /clusters/CLUSTER-NAME/cluster-config/config/tap-install/.tanzu-managed/version.yaml
If using GitOps RI, run the deploy.sh script to pull in latest External Secret Operator version with Tanzu Application Platform You can find the script at the following location: /clusters/CLUSTER-NAME/tanzu-sync/scripts/deploy.sh
If using GitOps RI, make adjustments to your tap-values.yaml file if needed, and push the updated version.yaml and tap-values.yaml files to the Git repository After pushing the changes to Git, run these commands to sync the changes to the cluster and start the upgrade:

kctrl app kick -a sync -n tanzu-sync
kctrl app kick -a tap -n tap-install

Post upgrade checks - Build cluster

Complete the tasks in the following table after you upgrade the Tanzu Application Platform Build cluster.

Item Description
Check the health of each packageRepository and PackageInstall on the cluster.

They must now display the newest version of Tanzu Application Platform.
Run:
tanzu package repository list -A
tanzu package installed list -A
Check the health of all kapp apps on the cluster Run:
kapp list -A
Check the health of pods on the cluster Run:
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
Check the health of workloads on the cluster.

Consider fixing any workloads that are in a bad state.
Run:
tanzu apps workload list -A
Check that workload scan results appear in Tanzu Developer Portal and that you can download the workload SBOM in supported formats n/a

Run cluster checklists

Pre-upgrade cluster health check - Run cluster

Complete the tasks in the following table before you upgrade the Tanzu Application Platform Run cluster.

Item Description
Platform health monitoring Ensure that you have some form of monitoring in place to allow observability into your cluster during an upgrade.
Check the health of each packageRepository and PackageInstall on the cluster Run:
tanzu package repository list -A
tanzu package installed list -A
Check the health of all kapp apps on the cluster Run:
kapp list -A
Check capacity of nodes on the cluster Run:
kubectl top nodes
Check the health of pods on the cluster Run:
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded

Upgrade considerations for Kubernetes - Run Cluster

The following table lists information about upgrading Kubernetes specific to the Run cluster.

Item Description
Configure Contour to be a DaemonSet to avoid downtime. If downtime is acceptable, set the type to deployment.

This is only applicable if you are upgrading from Tanzu Application Platform v1.6 or earlier. Contour is a deployment by default as of Tanzu Application Platform v1.7.
For example:

  contour:
    envoy:
      workload:
        type: daemonset
  
Scale out your AuthServer and use a PodDisruptionBudget to ensure uptime Consider scaling out your AuthServer to multiple replicas and using a PodDisruptionBudget to ensure uptime.

Example AuthServer:

  apiVersion: sso.apps.tanzu.vmware.com/v1alpha1
  kind: AuthServer
  ...
  spec:
    replicas: 3
  
Example PodDisruptionBudget:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: auth0-authserver
    namespace: shared-services
  spec:
    minAvailable: 50%
    selector:
      matchLabels:
        app.kubernetes.io/part-of: auth0-authserver
  
Scale out Application Configuration Service and use a PodDisruptionBudget to ensure uptime Add replicas to the PackageInstall of Application Configuration Service and a PodDisruptionBudget.

Example PackageInstall:

  apiVersion: packaging.carvel.dev/v1alpha1
  kind: PackageInstall
  metadata:
    name: acs
    namespace: tap-install
  spec:
    packageRef:
      refName: application-configuration-service.tanzu.vmware.com
      versionSelection:
        constraints: ">=2.1.4"
        prereleases: {}
    serviceAccountName: tap-installer-sa
    values:
      - secretRef:
          name: acs-values
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: acs-values
    namespace: tap-install
  stringData:
    values.yaml: |
      reconciler:
        replicas: 3
  
Example PodDisruptionBudget:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: acs
    namespace: application-configuration-service
  spec:
    minAvailable: 60%
    selector:
      matchLabels:
        app: application-configuration-service
  
Consider Bitnami Charts replicas and pod disruption budgets when running services inside the same cluster While Bitnami Charts are not intended for production, if you are using them in that manner, consider replicas and pod disruption budgets for each component.
Consider replicas and pod disruption budgets for workloads.

You can use an overlay to create them as part of the SupplyChain by default.
Every workload on Tanzu Application Platform must take replica counts and disruption budgets into consideration. Consider adding logic to the supply chain to create them for each component.

Example overlay on ootb-templates package install:

  apiVersion: v1
  kind: Secret
  metadata:
    name: ootb-templates-overlay-pdb
    namespace: tap-install
    annotations:
      kapp.k14s.io/change-group: "tap-overlays"
  type: Opaque
  stringData:
    overlay-pdb.yml: |
      #@ load("@ytt:overlay", "overlay")
      #@ load("@ytt:data", "data")
      #@overlay/match by=overlay.subset({"kind":"ClusterConfigTemplate", "metadata": {"name": "config-template"}})
      ---
      spec:
        #@overlay/replace via=lambda left, right: "{}\n{}".format(left, '\n'.join(['  {}'.format(x) for x in right.split('\n')]))
        ytt: |
          #@yaml/text-templated-strings
          pdb.yaml: |
            (@ if hasattr(data.values.params, "annotations") and hasattr(data.values.params.annotations, "autoscaling.knative.dev/minScale") and int(getattr(data.values.params.annotations, "autoscaling.knative.dev/minScale")) > 1 : @)
            ---
            apiVersion: policy/v1
            kind: PodDisruptionBudget
            metadata:
              name: (@= data.values.workload.metadata.name @)
            spec:
              maxUnavailable: 1
              selector:
                matchLabels:
                  app.kubernetes.io/part-of: (@= data.values.workload.metadata.name @)
                  app.kubernetes.io/component: run
            (@ end @)
  

Upgrade considerations for Tanzu Application Platform - Run Cluster

The following table lists information about upgrading Tanzu Application Platform specific to the Run cluster.

Item Description
Update replicas of the Application Live View connector and add a PodDisruptionBudget to ensure high availability. Update the Run cluster Application Live View connector to have more replicas:

  appliveview_connector:
    connector:
      deployment:
        enabled: true
        replicas: 3
  
Add a PodDisruptionBudget for the Application Live View Connector onto the cluster:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: alv-connector
    namespace: app-live-view-connector
  spec:
    minAvailable: 60%
    selector:
      matchLabels:
        name: application-live-view-connector
  
Set up Artifact Metadata Repository (AMR) Observer on the Run cluster Set up the AMR Observer on the Run cluster by following the instructions in Multicluster setup for Supply Chain Security Tools - Store. This communicates with the AMR CloudEvent Handler on the View cluster. For example:

Add the following to the Run cluster tap-values.yaml file:

  amr:
    observer:
      auth:
        kubernetes_service_accounts:
          autoconfigure: false
          enable: true
      cloudevent_handler:
        endpoint: https://amr-cloudevent-handler.DOMAIN
      ca_cert_data: |
        -----BEGIN CERT------
  
Run the following commands on the View cluster to retrieve the AMR CA certificate and access token. Apply the CA certificate and access token on the Run cluster. The CA Cert goes into the tap-values.yaml file. Add the edit token as a Kubernetes secret on the Run cluster.

CEH_CA_CERT_DATA=$(kubectl get secret amr-cloudevent-handler-ingress-cert \
  -n metadata-store -o json | jq -r ".data.\"tls.crt\"" | base64 -d)

CEH_EDIT_TOKEN=$(kubectl get secrets amr-cloudevent-handler-edit-token \
  -n metadata-store -o jsonpath="{.data.token}" | base64 -d)
Review any customizations to the ClusterDelivery or OOTB Templates n/a
If using GitOps RI, update the version.yaml file with the new Tanzu Application Platform version You can find the file at the following location: /clusters/CLUSTER-NAME/cluster-config/config/tap-install/.tanzu-managed/version.yaml
If using GitOps RI, run the deploy.sh script to pull in latest External Secret Operator version with Tanzu Application Platform You can find the script at the following location: /clusters/CLUSTER-NAME/tanzu-sync/scripts/deploy.sh
If using GitOps RI, make adjustments to your tap-values.yaml file if needed, and push the updated version.yaml and tap-values.yaml files to the Git repository. After pushing the changes to Git, run these commands to sync changes to the cluster and start the upgrade:

kctrl app kick -a sync -n tanzu-sync
kctrl app kick -a tap -n tap-install

Post upgrade checks - Run cluster

Complete the tasks in the following table after you upgrade the Tanzu Application Platform Run cluster.

Item Description
Check the health of each packageRepository and PackageInstall on the cluster.

They must now display the newest version of Tanzu Application Platform.
Run:
tanzu package repository list -A
tanzu package installed list -A
Check the health of all kapp apps on the cluster Run:
kapp list -A
Check the health of pods on the cluster Run:
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
Check the health of exposed workload endpoints (Knative routes or ingresses)
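As a sketch, assuming Knative Serving and Contour provide the routes and ingress (adjust the resource types for your environment), run:

kubectl get ksvc -A
kubectl get httpproxy -A
kubectl get ingress -A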
If applicable, confirm the status of:
  • APIDescriptors
  • Service claims: ClassClaim and ResourceClaim
n/a