This topic tells you how to upgrade Supply Chain Security Tools (SCST) - Store and how to troubleshoot upgrade issues.
In Tanzu Application Platform v1.7 and later, VMware introduces the Artifact Metadata Repository (AMR) to SCST - Store. Tanzu Application Platform installs the AMR components by default when you upgrade.
How you must configure AMR depends on how you installed Tanzu Application Platform.
To learn how to configure AMR, see Set up multicluster Artifact Metadata Repository. For instructions to set up SCST - Scan 1.0 to use Metadata Store, see Set up multicluster for Scan 1.0. That topic describes the configuration settings that are new in Tanzu Application Platform v1.7 and later.
If you want to upgrade from Tanzu Application Platform v1.6 with AMR beta enabled to Tanzu Application Platform v1.7 and later, see Upgrading from AMR Beta to AMR GA release.
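To confirm that the upgrade installed the AMR packages, you can list the installed packages. This is a minimal check, assuming the default tap-install namespace and that the AMR package names contain "amr":
tanzu package installed list -n tap-install | grep -i amr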
The following sections tell you how to troubleshoot AMR upgrades.
To see the AMR Observer pod, run:
kubectl get pods -n amr-observer-system
To view AMR Observer logs, run:
kubectl logs OBSERVER-POD-NAME -n amr-observer-system
Where OBSERVER-POD-NAME is the name of the AMR Observer pod.
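If you prefer a single command, you can look up the pod name inline. This is a sketch that assumes the Observer pod is the first pod listed in the amr-observer-system namespace:
kubectl logs -n amr-observer-system "$(kubectl get pods -n amr-observer-system -o jsonpath='{.items[0].metadata.name}')"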
If you encounter errors related to the authentication token, verify that you configured AMR Observer correctly. It might be missing the edit token for AMR CloudEvent Handler. For information about how to configure the edit token, and the CA certificate and endpoint, see Multicluster setup for SCST - AMR.
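As a quick check, you can verify that the namespace contains the secret that holds the edit token. The secret name shown here is a placeholder; the actual name depends on your multicluster configuration:
kubectl get secrets -n amr-observer-system
kubectl describe secret EDIT-TOKEN-SECRET-NAME -n amr-observer-system
Where EDIT-TOKEN-SECRET-NAME is the name of the secret that stores the AMR CloudEvent Handler edit token.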
To prevent issues with the metadata-store database, such as the ones described in this topic, the database is deployed as a StatefulSet.
If you have scripts searching for a metadata-store-db deployment, edit the scripts to search for the StatefulSet instead.
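For example, assuming the StatefulSet keeps the metadata-store-db name, a script check changes from:
kubectl get deployment metadata-store-db -n metadata-store
to:
kubectl get statefulset metadata-store-db -n metadata-store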
When you use Tanzu to upgrade SCST - Store, data corruption occasionally occurs. For example:
PostgreSQL Database directory appears to contain a database; Skipping initialization
2022-01-21 21:53:38.799 UTC [1] LOG: starting PostgreSQL 13.5 (Ubuntu 13.5-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
2022-01-21 21:53:38.799 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-01-21 21:53:38.799 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-01-21 21:53:38.802 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-01-21 21:53:38.807 UTC [14] LOG: database system was shut down at 2022-01-21 21:21:12 UTC
2022-01-21 21:53:38.807 UTC [14] LOG: invalid record length at 0/1898BE8: wanted 24, got 0
2022-01-21 21:53:38.807 UTC [14] LOG: invalid primary checkpoint record
2022-01-21 21:53:38.807 UTC [14] PANIC: could not locate a valid checkpoint record
2022-01-21 21:53:39.496 UTC [1] LOG: startup process (PID 14) was terminated by signal 6: Aborted
2022-01-21 21:53:39.496 UTC [1] LOG: aborting startup due to startup process failure
2022-01-21 21:53:39.507 UTC [1] LOG: database system is shut down
The log shows a database pod in a failure loop. For information about how to fix the issue, see the SysOpsPro documentation.
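To view these logs yourself, read the database pod's output. This sketch assumes the default StatefulSet pod naming, where the first replica is metadata-store-db-0:
kubectl logs metadata-store-db-0 -n metadata-store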
Because the default access mode in the PVC is ReadWriteOnce, if you deploy in an environment with multiple nodes, each pod might be scheduled on a different node. This causes the upgraded pod to start but then get stuck initializing, because the original pod does not stop and release the persistent volume. To resolve this issue, find and delete the original pod so that the new pod can attach to the persistent volume:
Discover the name of the app pod that is not in a pending state by running:
kubectl get pods -n metadata-store
Delete the pod by running:
kubectl delete pod METADATA-STORE-APP-POD-NAME -n metadata-store
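Where METADATA-STORE-APP-POD-NAME is the name of the pod you found in the previous step.
To confirm that the replacement pod attaches to the persistent volume and becomes ready, you can watch the pods in the namespace:
kubectl get pods -n metadata-store -w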