This topic contains troubleshooting and known issues for Supply Chain Security Tools - Store.

Querying by insight source returns zero CVEs even though there are CVEs in the source scan


When attempting to look up CVEs and affected packages, running insight source get (or other insight source commands) might return zero results because of the supply chain configuration and the repository URL used in the scan.


You might have to include different combinations of --repo, --org, and --commit because of how the scan-controller populates the software bill of materials (SBOM). For more information, see Query vulnerabilities, images, and packages in GitHub.
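As a sketch, assuming a source scan of a GitHub repository, the following commands show flag combinations you might try; the organization, repository, and commit SHA shown are placeholders, not values from a real scan:

```shell
# Hypothetical examples; substitute your own org, repository, and commit SHA.
# Start with the repository name alone:
tanzu insight source get --repo hello-world

# If that returns zero results, narrow (or broaden) the query by adding
# or removing the organization and commit flags:
tanzu insight source get --org my-org --repo hello-world --commit 0b1b5264
```

Which combination succeeds depends on how the supply chain recorded the source URL in the SBOM, so iterating over these flag combinations is the practical approach.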

Persistent volume retains data


If Supply Chain Security Tools - Store is deployed, deleted, redeployed, and the database password is changed during the redeployment, the metadata-store-db pod fails to start. This is caused by the persistent volume used by postgres retaining old data, even though the retention policy is set to DELETE.
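To confirm whether an old volume is being reused, you can inspect the persistent volumes and their reclaim policies. This is a minimal sketch; the claim name postgres-db-pv-claim matches the PVC discussed in this topic:

```shell
# List persistent volumes with their reclaim policy and bound claim.
# A volume bound to postgres-db-pv-claim that survives redeployment
# indicates old database data is being retained.
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name
```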



Changing the database password deletes your Supply Chain Security Tools - Store data.

To redeploy the app, either use the same database password or follow these steps to erase the data on the volume:

  1. Deploy the metadata-store app by using kapp.
  2. Verify that the metadata-store-db-* pod fails.
  3. Run:

    kubectl exec -it metadata-store-db-<some-id> -n metadata-store -- /bin/bash

    Where <some-id> is the ID generated by Kubernetes and appended to the pod name.

  4. Run rm -rf /var/lib/postgresql/data/* to delete all database data.

    Where /var/lib/postgresql/data/* is the path found in postgres-db-deployment.yaml.

  5. Delete the metadata-store app by using kapp.

  6. Deploy the metadata-store app by using kapp.
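The steps above can be sketched as a single shell sequence. The app name metadata-store and the manifest path are assumptions that depend on how you originally deployed the Store:

```shell
# Assumed app name and manifest path; adjust to match your deployment.
kapp deploy -a metadata-store -f metadata-store-manifests/   # step 1: deploy
kubectl get pods -n metadata-store                           # step 2: confirm metadata-store-db-* fails
kubectl exec -it metadata-store-db-<some-id> -n metadata-store -- \
  rm -rf /var/lib/postgresql/data/*                          # steps 3-4: erase the old database data
kapp delete -a metadata-store                                # step 5: delete the app
kapp deploy -a metadata-store -f metadata-store-manifests/   # step 6: redeploy
```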

Missing persistent volume


After the Store is deployed, the metadata-store-db pod might fail because of a missing volume while the postgres-db-pv-claim PVC remains in the PENDING state.

This happens when the cluster where the Store is deployed does not have a storageclass defined. The storageclass's provisioner is responsible for creating the persistent volume after metadata-store-db attaches postgres-db-pv-claim.


  1. Verify that your cluster has storageclass by running kubectl get storageclass.
  2. Create a storageclass in your cluster before deploying Store. For example:

    # This is the storageclass that Kind uses
    kubectl apply -f
    # Set the storage class as the default
    kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
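Alternatively, you can create a minimal default StorageClass directly. This is a sketch under the assumption that a suitable provisioner is available in your cluster; rancher.io/local-path is the provisioner behind the local-path storageclass that Kind uses:

```shell
# Create a StorageClass and mark it as the cluster default.
# The provisioner is an assumption and must match one installed in your cluster.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
EOF
```

WaitForFirstConsumer delays volume creation until a pod such as metadata-store-db actually attaches the claim, which matches the binding behavior described above.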

Builds fail due to volume errors on EKS running Kubernetes v1.23


When installing the Store on, or upgrading an existing EKS cluster to, Kubernetes v1.23, the database pod shows:

running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "postgres-db-pv-claim"


This is due to the CSIMigrationAWS feature in this Kubernetes version, which requires users to install the Amazon Elastic Block Store (EBS) CSI driver to use EBS volumes.

The Store uses the default storage class, which uses EBS volumes by default on EKS.


Follow the AWS documentation to install the Amazon EBS CSI Driver before installing Store or before upgrading to Kubernetes v1.23.
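As a hedged sketch, you can first check whether the EBS CSI driver is already present, then install it as an EKS managed add-on if it is not. The cluster name, account ID, and IAM role name are placeholders you must supply, and the pod label is an assumption based on the driver's standard Helm chart:

```shell
# Check for EBS CSI driver pods (no output means the driver is not installed).
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver

# Install the driver as an EKS managed add-on.
# CLUSTER-NAME, ACCOUNT-ID, and EBS-CSI-ROLE are placeholders.
aws eks create-addon \
  --cluster-name CLUSTER-NAME \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::ACCOUNT-ID:role/EBS-CSI-ROLE
```

See the AWS documentation for the IAM permissions the driver's service account role requires.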

Certificate Expiries


The Insight CLI or the Scan Controller fails to connect to the Store.

The logs of the metadata-store-app pod show the following error:

$ kubectl logs deployment/metadata-store-app -c metadata-store-app -n metadata-store
2022/09/12 21:22:07 http: TLS handshake error from write tcp> write: broken pipe


The logs of metadata-store-db show the following error:

$ kubectl logs statefulset/metadata-store-db -n metadata-store
2022-07-20 20:02:51.206 UTC [1] LOG:  database system is ready to accept connections
2022-09-19 18:05:26.576 UTC [13097] LOG:  could not accept SSL connection: sslv3 alert bad certificate


cert-manager rotates the certificates, but the metadata-store and the PostgreSQL db are unaware of the change, and are using the old certificates.


If you see TLS handshake error in the metadata-store-app logs, delete the metadata-store-app pod and wait for it to come back up.

kubectl delete pod metadata-store-app-xxxx -n metadata-store

If you see could not accept SSL connection in the metadata-store-db logs, delete the metadata-store-db pod and wait for it to come back up.

kubectl delete pod metadata-store-db-0 -n metadata-store
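The two checks above can be sketched as one script that inspects each log and restarts the affected pod only when its error signature appears. The deployment and statefulset names come from the commands above; the app=metadata-store-app pod label is an assumption you may need to adjust:

```shell
# Restart the API pod if it logs a TLS handshake error (stale server certificate).
if kubectl logs deployment/metadata-store-app -c metadata-store-app -n metadata-store \
    | grep -q "TLS handshake error"; then
  # The label selector is an assumption; alternatively delete the pod by name.
  kubectl delete pod -n metadata-store -l app=metadata-store-app
fi

# Restart the database pod if it rejects connections with a bad certificate.
if kubectl logs statefulset/metadata-store-db -n metadata-store \
    | grep -q "could not accept SSL connection"; then
  kubectl delete pod metadata-store-db-0 -n metadata-store
fi
```

On restart, each pod picks up the certificates that cert-manager has already rotated.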

Troubleshooting errors from Tanzu Application Platform GUI related to SCST - Store

Different Tanzu Application Platform GUI plug-ins use SCST - Store to display information about vulnerabilities and packages. Some errors visible in Tanzu Application Platform GUI are related to this connection.


In the Supply Chain Choreographer plug-in, you see the error message An error occurred while loading data from the Metadata Store.

Screenshot of Tanzu Application Platform GUI displaying the error message about loading data from the metadata store.


There are multiple potential causes. The most common cause is tap-values.yaml missing the configuration that enables Tanzu Application Platform GUI to communicate with Supply Chain Security Tools - Store.


See Supply Chain Choreographer - Enable CVE scan results for the necessary configuration to add to tap-values.yaml. After adding the configuration, update your Tanzu Application Platform deployment or Tanzu Application Platform GUI deployment with the new values.
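The shape of that configuration is roughly as follows. This is a hedged sketch: the exact keys, port, and token handling are assumptions you should confirm against the Enable CVE scan results documentation before using them:

```yaml
# Sketch of the tap-values.yaml proxy entry (keys and values are assumptions):
tap_gui:
  app_config:
    proxy:
      /metadata-store:
        target: https://metadata-store-app.metadata-store:8443/api/v1
        changeOrigin: true
        secure: false
        headers:
          Authorization: "Bearer ACCESS-TOKEN"  # replace with a valid Store access token
```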
