This topic describes how to deploy Supply Chain Security Tools (SCST) - Store in a multicluster setup, including installing multiple profiles such as View, Build, Run, and Iterate.
After installing the View profile, but before installing the Build profile and Run profile, you must copy certain configurations from the View cluster to the Build and Run Kubernetes clusters. This topic explains how to add these configurations, which allow components in the Build and Run clusters to communicate with SCST - Store in the View cluster.
You must first install the View profile. See Install View profile. This installation automatically creates the CA certificates and tokens necessary to communicate with the CloudEvent Handler.
To deploy Supply Chain Security Tools (SCST) - Store in a multicluster setup:
Note: This topic assumes that you are using SCST - Scan 2.0, as described in Add testing and scanning to your application. If you are still using the deprecated SCST - Scan 1.0, you must follow these steps plus the extra steps required for Scan 1.0.
Copy the SCST - Store CA certificates from the View cluster: the Metadata Store CA certificate and the AMR CloudEvent Handler CA certificate.
With your kubectl targeted at the View cluster, get the Metadata Store TLS CA certificate:
MDS_CA_CERT=$(kubectl get secret -n metadata-store ingress-cert -o json | jq -r ".data.\"ca.crt\"" | base64 -d)
With your kubectl targeted at the View cluster, get the AMR CloudEvent Handler TLS CA certificate data:
CEH_CA_CERT_DATA=$(kubectl get secret -n metadata-store amr-cloudevent-handler-ingress-cert -o json | jq -r ".data.\"ca.crt\"" | base64 -d)
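The pipelines above work because Kubernetes secrets store their data base64-encoded: jq selects the ca.crt field from the secret JSON, and base64 -d recovers the PEM text. A minimal sketch of the decode step, using a hard-coded mock value in place of the jq output:

```shell
# Mock of the base64-encoded "ca.crt" field that jq selects from the
# secret JSON; the real value comes from the kubectl commands above.
ENCODED="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t"

# base64 -d reverses the encoding Kubernetes applies to secret data.
DECODED=$(echo "$ENCODED" | base64 -d)
echo "$DECODED"
# Prints: -----BEGIN CERTIFICATE-----
```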
Copy the SCST - Store tokens from the View cluster: the Metadata Store authentication token and the AMR CloudEvent Handler edit token.
Copy the Metadata Store authentication token into an environment variable:
MDS_AUTH_TOKEN=$(kubectl get secrets metadata-store-read-write-client -n metadata-store -o jsonpath="{.data.token}" | base64 -d)
You use this environment variable in a later step.
Copy the AMR CloudEvent Handler token into an environment variable:
CEH_EDIT_TOKEN=$(kubectl get secrets amr-cloudevent-handler-edit-token -n metadata-store -o jsonpath="{.data.token}" | base64 -d)
You use this environment variable in a later step.
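Before moving on to the Build and Run clusters, it can help to confirm that all four values were captured. This check is an assumed convenience, not part of the official steps, and the values below are hypothetical placeholders for what the kubectl commands return:

```shell
# Hypothetical stand-ins for the values copied from the View cluster.
MDS_CA_CERT="-----BEGIN CERTIFICATE-----"
CEH_CA_CERT_DATA="-----BEGIN CERTIFICATE-----"
MDS_AUTH_TOKEN="example-mds-token"
CEH_EDIT_TOKEN="example-ceh-token"

# Count how many of the four values are empty; all must be non-empty
# before you install the Build and Run profiles.
missing=0
for value in "$MDS_CA_CERT" "$CEH_CA_CERT_DATA" "$MDS_AUTH_TOKEN" "$CEH_EDIT_TOKEN"; do
  [ -n "$value" ] || missing=$((missing + 1))
done
echo "missing values: $missing"
```

If any value is empty, re-run the corresponding kubectl command against the View cluster before continuing.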
After you copy the certificate and tokens, apply them to the Build and Run clusters before deploying the profiles.
Apply the CloudEvent Handler CA certificate and edit token to the Build and Run clusters. These values must be accessible during the Build and Run profile deployments.
If you already installed the Build cluster, you can skip this step. Create a namespace for the CloudEvent Handler CA certificate and edit token:
kubectl create ns amr-observer-system
Update the Build profile values.yaml file to add the following snippet, which configures the CA certificate and endpoint. In amr.observer.cloudevent_handler.endpoint, specify the location of the CloudEvent Handler that was deployed to the View cluster. In amr.observer.ca_cert_data, paste the contents of $CEH_CA_CERT_DATA, which you copied earlier.
amr:
  observer:
    auth:
      kubernetes_service_accounts:
        enable: true
    cloudevent_handler:
      endpoint: https://amr-cloudevent-handler.<VIEW-CLUSTER-INGRESS-DOMAIN>
    ca_cert_data: |
      <CONTENTS OF $CEH_CA_CERT_DATA>
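Because ca_cert_data is a YAML block scalar, a multi-line certificate must be indented consistently when you paste it in. One possible way to render the fragment directly from the environment variable; this is an assumed convenience, not part of the official steps, and the endpoint domain, file name, and one-line certificate value are all hypothetical:

```shell
# Hypothetical one-line stand-in; the real $CEH_CA_CERT_DATA spans
# multiple PEM lines, which is why the sed indentation step matters.
CEH_CA_CERT_DATA="-----BEGIN CERTIFICATE-----"

# Render the values fragment, indenting every certificate line by six
# spaces to sit under the "ca_cert_data: |" block scalar.
cat > amr-values-fragment.yaml <<EOF
amr:
  observer:
    cloudevent_handler:
      endpoint: https://amr-cloudevent-handler.view.example.com
    ca_cert_data: |
$(printf '%s\n' "$CEH_CA_CERT_DATA" | sed 's/^/      /')
EOF

cat amr-values-fragment.yaml
```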
Create a secret to store the CloudEvent Handler edit token. This uses the CEH_EDIT_TOKEN environment variable:
kubectl create secret generic amr-observer-edit-token \
--from-literal=token=$CEH_EDIT_TOKEN -n amr-observer-system
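Under the hood, kubectl create secret generic with --from-literal stores the literal value base64-encoded under the secret's data map. A round-trip sketch with a hypothetical token value:

```shell
# Hypothetical token; in practice this is $CEH_EDIT_TOKEN from earlier.
TOKEN="example-edit-token"

# kubectl stores the literal base64-encoded under .data.token.
STORED=$(printf '%s' "$TOKEN" | base64)

# Decoding recovers the original token value.
ROUNDTRIP=$(printf '%s' "$STORED" | base64 -d)
echo "$ROUNDTRIP"
# Prints: example-edit-token
```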
Repeat the earlier steps, but configure kubectl to target the Run cluster instead of the Build cluster.
After all the steps are done, the Build and Run clusters each have a CloudEvent Handler CA certificate and an edit token named amr-observer-edit-token in the metadata-store-secrets and amr-observer-system namespaces. Now you are ready to deploy the Build and Run profiles.
If you came to this topic from Install multicluster Tanzu Application Platform profiles after installing the View profile, return to that topic to install the Build profile and install the Run profile.
Scan 1.0 was deprecated in Tanzu Application Platform v1.10. The default scan component to use in the Test and Scan supply chain is Scan 2.0. These steps are required in addition to the earlier steps if you are still using Scan 1.0. For more information about Scan 1.0 and Scan 2.0, see the SCST - Scan component overview.
Within the Build profile values.yaml file, add the following snippet:
scanning:
  metadataStore:
    exports:
      ca:
        pem: |
          <CONTENTS OF $MDS_CA_CERT>
      auth:
        token: <CONTENTS OF $MDS_AUTH_TOKEN>
This snippet contains the content of $MDS_CA_CERT and $MDS_AUTH_TOKEN copied in an earlier step. This content configures SCST - Scan with the Metadata Store CA certificate and authentication token.
The Build profile values.yaml file uses the secrets you created to configure the Grype scanner that communicates with SCST - Store. After performing a vulnerability scan, the Grype scanner sends the scan results to SCST - Store.
For example:
...
grype:
  targetImagePullSecret: "TARGET-REGISTRY-CREDENTIALS-SECRET"
  metadataStore:
    url: METADATA-STORE-URL-ON-VIEW-CLUSTER # URL with http / https
    caSecret:
      name: store-ca-cert
      importFromNamespace: metadata-store-secrets # Must match with ingress-cert.data."ca.crt" of store on view cluster
    authSecret:
      name: store-auth-token # Must match with valid store token of metadata-store on view cluster
      importFromNamespace: metadata-store-secrets
...
Where:

- METADATA-STORE-URL-ON-VIEW-CLUSTER is the ingress URL of SCST - Store deployed to the View cluster. For example, https://metadata-store.example.com. For more information, see Ingress support.
- TARGET-REGISTRY-CREDENTIALS-SECRET is the name of the secret that contains the credentials to pull an image from the registry for scanning.

SCST - Scan 1.0 required SCST - Store to be configured in every developer namespace with an SCST - Store certificate and authentication token.
To export the secrets by creating SecretExport resources on the developer namespace, ensure that the store-ca-cert and store-auth-token secrets exist in the metadata-store-secrets namespace, then create the SecretExport resources by running:
cat <<EOF | kubectl apply -f -
---
apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretExport
metadata:
  name: store-ca-cert
  namespace: metadata-store-secrets
spec:
  toNamespaces: [DEV-NAMESPACES]
---
apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretExport
metadata:
  name: store-auth-token
  namespace: metadata-store-secrets
spec:
  toNamespaces: [DEV-NAMESPACES]
EOF
Where DEV-NAMESPACES is an array of developer namespaces where the Metadata Store secrets are exported.
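For illustration only, with hypothetical namespace names, a filled-in toNamespaces array for two developer namespaces would look like:

```yaml
spec:
  toNamespaces: [dev-team-a, dev-team-b]
```

secretgen-controller also accepts a "*" entry in toNamespaces to export a secret to all namespaces.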
For information about metadata configuration, see Cluster-specific scanner configurations.
Important: In a multicluster configuration, make sure you manually copy the Metadata Store values mentioned earlier from the View cluster to the values.yaml file you use to install the Build cluster.