This topic explains how to configure and manage the secrets and certificates that workload clusters in Tanzu Kubernetes Grid use, including cluster credentials, control plane node certificates, and private container registry certificates and authentication secrets.

Update Cluster Credentials

If the credentials that you use to access vSphere or Azure change, you can update your clusters to use the new credentials. AWS handles credentials differently, so this section applies only to vSphere and Azure.
Update vSphere Credentials

To update the vSphere credentials used by the current standalone management cluster and by all of its workload clusters, use the `tanzu mc credentials update --cascading` command:
1. Run `tanzu context use MGMT-CLUSTER` to log in to the management cluster that you are updating.

2. Run `tanzu mc credentials update`. You can pass values to the following command options or let the CLI prompt you for them:
- `--vsphere-user`: Name of the vSphere account.
- `--vsphere-password`: Password for the vSphere account.
- `--vsphere-thumbprint`: TLS thumbprint of the vCenter Server instance.
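For example, a minimal invocation that passes all three options on the command line (the values in caps are placeholders):

tanzu mc credentials update --cascading \
  --vsphere-user VSPHERE-USERNAME \
  --vsphere-password VSPHERE-PASSWORD \
  --vsphere-thumbprint VCENTER-THUMBPRINT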
Update Standalone Management Cluster Credentials Only

To update a standalone management cluster's vSphere credentials without also updating them for its workload clusters, use the `tanzu mc credentials update` command as above, but without the `--cascading` option.
Update Workload Cluster Credentials
To update the credentials that a single workload cluster uses to access vSphere, use the `tanzu cluster credentials update` command:
1. Run `tanzu context use MGMT-CLUSTER` to log in to the management cluster that created the workload cluster that you are updating. The management cluster can be the Supervisor cluster or the standalone management cluster.

2. Run `tanzu cluster credentials update CLUSTER-NAME`. You can pass values to the following command options or let the CLI prompt you for them:
- `--namespace`: The namespace of the cluster you are updating credentials for, such as `default`.
- `--vsphere-user`: Name of the vSphere account.
- `--vsphere-password`: Password for the vSphere account.
- `--vsphere-thumbprint`: TLS thumbprint of the vCenter Server instance.
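For example, a minimal invocation (the cluster name and values in caps are placeholders):

tanzu cluster credentials update CLUSTER-NAME \
  --namespace default \
  --vsphere-user VSPHERE-USERNAME \
  --vsphere-password VSPHERE-PASSWORD \
  --vsphere-thumbprint VCENTER-THUMBPRINT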
You can also use `tanzu mc credentials update --cascading` to update vSphere credentials for a management cluster and all of the workload clusters it manages.
Update Azure Credentials

Important: Before you begin, obtain the new Azure credentials from the Azure portal or from your Azure administrator.
To update the Azure credentials used by the current standalone management cluster and by all of its workload clusters, use the `tanzu mc credentials update --cascading` command:
1. Run `tanzu context use MGMT-CLUSTER` to log in to the management cluster that you are updating.

2. Run `tanzu mc credentials update`. You can pass values to the following command options or let the CLI prompt you for them:
- `--azure-client-id`: The client ID of the app for Tanzu Kubernetes Grid that you registered in Azure.
- `--azure-client-secret`: The client secret of the app for Tanzu Kubernetes Grid that you registered in Azure.
- `--azure-tenant-id`: The tenant ID for Azure Active Directory in which the app for Tanzu Kubernetes Grid is located.
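For example, a minimal invocation that passes all three options on the command line (the values in caps are placeholders):

tanzu mc credentials update --cascading \
  --azure-client-id AZURE-CLIENT-ID \
  --azure-client-secret AZURE-CLIENT-SECRET \
  --azure-tenant-id AZURE-TENANT-ID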
Update Standalone Management Cluster Credentials Only

To update a standalone management cluster's Azure credentials without also updating them for its workload clusters, use the `tanzu mc credentials update` command as above, but without the `--cascading` option.
Update Workload Cluster Credentials
To update the credentials that a single workload cluster uses to access Azure, use the `tanzu cluster credentials update` command:
1. Run `tanzu context use MGMT-CLUSTER` to log in to the management cluster that created the workload cluster that you are updating.

2. Run `tanzu cluster credentials update CLUSTER-NAME`. You can pass values to the following command options or let the CLI prompt you for them:
- `--namespace`: The namespace of the cluster you are updating credentials for, such as `default`.
- `--azure-client-id`: The client ID of the app for Tanzu Kubernetes Grid that you registered in Azure.
- `--azure-client-secret`: The client secret of the app for Tanzu Kubernetes Grid that you registered in Azure.
- `--azure-tenant-id`: The tenant ID for Azure Active Directory in which the app for Tanzu Kubernetes Grid is located.
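For example, a minimal invocation (the cluster name and values in caps are placeholders):

tanzu cluster credentials update CLUSTER-NAME \
  --namespace default \
  --azure-client-id AZURE-CLIENT-ID \
  --azure-client-secret AZURE-CLIENT-SECRET \
  --azure-tenant-id AZURE-TENANT-ID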
You can also use `tanzu mc credentials update --cascading` to update Azure credentials for a management cluster and all of the workload clusters it manages.
Automatically Renew Control Plane Node Certificates

Configure TKG clusters to automatically renew their control plane node VM certificates as follows, depending on the configuration method and cluster type:
Kubernetes-style object spec (class-based clusters):
In your `Cluster` object spec, include the following block under `spec.topology.variables`:
- name: controlPlaneCertificateRotation.activate
value: true
- name: controlPlaneCertificateRotation.daysBefore
value: EXPIRY-DAYS
Flat cluster configuration file (class-based or legacy clusters):
In your cluster configuration file, include the following settings:
CONTROLPLANE_CERTIFICATE_ROTATION_ENABLED: true
CONTROLPLANE_CERTIFICATE_ROTATION_DAYS_BEFORE: EXPIRY-DAYS
Where `EXPIRY-DAYS` is an optional setting for the number of days before the certificate expiration date to automatically renew cluster node certificates. Default: 90; minimum: 7.
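For context, the following sketch shows where this block sits in a class-based `Cluster` object spec; the metadata values are illustrative, and only the certificate-rotation variables come from this procedure:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster              # illustrative name
  namespace: default            # illustrative namespace
spec:
  topology:
    variables:
    - name: controlPlaneCertificateRotation.activate
      value: true
    - name: controlPlaneCertificateRotation.daysBefore
      value: 90                 # renew 90 days before expiry (the default)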
Renew Cluster Certificates Manually

Standalone management clusters and their workload clusters use client certificates to authenticate requests. These certificates are valid for one year. To renew them, you can either upgrade your clusters or follow the procedure below. This procedure is intended for cluster certificates that have not expired and are still valid.
Identify the cluster whose certificates you want to renew. For example:
tanzu cluster list
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN TKR
workload-slot35rp10 default running 3/3 3/3 v1.27.5+vmware.1 <none> prod v1.27.5---vmware.1-tkg.1
To list the cluster nodes, run:
kubectl get nodes -o wide
Check the expiration date for the certificates. Use the command block that matches your target platform:

vSphere:
kubectl get nodes \
-o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' \
-l node-role.kubernetes.io/master= > nodes
for i in `cat nodes`; do
printf "\n######\n"
ssh -o "StrictHostKeyChecking=no" -q capv@$i hostname
ssh -o "StrictHostKeyChecking=no" -q capv@$i sudo kubeadm certs check-expiration
done;
AWS:

kubectl get nodes \
-o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' \
-l node-role.kubernetes.io/master= > nodes
for i in `cat nodes`; do
printf "\n######\n"
ssh -i aws-cluster-key.pem -o "StrictHostKeyChecking=no" -q ubuntu@$i hostname
ssh -i aws-cluster-key.pem -o "StrictHostKeyChecking=no" -q ubuntu@$i sudo kubeadm certs check-expiration
done;
Azure:

kubectl get nodes \
-o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' \
-l node-role.kubernetes.io/master= > nodes
for i in `cat nodes`; do
printf "\n######\n"
ssh -i azure-cluster-key.pem -o "StrictHostKeyChecking=no" -q capi@$i hostname
ssh -i azure-cluster-key.pem -o "StrictHostKeyChecking=no" -q capi@$i sudo kubeadm certs check-expiration
done;
Sample output:
######
workload-slot35rp10-control-plane-ggsmj
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0923 17:51:03.686273 4172778 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Sep 21, 2023 23:13 UTC 363d ca no
apiserver Sep 21, 2023 23:13 UTC 363d ca no
apiserver-etcd-client Sep 21, 2023 23:13 UTC 363d etcd-ca no
apiserver-kubelet-client Sep 21, 2023 23:13 UTC 363d ca no
controller-manager.conf Sep 21, 2023 23:13 UTC 363d ca no
etcd-healthcheck-client Sep 21, 2023 23:13 UTC 363d etcd-ca no
etcd-peer Sep 21, 2023 23:13 UTC 363d etcd-ca no
etcd-server Sep 21, 2023 23:13 UTC 363d etcd-ca no
front-proxy-client Sep 21, 2023 23:13 UTC 363d front-proxy-ca no
scheduler.conf Sep 21, 2023 23:13 UTC 363d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Sep 18, 2032 23:09 UTC 9y no
etcd-ca Sep 18, 2032 23:09 UTC 9y no
front-proxy-ca Sep 18, 2032 23:09 UTC 9y no
Set your `kubectl` context to the management cluster. For example:
kubectl config use-context mgmt-slot35rp10-admin@mgmt-slot35rp10
Get the name of the KCP (`KubeadmControlPlane`) object for your target cluster. For example:
kubectl get kcp
NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
workload-slot35rp10-control-plane workload-slot35rp10 true true 3 3 3 0 42h v1.27.5+vmware.1
Renew the certificates by triggering a control plane rollout:
kubectl patch kcp workload-slot35rp10-control-plane --type merge -p "{\"spec\":{\"rolloutAfter\":\"`date +'%Y-%m-%dT%TZ'`\"}}"
After you run this command, Tanzu Kubernetes Grid starts re-provisioning your cluster machines:
kubectl get machines
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
workload-slot35rp10-control-plane-7z95k workload-slot35rp10 Provisioning 20s v1.27.5+vmware.1
workload-slot35rp10-control-plane-ggsmj workload-slot35rp10 workload-slot35rp10-control-plane-ggsmj vsphere://4201a86e-3c15-879a-1b85-78f76a16c27f Running 42h v1.27.5+vmware.1
workload-slot35rp10-control-plane-hxbxb workload-slot35rp10 workload-slot35rp10-control-plane-hxbxb vsphere://42014b2e-07e4-216a-24ef-86e2d52d7bbd Running 42h v1.27.5+vmware.1
workload-slot35rp10-control-plane-sm4nw workload-slot35rp10 workload-slot35rp10-control-plane-sm4nw vsphere://4201cff3-2715-ffe1-c4a6-35fc795995ce Running 42h v1.27.5+vmware.1
workload-slot35rp10-md-0-667bcd6b57-79br9 workload-slot35rp10 workload-slot35rp10-md-0-667bcd6b57-79br9 vsphere://420142a2-d141-7d6b-b322-9c2afcc47da5 Running 42h v1.27.5+vmware.1
...
When all of the machines are `Running`, verify that the certificate renewal has completed successfully:
Set your `kubectl` context to the workload cluster:
kubectl config use-context workload-slot35rp10-admin@workload-slot35rp10
Check the expiration date for the certificates again:
kubectl get nodes \
-o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' \
-l node-role.kubernetes.io/master= > nodes
for i in `cat nodes`; do
printf "\n######\n"
ssh -o "StrictHostKeyChecking=no" -q capv@$i hostname
ssh -o "StrictHostKeyChecking=no" -q capv@$i sudo kubeadm certs check-expiration
done;
Sample output:
######
workload-slot35rp10-control-plane-4xgw8
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0923 18:10:02.660438 13427 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Sep 23, 2023 18:05 UTC 364d ca no
apiserver Sep 23, 2023 18:05 UTC 364d ca no
apiserver-etcd-client Sep 23, 2023 18:05 UTC 364d etcd-ca no
apiserver-kubelet-client Sep 23, 2023 18:05 UTC 364d ca no
controller-manager.conf Sep 23, 2023 18:05 UTC 364d ca no
etcd-healthcheck-client Sep 23, 2023 18:05 UTC 364d etcd-ca no
etcd-peer Sep 23, 2023 18:05 UTC 364d etcd-ca no
etcd-server Sep 23, 2023 18:05 UTC 364d etcd-ca no
front-proxy-client Sep 23, 2023 18:05 UTC 364d front-proxy-ca no
scheduler.conf Sep 23, 2023 18:05 UTC 364d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Sep 18, 2032 23:09 UTC 9y no
etcd-ca Sep 18, 2032 23:09 UTC 9y no
front-proxy-ca Sep 18, 2032 23:09 UTC 9y no
If the certificate renewal has completed successfully, the Residual Time column shows `364d`. Certificates on worker nodes are renewed automatically.
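To spot-check a worker node directly, one option is to read the expiry of its kubelet client certificate over SSH. This is a sketch that assumes a vSphere cluster (node user `capv`) and the standard kubeadm certificate path; `WORKER-NODE-IP` is a placeholder:

ssh -o "StrictHostKeyChecking=no" -q capv@WORKER-NODE-IP \
  sudo openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem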
Configure Trust for Custom Registries

To configure a Supervisor-deployed TKG cluster to use a private container registry that is external to TKG, in other words a registry with a self-signed certificate that does not run in a TKG cluster, see the vSphere documentation.
In an internet-restricted environment with a standalone management cluster, you can configure the `TKG_CUSTOM_IMAGE_REPOSITORY_*` variables to give TKG clusters access to a private registry that contains the TKG system images to bootstrap from, for example a Harbor VM as described in Deploy an Offline Harbor Registry on vSphere.
The `ADDITIONAL_IMAGE_REGISTRY_*` variables configure a new cluster to have trusted communications with additional registries that use self-signed certificate authority (CA) certificates, for example for `containerd` TLS, as described in Configure Image Registry in the `containerd` repository.

How you configure clusters to trust these additional private registries depends on whether the cluster is plan-based or class-based, as described below.
To configure a class-based workload cluster or standalone management cluster with trust for additional custom image registries, set variables as below for up to three additional image registries:

Note: To configure more than three registries, configure the first three as below in step 1 of the two-step process described in Create a Class-Based Cluster, and then follow Make a Class-Based Cluster Trust a Custom Registry below to add more registries to the generated manifest before you create the cluster in step 2.
ADDITIONAL_IMAGE_REGISTRY_1: "OTHER-REGISTRY-1"
ADDITIONAL_IMAGE_REGISTRY_1_SKIP_TLS_VERIFY: false
ADDITIONAL_IMAGE_REGISTRY_1_CA_CERTIFICATE: "CA-BASE64-1"
ADDITIONAL_IMAGE_REGISTRY_2: "OTHER-REGISTRY-2"
ADDITIONAL_IMAGE_REGISTRY_2_SKIP_TLS_VERIFY: false
ADDITIONAL_IMAGE_REGISTRY_2_CA_CERTIFICATE: "CA-BASE64-2"
ADDITIONAL_IMAGE_REGISTRY_3: "OTHER-REGISTRY-3"
ADDITIONAL_IMAGE_REGISTRY_3_SKIP_TLS_VERIFY: false
ADDITIONAL_IMAGE_REGISTRY_3_CA_CERTIFICATE: "CA-BASE64-3"
Where `OTHER-REGISTRY-<n>` is the IP address or FQDN of an additional private registry and `CA-BASE64-<n>` is its CA certificate in base64-encoded format. Specify the registry address without the `http://` prefix; the address is written to disk as a file name, so it must be a valid Unix file name.
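To produce a base64-encoded CA value, you can encode the registry's CA certificate file locally, for example as follows; `ca.crt` is an illustrative file name:

base64 -w 0 ca.crt              # Linux
base64 -i ca.crt | tr -d '\n'   # macOS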
To enable a new TKC or plan-based cluster to pull images from container registries that use self-signed certificates, you add the custom certificates to the workload cluster nodes by using a `ytt` overlay file.
The overlay code below adds custom CA certificates to all nodes in a new cluster. The code works on all target platforms, for clusters based on Photon or Ubuntu VM image templates.
For overlays that customize clusters and create a new cluster plan, see ytt Overlays. For information about how to download and install `ytt`, see Install the Carvel Tools.
1. Choose whether to apply the custom CA to all new clusters, only clusters created on one cloud infrastructure, or clusters created with a specific Cluster API provider version, such as Cluster API Provider vSphere v1.5.1.
2. In your local `~/.config/tanzu/tkg/providers/` directory, find the `ytt` directory that covers your chosen scope. For example, `/ytt/03_customizations/` applies to all clusters, and `/infrastructure-vsphere/ytt/` applies to all vSphere clusters.
3. In your chosen `ytt` directory, create a new `.yaml` file or augment an existing overlay file with the following code:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#! This ytt overlay adds additional custom CA certificates on TKG cluster nodes, so containerd and other tools trust these CA certificates.
#! It works when using Photon or Ubuntu as the TKG node template on all TKG target platforms.
#! Trust your custom CA certificates on all Control Plane nodes.
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
kubeadmConfigSpec:
#@overlay/match missing_ok=True
files:
#@overlay/append
- content: #@ data.read("tkg-custom-ca.pem")
owner: root:root
permissions: "0644"
path: /etc/ssl/certs/tkg-custom-ca.pem
#@overlay/match missing_ok=True
preKubeadmCommands:
#! For Photon OS
#@overlay/append
- '! which rehash_ca_certificates.sh 2>/dev/null || rehash_ca_certificates.sh'
#! For Ubuntu
#@overlay/append
- '! which update-ca-certificates 2>/dev/null || (mv /etc/ssl/certs/tkg-custom-ca.pem /usr/local/share/ca-certificates/tkg-custom-ca.crt && update-ca-certificates)'
#! Trust your custom CA certificates on all worker nodes.
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}), expects="1+"
---
spec:
template:
spec:
#@overlay/match missing_ok=True
files:
#@overlay/append
- content: #@ data.read("tkg-custom-ca.pem")
owner: root:root
permissions: "0644"
path: /etc/ssl/certs/tkg-custom-ca.pem
#@overlay/match missing_ok=True
preKubeadmCommands:
#! For Photon OS
#@overlay/append
- '! which rehash_ca_certificates.sh 2>/dev/null || rehash_ca_certificates.sh'
#! For Ubuntu
#@overlay/append
- '! which update-ca-certificates 2>/dev/null || (mv /etc/ssl/certs/tkg-custom-ca.pem /usr/local/share/ca-certificates/tkg-custom-ca.crt && update-ca-certificates)'
4. In the same `ytt` directory, add the Certificate Authority to a new or existing `tkg-custom-ca.pem` file.
5. Before creating the cluster, set the `allow-legacy-cluster` feature to `true` as described in (Legacy) Create a Plan-Based or a TKC Cluster.
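For reference, a sketch of setting that feature flag with the Tanzu CLI; confirm the exact feature path against the documentation for your TKG version:

tanzu config set features.cluster.allow-legacy-cluster true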
You can enable trusted communication between an existing cluster and additional custom Harbor registries with self-signed CAs, beyond the one set by the `TKG_CUSTOM_IMAGE_REPOSITORY_*` configuration variables, for `containerd` TLS and other uses. How you do this depends on whether the cluster is plan-based or class-based, as described below.
Make a Class-Based Cluster Trust a Custom Registry

To add a trusted custom registry to an existing class-based cluster, edit its `Cluster` object and add `additionalImageRegistries` settings under `topology.variables` in the object spec:
topology:
variables:
- name: additionalImageRegistries
value:
- caCert: "CA-BASE64"
host: OTHER-REGISTRY
skipTlsVerify: false
Where:

- `OTHER-REGISTRY` is the additional private registry location, in the format `10.92.127.192:8443`.
- `CA-BASE64` is its CA certificate in base64-encoded format, for example `LS0tLS1CRU[...]`.

To add trust for multiple registries, include multiple `additionalImageRegistries` value blocks.
Note that the `topology.variables` blocks for `imageRepository` and `trust` set values from the `TKG_CUSTOM_IMAGE_REPOSITORY_*` and `TKG_PROXY_CA_CERT` configuration variables.
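For example, a sketch of making this edit with kubectl; the cluster name and namespace are illustrative:

kubectl edit cluster my-workload-cluster -n default

Then add the `additionalImageRegistries` block shown above under `spec.topology.variables` and save the file.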
Make a Plan-Based Cluster Trust a Custom Registry

To enable trust between an existing plan-based cluster and a Harbor registry with a self-signed CA:
Retrieve the Harbor CA certificate:
Switch context to the cluster running Harbor, such as a shared services cluster:
kubectl config use-context SERVICES-CLUSTER-CONTEXT
Where `SERVICES-CLUSTER-CONTEXT` is the context of the cluster. For example, `tkg-wld-admin@tkg-wld`.
Retrieve the certificate:
kubectl -n tanzu-system-registry get secret harbor-tls -o=jsonpath="{.data.ca\.crt}" | base64 -d
-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgIQMfZy08muvIVKdZVDz7/rYzANBgkqhkiG9w0BAQsFADBI
[...]
yiDghW7antzYL9S1CC8sVgVOwFJwfFXpdiir35mQlySG301V4FsRV+Z0cFp4Ni0=
-----END CERTIFICATE-----
Add the custom CA to the standalone management cluster's `kubeadmconfigtemplate`:
Switch context to the management cluster:
kubectl config use-context MANAGEMENT-CLUSTER-CONTEXT
Where `MANAGEMENT-CLUSTER-CONTEXT` is the context of your management cluster. For example, `tkg-mgmt-admin@tkg-mgmt`.
In an editor, open the cluster's `kubeadmconfigtemplate` template file:
kubectl edit kubeadmconfigtemplate CLUSTER-NAME-md-0
Where `CLUSTER-NAME` is the name of the cluster to modify.
Change the `spec.template.spec.files` section of the file to include the certificate, as shown here:
spec:
  template:
    spec:
      files:
      - content: |
          -----BEGIN CERTIFICATE-----
          MIIFazCCA1OgAwIBAgIQMfZy08muvIVKdZVDz7/rYzANBgkqhkiG9w0BAQsFADBI
          [...]
          yiDghW7antzYL9S1CC8sVgVOwFJwfFXpdiir35mQlySG301V4FsRV+Z0cFp4Ni0=
          -----END CERTIFICATE-----
        owner: root:root
        path: /etc/ssl/certs/tkg-custom-ca.pem
        permissions: "0644"
At the bottom of the file, add a `preKubeadmCommands` block as shown here:
preKubeadmCommands:
- '! which rehash_ca_certificates.sh 2>/dev/null || rehash_ca_certificates.sh'
- '! which update-ca-certificates 2>/dev/null || (mv /etc/ssl/certs/tkg-custom-ca.pem /usr/local/share/ca-certificates/tkg-custom-ca.crt && update-ca-certificates)'
Save the `kubeadmconfigtemplate` template file with your changes.
Patch the cluster with the changes:
kubectl patch machinedeployments.cluster.x-k8s.io tkg-test-md-0 --type merge -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
Running this command triggers a rolling update of the cluster nodes and updates their timestamp.
Configure Authentication to a Private Container Registry

You can add authentication secrets to enable a cluster to access a private container registry that requires user authentication to pull images. You can also view, update, or delete the authentication secrets that you have configured for the private registries that a cluster accesses.
Add a Registry Secret

Using the Tanzu CLI, you can add authentication secrets to access a private container registry from a cluster. After the registry secret is added to the namespaces in your cluster, you can pull all of the package repositories, packages, and container images that are hosted in the private registry. Subsequently, you can add the package repository and package resources to your cluster.
Before performing this procedure, obtain the username and the password for the private container registry.
To add an authentication secret to a private registry, run the following command:
tanzu secret registry add SECRET-NAME -n NAMESPACE --server REGISTRY-URL --username USERNAME --password PASSWORD
Where:

- `SECRET-NAME` is the name of the registry authentication secret that you want to add.
- `NAMESPACE` is the Tanzu Kubernetes Grid namespace to which the registry belongs.
- `USERNAME` is the username to access the registry. Wrap the username in single quotes if it contains special characters.
- `PASSWORD` is the password to access the registry. Wrap the password in single quotes if it contains special characters. You can also specify the password in the following formats:
Replace the `--password PASSWORD` string in the command with `--password-env-var ENV-VAR` to specify the password through an environment variable that you have already configured. The format of the command is as follows:
tanzu secret registry add SECRET-NAME -n NAMESPACE --server REGISTRY-URL --username USERNAME --password-env-var ENV-VAR
Replace the `--password PASSWORD` string in the command with `--password-stdin` to specify the password through standard input, and enter the password when prompted. The format of the command is as follows:
tanzu secret registry add SECRET-NAME -n NAMESPACE --server REGISTRY-URL --username USERNAME --password-stdin
Replace the `--password PASSWORD` string in the command with `--password-file PASSWORD-FILE` to specify the password through a password file. The format of the command is as follows:
tanzu secret registry add SECRET-NAME -n NAMESPACE --server REGISTRY-URL --username USERNAME --password-file PASSWORD-FILE
Optionally, to make the registry secret available across all namespaces in a cluster, use the `--export-to-all-namespaces` option as shown in the following format:
tanzu secret registry add SECRET-NAME -n NAMESPACE --server REGISTRY-URL --username USERNAME --password PASSWORD --export-to-all-namespaces
The following is an example of this command and its output:
tanzu secret registry add tanzu-net -n test-ns --server projects.registry.vmware.com --username test-user --password-file pass-file --export-to-all-namespaces
Warning: By choosing --export-to-all-namespaces, given secret contents will be available to ALL users in ALL namespaces. Please ensure that included registry credentials allow only read-only access to the registry with minimal necessary scope.
/ Adding registry secret 'tanzu-net'...
Added registry secret 'tanzu-net' into namespace 'test-ns'
View Registry Secrets

You can view the registry authentication secrets in the default namespace or in all namespaces in a cluster. You can view the secrets as a table or in JSON or YAML format.
To view the registry authentication secrets in a specific namespace in a cluster, run the following:
tanzu secret registry list -n NAMESPACE
Where `NAMESPACE` is the Tanzu Kubernetes Grid namespace to which the registry belongs.
The following is an example of this command:
tanzu secret registry list -n test-ns
/ Retrieving registry secrets...
NAME REGISTRY EXPORTED AGE
pkg-dev-reg projects.registry.vmware.com to all namespaces 15d
To view the list of registry authentication secrets in all the namespaces in a cluster, run the following:
tanzu secret registry list -A
The following is an example of this command:
tanzu secret registry list -A
\ Retrieving registry secrets...
NAME REGISTRY EXPORTED AGE NAMESPACE
pkg-dev-reg projects.registry.vmware.com to all namespaces 15d test-ns
tanzu-standard-fetch-0 projects.registry.vmware.com not exported 15d tanzu-package-repo-global
private-repo-fetch-0 projects.registry.vmware.com not exported 15d test-ns
antrea-fetch-0 projects.registry.vmware.com not exported 15d tkg-system
metrics-server-fetch-0 projects.registry.vmware.com not exported 15d tkg-system
tanzu-addons-manager-fetch-0 projects.registry.vmware.com not exported 15d tkg-system
tanzu-core-fetch-0 projects.registry.vmware.com not exported 15d tkg-system
To view the list of registry authentication secrets in JSON format, run the following command:
tanzu secret registry list -n kapp-controller-packaging-global -o json
The following is an example of this command:
tanzu secret registry list -n kapp-controller-packaging-global -o json
[
{
"age": "15d",
"exported": "to all namespaces",
"name": "pkg-dev-reg",
"registry": "us-east4-docker.pkg.dev"
}
]
To view the list of registry authentication secrets in YAML format, run the following:
tanzu secret registry list -n kapp-controller-packaging-global -o yaml
The following is an example of this command:
tanzu secret registry list -n kapp-controller-packaging-global -o yaml
- age: 15d
exported: to all namespaces
name: pkg-dev-reg
registry: us-east4-docker.pkg.dev
To view the list of registry authentication secrets in a table format, run the following:
tanzu secret registry list -n kapp-controller-packaging-global -o table
The following is an example of this command:
tanzu secret registry list -n kapp-controller-packaging-global -o table
/ Retrieving registry secrets...
NAME REGISTRY EXPORTED AGE
pkg-dev-reg us-east4-docker.pkg.dev to all namespaces 15d
Update a Registry Secret

You can update the credentials in a secret, make a secret available across all namespaces, or make it available in only one namespace in the cluster.
To update the secret in the namespace where it was created, run the following command:
tanzu secret registry update SECRET-NAME --username USERNAME -n NAMESPACE --password PASSWORD
Where:

- `SECRET-NAME` is the name of the registry secret that you want to update.
- `NAMESPACE` is the Tanzu Kubernetes Grid namespace where you are updating the registry authentication secret.
- `USERNAME` is the new username to access the registry (if you want to update the username).
- `PASSWORD` is the new password for the registry (if you want to update the password).

Note: You can update the username, the password, or both.
The following is an example of this command:
tanzu secret registry update test-secret --username test-user -n test-ns --password-env-var PASSENV
\ Updating registry secret 'test-secret'...
Updated registry secret 'test-secret' in namespace 'test-ns'
To update the registry authentication secret and make it available in other namespaces in the cluster, run the following command:
tanzu secret registry update SECRET-NAME --username USERNAME -n NAMESPACE --password PASSWORD --export-to-all-namespaces=true
Where:

- `SECRET-NAME` is the name of the registry secret that you want to update.
- `NAMESPACE` is the Tanzu Kubernetes Grid namespace where you are updating the registry authentication secret.
- `USERNAME` is the username to access the registry. Enter a new username if you want to update the username.
- `PASSWORD` is the password for the registry. Enter a new password if you want to update the password.

The following is an example of this command:
tanzu secret registry update test-secret --username test-user -n test-ns --password-env-var PASSENV --export-to-all-namespaces=true
Warning: By specifying --export-to-all-namespaces as true, given secret contents will be available to ALL users in ALL namespaces. Please ensure that included registry credentials allow only read-only access to the registry with minimal necessary scope.
Are you sure you want to proceed? [y/N]: y
\ Updating registry secret 'test-secret'...
Updated registry secret 'test-secret' in namespace 'test-ns'
Exported registry secret 'test-secret' to all namespaces
To make a registry authentication secret unavailable in other namespaces in the cluster, run the following command:
tanzu secret registry update SECRET-NAME --username USERNAME -n NAMESPACE --password PASSWORD --export-to-all-namespaces=false
Where:

- `SECRET-NAME` is the name of the registry secret that you want to update.
- `NAMESPACE` is the Tanzu Kubernetes Grid namespace where you are updating the registry authentication secret.
- `USERNAME` is the username to access the registry.
- `PASSWORD` is the password for the registry.

The following is an example of this command:
tanzu secret registry update test-secret --username test-user -n test-ns --password-env-var PASSENV --export-to-all-namespaces=false
Warning: By specifying --export-to-all-namespaces as false, the secret contents will get unexported from ALL namespaces in which it was previously available to.
Are you sure you want to proceed? [y/N]: y
\ Updating registry secret 'test-secret'...
Updated registry secret 'test-secret' in namespace 'test-ns'
Unexported registry secret 'test-secret' from all namespaces
Delete a Registry Secret

Using the Tanzu CLI, you can delete a registry authentication secret in a cluster. To delete a registry authentication secret in a specific namespace, run the following command:
tanzu secret registry delete SECRET-NAME -n NAMESPACE
Where:

- `SECRET-NAME` is the name of the registry secret that you want to delete.
- `NAMESPACE` is the Tanzu Kubernetes Grid namespace from which you want to delete the registry authentication secret. If you do not specify a namespace, the authentication secret is deleted from the default namespace. If the secret had been exported to other namespaces in the cluster, it is also deleted from them.

The following is an example of this command:
tanzu secret registry delete test-secret -n test-ns
Deleting registry secret 'test-secret' from namespace 'test-ns'. Are you sure? [y/N]: y
\ Deleting registry secret 'test-secret'...
Deleted registry secret 'test-secret' from namespace 'test-ns'