This section describes the procedure to configure a VMware Tanzu Kubernetes Grid cluster for a secure Harbor registry that is deployed with self-signed certificates.
- If you have created a VMware Tanzu Kubernetes Grid workload cluster with VMware Telco Cloud Automation using the v2 cluster management API, then perform the following action from the VMware Telco Cloud Automation UI:
From VMware Telco Cloud Automation, add the secure Harbor registry in Partner Systems by registering to the available Harbor instance. Use the FQDN with https in the URL field. Select the Trust Certificate checkbox. In the VIM Associations tab, select the workload cluster that you use for the VMware Telco Cloud Service Assurance deployment. To finish the registration, click the Finish button. For more information on how to add a Harbor registry in Partner Systems, see Add a Harbor Repository in the VMware Telco Cloud Automation documentation.
- If you have created a VMware Tanzu Kubernetes Grid workload cluster with VMware Telco Cloud Automation using the v1 cluster management API, you must perform the following steps:
- From VMware Telco Cloud Automation, add the secure Harbor registry in Partner Systems by registering to the available Harbor instance. Use the FQDN with https in the URL field. Select the Trust Certificate checkbox. In the VIM Associations tab, select the workload cluster that you use for the VMware Telco Cloud Service Assurance deployment. To finish the registration, click the Finish button. For more information on how to add a Harbor registry in Partner Systems, see Add a Harbor Repository in the VMware Telco Cloud Automation documentation.
- SSH to one of the CPN nodes of the management cluster, using the same credentials that were provided during the VMware Tanzu Kubernetes Grid management cluster deployment through VMware Telco Cloud Automation. You can find the IPs under the CaaS Infrastructure page in the left navigation: select the management cluster from the list, then select the Control Plane Nodes tab; the Nodes table lists the available CPN nodes.
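The console prompts in this procedure use the capv node user; the following is a minimal SSH sketch, where <cpn-ip> is a hypothetical placeholder for a Control Plane Node IP taken from the CaaS Infrastructure page:
ssh capv@<cpn-ip>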
- After you have logged in, use the following kubectl command to find the kapp-controller instance for the workload cluster that is used for deployment.
capv@small-mgmt-cluster-master-control-plane-nsvtp [ ~ ]$ kubectl get apps -A
NAMESPACE             NAME                                  DESCRIPTION           SINCE-DEPLOY   AGE
tcsa-test             tcsa-test-kapp-controller             Reconcile succeeded   27s            2d6h
tcsa-xlarge-cluster   tcsa-xlarge-cluster-kapp-controller   Canceled/paused       23h            26h
tkg-system            antrea                                Reconcile succeeded   4m43s          34d
tkg-system            metrics-server                        Reconcile succeeded   22s            34d
tkg-system            tanzu-addons-manager                  Reconcile succeeded   5m24s          34d
tkg-system            vsphere-cpi                           Reconcile succeeded   77s            34d
tkg-system            vsphere-csi                           Reconcile succeeded   2m21s          34d
The kapp-controller instance to be updated is listed under the namespace with the same name as the workload cluster, and the app instance is named <workload_cluster_name>-kapp-controller. After you identify the kapp-controller instance for the workload cluster, edit the configuration by using the following command:
kubectl edit app -n <workload_cluster_name> <workload_cluster_name>-kapp-controller
For example:
kubectl edit app -n tcsa-xlarge-cluster tcsa-xlarge-cluster-kapp-controller
Edit the following two properties to these values:
paused: true
syncPeriod: 100000h0s
You can find the properties defined in the following section of the application definition:
spec:
  cluster:
    kubeconfigSecretRef:
      key: value
      name: tcsa-xlarge-cluster-kubeconfig
  deploy:
  - kapp:
      rawOptions:
      - --wait-timeout=30s
  fetch:
  - imgpkgBundle:
      image: projects.registry.vmware.com/tkg/packages/core/kapp-controller:v0.23.0_vmware.1-tkg.1
  noopDelete: true
  paused: true
  syncPeriod: 5m0s
  template:
If the paused property is not already defined, then add it to the spec as shown. Save the changes and exit.
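Alternatively, the two properties can be set non-interactively with kubectl patch instead of an interactive edit; this is a minimal sketch, using the same placeholder cluster name as above:
# Pause reconciliation and stretch the sync interval in one merge patch.
kubectl patch app -n <workload_cluster_name> <workload_cluster_name>-kapp-controller \
  --type merge -p '{"spec":{"paused":true,"syncPeriod":"100000h0s"}}'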
- SSH to one of the CPN nodes of the workload cluster, using the same credentials that were provided during the VMware Tanzu Kubernetes Grid workload cluster deployment through VMware Telco Cloud Automation. You can find the IPs under the CaaS Infrastructure page in the left navigation: select the workload cluster from the list, then select the Control Plane Nodes tab; the Nodes table lists the available CPN nodes. Alternatively, you can use the KUBECONFIG file for the workload cluster to execute kubectl commands against the cluster, as shown below.
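A minimal sketch of the KUBECONFIG alternative; the file path is a hypothetical location for the exported workload cluster kubeconfig:
# Point kubectl at the workload cluster for the remaining steps.
export KUBECONFIG=/path/to/<workload_cluster_name>-kubeconfig
kubectl get nodes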
- After you have logged in, use the following kubectl command to find the kapp-controller configuration map instance used by kapp-controller running on the workload cluster.
[root@tcsa ~]$ kubectl get cm -n tkg-system kapp-controller-config -o yaml | head -n 8
apiVersion: v1
data:
  caCerts: ""
  dangerousSkipTLSVerify: ""
  httpProxy: ""
  httpsProxy: ""
  noProxy: ""
kind: ConfigMap
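If the Harbor CA certificate is not already saved on the node, one way to capture the certificate chain presented by the registry is with openssl. This is a sketch under assumptions: openssl is installed on the node, and for a self-signed setup the CA certificate is typically the last block in the captured chain:
# Capture every PEM certificate presented by the Harbor endpoint.
openssl s_client -showcerts -connect <harbor-registry-fqdn>:443 </dev/null 2>/dev/null \
  | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > /tmp/harbor-chain.pem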
- Update the configuration map definition to add the certificate information to the caCerts property using the following command:
kubectl edit cm -n tkg-system kapp-controller-config
After the update, the caCerts property looks like the following:
[root@tcsa ~]$ kubectl get cm -n tkg-system kapp-controller-config -o yaml | head -n 8
apiVersion: v1
data:
  caCerts: |
    -----BEGIN CERTIFICATE-----
    MIIGNDCCBBygAwIBAgIUeB0MR1bIB3wUlnTGoAs3JYUGcXMwDQYJKoZIhvcNAQEN
    BQAwgYcxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTESMBAGA1UEBwwJUGFsbyBB
    bHRvMQ4wDAYDVQQKDAVUZWxjbzEdMBsGA1UECwwUU29sdXRpb24gRW5naW5lZXJp
    bmcxKDAmBgNVBAMMH2hhYXMtd2d0MS05Ny0xMjAuZW5nLnZtd2FyZS5jb20wHhcN
    <removed some entries>
    ZQK7iLY80tbbSLuxnyrX1Oaq5U9pYsxjiCEt2XVzgOgfaZKUL6kD9U5LhI8Zj1qY
    nE3TsevcNE4LH3OXZqjUvpNhfBbMh2u+Ui3wFiwV0prjBQKeg8MCxBQJCVSmb/en
    q+UD0IwbIlg=
    -----END CERTIFICATE-----
Note: The caCerts value is embedded in a YAML file, and proper indentation and spacing are required to keep the format of the file valid. The "|" character is the first character after the caCerts property name, which denotes a multi-line string. Lastly, there are four spaces of indentation for every line of the certificate string.
If you want to add multiple CA certificates to the kapp-controller configuration, you must use the following format:
  # A cert chain of trusted ca certs. These will be added to the system-wide
  # cert pool of trusted ca's (optional)
  caCerts: |
    -----BEGIN CERTIFICATE-----
    Certificate 1
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    Certificate 2
    -----END CERTIFICATE-----
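If you prefer not to hand-edit the YAML indentation, the same update can be applied non-interactively. This is a minimal sketch, assuming jq is installed and the CA bundle is saved at the hypothetical path /tmp/harbor-chain.pem used in the openssl sketch above:
# jq -Rs encodes the whole PEM file as one JSON string, preserving newlines,
# so the multi-line value lands in caCerts with valid formatting.
kubectl patch cm -n tkg-system kapp-controller-config --type merge \
  -p "{\"data\":{\"caCerts\":$(jq -Rs . < /tmp/harbor-chain.pem)}}"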
- After the configuration map is updated, a restart of the kapp-controller pod is required. Use the following command to restart the pod:
kubectl rollout restart deployment -n tkg-system kapp-controller
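To confirm that the restart completed before you proceed, you can wait on the rollout; the label selector in the second command assumes the upstream app=kapp-controller pod label:
kubectl rollout status deployment -n tkg-system kapp-controller --timeout=120s
kubectl get pods -n tkg-system -l app=kapp-controller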
After a successful restart, you can proceed with the VMware Telco Cloud Service Assurance deployment using the registry information, including the CA certificate property. For example:
REGISTRY_URL=<harbor-registry-fqdn>/<project-name>
REGISTRY_USERNAME=<your-registry-username>
REGISTRY_PASSWORD=<your-registry-password>
REGISTRY_CERTS_PATH=<path-to-Harbor-ca-certificate-file>
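As an optional sanity check before starting the deployment, you can log in to the registry from the deployment host. This sketch assumes the docker CLI is available there; the ${REGISTRY_URL%%/*} expansion strips the project name and leaves only the registry FQDN:
# Log in with the registry FQDN only (the project suffix is removed).
docker login "${REGISTRY_URL%%/*}" -u "${REGISTRY_USERNAME}" -p "${REGISTRY_PASSWORD}"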
Note: The <project-name> specified in the registry URL is automatically created with the same name in the Harbor registry during deployment.
The customization made to the VMware Tanzu Kubernetes Grid management cluster (the paused kapp-controller app) must be reverted whenever the workload cluster must be updated or before you perform any type of maintenance from the VMware Telco Cloud Automation Manager UI, including upgrades and cluster lifecycle management. A minimal revert sketch follows.
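The following revert sketch runs against the management cluster; 5m0s is the default syncPeriod shown in the application definition earlier in this section:
# Unpause the kapp-controller app and restore the default reconcile interval.
kubectl patch app -n <workload_cluster_name> <workload_cluster_name>-kapp-controller \
  --type merge -p '{"spec":{"paused":false,"syncPeriod":"5m0s"}}'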