You can use an external container registry with Tanzu Kubernetes cluster pods. This is an alternative to using the embedded Harbor Registry.

External Private Registry Use Case

Container registries provide a critical function for Kubernetes deployments, serving as a centralized repository for storing and sharing container images. The most commonly used public container registry is DockerHub. There are many private container registry offerings. VMware Harbor is an open-source, cloud native, private container registry. vSphere with Tanzu embeds an instance of Harbor that you can use as the private container registry for vSphere Pods and for pods running on Tanzu Kubernetes clusters. For more information, see Enable the Embedded Harbor Registry on the Supervisor Cluster.

The embedded Harbor Registry that ships with vSphere with Tanzu requires NSX-T networking; if you are using vSphere networking, you cannot use the embedded registry. In addition, you might already be running your own private container registry that you want to integrate with your Tanzu Kubernetes clusters. In this case you can configure the Tanzu Kubernetes Grid Service to trust private registries with self-signed certificates, thereby allowing Kubernetes pods running on Tanzu Kubernetes clusters to use the external registry.

External Private Registry Requirements

To use an external private registry with Tanzu Kubernetes clusters, you must use vSphere with Tanzu version 7 U2 or later.

You can only use your own private registry with Kubernetes pods that run on Tanzu Kubernetes clusters and Tanzu Kubernetes release node VMs. You cannot use your own private registry with vSphere Pods that run natively on ESXi hosts. The supported registry for vSphere Pods is the Harbor Registry that is embedded in the vSphere with Tanzu platform.

Once you configure the Tanzu Kubernetes Grid Service for a private registry, any new cluster that is provisioned will support the private registry. For existing clusters to support the private registry, a rolling update is required to apply the TkgServiceConfiguration. See Update Tanzu Kubernetes Clusters. In addition, the first time you create a custom TkgServiceConfiguration, the system will initiate a rolling update.

External Private Registry Configuration

To use your own private registry with Tanzu Kubernetes clusters, you configure the Tanzu Kubernetes Grid Service with one or more self-signed certificates to serve private registry content over HTTPS.

The TkgServiceConfiguration is updated to support self-signed certificates for the private registry. Specifically, a new trust section with the additionalTrustedCAs field is added, allowing you to define any number of self-signed certificates that Tanzu Kubernetes clusters should trust. This functionality lets you easily define a list of certificates, and update those certificates should they need rotation.

Once the trust.additionalTrustedCAs fields are added, the TLS certificates are applied only to each new cluster that is created. In other words, applying an update to TkgServiceConfiguration.trust.additionalTrustedCAs does not trigger an automatic rolling update of existing Tanzu Kubernetes clusters.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration
spec:
  defaultCNI: antrea
  trust:
    additionalTrustedCAs:
      - name: first-cert-name
        data: base64-encoded string of a PEM encoded public cert 1
      - name: second-cert-name
        data: base64-encoded string of a PEM encoded public cert 2
To apply the update, run the following command.
kubectl apply -f tkgserviceconfiguration.yaml
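The data value must be the base64 encoding of the PEM certificate file as one unbroken line. A minimal sketch of producing that value, using a dummy certificate file (the file name ca.crt and its contents are placeholders for your registry CA's public certificate):

```shell
# "ca.crt" is a hypothetical file name; substitute your registry CA's public
# certificate. A dummy PEM stands in here so the example is self-contained.
cat > ca.crt <<'EOF'
-----BEGIN CERTIFICATE-----
MIIBszCCAV0dummycontentfordemonstrationonly
-----END CERTIFICATE-----
EOF
# The additionalTrustedCAs "data" field expects the base64 of the PEM file
# as a single unbroken line:
base64 < ca.crt | tr -d '\n' > ca.b64
# Sanity check: decoding the value must reproduce the original PEM exactly.
base64 -d < ca.b64 | diff - ca.crt && echo "round-trip OK"
```

Paste the contents of ca.b64 as the data value for the corresponding certificate entry.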

Because the Tanzu Kubernetes Grid Service specification is being updated with the private registry certificates, you do not need to add the public key to the Tanzu Kubernetes cluster kubeconfig as you do when using the embedded Harbor Registry with Tanzu Kubernetes clusters.

Configure a Tanzu Kubernetes Workload to Pull Images from a Private Container Registry

To pull images from a private container registry for a Tanzu Kubernetes cluster workload, configure the workload YAML with the private registry details.

This procedure can be used to pull images from an external private container registry or from the embedded Harbor Registry. In this example, the pod specification uses an image stored in the embedded Harbor Registry and references the image pull secret that you configured previously.
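The imagePullSecrets field in the pod specification references a registry credential secret that must already exist in the pod's namespace. If you need to create one manually for an external registry, a docker-registry secret can be sketched as follows; all names and the credential payload here are hypothetical:

```yaml
# Hypothetical names throughout; adjust to your registry and namespace.
apiVersion: v1
kind: Secret
metadata:
  name: registry-secret-name        # referenced by imagePullSecrets
  namespace: default                # must match the pod's namespace
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config JSON containing the registry credentials,
  # for example: {"auths":{"10.0.0.1":{"username":"...","password":"..."}}}
  .dockerconfigjson: <base64-encoded-docker-config-json>
```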
  1. Create an example pod spec with the details about the private registry.
    apiVersion: v1
    kind: Pod
    metadata:
      name: <workload-name>
      namespace: <kubernetes-namespace>
    spec:
      containers:
      - name: private-reg-container
        image: <Registry-IP-Address>/<vsphere-namespace>/<image-name>:<version>
      imagePullSecrets:
      - name: <registry-secret-name>
    • Replace <workload-name> with the name of the pod workload.
    • Replace <kubernetes-namespace> with the Kubernetes namespace in the cluster where the pod will be created. This must be the same Kubernetes namespace where the Registry Service image pull secret is stored in the Tanzu Kubernetes cluster (such as the default namespace).
    • Replace <Registry-IP-Address> with the IP address for the embedded Harbor Registry instance running on the Supervisor Cluster.
    • Replace <vsphere-namespace> with the vSphere Namespace where the target Tanzu Kubernetes cluster is provisioned.
    • Replace <image-name> with an image name of your choice.
    • Replace <version> with an appropriate version of the image, such as "latest".
    • Replace <registry-secret-name> with the name of the Registry Service image pull secret that you created previously.
  2. Create a workload in the Tanzu Kubernetes cluster based on the pod specification you defined.
    kubectl --kubeconfig=<path>/cluster-kubeconfig apply -f <pod.yaml>

    The pod should be created from the image pulled from the registry.

Trust Fields for External Private Registries

Add a certificate entry (the base64-encoded string of a PEM-encoded public certificate) to the additionalTrustedCAs section in the TkgServiceConfiguration. The certificates are public and are stored in the TkgServiceConfiguration as plain text.

Table 1. Trust Fields for Private Registries
Field Description
trust Section marker. Accepts no data.
additionalTrustedCAs Section marker. Includes an array of certificates with a name and data for each.
name The name of the TLS certificate.
data The base64-encoded string of a PEM encoded public certificate.

Removing External Private Registry Certificates

Remove a certificate from the list of certificates in the additionalTrustedCAs section in the TkgServiceConfiguration and apply the TkgServiceConfiguration to the Tanzu Kubernetes Grid Service.
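For example, assuming the two hypothetical certificates shown earlier, removing second-cert-name means applying a specification whose list contains only the remaining entry:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration
spec:
  defaultCNI: antrea
  trust:
    additionalTrustedCAs:
      - name: first-cert-name
        data: base64-encoded string of a PEM encoded public cert 1
```

Applying this specification replaces the previous list; certificates absent from additionalTrustedCAs are no longer distributed to newly created clusters.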

Rotating External Private Registry Certificates

Changing the TkgServiceConfiguration does not trigger an automatic rolling update of existing Tanzu Kubernetes clusters. However, when a rolling update of a cluster is triggered (by manually changing the cluster specification, or as a byproduct of an upgrade), the changes made to the TkgServiceConfiguration are applied.

Thus, to rotate the TLS certificates, a VI Admin or DevOps Engineer can change the contents of the certificate fields in the additionalTrustedCAs section of the TkgServiceConfiguration. The next time a rolling update is triggered, the changes will be applied. Alternatively, the TLS certificates can be edited directly in the Tanzu Kubernetes cluster specification itself and applied, thereby triggering a rolling update.
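To judge whether rotation is due, the certificate's validity window can be inspected with openssl. The sketch below generates a throwaway self-signed CA so it is self-contained; in practice, run the last command against your registry CA's actual PEM file (ca.crt is a hypothetical name):

```shell
# Illustration only: create a throwaway self-signed CA (hypothetical subject).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 365 -subj "/CN=Example Registry CA" 2>/dev/null
# Print the notBefore/notAfter window; rotate before notAfter passes.
openssl x509 -in ca.crt -noout -dates
```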

Troubleshooting External Private Registry Certificates

If you configure the Tanzu Kubernetes Grid Service with the certificates to trust, and you add the self-signed certificate to the cluster kubeconfig, you should be able to successfully pull a container image from a private registry that uses that self-signed certificate.

The following command can help you determine if the container image has been successfully pulled for a Pod workload:

kubectl describe pod PODNAME

This command shows detailed status and error messages for a given pod. An example of attempting to pull an image before adding custom certificates to the cluster:

Events:
  Type     Reason                        Age               From               Message
  ----     ------                        ----              ----               -------
  Normal   Scheduled                     33s               default-scheduler  ...
  Normal   Image                         32s               image-controller   ...
  Normal   Image                         15s               image-controller   ...
  Normal   SuccessfulRealizeNSXResource  7s (x4 over 31s)  nsx-container-ncp  ...
  Normal   Pulling                       7s                kubelet            Waiting test-gc-e2e-demo-ns/testimage-8862e32f68d66f727d1baf13f7eddef5a5e64bbd-v10612
  Warning  Failed                        4s                kubelet            failed to get images: ... Error: ... x509: certificate signed by unknown authority
When you run the following command:
kubectl get pods
The ErrImagePull error is also visible in the overall Pod status view:
NAME                                         READY   STATUS         RESTARTS   AGE
testimage-nginx-deployment-89d4fcff8-2d9pz   0/1     Pending        0          17s
testimage-nginx-deployment-89d4fcff8-7kp9d   0/1     ErrImagePull   0          79s
testimage-nginx-deployment-89d4fcff8-7mpkj   0/1     Pending        0          21s
testimage-nginx-deployment-89d4fcff8-fszth   0/1     ErrImagePull   0          50s
testimage-nginx-deployment-89d4fcff8-sjnjw   0/1     ErrImagePull   0          48s
testimage-nginx-deployment-89d4fcff8-xr5kg   0/1     ErrImagePull   0          79s
The errors “x509: certificate signed by unknown authority” and “ErrImagePull” indicate that the cluster is not configured with the correct certificate to connect to the private container registry. Either the certificate is missing, or it is misconfigured.

If you experience errors connecting to a private registry after configuring the certificates, you can check whether the certificates in the TkgServiceConfiguration were properly applied to the cluster by using SSH.

Two investigative steps can be done by connecting to a worker node over SSH.
  1. Check the folder /etc/ssl/certs/ for files named tkg-<cert_name>.pem, where <cert_name> is the "name" property of the certificate added in the TkgServiceConfiguration. If the certificates match what is in the TkgServiceConfiguration, and using a private registry still does not work, diagnose further by completing the next step.
  2. Run an openssl connection test against the target server by executing the command openssl s_client -connect hostname:port_num, where hostname is the host name or DNS name of the private registry that uses self-signed certificates, and port_num is the port the service listens on (usually 443 for HTTPS). The output shows the exact error openssl returns when connecting to the endpoint, so you can remedy the situation from there, for example, by adding the correct certificates to the TkgServiceConfiguration. If the Tanzu Kubernetes cluster is embedded with the wrong certificate, update the Tanzu Kubernetes Grid Service configuration with the correct certificates, delete the Tanzu Kubernetes cluster, and recreate it using the configuration that contains the correct certificates.
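If the registry endpoint is not reachable from your workstation, the trust relationship itself can still be sanity-checked offline with openssl verify. The sketch below fabricates a throwaway CA and a server certificate signed by it (all names are hypothetical); a failure of this check corresponds to the "x509: certificate signed by unknown authority" pull error:

```shell
# Illustration only: build a throwaway CA and a server certificate signed by
# it, then verify the chain the same way TLS validation would.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 365 -subj "/CN=Example Private Registry CA" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=registry.example.com" 2>/dev/null
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 365 2>/dev/null
# Prints "server.crt: OK" only when server.crt chains to ca.crt; verifying
# against the wrong CA fails, just as an image pull against an untrusted
# registry does.
openssl verify -CAfile ca.crt server.crt
```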