IMPORTANT: The Tanzu Kubernetes Grid Connectivity API has been removed in TKG v1.3. It is strongly recommended to upgrade to Tanzu Kubernetes Grid v1.3 before deploying Harbor Registry as a Shared Service, to avoid using the Connectivity API.

Harbor is an open source, trusted, cloud native container registry that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionality that users typically require, such as security, identity, and management.

Tanzu Kubernetes Grid includes signed binaries for Harbor, which you can deploy on a shared services cluster to provide container registry services for other Tanzu Kubernetes clusters. Unlike Tanzu Kubernetes Grid extensions, which you use to deploy services on individual clusters, you deploy Harbor as a shared service. In this way, Harbor is available to all of the Tanzu Kubernetes clusters in a given Tanzu Kubernetes Grid instance. To implement Harbor as a shared service, you deploy it on a special cluster that is reserved for running shared services in a Tanzu Kubernetes Grid instance.

You can use the Harbor shared service as a private registry for images that you want to make available to all of the Tanzu Kubernetes clusters that you deploy from a given management cluster. An advantage of using the Harbor shared service is that it is managed by Kubernetes, so it provides greater reliability than a standalone registry. Also, the Harbor implementation that Tanzu Kubernetes Grid provides as a shared service has been tested for use with Tanzu Kubernetes Grid and is fully supported.

The procedures in this topic all apply to vSphere, Amazon EC2, and Azure deployments.

Using the Harbor Shared Service in Internet-Restricted Environments

Another use-case for deploying Harbor as a shared service is for Tanzu Kubernetes Grid deployments in Internet-restricted environments. For more information, see Using the Harbor Shared Service in Internet-Restricted Environments.

Prerequisites

IMPORTANT: The extension bundle tkg-extensions-v1.2.0+vmware.1 contains subfolders for each type of extension, for example, authentication, ingress, registry, and so on. At the top level of the bundle there is an additional subfolder named extensions, which itself also contains subfolders for authentication, ingress, registry, and so on. Take care to run commands from the location provided in the instructions. Commands are usually run from within the extensions folder.

Prepare a Shared Services Cluster for Harbor Deployment

Each Tanzu Kubernetes Grid instance can only have one shared services cluster. You must deploy the Harbor extension on a cluster that will only be used for shared services.

To prepare a shared services cluster on which to run the Harbor extension:

  1. Deploy a Tanzu Kubernetes cluster to use for shared services.

    For example, deploy a cluster named tkg-services. Because the cluster will provide services to all of the other clusters in the instance, it is recommended to use the prod cluster plan rather than the dev plan.

    vSphere:

    tkg create cluster tkg-services --plan prod --vsphere-controlplane-endpoint <WORKLOAD_CLUSTER_IP_OR_FQDN>
    

    Replace <WORKLOAD_CLUSTER_IP_OR_FQDN> with a static virtual IP (VIP) address for the control plane of the shared services cluster. Ensure that this IP address is not in the DHCP range, but is in the same subnet as the DHCP range. If you mapped a fully qualified domain name (FQDN) to the VIP address, you can specify the FQDN instead of the VIP address.

    Amazon EC2 and Azure:

    tkg create cluster tkg-services --plan prod
    

    Throughout the rest of these procedures, the cluster that you just deployed is referred to as the shared services cluster.

  2. In a terminal, navigate to the folder that contains the unpacked Tanzu Kubernetes Grid extension manifest files, tkg-extensions-v1.2.0+vmware.1/extensions.

    cd <path>/tkg-extensions-v1.2.0+vmware.1/extensions
    

    You should see folders for authentication, ingress, logging, monitoring, registry, and some YAML files. Run all of the commands in this procedure from this location.

  3. Get the credentials of the shared services cluster on which to deploy Harbor.

    tkg get credentials tkg-services
    
  4. Set the context of kubectl to the shared services cluster.

    kubectl config use-context tkg-services-admin@tkg-services
    
  5. If you have not already done so, install the required components on the shared services cluster by following the procedure Installing Extension Prerequisite Components to a Cluster.

  6. Deploy the Contour extension on the shared services cluster.

    The Harbor extension requires the Contour extension to be present on the cluster, to provide ingress control. For information about deploying the Contour extension, see Implementing Ingress Control with Contour.

Your shared services cluster is now ready for you to deploy the Harbor extension.

Deploy Harbor Extension on the Shared Services Cluster

After you have deployed a shared services cluster and deployed the Contour extension on it, you can deploy the Harbor extension.

  1. Create a namespace for the Harbor extension on the shared services cluster.

    kubectl apply -f registry/harbor/namespace-role.yaml
    

    You should see confirmation that a tanzu-system-registry namespace, service account, and RBAC role bindings are created.

    namespace/tanzu-system-registry created
    serviceaccount/harbor-extension-sa created
    role.rbac.authorization.k8s.io/harbor-extension-role created
    rolebinding.rbac.authorization.k8s.io/harbor-extension-rolebinding created
    clusterrole.rbac.authorization.k8s.io/harbor-extension-cluster-role created
    clusterrolebinding.rbac.authorization.k8s.io/harbor-extension-cluster-rolebinding created
    
  2. Make a copy of the harbor-data-values.yaml.example file and name it harbor-data-values.yaml.

    cp registry/harbor/harbor-data-values.yaml.example registry/harbor/harbor-data-values.yaml
    
  3. Set the mandatory passwords and secrets in harbor-data-values.yaml.

    You can do this in one of two ways:

    • To automatically generate random passwords and secrets, run the following command:
      bash registry/harbor/generate-passwords.sh registry/harbor/harbor-data-values.yaml
      
    • To manually set your own passwords and secrets, update the following entries in harbor-data-values.yaml:
      • harborAdminPassword
      • secretKey
      • database.password
      • core.secret
      • core.xsrfKey
      • jobservice.secret
      • registry.secret
  4. Specify other settings in harbor-data-values.yaml.

    • Set the hostname setting to the hostname you want to use to access Harbor. For example: harbor.system.tanzu.
    • To use your own certificates, update the tls.crt, tls.key, and ca.crt settings with the contents of your certificate, key, and CA certificate. The certificate can be signed by a trusted authority or be self-signed. If you leave these blank, Tanzu Kubernetes Grid automatically generates a self-signed certificate.
    • If you used the generate-passwords.sh script, optionally update harborAdminPassword to something that is easier to remember.
    • Optionally update the persistence settings to specify how Harbor stores data.

      If you need to store a large quantity of container images in Harbor, set persistence.persistentVolumeClaim.registry.size to a larger number.

      If you do not update the storageClass under the persistence settings, Harbor uses the shared services cluster's default storageClass. If the default storageClass, or a storageClass that you specify in harbor-data-values.yaml, supports the accessMode ReadWriteMany, you must update the persistence.persistentVolumeClaim accessMode settings for registry, jobservice, database, redis, and trivy from ReadWriteOnce to ReadWriteMany. vSphere 7 with VMware vSAN 7 supports accessMode: ReadWriteMany, but vSphere 6.7u3 does not. If you are using vSphere 7 without vSAN, or you are using vSphere 6.7u3, use the default value ReadWriteOnce. An illustrative excerpt of these settings follows this list.

    • Optionally update the other Harbor settings. The settings that are available in harbor-data-values.yaml are a subset of the settings that you set when deploying open source Harbor with Helm. For information about the other settings that you can configure, see Deploying Harbor with High Availability via Helm in the Harbor documentation.
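
    As a reference, the following excerpt sketches how these settings might look in harbor-data-values.yaml. The values shown here are placeholders; the full set of keys is defined in the harbor-data-values.yaml.example file in the extension bundle.

    hostname: harbor.system.tanzu
    persistence:
      persistentVolumeClaim:
        registry:
          accessMode: ReadWriteOnce   # change to ReadWriteMany only if the storageClass supports it
          size: 100Gi                 # placeholder; increase if you need to store many images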

  5. Create a Kubernetes secret named harbor-data-values with the values that you set in harbor-data-values.yaml.

    kubectl create secret generic harbor-data-values --from-file=values.yaml=registry/harbor/harbor-data-values.yaml -n tanzu-system-registry
    
  6. Deploy the Harbor extension.

    kubectl apply -f registry/harbor/harbor-extension.yaml
    

    You should see the confirmation extension.clusters.tmc.cloud.vmware.com/harbor created.

  7. View the status of the Harbor extension.

    kubectl get extension harbor -n tanzu-system-registry
    

    You should see information about the Harbor extension.

    NAME     STATE   HEALTH   VERSION
    harbor   3
    
  8. View the status of the Harbor service itself.

    kubectl get app harbor -n tanzu-system-registry
    

    The status of the Harbor app should show Reconcile succeeded when the Harbor extension has been deployed successfully.

    NAME     DESCRIPTION           SINCE-DEPLOY   AGE
    harbor   Reconcile succeeded   111s           5m11s
    
  9. If the status is not Reconcile succeeded, view the full status details of the Harbor service.

    Viewing the full status can help you to troubleshoot the problem.

    kubectl get app harbor -n tanzu-system-registry -o yaml
    
  10. Check that the new services are running by listing all of the pods that are running in the cluster.

    kubectl get pods -A
    

    In the tanzu-system-registry namespace, you should see the harbor core, clair, database, jobservice, notary, portal, redis, registry, and trivy services running in pods with names similar to harbor-registry-76b6ccbc75-vj4jv.

    NAMESPACE                 NAME                                                               READY   STATUS    RESTARTS   AGE
    [...]
    tanzu-system-ingress      contour-59687cc5ff-7mjdr                                           1/1     Running   0          1h
    tanzu-system-ingress      contour-59687cc5ff-dwv7t                                           1/1     Running   0          1h
    tanzu-system-ingress      envoy-pjwhm                                                        1/2     Running   0          1h
    tanzu-system-registry     harbor-clair-7d87449c88-xmsgr                                      2/2     Running   1          5m54s
    tanzu-system-registry     harbor-core-845b69754d-flc8k                                       1/1     Running   0          5m54s
    tanzu-system-registry     harbor-database-0                                                  1/1     Running   0          5m53s
    tanzu-system-registry     harbor-jobservice-864dfd7f57-bx8bq                                 1/1     Running   0          5m53s
    tanzu-system-registry     harbor-notary-server-5b959686cd-68sg5                              1/1     Running   2          5m53s
    tanzu-system-registry     harbor-notary-signer-56bcf9846b-g6q99                              1/1     Running   2          5m52s
    tanzu-system-registry     harbor-portal-84dd9c64-fhmh4                                       1/1     Running   0          5m52s
    tanzu-system-registry     harbor-redis-0                                                     1/1     Running   0          5m52s
    tanzu-system-registry     harbor-registry-76b6ccbc75-vj4jv                                   2/2     Running   0          5m52s
    tanzu-system-registry     harbor-trivy-0                                                     1/1     Running   0          5m51s
    vmware-system-tmc         extension-manager-d7cc7fcbb-lw4q4                                  1/1     Running   0          1h
    vmware-system-tmc         kapp-controller-7c98dff676-jm77p                                   1/1     Running   0          1h
    
  11. Obtain the Harbor CA certificate from the harbor-tls secret in the tanzu-system-registry namespace.

    kubectl -n tanzu-system-registry get secret harbor-tls -o=jsonpath="{.data.ca\.crt}" | base64 -d
    

    Make a copy of the output. You will need this certificate in Install the Tanzu Kubernetes Grid Connectivity API on the Management Cluster.
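
    For convenience, you can redirect the decoded output straight into a local file; the file name used here is only an example.

    kubectl -n tanzu-system-registry get secret harbor-tls -o=jsonpath="{.data.ca\.crt}" | base64 -d > harbor-ca.crt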

The Harbor extension is running in the shared services cluster. You must now apply a label to this cluster to identify it as a provider of a shared service.

Identify the Cluster as a Shared Service Cluster

After you have deployed Harbor on the shared services cluster, you must apply the tanzu-services label to it, to inform the management cluster and other Tanzu Kubernetes clusters that this is the shared services cluster for the Tanzu Kubernetes Grid instance.

  1. Set the context of kubectl to the context of your management cluster.

    For example, if your cluster is named mgmt-cluster, run the following command.

    kubectl config use-context mgmt-cluster-admin@mgmt-cluster
    
  2. Set a cluster-role label on the shared services cluster to mark this Tanzu Kubernetes cluster as a shared services cluster.

    kubectl label cluster.cluster.x-k8s.io/tkg-services cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
    

    You should see the confirmation cluster.cluster.x-k8s.io/tkg-services labeled.

  3. Check that the label has been correctly applied by running the tkg get cluster command.

    tkg get cluster --include-management-cluster
    

    You should see that the tkg-services cluster has the tanzu-services role.

     NAME              NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
     another-cluster   default     running  1/1           1/1      v1.19.3+vmware.1  <none>
     tkg-services      default     running  3/3           3/3      v1.19.3+vmware.1  tanzu-services
     mgmt-cluster      tkg-system  running  3/3           3/3      v1.19.3+vmware.1  management
    

The management cluster and the other Tanzu Kubernetes clusters that it manages now identify this cluster as the provider of shared services for the Tanzu Kubernetes Grid instance.

Install the Tanzu Kubernetes Grid Connectivity API on the Management Cluster

Before you can use the Harbor instance that you have deployed as a shared service, you must ensure that you can connect to it from the Kubernetes nodes running on the Tanzu Kubernetes clusters. To do this, you can deploy the Tanzu Kubernetes Grid Connectivity API on the management cluster. The connectivity API connects Kubernetes nodes that are running in Tanzu Kubernetes clusters to the Harbor shared service. It also injects the Harbor CA certificate and other connectivity API related configuration into Tanzu Kubernetes cluster nodes.

  1. On the system that you use as the bootstrap machine, go to https://www.vmware.com/go/get-tkg and log in with your My VMware credentials.
  2. Under Product Downloads, click Go to Downloads.
  3. Scroll to tkg-connectivity-manifests-v1.2.0-vmware.2.tar.gz under VMware Tanzu Kubernetes Grid Manifests and click Download Now.
  4. Use either the tar command or the extraction tool of your choice to unpack the bundle of files for the Tanzu Kubernetes Grid connectivity API.

    tar -xzf tkg-connectivity-manifests-v1.2.0-vmware.2.tar.gz

    For convenience, unpack the bundle in the same location as the one from which you run tkg and kubectl commands. The bundle unpacks to a folder named manifests.

  5. Open the file manifests/tanzu-registry/values.yaml in a text editor.

  6. Update values.yaml with information about your Harbor deployment and how clusters should connect to it.

    • registry.enabled: If set to true, connectivity to Harbor is configured on all node machines in the Tanzu Kubernetes Grid instance by enabling the tanzu-registry-webhook. If not specified or set to false, connectivity is not configured; you might leave this disabled for debugging purposes.
    • registry.dnsOverride:
      • If set to true, the DNS entry for the Harbor FQDN is overwritten by injecting entries into /etc/hosts of every node machine. This is helpful when you cannot resolve the Harbor FQDN from the node machines. When set to true, registry.vip cannot be empty.
      • If not specified or set to false, the DNS entry for the Harbor FQDN is used, and is expected to be resolvable from the node machines.
    • registry.fqdn: Set the FQDN to harbor.system.tanzu, or whatever FQDN you specified as the hostname in the harbor-data-values.yaml file in Deploy Harbor Extension on the Shared Services Cluster.
    • registry.vip: Set an IP address to use to connect to the shared services cluster. If you set registry.bootstrapProxy to true, registry.vip can be any address that does not conflict with other addresses in your environment, for example 1.2.3.4. If you set registry.bootstrapProxy to false, registry.vip should be a routable address in the network, for example when you have an existing Load Balancer VIP in front of Harbor.
    • registry.bootstrapProxy:
      • If set to true, iptables rules are added that proxy connections to the VIP to the Harbor cluster. The iptables rules are created only for bootstrapping, when the kube-xx images are pulled from the registry during cluster creation. The iptables rules persist for as long as the cluster is running, so make sure that registry.vip does not conflict with any IP ranges that will be used in your environment, for example, a VIP range that is allocated for load balancers.
      • If set to false, the IP address specified in registry.vip should be a routable address in the network, for example when you have an existing Load Balancer VIP in front of Harbor.
    • registry.rootCA: Paste in the contents of the Harbor CA certificate that you obtained in Deploy Harbor Extension on the Shared Services Cluster. Make sure that the CA contents are indented by exactly 4 spaces.

    For example:

    #@data/values
    ---
    image: registry.tkg.vmware.run/tkg-connectivity/tanzu-registry-webhook:v1.2.0_vmware.2
    imagePullPolicy: IfNotPresent
    registry:
      enabled: "true"
      dnsOverride: "false"
      fqdn: "harbor.system.tanzu"
      vip: "1.2.3.4"
      bootstrapProxy: "true"
      rootCA: |
        -----BEGIN CERTIFICATE-----
        MIIDOzCCAiOgAwIBAgIQYON5MUM9BV9KBMLGkPUv1TANBgkqhkiG9w0BAQsFADAt
        MRcwFQYDVQQKEw5Qcm9qZWN0IEhhcmJvcjESMBAGA1UEAxMJSGFyYm9yIENBMB4X
        DTIwMTAxNDE2MDUzN1oXDTMwMTAxMjE2MDUzN1owLTEXMBUGA1UEChMOUHJvamVj
        dCBIYXJib3IxEjAQBgNVBAMTCUhhcmJvciBDQTCCASIwDQYJKoZIhvcNAQEBBQAD
        ggEPADCCAQoCggEBAMyo6F/yYjIcHpriLYFWwkD0/oRO2rAObVBB5TguG3wrk1mk
        lT0nRFX0RPLCRhnTKbZusQERchkL1fcT6utPHmG+Aqv7MfnZB6Dpm8kbodl3REZ5
        Je5JPCtGH8pwMBPd5YcHWltUefaEExccjngtdEjRMx8So75FQQVlkF5Q7m+++FZG
        iSTpeX/3nUd3CiIYMxTBqgr32tDXQV2EKs+JxG2AVqq3s7AQkXCmCDsRGsYKDzC9
        NYGpcHMLqxAnXxi0RE/e6rXVo9MxjSfRFs0zBcUpL3wv7aAclwSegiYzhLFgonvU
        gfLLkKZKUxVKtW0+lBV7VSfwEq8i7qDYduwu+9MCAwEAAaNXMFUwDgYDVR0PAQH/
        BAQDAgIEMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8E
        BTADAQH/MBMGA1UdEQQMMAqCCGhhcmJvcmNhMA0GCSqGSIb3DQEBCwUAA4IBAQC9
        G18HUVoLcXtaKHpVgOlS0ESxDE6kqIF9bP0Jt0+6JpoYBDXWfYqq2jWGAxA/mA0L
        PNvkMMQ/cMNAdWCdRgqd/SqtXyhUXpYfO/4NAyCB0DADNwknYrq1wiTUCxFrCDZY
        fhwUoSW1m7T41feZ1cxpn8j9nHIvIYfFwMXQzm7qBqj4Lu0Ocl3LQbvXHqcuRMFz
        9/w+GwBfrlgelZlZN+f+Kar2C7Izz2HXxyhl+OhqjDpZOQKofSwpetp/bNqzYd61
        wzI/VC1mWvdP1Z28yuvpnOvosEf7tvEvCY2A4Rh/mukFg5wMKknh/KwW7Hsc2L7n
        O/dBaDrBpUksTNXf66wg
        -----END CERTIFICATE-----
    
  7. In a terminal, navigate to the location that contains the manifests folder.

  8. Run the following command to install the tanzu-registry-webhook on the management cluster. This webhook is bundled with the Tanzu Kubernetes Grid connectivity API components.

    Make sure that the context of kubectl is still set to the management cluster, then run the following command.

    ytt --ignore-unknown-comments -f manifests/tanzu-registry | kubectl apply -f -
    

    You should see confirmation of the creation of the tanzu-system-registry namespace, the tanzu-registry-webhook, services, service accounts, and role bindings on the management cluster.

    namespace/tanzu-system-registry created
    issuer.cert-manager.io/tanzu-registry-webhook created
    certificate.cert-manager.io/tanzu-registry-webhook-certs created
    configmap/tanzu-registry-configuration created
    service/tanzu-registry-webhook created
    serviceaccount/tanzu-registry-webhook created
    clusterrolebinding.rbac.authorization.k8s.io/tanzu-registry-webhook created
    clusterrole.rbac.authorization.k8s.io/tanzu-registry-webhook created
    deployment.apps/tanzu-registry-webhook created
    mutatingwebhookconfiguration.admissionregistration.k8s.io/tanzu-registry-webhook created
    

    If you already have a routable Load Balancer VIP that can be used to access Harbor, you do not need the full TKG connectivity API component installed. The only part that you need to deploy is this tanzu-registry-webhook, to ensure that the Harbor CA certificate is injected, or to inject entries into /etc/hosts of every node machine if you need to override the DNS. In this case, you can jump ahead to Connect to the Harbor User Interface and skip the next steps that install the TKG connectivity operator. The labels and annotations on the Harbor httpproxy are also not required in this scenario.

  9. Deploy the Tanzu Kubernetes Grid connectivity operator on the management cluster.

    ytt -f manifests/tkg-connectivity-operator | kubectl apply -f -
    

    You should see confirmation of the creation of the tanzu-system-connectivity namespace and the tkg-connectivity-operator deployment.

    namespace/tanzu-system-connectivity created
    serviceaccount/tkg-connectivity-operator created
    deployment.apps/tkg-connectivity-operator created
    clusterrolebinding.rbac.authorization.k8s.io/tkg-connectivity-operator created
    clusterrole.rbac.authorization.k8s.io/tkg-connectivity-operator created
    configmap/tkg-connectivity-docker-image-config created
    
  10. Set the context of kubectl back to the shared services cluster on which you deployed Harbor.

    kubectl config use-context tkg-services-admin@tkg-services
    
  11. Annotate an HTTP proxy resource for Harbor with the IP address that you specified in registry.vip in manifests/tanzu-registry/values.yaml.

    For example, if you set registry.bootstrapProxy to true and registry.vip to 1.2.3.4, run the following command:

    kubectl -n tanzu-system-registry annotate httpproxy harbor-httpproxy connectivity.tanzu.vmware.com/vip=1.2.3.4
    

    Then, export the HTTP proxy resource for the Harbor service by applying a label.

    kubectl -n tanzu-system-registry label httpproxy harbor-httpproxy connectivity.tanzu.vmware.com/export=
    

    Setting this annotation and label enables the creation of a service in the shared services cluster with this VIP as the load balancer IP address. Container images for newly created workload clusters are pulled through this channel, so you could potentially use a different IP address in the annotation.

The Tanzu connectivity API and webhook are now running in your management cluster. The Tanzu Kubernetes cluster on which Harbor is running is now recognized as the shared services cluster for this Tanzu Kubernetes Grid instance. Connections to the registry's virtual IP address are proxied to the Harbor shared services cluster.

Connect to the Harbor User Interface

The Harbor UI is exposed via the Envoy service load balancer that is running in the Contour extension on the shared services cluster. To allow users to connect to the Harbor UI, you must map the address of the Envoy service load balancer to the hostname of the Harbor extension, for example harbor.system.tanzu. How you map the address of the Envoy service load balancer to the hostname depends on whether your Tanzu Kubernetes Grid instance is running on vSphere, Amazon EC2, or Azure.

  1. Obtain the address of the Envoy service load balancer.

    kubectl get svc envoy -n tanzu-system-ingress -o jsonpath='{.status.loadBalancer.ingress[0]}'
    

    On vSphere, the Envoy service is exposed via a NodePort instead of LoadBalancer, so the above output is empty, and you can use the IP address of any worker node in the shared services cluster instead. On Amazon EC2, the Envoy service has an FQDN similar to a82ebae93a6fe42cd66d9e145e4fb292-1299077984.us-west-2.elb.amazonaws.com. On Azure, it has an IP address similar to 20.54.226.44.
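
    On vSphere, one way to list node names together with their internal IP addresses, so that you can pick a worker node address, is the following command. This is a convenience sketch, run with kubectl still pointing at the shared services cluster; it is not part of the original procedure.

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'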

  2. Map the address of the Envoy service load balancer to the hostname of the Harbor extension.

    • vSphere: If you deployed Harbor on a shared services cluster that is running on vSphere, you must add an IP address-to-hostname mapping in /etc/hosts or add corresponding A records in your DNS server. For example, if the IP address is 10.93.9.100, add the following to /etc/hosts:

      10.93.9.100 harbor.system.tanzu notary.harbor.system.tanzu
      

      On Windows machines, the equivalent of /etc/hosts is C:\Windows\System32\Drivers\etc\hosts.

    • Amazon EC2 or Azure: If you deployed Harbor on a shared services cluster that is running on Amazon EC2 or Azure, you must create two DNS CNAME records (on Amazon EC2) or two DNS A records (on Azure) for the Harbor hostnames on a DNS server on the Internet.

      • One record for the Harbor hostname, for example, harbor.system.tanzu, that you configured in harbor-data-values.yaml, that points to the FQDN or IP of the Envoy service load balancer.
      • Another record for the Notary service that is running in Harbor, for example, notary.harbor.system.tanzu, that points to the FQDN or IP of the Envoy service load balancer.

You can now connect to the Harbor UI by going to https://harbor.system.tanzu in a Web browser and logging in as user admin with the harborAdminPassword that you configured in harbor-data-values.yaml.

Push and Pull Images to and from the Harbor Extension

Now that the Harbor extension is set up as a shared service, you can push images to it to make them available for your Tanzu Kubernetes clusters to pull.

  1. If the Harbor extension uses a self-signed certificate, download the Harbor CA certificate from https://harbor.system.tanzu/api/v2.0/systeminfo/getcert, and install it on your local machine, so that Docker trusts this CA certificate.

    • On Linux, save the certificate as /etc/docker/certs.d/harbor.system.tanzu/ca.crt (see the example after this list).
    • On macOS, follow this procedure.
    • On Windows, right-click the certificate file and select Install Certificate.
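
    For example, on Linux you might fetch and install the certificate as follows. This is a sketch that assumes the example hostname used throughout this topic; the curl -k option is needed because your machine does not yet trust the self-signed certificate.

    # download the CA certificate served by Harbor
    curl -sk https://harbor.system.tanzu/api/v2.0/systeminfo/getcert -o ca.crt
    # install it where Docker looks for per-registry CA certificates
    sudo mkdir -p /etc/docker/certs.d/harbor.system.tanzu
    sudo cp ca.crt /etc/docker/certs.d/harbor.system.tanzu/ca.crt
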
  2. Log in to the Harbor registry with the user admin.

    docker login harbor.system.tanzu -u admin
    
  3. When prompted, enter the harborAdminPassword that you set when you deployed the Harbor extension on the shared services cluster.

  4. Tag an existing image that you have already pulled locally, for example nginx:1.7.9.

    docker tag nginx:1.7.9 harbor.system.tanzu/library/nginx:1.7.9
    
  5. Push the image to the Harbor registry.

    docker push harbor.system.tanzu/library/nginx:1.7.9
    
  6. Now you can pull the image from the Harbor shared service on any machine where the Harbor CA certificate is installed.

    docker pull harbor.system.tanzu/library/nginx:1.7.9
    
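    As an illustration of how clusters consume these images, a pod in any cluster that trusts the Harbor CA certificate can then reference the image by its Harbor path. This sketch is not part of the original procedure, and the pod name is hypothetical.

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-from-harbor        # hypothetical name, for illustration only
    spec:
      containers:
      - name: nginx
        image: harbor.system.tanzu/library/nginx:1.7.9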

Push the Tanzu Kubernetes Grid Images to the Harbor Extension

The Tanzu Kubernetes Grid images are published in a public container registry and used by Tanzu Kubernetes Grid to deploy Tanzu Kubernetes clusters and extensions. When creating a Tanzu Kubernetes cluster, in order for Tanzu Kubernetes cluster nodes to pull Tanzu Kubernetes Grid images from the Harbor shared service rather than over the Internet, you must first push those images to the Harbor shared service.

This procedure is optional if your Tanzu Kubernetes clusters have Internet connectivity to pull external images.

If you only want to store your application images rather than the Tanzu Kubernetes Grid images in the Harbor shared service, follow the procedure in Trust Custom CA Certificates on Cluster Nodes to enable the Tanzu Kubernetes cluster nodes to pull images from the Harbor shared service, and skip the rest of this procedure.

NOTE: If your Tanzu Kubernetes Grid instance is running in an Internet-restricted environment, you must perform these steps on a machine that has an Internet connection, that can also access the Harbor registry that you have just deployed as a shared service.

  1. Create a public project named tkg from the Harbor UI. You can also use a different project name.

  2. Set the FQDN of the Harbor registry that is running as a shared service as an environment variable.

    On Windows platforms, use the SET command instead of export (see the example after the command below). Include the name of the project that you created in the preceding step in the variable. For example, if you set the Harbor hostname to harbor.system.tanzu, set the following:

    export TKG_CUSTOM_IMAGE_REPOSITORY=harbor.system.tanzu/tkg
    
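    On Windows, the equivalent command would be:

    SET TKG_CUSTOM_IMAGE_REPOSITORY=harbor.system.tanzu/tkg
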
  3. Follow step 2 and step 3 in Deploying Tanzu Kubernetes Grid in an Internet-Restricted Environment to generate and run the publish-images.sh script.

  4. When the script finishes, add or update the following rows in the ~/.tkg/config.yaml file.

    These variables ensure that when creating a management cluster or Tanzu Kubernetes cluster, Tanzu Kubernetes Grid always pulls Tanzu Kubernetes Grid images from the Harbor extension that is running as a shared service, rather than from the external Internet.

    TKG_CUSTOM_IMAGE_REPOSITORY: harbor.system.tanzu/tkg
    TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
    

    Because the tanzu-registry-webhook injects the Harbor CA certificate into cluster nodes, TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY should always be set to false when using Harbor as a shared service.

    If your Tanzu Kubernetes Grid instance is running in an Internet-restricted environment, you can disconnect the Internet connection now.

You can now use the tkg create cluster command to deploy Tanzu Kubernetes clusters, and the Tanzu Kubernetes Grid images will be pulled from the Harbor registry that is running in the shared services cluster. You can push images to the Harbor registry to make them available to all clusters that are running in the Tanzu Kubernetes Grid instance.

Connections between Tanzu Kubernetes cluster nodes and Harbor are secure, regardless of whether you use a trusted or a self-signed certificate for the Harbor shared service.

Update a Running Harbor Deployment

If you need to make changes to the configuration of the Harbor extension after deployment, follow these steps to update your deployed Harbor extension.

  1. Update the Harbor configuration in registry/harbor/harbor-data-values.yaml.

  2. Update the Kubernetes secret, which contains the Harbor configuration.

    This command assumes that you are running it from tkg-extensions-v1.2.0+vmware.1/extensions.

    kubectl create secret generic harbor-data-values --from-file=values.yaml=registry/harbor/harbor-data-values.yaml -n tanzu-system-registry -o yaml --dry-run | kubectl replace -f-
    

    Note that the final - on the kubectl replace command above is necessary to instruct kubectl to accept the input being piped to it from the kubectl create secret command.

    The Harbor extension will be reconciled using the new values that you just added. The changes should show up in five minutes or less. This is handled by kapp-controller, which synchronizes every five minutes.
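
    To confirm that the update has been applied, you can re-run the status check that you used during the initial deployment and wait for the status Reconcile succeeded.

    kubectl get app harbor -n tanzu-system-registry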
