Deploy Harbor into a Workload or a Shared Services Cluster

This topic explains how to deploy Harbor into a workload cluster or a shared services cluster in Tanzu Kubernetes Grid. The procedures apply to vSphere, Amazon Web Services (AWS), and Azure deployments.

Note: vSphere with Tanzu does not support deploying packages to a shared services cluster. Workload clusters deployed by vSphere with Tanzu can only use packaged services deployed to the workload clusters themselves.

Harbor

Harbor is an open-source, trusted, cloud-native container registry that stores, signs, and scans content. Harbor extends the open-source Docker distribution by adding functionality that users often require, such as security, identity control, and management.

Tanzu Kubernetes Grid includes signed binaries for Harbor, which you can deploy into:

  • A workload cluster to provide container registry services for that cluster
  • A shared services cluster to provide container registry services for other workload clusters.

When deployed as a shared service, Harbor is available to all of the workload clusters in a given Tanzu Kubernetes Grid instance. To implement Harbor as a shared service, you deploy it into a special cluster that is reserved for running shared services in a Tanzu Kubernetes Grid instance. You can use the Harbor shared service as a private registry for images that you want to make available to all of the workload clusters that you deploy from a given management cluster. An advantage of using the Harbor shared service is that it is managed by Kubernetes, so it provides greater reliability than a stand-alone registry. Also, the Harbor implementation that Tanzu Kubernetes Grid provides as a shared service has been tested for use with Tanzu Kubernetes Grid and is fully supported.

Each Tanzu Kubernetes Grid instance can have only one shared services cluster.

Using the Harbor Shared Service in Internet-Restricted Environments

Another use case for deploying Harbor as a shared service is to support Tanzu Kubernetes Grid deployments in Internet-restricted environments. For more information, see Using the Harbor Shared Service in Internet-Restricted Environments.

Harbor Registry and ExternalDNS

VMware recommends installing ExternalDNS alongside the Harbor registry on infrastructures with load balancing (AWS, Azure, and vSphere with NSX Advanced Load Balancer), especially in production or other environments in which Harbor availability is important.

If the IP address of the ingress load balancer changes, ExternalDNS automatically picks up the change and re-maps the new address to the Harbor hostname. This removes the need to re-map the address manually as described in Connect to the Harbor User Interface.
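
For example, assuming the Harbor hostname harbor.yourdomain.com and a workstation with dig available, you can verify the mapping that ExternalDNS maintains:

dig +short harbor.yourdomain.com    # should return the current address of the Envoy load balancer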

Prerequisites

  • You have installed the Tanzu CLI, kubectl, and the Carvel tools. For instructions, see Install the Tanzu CLI and Other Tools.
  • You have deployed a management cluster on vSphere, AWS, or Azure, in either an Internet-connected or Internet-restricted environment. If you are using Tanzu Kubernetes Grid in an Internet-restricted environment, you performed the procedure in Prepare an Internet-Restricted Environment before you deployed the management cluster.
  • You have installed yq v4.5 or later.
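
For example, you can verify that these prerequisites are installed by checking the tool versions (imgpkg is one of the Carvel tools used later in this topic):

tanzu version
kubectl version --client
yq --version
imgpkg version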

Prepare a Cluster for Harbor Deployment

To prepare a cluster for Harbor deployment:

  1. If you want to deploy Harbor into a shared services cluster, create the shared services cluster if it does not already exist. For instructions, see Create a Shared Services Cluster.

  2. Set the context of kubectl to the workload cluster or the shared services cluster. For example:

    kubectl config use-context tkg-services-admin@tkg-services
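
     If you are not sure of the context name, you can list the available contexts first:

    kubectl config get-contexts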
    
  3. Install the Cert Manager and Contour packages in the cluster. For instructions, see Implement Ingress Control with Contour.

  4. (Optional) Install the ExternalDNS package in the cluster. For instructions, see Implementing Service Discovery with ExternalDNS.

  5. Proceed to Deploy Harbor into a Cluster below.

Deploy Harbor into a Cluster

Follow this procedure to deploy Harbor into a workload cluster or a shared services cluster.

  1. Confirm that the Harbor package is available in the cluster:

    tanzu package available list -A
    
  2. Retrieve the version of the available package:

    tanzu package available list harbor.tanzu.vmware.com -A
    
  3. Generate a configuration file. This file configures the Harbor package.

    tanzu package available get harbor.tanzu.vmware.com/PACKAGE-VERSION --generate-default-values-file
    

     Where PACKAGE-VERSION is the version of the Harbor package that you want to install. The above command creates a configuration file named harbor-default-values.yaml containing default values. Note that in previous versions, this file was named harbor-data-values.yaml.
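
     For example, for the Harbor package v2.5.3:

    tanzu package available get harbor.tanzu.vmware.com/2.5.3+vmware.1-tkg.1 --generate-default-values-file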

     If you are installing Harbor to a workload cluster created by a vSphere Supervisor cluster, copy the code under harbor-default-values File for vSphere 7 into the configuration file.

    The resulting harbor-default-values.yaml file for v2.5.3 contains the following template:

      #@data/values
      #@overlay/match-child-defaults missing_ok=True
    
      ---
      #! The namespace to install Harbor
      namespace: tanzu-system-registry
    
      #! The FQDN for accessing Harbor admin UI and Registry service.
      hostname: harbor.yourdomain.com
      #! The network port of the Envoy service in Contour or other Ingress Controller.
      port:
        https: 443
    
      #! The log level of core, exporter, jobservice, registry. Its value is debug, info, warning, error or fatal.
      logLevel: info
    
      #! [Optional] The certificate for harbor FQDN if you want to use your own TLS certificate.
      #! We will issue the certificate by cert-manager when it's empty and the tlsCertificateSecretName is empty.
      tlsCertificate:
        #! [Required] the certificate
        tls.crt:
        #! [Required] the private key
        tls.key:
        #! [Optional] the certificate of CA, this enables the download
        #! link on portal to download the certificate of CA
        ca.crt:
    
      #! [Optional] The name of the secret for harbor FQDN if you want to use your own TLS certificate,
      #! which contains keys named:
      #! "tls.crt" - the certificate
      #! "tls.key" - the private key
      #! We will issue the certificate by cert-manager when it's empty and the tlsCertificate is empty.
      tlsCertificateSecretName:
    
      #! Use contour http proxy instead of the ingress when it's true
      enableContourHttpProxy: true
    
      #! [Required] The initial password of Harbor admin.
      harborAdminPassword:
    
      #! [Required] The secret key used for encryption. Must be a string of 16 chars.
      secretKey:
    
      database:
        #! [Required] The initial password of the postgres database.
        password:
        shmSizeLimit:
        maxIdleConns:
        maxOpenConns:
    
      exporter:
        cacheDuration:
    
      core:
        replicas: 1
        #! [Required] Secret is used when core server communicates with other components.
        secret:
        #! [Required] The XSRF key. Must be a string of 32 chars.
        xsrfKey:
      jobservice:
        replicas: 1
        #! [Required] Secret is used when job service communicates with other components.
        secret:
      registry:
        replicas: 1
        #! [Required] Secret is used to secure the upload state from client
        #! and registry storage backend.
        #! See: https://github.com/docker/distribution/blob/master/docs/configuration.md#http
        secret:
      notary:
        #! Whether to install Notary
        enabled: true
      trivy:
        #! enabled the flag to enable Trivy scanner
        enabled: true
        replicas: 1
        #! gitHubToken the GitHub access token to download Trivy DB
        gitHubToken: ""
        #! skipUpdate the flag to disable Trivy DB downloads from GitHub
        #
        #! You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
        #! If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
        #! `/home/scanner/.cache/trivy/db/trivy.db` path.
        skipUpdate: false
    
      #! The persistence is always enabled and a default StorageClass
      #! is needed in the k8s cluster to provision volumes dynamically.
      #! Specify another StorageClass in the "storageClass" or set "existingClaim"
      #! if you have already existing persistent volumes to use
      #
      #! For storing images and charts, you can also use "azure", "gcs", "s3",
      #! "swift" or "oss". Set it in the "imageChartStorage" section
      persistence:
        persistentVolumeClaim:
          registry:
            #! Use the existing PVC which must be created manually before bound,
            #! and specify the "subPath" if the PVC is shared with other components
            existingClaim: ""
            #! Specify the "storageClass" used to provision the volume, or the default
            #! StorageClass will be used (the default).
            #! Set it to "-" to disable dynamic provisioning
            storageClass: ""
            subPath: ""
            accessMode: ReadWriteOnce
            size: 10Gi
          jobservice:
            existingClaim: ""
            storageClass: ""
            subPath: ""
            accessMode: ReadWriteOnce
            size: 1Gi
          database:
            existingClaim: ""
            storageClass: ""
            subPath: ""
            accessMode: ReadWriteOnce
            size: 1Gi
          redis:
            existingClaim: ""
            storageClass: ""
            subPath: ""
            accessMode: ReadWriteOnce
            size: 1Gi
          trivy:
            existingClaim: ""
            storageClass: ""
            subPath: ""
            accessMode: ReadWriteOnce
            size: 5Gi
        #! Define which storage backend is used for registry to store
        #! images and charts. Refer to
        #! https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
        #! for the detail.
        imageChartStorage:
          #! Specify whether to disable `redirect` for images and chart storage. For
          #! backends that do not support it (such as MinIO for the `s3` storage type),
          #! disable it by setting `disableredirect` to `true`.
          #! Refer to
          #! https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
          #! for the detail.
          disableredirect: false
          #! Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
          #! The secret must contain a key named "ca.crt", which will be injected into the trust store
          #! of the registry's containers.
          #! caBundleSecretName:
    
          #! Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
          #! "oss" and fill the information needed in the corresponding section. The type
          #! must be "filesystem" if you want to use persistent volumes for registry
          type: filesystem
          filesystem:
            rootdirectory: /storage
            #maxthreads: 100
          azure:
            accountname: accountname #! required
            accountkey: base64encodedaccountkey #! required
            container: containername #! required
            realm: core.windows.net #! optional
          gcs:
            bucket: bucketname #! required
            #! The base64 encoded json file which contains the key
            encodedkey: base64-encoded-json-key-file #! optional
            rootdirectory: null #! optional
            chunksize: 5242880 #! optional
          s3:
            region: us-west-1 #! required
            bucket: bucketname #! required
            accesskey: null #! eg, awsaccesskey
            secretkey: null #! eg, awssecretkey
            regionendpoint: null #! optional, eg, http://myobjects.local
            encrypt: false #! optional
            keyid: null #! eg, mykeyid
            secure: true #! optional
            skipverify: false #! optional
            v4auth: true #! optional
            chunksize: null #! optional
            rootdirectory: null #! optional
            storageclass: STANDARD #! optional
            multipartcopychunksize: null #! optional
            multipartcopymaxconcurrency: null #! optional
            multipartcopythresholdsize: null #! optional
          swift:
            authurl: https://storage.myprovider.com/v3/auth
            username: username
            password: password
            container: containername
            region: null #! eg, fr
            tenant: null #! eg, tenantname
            tenantid: null #! eg, tenantid
            domain: null #! eg, domainname
            domainid: null #! eg, domainid
            trustid: null #! eg, trustid
            insecureskipverify: null #! bool eg, false
            chunksize: null #! eg, 5M
            prefix: null #! eg
            secretkey: null #! eg, secretkey
            accesskey: null #! eg, accesskey
            authversion: null #! eg, 3
            endpointtype: null #! eg, public
            tempurlcontainerkey: null #! eg, false
            tempurlmethods: null #! eg
          oss:
            accesskeyid: accesskeyid
            accesskeysecret: accesskeysecret
            region: regionname
            bucket: bucketname
            endpoint: null #! eg, endpoint
            internal: null #! eg, false
            encrypt: null #! eg, false
            secure: null #! eg, true
            chunksize: null #! eg, 10M
            rootdirectory: null #! eg, rootdirectory
    
      #! The http/https network proxy for core, exporter, jobservice, trivy
      proxy:
        httpProxy:
        httpsProxy:
        noProxy: 127.0.0.1,localhost,.local,.internal
    
      #! The PSP names used by Harbor pods. The names are separated by ','. 'null' means all PSP can be used.
      pspNames: null
    
      #! The metrics used by core, registry and exporter
      metrics:
        enabled: false
        core:
          path: /metrics
          port: 8001
        registry:
          path: /metrics
          port: 8001
        jobservice:
          path: /metrics
          port: 8001
        exporter:
          path: /metrics
          port: 8001
    
      network:
        ipFamilies: ["IPv4","IPv6"]
    
    

    After creating the configuration file, do not remove the /tmp/harbor-package-PACKAGE-VERSION directory if you intend to generate random passwords and secrets for Harbor. See the step below.

    1. Set the mandatory passwords and secrets in the harbor-default-values.yaml file by doing one of the following:

      • To automatically generate random passwords and secrets, run:

        # Skip this command if the /tmp/harbor-package-PACKAGE-VERSION directory is already present on your machine.
        image_url=$(kubectl -n tanzu-package-repo-global get packages harbor.tanzu.vmware.com/PACKAGE-VERSION -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')
        # Skip this command if the /tmp/harbor-package-PACKAGE-VERSION directory is already present on your machine.
        imgpkg pull -b $image_url -o /tmp/harbor-package-PACKAGE-VERSION
        bash /tmp/harbor-package-PACKAGE-VERSION/config/scripts/generate-passwords.sh harbor-default-values.yaml
        

        Where PACKAGE-VERSION is the version of the Harbor package that you want to install.

        For example, for the Harbor package v2.5.3, run:

        image_url=$(kubectl -n tanzu-package-repo-global get packages harbor.tanzu.vmware.com/2.5.3+vmware.1-tkg.1 -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')
        imgpkg pull -b $image_url -o /tmp/harbor-package-2.5.3
        bash /tmp/harbor-package-2.5.3/config/scripts/generate-passwords.sh harbor-default-values.yaml
        
      • To set your own passwords and secrets, update the following entries in the harbor-default-values.yaml file:

        • harborAdminPassword
        • secretKey
        • database.password
        • core.secret
        • core.xsrfKey
        • jobservice.secret
        • registry.secret
    2. Specify other settings in the harbor-default-values.yaml file.

      • Set the hostname setting to the hostname you want to use to access Harbor. For example, harbor.yourdomain.com.
      • To use your own certificates, update the tls.crt, tls.key, and ca.crt settings with the contents of your certificate, key, and CA certificate. The certificate can be signed by a trusted authority or be self-signed. If you leave these blank, Tanzu Kubernetes Grid automatically generates a self-signed certificate.
      • If you used the generate-passwords.sh script, optionally update harborAdminPassword to a value that is easier to remember.
      • If you are installing Harbor to a workload cluster created by using vSphere with Tanzu, non-empty values are required for the following:
        • storageClass: Under persistence.persistentVolumeClaim, for registry, jobservice, database, redis, and trivy set storageClass to a storage profile returned by kubectl get sc.
        • pspNames: Set pspNames to PSP values returned by kubectl get psp, for example "vmware-system-restricted,vmware-system-privileged".
      • Optionally update other persistence settings to specify how Harbor stores data.

        If you need to store a large quantity of container images in Harbor, set persistence.persistentVolumeClaim.registry.size to a larger number.

        If you do not update the storageClass under persistence settings, Harbor uses the cluster’s default storageClass. If the default storageClass or a storageClass that you specify in harbor-default-values.yaml supports the accessMode ReadWriteMany, you must update the persistence.persistentVolumeClaim accessMode settings for registry, jobservice, database, redis, and trivy from ReadWriteOnce to ReadWriteMany. vSphere 7 with VMware vSAN 7 supports accessMode: ReadWriteMany but vSphere 6.7u3 does not. If you are using vSphere 7 without vSAN or you are using vSphere 6.7u3, use the default value ReadWriteOnce.
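
        For example, a minimal sketch that uses yq v4 (the same tool used below to strip comments) to raise the registry volume size to an assumed 100Gi:

        yq -i eval '.persistence.persistentVolumeClaim.registry.size = "100Gi"' harbor-default-values.yaml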

      To see more information about the values in the harbor-default-values.yaml file, run the following command against your target cluster:

      tanzu package available get harbor.tanzu.vmware.com/AVAILABLE-VERSION --values-schema
      

      Where AVAILABLE-VERSION is the version of the Harbor package. The --values-schema flag retrieves the valuesSchema section from the Package API resource for the Harbor package. You can set the output format, --output, for the values schema to yaml, json, or table. For more information, see Packages in Installing and Managing Packages with the Tanzu CLI.

      For example:

      tanzu package available get harbor.tanzu.vmware.com/2.5.3+vmware.1-tkg.1 --values-schema
      
    3. Remove all comments in the harbor-default-values.yaml file:

      yq -i eval '... comments=""' harbor-default-values.yaml
      
  4. Install the package:

    • If the target namespace exists in the cluster, run:

      tanzu package install harbor \
      --package-name harbor.tanzu.vmware.com \
      --version AVAILABLE-PACKAGE-VERSION \
      --values-file harbor-default-values.yaml \
      --namespace TARGET-NAMESPACE
      

      Where:

      • TARGET-NAMESPACE is the namespace in which you want to install the Harbor package, Harbor package app, and any other Kubernetes resources that describe the package. For example, the my-packages or tanzu-cli-managed-packages namespace. If the --namespace flag is not specified, the Tanzu CLI installs the package and its resources in the default namespace. The Harbor pods and any other resources associated with the Harbor component are created in the tanzu-system-registry namespace; do not install the Harbor package into this namespace.
      • AVAILABLE-PACKAGE-VERSION is the version that you retrieved above.

      For example:

      tanzu package install harbor \
      --package-name harbor.tanzu.vmware.com \
      --version 2.5.3+vmware.1-tkg.1 \
      --values-file harbor-default-values.yaml \
      --namespace my-packages
      
    • If the target namespace does not exist in the cluster, run:

      tanzu package install harbor \
      --package-name harbor.tanzu.vmware.com \
      --version AVAILABLE-PACKAGE-VERSION \
      --values-file harbor-default-values.yaml \
      --namespace TARGET-NAMESPACE \
      --create-namespace
      

      Where:

      • TARGET-NAMESPACE is the namespace in which you want to install the Harbor package, Harbor package app, and any other Kubernetes resources that describe the package. For example, the my-packages or tanzu-cli-managed-packages namespace. If the --namespace flag is not specified, the Tanzu CLI installs the package and its resources in the default namespace. The Harbor pods and any other resources associated with the Harbor component are created in the tanzu-system-registry namespace; do not install the Harbor package into this namespace.
      • AVAILABLE-PACKAGE-VERSION is the version that you retrieved above.

      For example:

      tanzu package install harbor \
      --package-name harbor.tanzu.vmware.com \
      --version 2.5.3+vmware.1-tkg.1 \
      --values-file harbor-default-values.yaml \
      --namespace my-packages \
      --create-namespace
      

    Alternatively, you can create the namespace before installing the package by running the kubectl create namespace TARGET-NAMESPACE command.
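
    For example, for the my-packages namespace used in the examples above:

      kubectl create namespace my-packages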

  5. If you are installing Harbor to a workload cluster created by using vSphere with Tanzu, patch the Harbor package with an overlay as follows, to create initContainers objects that set directory ownership and permissions:

    1. Create a file fix-fsgroup-overlay.yaml containing the Harbor FSGroup Overlay code below.

    2. Create a generic secret with the overlay:

      kubectl -n my-packages create secret generic harbor-database-redis-trivy-jobservice-registry-image-overlay -o yaml --dry-run=client --from-file=fix-fsgroup-overlay.yaml | kubectl apply -f -
      
    3. Patch the Harbor package with the secret:

      kubectl -n my-packages annotate packageinstalls harbor ext.packaging.carvel.dev/ytt-paths-from-secret-name.1=harbor-database-redis-trivy-jobservice-registry-image-overlay
      
    4. Delete the existing Harbor pods, in the namespace that is set in the namespace value of harbor-default-values.yaml (harbor in the vSphere 7 example file below), so that they are re-created:

      kubectl delete pods --all -n harbor
      
  6. Confirm that the harbor package has been installed:

    tanzu package installed list -A
    

    For example:

    tanzu package installed list -A
    - Retrieving installed packages...
      NAME            PACKAGE-NAME                     PACKAGE-VERSION                   STATUS               NAMESPACE
      cert-manager    cert-manager.tanzu.vmware.com    1.1.0+vmware.1-tkg.2              Reconcile succeeded  my-packages
      contour         contour.tanzu.vmware.com         1.17.1+vmware.1-tkg.1             Reconcile succeeded  my-packages
      harbor          harbor.tanzu.vmware.com          2.5.3+vmware.1-tkg.1              Reconcile succeeded  my-packages
      antrea          antrea.tanzu.vmware.com                                            Reconcile succeeded  tkg-system
      [...]
    

    To see more details about the package, you can also run:

    tanzu package installed get harbor --namespace PACKAGE-NAMESPACE
    

    Where PACKAGE-NAMESPACE is the namespace in which the harbor package is installed.

    For example:

    tanzu package installed get harbor --namespace my-packages
    \ Retrieving installation details for harbor...
    NAME:                    harbor
    PACKAGE-NAME:            harbor.tanzu.vmware.com
    PACKAGE-VERSION:         2.5.3+vmware.1-tkg.1
    STATUS:                  Reconcile succeeded
    CONDITIONS:              [{ReconcileSucceeded True  }]
    USEFUL-ERROR-MESSAGE:
    
  7. Confirm that the harbor app has been successfully reconciled in your PACKAGE-NAMESPACE:

    kubectl get apps -A
    

    For example:

    NAMESPACE     NAME             DESCRIPTION           SINCE-DEPLOY   AGE
    my-packages   cert-manager     Reconcile succeeded   78s            3h5m
    my-packages   contour          Reconcile succeeded   57s            6m3s
    my-packages   harbor           Reconcile succeeded   40s            24m
    tkg-system    antrea           Reconcile succeeded   45s            3h18m
    [...]
    

    If the status is not Reconcile succeeded, view the full status details of the harbor app. Viewing the full status can help you troubleshoot the problem.

    kubectl get app harbor --namespace PACKAGE-NAMESPACE -o yaml
    

    Where PACKAGE-NAMESPACE is the namespace in which you installed the package. If troubleshooting does not help you solve the problem, you must uninstall the package before installing it again:

    tanzu package installed delete harbor --namespace PACKAGE-NAMESPACE
    
  8. Confirm that the Harbor services are running by listing all of the pods in the cluster:

    kubectl get pods -A
    

    In the tanzu-system-registry namespace, you should see the harbor core, database, jobservice, notary, portal, redis, registry, and trivy services running in pods with names similar to the following:

    NAMESPACE               NAME                                    READY   STATUS    RESTARTS   AGE
    [...]
    tanzu-system-ingress    contour-6b568c9b88-h5s2r                1/1     Running   0          26m
    tanzu-system-ingress    contour-6b568c9b88-mlg2r                1/1     Running   0          26m
    tanzu-system-ingress    envoy-wfqdp                             2/2     Running   0          26m
    tanzu-system-registry   harbor-core-557b58b65c-4kzhn            1/1     Running   0          23m
    tanzu-system-registry   harbor-database-0                       1/1     Running   0          23m
    tanzu-system-registry   harbor-jobservice-847b5c8756-t6kfs      1/1     Running   0          23m
    tanzu-system-registry   harbor-notary-server-6b74b8dd56-d7swb   1/1     Running   2          23m
    tanzu-system-registry   harbor-notary-signer-69d4669884-dglzm   1/1     Running   2          23m
    tanzu-system-registry   harbor-portal-8f677757c-t4cbj           1/1     Running   0          23m
    tanzu-system-registry   harbor-redis-0                          1/1     Running   0          23m
    tanzu-system-registry   harbor-registry-85b96c7777-wsdnj        2/2     Running   0          23m
    tanzu-system-registry   harbor-trivy-0                          1/1     Running   0          23m
    [...]
    
  9. Obtain the Harbor CA certificate from the harbor-tls secret in the tanzu-system-registry namespace:

    kubectl -n tanzu-system-registry get secret harbor-tls -o=jsonpath="{.data.ca\.crt}" | base64 -d
    

    Make a copy of the output.
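
     Alternatively, you can, for example, save the certificate to a local file, which is convenient for later steps that install or base64-encode it:

    kubectl -n tanzu-system-registry get secret harbor-tls -o=jsonpath="{.data.ca\.crt}" | base64 -d > ca.crt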

Connect to the Harbor User Interface

The Harbor UI is exposed via the Envoy service load balancer that is running in the tanzu-system-ingress namespace in the cluster. To allow users to connect to the Harbor UI, you must map the address of the Envoy service load balancer to the hostname of the Harbor service, for example, harbor.yourdomain.com. How you map the address of the Envoy service load balancer to the hostname depends on whether your Tanzu Kubernetes Grid instance is running on vSphere, on AWS, or on Azure.

  1. Obtain the address of the Envoy service load balancer.

    kubectl get svc envoy -n tanzu-system-ingress -o jsonpath='{.status.loadBalancer.ingress[0]}'
    

    On vSphere without NSX Advanced Load Balancer (ALB), the Envoy service is exposed via NodePort instead of LoadBalancer, so the above output will be empty, and you can use the IP address of any worker node in the cluster instead. On AWS, the Envoy service has an FQDN similar to a82ebae93a6fe42cd66d9e145e4fb292-1299077984.us-west-2.elb.amazonaws.com. On vSphere with NSX ALB and on Azure, the Envoy service has a load balancer IP address similar to 20.54.226.44.
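
    For example, on vSphere without NSX ALB, you can list the node addresses and use the INTERNAL-IP of any worker node:

    kubectl get nodes -o wide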

  2. Map the address of the Envoy service load balancer to the hostname of the Harbor service.

    • vSphere: If you deployed Harbor on a cluster that is running on vSphere, you must add an IP to hostname mapping in /etc/hosts or add corresponding A records in your DNS server. For example, if the IP address is 10.93.9.100, add the following to /etc/hosts:

      10.93.9.100 harbor.yourdomain.com notary.harbor.yourdomain.com
      

      On Windows machines, the equivalent of /etc/hosts is C:\Windows\System32\Drivers\etc\hosts.

    • AWS or Azure: If you deployed Harbor on a cluster that is running on AWS or Azure, you must create two DNS CNAME records (on AWS) or two DNS A records (on Azure) for the Harbor hostnames on a DNS server on the Internet.

      • One record for the Harbor hostname that you configured in harbor-default-values.yaml (for example, harbor.yourdomain.com) that points to the FQDN or IP of the Envoy service load balancer.
      • Another record for the Notary service that is running in Harbor (for example, notary.harbor.yourdomain.com) that points to the FQDN or IP of the Envoy service load balancer.

Users can now connect to the Harbor UI by navigating to https://harbor.yourdomain.com in a Web browser and logging in as user admin with the harborAdminPassword that you configured in harbor-default-values.yaml.

Push and Pull Images to and from the Harbor Package

Now that Harbor is set up, you can push images to it to make them available for your workload clusters to pull.

  1. If Harbor uses a self-signed certificate, download the Harbor CA certificate from https://harbor.yourdomain.com/api/v2.0/systeminfo/getcert and install it on your local machine, so Docker can trust this CA certificate.

    • On Linux, save the certificate as /etc/docker/certs.d/harbor.yourdomain.com/ca.crt.
    • On macOS, follow this procedure.
    • On Windows, right-click the certificate file and select Install Certificate.
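
    For example, a sketch of the Linux step above, assuming the hostname harbor.yourdomain.com and that curl is available:

      sudo mkdir -p /etc/docker/certs.d/harbor.yourdomain.com
      sudo curl -sk https://harbor.yourdomain.com/api/v2.0/systeminfo/getcert -o /etc/docker/certs.d/harbor.yourdomain.com/ca.crt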
  2. Log in to the Harbor registry with the user admin. When prompted, enter the harborAdminPassword that you set when you installed the Harbor package in the cluster.

    docker login harbor.yourdomain.com -u admin
    
  3. Tag an existing image that you have already pulled locally, for example, nginx:1.7.9.

    docker tag nginx:1.7.9 harbor.yourdomain.com/library/nginx:1.7.9
    
  4. Push the image to the Harbor registry.

    docker push harbor.yourdomain.com/library/nginx:1.7.9
    
  5. Now you can pull the image from the Harbor registry on any machine where the Harbor CA certificate is installed.

    docker pull harbor.yourdomain.com/library/nginx:1.7.9
    

Push the Tanzu Kubernetes Grid Images into the Harbor Registry

The Tanzu Kubernetes Grid images are published in a public container registry and are used by Tanzu Kubernetes Grid to deploy workload clusters and packages. For workload cluster nodes to pull Tanzu Kubernetes Grid images from Harbor rather than over the Internet when you create a workload cluster, you must first push those images to the Harbor registry.

This procedure is optional if your workload clusters have Internet connectivity to pull external images.

Connections between workload cluster nodes and Harbor are secure, regardless of whether you use a trusted or a self-signed certificate for the Harbor registry.

Note: If your Tanzu Kubernetes Grid instance is running in an Internet-restricted environment, you must perform these steps on a machine that has an Internet connection and can access the Harbor registry that you have just deployed.

Push Application Images to Harbor

To store application images from workload clusters in Harbor, push the images to a project in the Harbor registry, as described in Push and Pull Images to and from the Harbor Package above.

Push Tanzu Kubernetes Grid Images to Harbor

To push Tanzu Kubernetes Grid component images to a Harbor registry:

  1. Create a public project named tkg from the Harbor UI. Alternatively, you can use another project name.

  2. Set the FQDN of the Harbor registry as an environment variable.

    On Windows platforms, use the SET command instead of export. Include the name of the default project in the variable. For example, if you set the Harbor hostname to harbor.yourdomain.com, set the following:

    export TKG_CUSTOM_IMAGE_REPOSITORY=harbor.yourdomain.com/tkg
    
  3. Follow step 2 and step 3 in Prepare an Internet-Restricted Environment to generate an image list and run the download-images.sh script.

  4. When the script finishes, run tanzu config set to set the following variables in the global Tanzu CLI configuration file, ~/.config/tanzu/config.yaml. These variables ensure that when creating a management cluster or workload cluster, Tanzu Kubernetes Grid always pulls Tanzu Kubernetes Grid images from Harbor, rather than from the external internet.

    tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY harbor.yourdomain.com/tkg
    

    If Harbor uses self-signed certificates, also set TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE. Provide the CA certificate in base64-encoded format. Because the Tanzu connectivity webhook injects the Harbor CA certificate into cluster nodes, TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY should always be set to false when using Harbor as a shared service.

    tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE LS0t[...]tLS0tLQ==
    tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY false
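
    For example, assuming you saved the Harbor CA certificate to a local file ca.crt, you can produce the base64-encoded value with:

    base64 -w 0 ca.crt    # on macOS, use: base64 -i ca.crt | tr -d '\n'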
    

    If your Tanzu Kubernetes Grid instance is running in an Internet-restricted environment, you can disconnect the Internet connection now.

You can now use the tanzu cluster create command to deploy workload clusters, and the images will be pulled from the Harbor registry running in the cluster. You can push images to the Harbor registry to make them available to all clusters that are running in the Tanzu Kubernetes Grid instance.

Update a Running Harbor Deployment

If you need to make changes to the configuration of the Harbor package after deployment, follow these steps to update your deployed Harbor package.

  1. To address a known issue when upgrading Tanzu Kubernetes Grid from v1.3 to v1.4, the Harbor package may have been patched with an overlay as described in the Knowledge Base article The harbor-notary-signer pod fails to start…. If the overlay annotation is present, remove it:

    1. To check for the overlay annotation, retrieve the PackageInstall object:

      kubectl -n INSTALLED-PACKAGE-NAMESPACE get packageinstall harbor -o yaml
      

      Where INSTALLED-PACKAGE-NAMESPACE is the namespace in which the Harbor package is installed.

    2. If the output shows metadata.annotations set to ext.packaging.carvel.dev/ytt-paths-from-secret-name.0: harbor-notary-singer-image-overlay, remove the annotation:

      kubectl -n INSTALLED-PACKAGE-NAMESPACE annotate packageinstalls harbor ext.packaging.carvel.dev/ytt-paths-from-secret-name.0-
      
  2. Update the Harbor configuration in harbor-default-values.yaml. For example, you can increase the amount of registry storage by updating the persistence.persistentVolumeClaim.registry.size value.

  3. Update the configuration of the installed package:

    tanzu package installed update harbor \
    --version INSTALLED-PACKAGE-VERSION \
    --values-file harbor-default-values.yaml \
    --namespace INSTALLED-PACKAGE-NAMESPACE
    

    Where:

    • INSTALLED-PACKAGE-VERSION is the version of the installed Harbor package.
    • INSTALLED-PACKAGE-NAMESPACE is the namespace in which the Harbor package is installed.

    For example:

    tanzu package installed update harbor \
    --version 2.5.3+vmware.1-tkg.1 \
    --values-file harbor-default-values.yaml \
    --namespace my-packages
    

The Harbor package will be reconciled using the new value or values that you added. It can take up to five minutes for kapp-controller to apply the changes.
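
For example, to watch the reconciliation progress, assuming the my-packages namespace used in the examples above:

kubectl get app harbor -n my-packages -w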

For more information about the tanzu package installed update command, see Update a Package in Installing and Managing Packages with the Tanzu CLI. You can use this command to update the version or the configuration of an installed package.

harbor-default-values File for vSphere 7

harbor-default-values.yaml:

namespace: harbor
hostname: harbor.192.168.112.3.nip.io
port:
  https: 443
logLevel: info
tlsCertificate:
  tls.crt:
  tls.key:
  ca.crt:
enableContourHttpProxy: true
harborAdminPassword: GUGzAXrWKNni2eqP
secretKey: M2blh8iIZIWL9Kvv
database:
  password: 0ZuG0STKmHYAs3aE
core:
  replicas: 1
  secret: jML4MAuLIEgOFmq8
  xsrfKey: ZcEYnGlUSBJvP7jrf9zKuZh4OFoNfIUQ
jobservice:
  replicas: 1
  secret: 6TW3Ylpg8LpYjixp
registry:
  replicas: 1
  secret: Ee48fjRlkMm4n2l6
notary:
  enabled: true
trivy:
  enabled: true
  replicas: 1
  gitHubToken: ""
  skipUpdate: false
persistence:
  persistentVolumeClaim:
    registry:
      existingClaim: ""
      storageClass: "wcpglobalstorageprofile"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 10Gi
    jobservice:
      existingClaim: ""
      storageClass: "wcpglobalstorageprofile"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    database:
      existingClaim: ""
      storageClass: "wcpglobalstorageprofile"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    redis:
      existingClaim: ""
      storageClass: "wcpglobalstorageprofile"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    trivy:
      existingClaim: ""
      storageClass: "wcpglobalstorageprofile"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
  imageChartStorage:
    disableredirect: false
    type: filesystem
    filesystem:
      rootdirectory: /storage
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      realm: core.windows.net
    gcs:
      bucket: bucketname
      encodedkey: base64-encoded-json-key-file
      rootdirectory: null
      chunksize: 5242880
    s3:
      region: us-west-1
      bucket: bucketname
      accesskey: null
      secretkey: null
      regionendpoint: null
      encrypt: false
      keyid: null
      secure: true
      v4auth: true
      chunksize: null
      rootdirectory: null
      storageclass: STANDARD
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      region: null
      tenant: null
      tenantid: null
      domain: null
      domainid: null
      trustid: null
      insecureskipverify: null
      chunksize: null
      prefix: null
      secretkey: null
      accesskey: null
      authversion: null
      endpointtype: null
      tempurlcontainerkey: null
      tempurlmethods: null
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      endpoint: null
      internal: null
      encrypt: null
      secure: null
      chunksize: null
      rootdirectory: null
proxy:
  httpProxy:
  httpsProxy:
  noProxy: 127.0.0.1,localhost,.local,.internal
pspNames: vmware-system-restricted,vmware-system-privileged
metrics:
  enabled: false
  core:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  exporter:
    path: /metrics
    port: 8001

Harbor FSGroup Overlay

fix-fsgroup-overlay.yaml:

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.and_op(overlay.subset({"kind": "StatefulSet"}), overlay.subset({"metadata": {"name": "harbor-database"}}))
---
spec:
  template:
    spec:
      initContainers:
        #@overlay/match by=overlay.index(0)
        #@overlay/insert before=True
        - name: "data-ownership-ensurer"
          securityContext:
            runAsUser: 0
          image: projects.registry.vmware.com/tkg/harbor/harbor-db@sha256:26ce0071b528944fd33080f273d0812da479da557eee2727409bd4162719deff
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args: ["-c", "chown -R postgres:postgres /var/lib/postgresql/data || true"]
          volumeMounts:
            - name: database-data
              mountPath: /var/lib/postgresql/data
              subPath:

#@overlay/match by=overlay.and_op(overlay.subset({"kind": "StatefulSet"}), overlay.subset({"metadata": {"name": "harbor-redis"}}))
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      initContainers:
        - name: "redis-ownership-ensurer"
          securityContext:
            runAsUser: 0
          image: projects.registry.vmware.com/tkg/harbor/redis-photon@sha256:5b55e6d2b2da4d8f1eca413c7d79bebfed64fb69db891b80bc4a28f733f1c85e
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args: ["-c", "chown -R 999:999 /var/lib/redis || true"]
          volumeMounts:
            - name: data
              mountPath: /var/lib/redis
              subPath:
              readOnly: false

#@overlay/match by=overlay.and_op(overlay.subset({"kind": "StatefulSet"}), overlay.subset({"metadata": {"name": "harbor-trivy"}}))
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      initContainers:
        - name: "trivy-ownership-ensurer"
          securityContext:
            runAsUser: 0
          image: projects.registry.vmware.com/tkg/harbor/trivy-adapter-photon@sha256:722bcbe039a3d83bc4cc1d78de1cf533bd38b829d494af288622fa956ca648f8
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args: ["-c", "chown -R 10000:10000 /home/scanner/.cache || true"]
          volumeMounts:
            - name: data
              mountPath: /home/scanner/.cache
              subPath:
              readOnly: false

#@overlay/match by=overlay.and_op(overlay.subset({"kind": "Deployment"}), overlay.subset({"metadata": {"name": "harbor-jobservice"}}))
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      initContainers:
        - name: "jobservice-ownership-ensurer"
          securityContext:
            runAsUser: 0
          image: projects.registry.vmware.com/tkg/harbor/harbor-jobservice@sha256:1ab9315d6832320413f0ff48b414c26cbcf3beec9a6ccc13a74e07ecffe2a8e0
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args: ["-c", "chown -R 10000:10000 /var/log/jobs || true"]
          volumeMounts:
            - name: job-logs
              mountPath: /var/log/jobs
              subPath:
              readOnly: false

#@overlay/match by=overlay.and_op(overlay.subset({"kind": "Deployment"}), overlay.subset({"metadata": {"name": "harbor-registry"}}))
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      initContainers:
        - name: "registry-ownership-ensurer"
          securityContext:
            runAsUser: 0
          image: projects.registry.vmware.com/tkg/harbor/harbor-registryctl@sha256:aa7a6547a46b2b0222c7187567e6f85ffbd853611038bf1b0c33a8356d863108
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args: ["-c", "chown -R 10000:10000 /storage || true"]
          volumeMounts:
            - name: registry-data
              mountPath: /storage
              subPath:
              readOnly: false