Harbor is an open source, trusted, cloud native container registry that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users such as security, identity control and management.
Tanzu Kubernetes Grid includes signed binaries for Harbor, which you can deploy on a shared services cluster to provide container registry services for other Tanzu Kubernetes clusters. Unlike Tanzu Kubernetes Grid extensions, which you use to deploy services on individual clusters, you deploy Harbor as a shared service. In this way, Harbor is available to all of the Tanzu Kubernetes clusters in a given Tanzu Kubernetes Grid instance. To implement Harbor as a shared service, you deploy it on a special cluster that is reserved for running shared services in a Tanzu Kubernetes Grid instance.
You can use the Harbor shared service as a private registry for images that you want to make available to all of the Tanzu Kubernetes clusters that you deploy from a given management cluster. An advantage to using the Harbor shared service is that it is managed by Kubernetes, so it provides greater reliability than a standalone registry. Also, the Harbor implementation that Tanzu Kubernetes Grid provides as a shared service has been tested for use with Tanzu Kubernetes Grid and is fully supported.
The procedures in this topic all apply to vSphere, Amazon EC2, and Azure deployments.
Another use case for deploying Harbor as a shared service is for Tanzu Kubernetes Grid deployments in Internet-restricted environments. To perform the procedure in Deploy Tanzu Kubernetes Grid to an Offline Environment, a private container registry must be running in your environment before you can deploy a management cluster. This private registry might or might not be an open-source Harbor instance. This private registry is not a Tanzu Kubernetes Grid shared service, but rather is a central registry that is available to your whole environment. This central registry is part of your own infrastructure and is not supported by VMware.
After you use this central registry to deploy a management cluster in an Internet-restricted environment, you configure Tanzu Kubernetes Grid so that Tanzu Kubernetes clusters pull images from the central registry rather than from the external Internet. If the central registry uses a trusted CA certificate, connections between Tanzu Kubernetes clusters and the registry are secure. However, if your central registry uses self-signed certificates, what to do depends on which version of Tanzu Kubernetes Grid you are using:
In versions that do not provide the TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE option, you must either manually add the registry's CA certificate to all of the cluster nodes or set TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY to skip certificate verification on the registry. The first option is very complex and time-consuming. In the second option, the connection between containerd and the registry is insecure, so it is inappropriate for production environments.
In versions that provide it, you can set TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY to false and specify the TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE option. Setting this option automatically injects your self-signed certificates into your Tanzu Kubernetes clusters.
In either case, if your central registry uses self-signed certificates, after you use it to deploy a management cluster in an Internet-restricted environment, the recommended approach is to deploy the Harbor shared service in your Tanzu Kubernetes Grid instance. You can then configure Tanzu Kubernetes Grid so that Tanzu Kubernetes clusters pull images from the Harbor shared service rather than from the central registry. Tanzu Kubernetes Grid shared services implement the Tanzu connectivity API within your instance. In addition to managing connections between clusters and the shared service, the Tanzu connectivity API also automatically injects the required certificates into all cluster nodes when you deploy them. So, by deploying Harbor as a shared service in your Tanzu Kubernetes Grid instance, connections between clusters and the Harbor registry are secure, without you having to manually inject certificates into every node or set the TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE option.
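For illustration only, here is a minimal sketch of the relevant entries in the Tanzu Kubernetes Grid configuration file for a central registry that uses a self-signed certificate, assuming that your version supports the CA certificate option and that the option accepts the base64-encoded contents of the certificate. The registry hostname and the certificate value are placeholders.
TKG_CUSTOM_IMAGE_REPOSITORY: registry.example.com/library
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
# Placeholder: base64-encoded contents of the registry CA certificate, for example the output of: base64 -w 0 ca.crt
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...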
You have deployed a management cluster on vSphere, Amazon EC2, or Azure, in either an Internet-connected or Internet-restricted environment.
If you are using Tanzu Kubernetes Grid in an Internet-restricted environment, you performed the procedure in Deploy Tanzu Kubernetes Grid to an Offline Environment before you deployed the management cluster.
Each Tanzu Kubernetes Grid instance can only have one shared services cluster. You must deploy Harbor on a cluster that will only be used for shared services.
To deploy Harbor on a shared services cluster, you must update the configuration file that is used when you deploy Harbor.
Deploy a Tanzu Kubernetes cluster to use for shared services.
For example, deploy a cluster named tkg-services. Because the cluster will provide services to all of the other clusters in the instance, it is recommended to use the prod cluster plan rather than the dev plan.
vSphere:
tkg create cluster tkg-services --plan prod --vsphere-controlplane-endpoint <WORKLOAD_CLUSTER_IP_OR_FQDN>
Replace <WORKLOAD_CLUSTER_IP_OR_FQDN> with a static virtual IP (VIP) address for the control plane of the shared services cluster. Ensure that this IP address is not in the DHCP range, but is in the same subnet as the DHCP range. If you mapped a fully qualified domain name (FQDN) to the VIP address, you can specify the FQDN instead of the VIP address.
Amazon EC2 and Azure:
tkg create cluster tkg-services --plan prod
Throughout the rest of these procedures, the cluster that you just deployed is referred to as the shared services cluster.
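Optionally, before continuing, confirm that the new cluster has been created and is in the running state:
tkg get cluster
The tkg-services cluster should appear in the output with a status of running.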
In a terminal, navigate to the folder that contains the unpacked Tanzu Kubernetes Grid extension manifest files, tkg-extensions-v1.2.0+vmware.1/extensions.
You should see folders for authentication, ingress, logging, monitoring, registry, and some YAML files. Run all of the commands in this procedure from this location.
Get the credentials of the shared services cluster on which to deploy Harbor.
tkg get credentials tkg-services
Set the context of kubectl to the shared services cluster.
kubectl config use-context tkg-services-admin@tkg-services
Install the VMware Tanzu Mission Control extension manager on the cluster.
The Tanzu Kubernetes Grid extensions and Tanzu Mission Control both use the same extensions manager service. You must install the extensions manager even if you do not intend to use Tanzu Mission Control.
kubectl apply -f tmc-extension-manager.yaml
You should see confirmation that a namespace, resource definitions, a service account, an RBAC role, and a role binding for the extension-manager service are all created.
namespace/vmware-system-tmc created
customresourcedefinition.apiextensions.k8s.io/agents.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensions.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionresourceowners.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionintegrations.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionconfigs.intents.tmc.cloud.vmware.com created
serviceaccount/extension-manager created
clusterrole.rbac.authorization.k8s.io/extension-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/extension-manager-rolebinding created
service/extension-manager-service created
deployment.apps/extension-manager created
Install the Kapp controller on the shared services cluster.
kubectl apply -f kapp-controller.yaml
You should see confirmation that a service account, resource definition, and RBAC role are created for the kapp-controller service.
serviceaccount/kapp-controller-sa created
customresourcedefinition.apiextensions.k8s.io/apps.kappctrl.k14s.io created
deployment.apps/kapp-controller created
clusterrole.rbac.authorization.k8s.io/kapp-controller-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/kapp-controller-cluster-role-binding created
Deploy cert-manager, which provides automated certificate management, on the shared services cluster.
kubectl apply -f ../cert-manager/
Deploy Contour on the shared services cluster.
Harbor requires Contour to be present on the cluster, to provide ingress control. For information about deploying Contour, see Implementing Ingress Control with Contour.
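Optionally, before continuing, confirm that the Contour and Envoy pods are running. This is a quick sanity check, assuming Contour was deployed with its default tanzu-system-ingress namespace, as shown in the pod listing later in this topic:
kubectl get pods -n tanzu-system-ingress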
Create a namespace for the Harbor service on the shared services cluster.
kubectl apply -f registry/harbor/namespace-role.yaml
You should see confirmation that a tanzu-system-registry namespace, service account, and RBAC role bindings are created.
namespace/tanzu-system-registry created
serviceaccount/harbor-extension-sa created
role.rbac.authorization.k8s.io/harbor-extension-role created
rolebinding.rbac.authorization.k8s.io/harbor-extension-rolebinding created
clusterrole.rbac.authorization.k8s.io/harbor-extension-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/harbor-extension-cluster-rolebinding created
Your shared services cluster is now ready for you to deploy the Harbor service on it.
After you have deployed a shared services cluster and deployed Contour on it, you can deploy the Harbor service.
Make a copy of the harbor-data-values.yaml.example file and name it harbor-data-values.yaml.
cp registry/harbor/harbor-data-values.yaml.example registry/harbor/harbor-data-values.yaml
Open harbor-data-values.yaml in a text editor.
Set the mandatory passwords and secrets for the Harbor service. You can do this in one of two ways:
To automatically generate random passwords and secrets, run the following script:
bash registry/harbor/generate-passwords.sh registry/harbor/harbor-data-values.yaml
To set your own passwords and secrets, update the following entries in harbor-data-values.yaml:
harborAdminPassword
secretKey
database.password
core.secret
core.xsrfKey
jobservice.secret
registry.secret
Specify other settings in harbor-data-values.yaml.
Set the hostname setting to the hostname you want to use to access Harbor, for example harbor.system.tanzu.
To use your own certificates, update the tls.crt, tls.key, and ca.crt settings with the contents of your certificate, key, and CA certificate. The certificate can be signed by a trusted authority or be self-signed. If you leave these blank, Tanzu Kubernetes Grid automatically generates a self-signed certificate.
If you used the generate-passwords.sh script, optionally update the harborAdminPassword with something that is easier to remember.
Optionally update the persistence settings to specify how Harbor stores data.
If you need to store a large quantity of container images in Harbor, set persistence.persistentVolumeClaim.registry.size to a larger number.
If you do not update the storageClass under the persistence settings, Harbor uses the shared services cluster's default storageClass. If the default storageClass, or a storageClass that you specify in harbor-data-values.yaml, supports the accessMode ReadWriteMany, you must update the persistence.persistentVolumeClaim accessMode settings for registry, jobservice, database, redis, and trivy from ReadWriteOnce to ReadWriteMany. vSphere 7 with VMware vSAN 7 supports accessMode: ReadWriteMany, but vSphere 6.7u3 does not. If you are using vSphere 7 without vSAN, or you are using vSphere 6.7u3, use ReadWriteOnce.
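For illustration only, the following sketch shows what the persistence settings for the registry component might look like in harbor-data-values.yaml, using placeholder values. The exact key layout comes from harbor-data-values.yaml.example in your extensions bundle, so verify it there before editing:
persistence:
  persistentVolumeClaim:
    registry:
      storageClass: ""            # an empty string uses the cluster's default storageClass
      accessMode: ReadWriteOnce   # change to ReadWriteMany only if the storageClass supports it
      size: 100Gi                 # increase if you need to store a large quantity of images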
Optionally update the other Harbor settings. The settings that are available in harbor-data-values.yaml
are a subset of the settings that you set when deploying open source Harbor with Helm. For information about the other settings that you can configure, see Deploying Harbor with High Availability via Helm in the Harbor documentation.
Create a Kubernetes secret named harbor-data-values with the values that you set in harbor-data-values.yaml.
kubectl create secret generic harbor-data-values --from-file=values.yaml=registry/harbor/harbor-data-values.yaml -n tanzu-system-registry
Deploy the Harbor extension.
kubectl apply -f registry/harbor/harbor-extension.yaml
You should see the confirmation extension.clusters.tmc.cloud.vmware.com/harbor created.
View the status of the Harbor extension.
kubectl get extension harbor -n tanzu-system-registry
You should see information about the Harbor extension.
NAME STATE HEALTH VERSION
harbor 3
View the status of the Harbor service itself.
kubectl get app harbor -n tanzu-system-registry
The status of the Harbor app should show Reconcile succeeded when Harbor has deployed successfully.
NAME DESCRIPTION SINCE-DEPLOY AGE
harbor Reconcile succeeded 111s 5m11s
If the status is not Reconcile succeeded, view the full status details of the Harbor service.
Viewing the full status can help you to troubleshoot the problem.
kubectl get app harbor -n tanzu-system-registry -o yaml
Check that the new services are running by listing all of the pods that are running in the cluster.
kubectl get pods -A
In the tanzu-system-registry namespace, you should see the harbor core, clair, database, jobservice, notary, portal, redis, registry, and trivy services running in pods with names similar to harbor-registry-76b6ccbc75-vj4jv.
NAMESPACE NAME READY STATUS RESTARTS AGE
[...]
tanzu-system-ingress contour-59687cc5ff-7mjdr 1/1 Running 79 4d22h
tanzu-system-ingress contour-59687cc5ff-dwv7t 1/1 Running 82 4d22h
tanzu-system-ingress envoy-pjwhm 1/2 Running 45 4d22h
tanzu-system-registry harbor-clair-7d87449c88-xmsgr 2/2 Running 0 5m54s
tanzu-system-registry harbor-core-845b69754d-flc8k 1/1 Running 0 5m54s
tanzu-system-registry harbor-database-0 1/1 Running 0 5m53s
tanzu-system-registry harbor-jobservice-864dfd7f57-bx8bq 1/1 Running 0 5m53s
tanzu-system-registry harbor-notary-server-5b959686cd-68sg5 1/1 Running 0 5m53s
tanzu-system-registry harbor-notary-signer-56bcf9846b-g6q99 1/1 Running 0 5m52s
tanzu-system-registry harbor-portal-84dd9c64-fhmh4 1/1 Running 0 5m52s
tanzu-system-registry harbor-redis-0 1/1 Running 0 5m52s
tanzu-system-registry harbor-registry-76b6ccbc75-vj4jv 2/2 Running 0 5m52s
tanzu-system-registry harbor-trivy-0 1/1 Running 0 5m51s
vmware-system-tmc extension-manager-d7cc7fcbb-lw4q4 1/1 Running 0 8d
vmware-system-tmc kapp-controller-7c98dff676-jm77p 1/1 Running 0 8d
Obtain the Harbor CA certificate from the harbor-tls secret in the tanzu-system-registry namespace.
kubectl -n tanzu-system-registry get secret harbor-tls -o=jsonpath="{.data.ca\.crt}" | base64 -d
Make a copy of the output. You will need this certificate in Install the Tanzu Kubernetes Grid Connectivity API on the Management Cluster.
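For convenience, you can redirect the same command's output into a local file, for example a file named harbor-ca.crt (the file name here is just a placeholder), so that the certificate is ready to paste later:
kubectl -n tanzu-system-registry get secret harbor-tls -o=jsonpath="{.data.ca\.crt}" | base64 -d > harbor-ca.crt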
The Harbor service is running in the shared services cluster. You must now apply a label to this cluster to identify it as a provider of a shared service.
After you have deployed Harbor on the shared services cluster, you must apply the tanzu-services label to it, to inform the management cluster and other Tanzu Kubernetes clusters that this is the shared services cluster for the Tanzu Kubernetes Grid instance.
Set the context of kubectl to the context of your management cluster.
For example, if your cluster is named mgmt-cluster, run the following command.
kubectl config use-context mgmt-cluster-admin@mgmt-cluster
Set a cluster-role label on the shared services cluster to mark this Tanzu Kubernetes cluster as a shared services cluster.
kubectl label cluster.cluster.x-k8s.io/tkg-services cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
You should see the confirmation cluster.cluster.x-k8s.io/tkg-services labeled.
Check that the label has been correctly applied by running the tkg get cluster command.
tkg get cluster --include-management-cluster
You should see that the tkg-services cluster has the tanzu-services label.
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
another-cluster default running 1/1 1/1 v1.19.1+vmware.2 <none>
tkg-services default running 1/1 1/1 v1.19.1+vmware.2 tanzu-services
mgmt-cluster tkg-system running 1/1 1/1 v1.19.1+vmware.2 management
The management cluster and the other Tanzu Kubernetes clusters that it manages now identify this cluster as the provider of shared services for the Tanzu Kubernetes Grid instance.
Before you can use the Harbor instance that you have deployed as a shared service, you must ensure that you can connect to it from the Kubernetes nodes running on the Tanzu Kubernetes clusters. To do this, you can deploy the Tanzu Kubernetes Grid Connectivity API on the management cluster. The connectivity API connects Kubernetes nodes that are running in Tanzu Kubernetes clusters to the Harbor shared service. It also injects the Harbor CA certificate and other connectivity API related configuration into Tanzu Kubernetes cluster nodes.
Use either the tar command or the extraction tool of your choice to unpack the bundle of files for the Tanzu Kubernetes Grid connectivity API.
tar -xzf tkg-connectivity-manifests-v1.2.0-vmware.2.tar.gz
For convenience, unpack the bundle in the same location as the one from which you run tkg and kubectl commands. The bundle unpacks to a folder named manifests.
Open the file manifests/tanzu-registry/values.yaml in a text editor.
Update values.yaml with information about your Harbor deployment and how clusters should connect to it.
registry.enabled: If set to true, connectivity to Harbor is configured in all node machines in the Tanzu Kubernetes Grid instance by enabling the tanzu-registry-webhook. If not specified or set to false, connectivity is not configured, for example for debugging purposes.
registry.dnsOverride: If set to true, the DNS entry for the Harbor FQDN is overwritten by injecting entries into /etc/hosts of every node machine. This is helpful when you cannot resolve the Harbor FQDN from the node machines. When set to true, registry.vip cannot be empty. If set to false, the DNS entry for the Harbor FQDN is used, and is expected to be resolvable from the node machines.
registry.fqdn: Set the FQDN to harbor.system.tanzu, or whatever FQDN you specified as the hostname in the harbor-data-values.yaml file in Deploy Harbor on the Shared Services Cluster.
registry.vip: Set an IP address to use to connect to the shared services cluster. If you set registry.bootstrapProxy to true, registry.vip can be any address that does not conflict with other addresses in your environment, for example 1.2.3.4. If you set registry.bootstrapProxy to false, registry.vip should be a routable address in the network, for example when you have an existing Load Balancer VIP in front of Harbor.
registry.bootstrapProxy: If set to true, iptables rules are added that proxy connections to the VIP to the Harbor cluster. The iptables rules are created only for bootstrapping, when the kube-xx images are pulled from the registry during cluster creation. The iptables rules persist for as long as the cluster is running, so make sure that registry.vip does not conflict with any IP ranges that will be used in your environment, for example, a VIP range that is allocated for load balancers. If set to false, the IP address specified in registry.vip should be a routable address in the network, for example when you have an existing Load Balancer VIP in front of Harbor.
registry.rootCA: Paste in the contents of the CA certificate of the Harbor service, which you obtained in Deploy Harbor on the Shared Services Cluster. Make sure that the CA contents are indented by exactly 4 spaces.
For example:
#@data/values
---
image: registry.tkg.vmware.run/tkg-connectivity/tanzu-registry-webhook:v1.2.0_vmware.2
imagePullPolicy: IfNotPresent
registry:
enabled: "true"
dnsOverride: "false"
fqdn: "harbor.system.tanzu"
vip: "1.2.3.4"
bootstrapProxy: "true"
rootCA: |
-----BEGIN CERTIFICATE-----
MIIDOzCCAiOgAwIBAgIQYON5MUM9BV9KBMLGkPUv1TANBgkqhkiG9w0BAQsFADAt
MRcwFQYDVQQKEw5Qcm9qZWN0IEhhcmJvcjESMBAGA1UEAxMJSGFyYm9yIENBMB4X
DTIwMTAxNDE2MDUzN1oXDTMwMTAxMjE2MDUzN1owLTEXMBUGA1UEChMOUHJvamVj
dCBIYXJib3IxEjAQBgNVBAMTCUhhcmJvciBDQTCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAMyo6F/yYjIcHpriLYFWwkD0/oRO2rAObVBB5TguG3wrk1mk
lT0nRFX0RPLCRhnTKbZusQERchkL1fcT6utPHmG+Aqv7MfnZB6Dpm8kbodl3REZ5
Je5JPCtGH8pwMBPd5YcHWltUefaEExccjngtdEjRMx8So75FQQVlkF5Q7m+++FZG
iSTpeX/3nUd3CiIYMxTBqgr32tDXQV2EKs+JxG2AVqq3s7AQkXCmCDsRGsYKDzC9
NYGpcHMLqxAnXxi0RE/e6rXVo9MxjSfRFs0zBcUpL3wv7aAclwSegiYzhLFgonvU
gfLLkKZKUxVKtW0+lBV7VSfwEq8i7qDYduwu+9MCAwEAAaNXMFUwDgYDVR0PAQH/
BAQDAgIEMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8E
BTADAQH/MBMGA1UdEQQMMAqCCGhhcmJvcmNhMA0GCSqGSIb3DQEBCwUAA4IBAQC9
G18HUVoLcXtaKHpVgOlS0ESxDE6kqIF9bP0Jt0+6JpoYBDXWfYqq2jWGAxA/mA0L
PNvkMMQ/cMNAdWCdRgqd/SqtXyhUXpYfO/4NAyCB0DADNwknYrq1wiTUCxFrCDZY
fhwUoSW1m7T41feZ1cxpn8j9nHIvIYfFwMXQzm7qBqj4Lu0Ocl3LQbvXHqcuRMFz
9/w+GwBfrlgelZlZN+f+Kar2C7Izz2HXxyhl+OhqjDpZOQKofSwpetp/bNqzYd61
wzI/VC1mWvdP1Z28yuvpnOvosEf7tvEvCY2A4Rh/mukFg5wMKknh/KwW7Hsc2L7n
O/dBaDrBpUksTNXf66wg
-----END CERTIFICATE-----
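Optionally, to check the result of your edits before anything is applied to the cluster, you can render the manifests with ytt on its own, without piping the output to kubectl, and review the generated resources:
ytt --ignore-unknown-comments -f manifests/tanzu-registry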
In a terminal, navigate to the location that contains the manifests folder.
Run the following command to install the tanzu-registry-webhook on the management cluster. This webhook is bundled with the Tanzu Kubernetes Grid connectivity API components.
Make sure that the context of kubectl is still set to the management cluster, then run the following command.
ytt --ignore-unknown-comments -f manifests/tanzu-registry | kubectl apply -f -
You should see confirmation of the creation of the tanzu-system-registry namespace, the tanzu-registry-webhook, services, service accounts, and role bindings on the management cluster.
namespace/tanzu-system-registry created
issuer.cert-manager.io/tanzu-registry-webhook created
certificate.cert-manager.io/tanzu-registry-webhook-certs created
configmap/tanzu-registry-configuration created
service/tanzu-registry-webhook created
serviceaccount/tanzu-registry-webhook created
clusterrolebinding.rbac.authorization.k8s.io/tanzu-registry-webhook created
clusterrole.rbac.authorization.k8s.io/tanzu-registry-webhook created
deployment.apps/tanzu-registry-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/tanzu-registry-webhook created
If you already have a routable Load Balancer VIP for accessing Harbor, you do not need the full TKG connectivity API component. The only part that you need to deploy is this tanzu-registry-webhook, to ensure that the Harbor CA certificate is injected, or to inject entries into /etc/hosts of every node machine if you need to override the DNS. In this case, you can jump ahead to Connect to the Harbor User Interface and skip the next steps that install the TKG connectivity operator. The labels and annotations on the Harbor httpproxy are also not required in this scenario.
Deploy the Tanzu Kubernetes Grid connectivity operator on the management cluster.
ytt -f manifests/tkg-connectivity-operator | kubectl apply -f -
You should see confirmation of the creation of the tanzu-system-connectivity namespace and the tkg-connectivity-operator deployment and its related resources.
namespace/tanzu-system-connectivity created
serviceaccount/tkg-connectivity-operator created
deployment.apps/tkg-connectivity-operator created
clusterrolebinding.rbac.authorization.k8s.io/tkg-connectivity-operator created
clusterrole.rbac.authorization.k8s.io/tkg-connectivity-operator created
configmap/tkg-connectivity-docker-image-config created
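Optionally, confirm that the connectivity operator pod is running before continuing:
kubectl get pods -n tanzu-system-connectivity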
Set the context of kubectl back to the shared services cluster on which you deployed Harbor.
kubectl config use-context tkg-services-admin@tkg-services
Annotate an HTTP proxy resource for Harbor with the IP address that you specified in registry.vip in manifests/tanzu-registry/values.yaml.
For example, if you set registry.bootstrapProxy to true and registry.vip to 1.2.3.4, run the following command:
kubectl -n tanzu-system-registry annotate httpproxy harbor-httpproxy connectivity.tanzu.vmware.com/vip=1.2.3.4
Export the HTTP proxy resource for the Harbor service.
kubectl -n tanzu-system-registry label httpproxy harbor-httpproxy connectivity.tanzu.vmware.com/export=
Setting this annotation and label enables the creation of a service in the shared services cluster with this VIP as the load balancer IP address. Container images for newly created workloads are pulled through this channel, so you could potentially use a different IP address in the annotation.
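To confirm that the annotation and label were applied, you can optionally inspect the httpproxy resource and check its metadata:
kubectl -n tanzu-system-registry get httpproxy harbor-httpproxy -o yaml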
The Tanzu connectivity API and webhook are now running in your management cluster. The Tanzu Kubernetes cluster on which Harbor is running is now recognized as the shared services cluster for this Tanzu Kubernetes Grid instance. Connections to the registry's virtual IP address are proxied to the Harbor shared services cluster.
The Harbor UI is exposed by the Envoy service load balancer that is running in the Contour extension on the shared services cluster. To allow users to connect to the Harbor UI, you must map the address of the Envoy service load balancer to the hostname of the Harbor service, for example harbor.system.tanzu. How you map the address of the Envoy service load balancer to the hostname depends on whether your Tanzu Kubernetes Grid instance is running on vSphere, or on Amazon EC2 or Azure.
Obtain the address of the Envoy service load balancer.
kubectl get svc envoy -n tanzu-system-ingress -o jsonpath='{.status.loadBalancer.ingress[0]}'
On vSphere, the load balancer address is an IP address. On Amazon EC2, it has an address similar to a82ebae93a6fe42cd66d9e145e4fb292-1299077984.us-west-2.elb.amazonaws.com. On Azure, it is an IP address similar to 20.54.226.44.
Map the address of the Envoy service load balancer to the hostname of the Harbor service.
vSphere: If you deployed Harbor on a shared services cluster that is running on vSphere, you must add an IP address to hostname mapping in /etc/hosts or in your DNS server. For example, if the IP address is 10.93.9.100, add the following to /etc/hosts:
10.93.9.100 harbor.system.tanzu
On Windows machines, the equivalent to /etc/hosts is C:\Windows\System32\Drivers\etc\hosts.
Amazon EC2 or Azure: If you deployed Harbor on a shared services cluster that is running on Amazon EC2 or Azure, you must create two DNS CNAME records for the Harbor hostname on a DNS server on the Internet:
A CNAME record for the Harbor hostname, for example harbor.system.tanzu, that you configured in harbor-data-values.yaml, that points to the FQDN of the Envoy service load balancer.
A CNAME record for the Notary service that is running in Harbor, that points to the FQDN of the Envoy service load balancer. For example, if the Harbor hostname is harbor.system.tanzu, the CNAME record is notary.harbor.system.tanzu.
Users can now connect to the Harbor UI by entering harbor.system.tanzu in a Web browser.
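Optionally, to verify that the hostname resolves and that Harbor responds before logging in, you can test the endpoint from a machine where the mapping is in place. The -k flag skips certificate verification and is only appropriate if you are using a self-signed certificate:
curl -k https://harbor.system.tanzu/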
Now that Harbor is set up as a shared service, you can push images to it to make them available to your Tanzu Kubernetes clusters to pull.
Save the Harbor CA certificate that you obtained earlier on your local machine, in /etc/docker/certs.d/harbor.system.tanzu/ca.crt.
Log in to the Harbor registry with the user admin.
docker login harbor.system.tanzu -u admin
When prompted, enter the harborAdminPassword that you set when you deployed the Harbor extension on the shared services cluster.
Tag an existing image that you have pulled locally, for example nginx:1.7.9.
docker tag nginx:1.7.9 harbor.system.tanzu/library/nginx:1.7.9
Push the image to the Harbor registry.
docker push harbor.system.tanzu/library/nginx:1.7.9
Create a Kubernetes deployment on a workload cluster that uses the Docker image that you pushed to Harbor.
cat <<EOF | kubectl --kubeconfig ./workload-cls.kubeconfig apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.system.tanzu/library/nginx:1.7.9
        ports:
        - containerPort: 80
EOF
Validate that the deployment is running and using the images that it pulled from Harbor.
kubectl --kubeconfig ./workload-cls.kubeconfig get po
You should see that the nginx pods are running.
NAME READY STATUS RESTARTS AGE
nginx-864499db5f-kd9l6 1/1 Running 0 13s
nginx-864499db5f-p4s9p 1/1 Running 0 13s
nginx-864499db5f-zr7nm 1/1 Running 0 13s
In order for the Tanzu Kubernetes clusters that you deploy in this Tanzu Kubernetes Grid instance to pull images from the Harbor shared service rather than over the Internet, you must push those images into your new Harbor registry. This procedure is optional if you have Internet connectivity to pull external images.
NOTE: If your Tanzu Kubernetes Grid instance is running in an Internet-restricted environment, you must perform these steps on a machine that has an Internet connection, that can also access the Harbor registry that you have just deployed as a shared service.
Set the FQDN of the Harbor registry that is running as a shared service as an environment variable.
On Windows platforms, use the SET command instead of export. Include the name of the default project in the variable. For example, if you set the Harbor hostname to harbor.system.tanzu, set the following:
export TKG_CUSTOM_IMAGE_REPOSITORY=harbor.system.tanzu/library
Follow the procedure in Deploy Tanzu Kubernetes Grid to an Offline Environment to generate and run the publish-images.sh script. Perform steps 4 to 10.
When the script finishes, add or update the following rows in the ~/.tkg/config.yaml file.
These variables ensure that Tanzu Kubernetes Grid always pulls images from the Harbor instance that is running as a shared service, rather than from the external internet.
TKG_CUSTOM_IMAGE_REPOSITORY: harbor.system.tanzu/library
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
Because the tanzu-registry-webhook injects the Harbor CA certificate into cluster nodes, TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY should always be set to false when using Harbor as a shared service.
If your Tanzu Kubernetes Grid instance is running in an Internet-restricted environment, you can disconnect the Internet connection now.
You can now use the tkg create cluster command to deploy Tanzu Kubernetes clusters, and the images will be pulled from the Harbor registry that is running in the shared services cluster. You can push images to the Harbor registry to make them available to all clusters that are running in the Tanzu Kubernetes Grid instance. Connections between Harbor and containerd are secure, regardless of whether you use a trusted or a self-signed certificate for Harbor.
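For example, to deploy a workload cluster on vSphere after the registry variables are set, you might run the following command. The cluster name and control plane VIP shown here are placeholders:
tkg create cluster workload-cluster-1 --plan dev --vsphere-controlplane-endpoint 10.93.9.101   # placeholder name and VIP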
If you need to make changes to the configuration of the Harbor extension after deployment, follow these steps to update your deployed Harbor extension.
Update the Harbor configuration in registry/harbor/harbor-data-values.yaml.
Update the Kubernetes secret, which contains the Harbor configuration.
This command assumes that you are running it from tkg-extensions-v1.2.0+vmware.1/extensions.
kubectl create secret generic harbor-data-values --from-file=values.yaml=registry/harbor/harbor-data-values.yaml -n tanzu-system-registry -o yaml --dry-run | kubectl replace -f-
Note that the final - on the kubectl replace command above is necessary to instruct kubectl to accept the input being piped to it from the kubectl create secret command.
The Harbor extension is reconciled using the new values that you added. The changes should take effect in five minutes or less, because the Kapp controller, which handles the reconciliation, synchronizes every five minutes.
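To monitor the reconciliation, you can re-run the app status command on the shared services cluster and wait for the description to return to Reconcile succeeded:
kubectl get app harbor -n tanzu-system-registry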