You can deploy Tanzu Kubernetes Grid management clusters and Tanzu Kubernetes (workload) clusters in environments that are not connected to the Internet, such as:

  • Proxied environments
  • Airgapped environments, with no physical connection to the Internet

This topic explains how to deploy management clusters to internet-restricted environments on vSphere or AWS.

You do not need to perform these procedures if you are using Tanzu Kubernetes Grid in a connected environment that can pull images over an external Internet connection.

General Prerequisites

Before you can deploy management clusters and Tanzu Kubernetes clusters in an Internet-restricted environment, you must have:

  • An Internet-connected Linux bootstrap machine that:
    • Is not inside the internet-restricted environment.
    • Has at least 2 GB of RAM, 2 vCPUs, and 30 GB of hard disk space.
    • Has the Docker client app installed.
    • Has the Tanzu CLI installed. See Install the Tanzu CLI and Other Tools to download, unpack, and install the Tanzu CLI binary on your Internet-connected system.
    • Has imgpkg installed. (Not required if you plan to use a Harbor registry.)
    • Has yq v4.9.2 or later installed.
    • If you intend to install one or more of the optional packages provided by Tanzu Kubernetes Grid, for example, Harbor, the Carvel tools are installed. For more information, see Install the Carvel Tools.
  • A way for cluster VMs to access images in the private registry:
    • Proxied environments: An egress proxy server that lets cluster VMs access the registry.
      • When you deploy a management cluster in this proxied environment, set TKG_*_PROXY variables in the cluster configuration file to the proxy server's address. See Configure Proxies.
    • Airgapped environments: A USB thumb drive or other medium for bringing the private registry behind an airgap, after the registry is populated with images. See Copy Images into an Airgapped Environment.
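
For the proxied case, the TKG_*_PROXY settings mentioned above look like the following in a cluster configuration file. The proxy address and no-proxy list shown here are placeholder values; see Configure Proxies for the full set of options:

```yaml
TKG_HTTP_PROXY: "http://proxy.example.com:3128"
TKG_HTTPS_PROXY: "http://proxy.example.com:3128"
TKG_NO_PROXY: "localhost,127.0.0.1,.svc,.svc.cluster.local"
```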

vSphere Prerequisites and Architecture

vSphere Architecture

An internet-restricted Tanzu Kubernetes Grid installation on vSphere has firewalls and communication paths between its major components as shown in the diagram below.

Diagram: Airgapped TKG on vSphere

On vSphere, in addition to the general prerequisites above, you must:

  • Upload to vSphere the OVAs from which node VMs are created. See Import the Base Image Template into vSphere in Deploy Management Clusters to vSphere.

    After the VM is created, if you cannot log in with the default username and password and the VM runs Photon OS, reset the password by using GNU GRUB, as described in Resetting a Lost Root Password.

  • Log in to the jumpbox as root, and enable remote SSH login as follows:

    1. Open the file /etc/ssh/sshd_config in an editor, for example nano /etc/ssh/sshd_config.
    2. In the Authentication section of the file, add the line PermitRootLogin yes. If the line already exists but is commented out, remove the leading "#".
    3. Save the updated /etc/ssh/sshd_config file.
    4. Restart the SSH server by running service sshd restart.
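
The sshd_config edit above can also be made with a single sed command. The snippet below is a cautious sketch that demonstrates the substitution on a temporary scratch copy rather than the live /etc/ssh/sshd_config; on the jumpbox, target the real file as root and then restart sshd:

```shell
# Demonstrate enabling PermitRootLogin on a scratch copy of sshd_config.
cfg=$(mktemp)
printf '#PermitRootLogin prohibit-password\n' > "$cfg"
# Uncomment the directive (or rewrite its value) to allow root login.
sed -i 's/^#\{0,1\}PermitRootLogin.*/PermitRootLogin yes/' "$cfg"
result=$(cat "$cfg")
rm -f "$cfg"
echo "$result"
```

On the real host, follow this with service sshd restart.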
  • Install and configure a private Docker-compatible container registry such as Harbor, Docker, or Artifactory as follows. This registry runs outside of Tanzu Kubernetes Grid and is separate from any registry deployed as a shared service for clusters:

    • For vSphere, install the registry within your firewall.
    • You can configure the container registry with SSL certificates signed by a trusted CA, or with self-signed certificates.
    • The registry must not implement user authentication. For example, if you use a Harbor registry, the project must be public, not private.
    • To install Harbor:
      1. Download the binaries for the latest Harbor release.
      2. Follow the Harbor Installation and Configuration instructions in the Harbor documentation.
  • Configure an offline subnet to use as the internet-restricted environment, and associate it with the jumpbox.

  • Set up the DHCP server to allocate private IPs to new VMs.

  • Create a vSphere distributed switch on a data center so that you can manage the networking configuration of multiple hosts from a central place.

Amazon EC2 Prerequisites and Architecture

Amazon EC2 Architecture

An internet-restricted Tanzu Kubernetes Grid installation on Amazon EC2 has firewalls and communication paths between its major components as shown in the diagram below. Security groups (SGs) are automatically created between the control plane and workload domains, and between the workload components and control plane components.

Diagram: Airgapped TKG on AWS

For an Internet-restricted installation on Amazon EC2, in addition to the general prerequisites above, you also need:

  • An Amazon EC2 VPC with no internet gateway ("offline VPC") configured as described below.
    • Your internet-connected bootstrap machine must be able to access IP addresses within this offline VPC. For more information, see VPC Peering.
  • A private Docker-compatible container registry such as Harbor, Docker, or Artifactory installed and configured as follows. This registry runs outside of Tanzu Kubernetes Grid and is separate from any registry deployed as a shared service for clusters:
    • For Amazon EC2, install the registry within your offline VPC.
    • You can configure the container registry with SSL certificates signed by a trusted CA, or with self-signed certificates.
    • The registry must not implement user authentication. For example, if you use a Harbor registry, the project must be public, not private.
    • To install Harbor:
      1. Download the binaries for the latest Harbor release.
      2. Follow the Harbor Installation and Configuration instructions in the Harbor documentation.
  • A Linux bootstrap VM running within your offline VPC, provisioned similarly to the internet-connected machine above.
    • The offline bootstrap VM must be able to reach cluster VMs created by Tanzu Kubernetes Grid directly, without a proxy.

After you create the offline VPC, you must add the following endpoints to it. A VPC endpoint enables private connections between your VPC and supported AWS services.

  • Service endpoints:
    • sts
    • ssm
    • ec2
    • ec2messages
    • elasticloadbalancing
    • secretsmanager
    • ssmmessages

To add the service endpoints to your VPC:

  1. In the AWS console, browse to VPC Dashboard > Endpoints.
  2. For each of the services above:
    1. Click Create Endpoint
    2. Search for the service and select it under Service Name
    3. Select your VPC and its Subnets
    4. Enable DNS Name for the endpoint
    5. Select a Security group that allows VMs in the VPC to access the endpoint
    6. Select Policy > Full Access
    7. Click Create endpoint
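
The console steps above can also be scripted with the AWS CLI, using aws ec2 create-vpc-endpoint. The sketch below only prints the commands it would run, since the VPC ID, region, subnets, and security group are environment-specific placeholders:

```shell
# Print (rather than execute) one create-vpc-endpoint command per required service.
services="sts ssm ec2 ec2messages elasticloadbalancing secretsmanager ssmmessages"
region="us-east-1"   # placeholder: use your VPC's region
for svc in $services; do
  echo "aws ec2 create-vpc-endpoint --vpc-id \$VPC_ID --vpc-endpoint-type Interface \\"
  echo "    --service-name com.amazonaws.${region}.${svc} \\"
  echo "    --subnet-ids \$SUBNET_IDS --security-group-ids \$SG_ID --private-dns-enabled"
done
```

Remove the echo wrappers and set the placeholder variables to run the commands for real.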

Step 1: Prepare Environment

The following procedures apply both to the initial deployment of Tanzu Kubernetes Grid in an internet-restricted environment and to upgrading an existing internet-restricted Tanzu Kubernetes Grid deployment.

  1. On the machine with an Internet connection on which you installed the Tanzu CLI, run the tanzu init and tanzu management-cluster create commands.

    • The tanzu management-cluster create command does not need to complete.

    Running tanzu init and tanzu management-cluster create for the first time installs the necessary Tanzu Kubernetes Grid configuration files in the ~/.config/tanzu/tkg folder on your system. The script that you create and run in subsequent steps requires the Bill of Materials (BoM) YAML files in the ~/.config/tanzu/tkg/bom folder to be present on your machine. The scripts in this procedure use the BoM files to identify the correct versions of the different Tanzu Kubernetes Grid component images to pull.

  2. Set the IP address or FQDN of your local registry as an environment variable:

    export TKG_CUSTOM_IMAGE_REPOSITORY="PRIVATE-REGISTRY"
    

    Where PRIVATE-REGISTRY is the IP address or FQDN of your private registry and the name of the project. For example, custom-image-repository.io/yourproject.

    On Windows platforms, use the SET command instead of export.

  3. Set the repository from which to fetch Bill of Materials (BoM) YAML files.

    export TKG_IMAGE_REPO="projects.registry.vmware.com/tkg"
    
  4. If your private Docker registry uses a self-signed certificate, provide the CA certificate in base64-encoded format, for example by running base64 -w 0 your-ca.crt:

    export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE=LS0t[...]tLS0tLQ==
    

    If you specify the CA certificate in this option, it is automatically injected into all Tanzu Kubernetes clusters that you create in this Tanzu Kubernetes Grid instance.

    On Windows platforms, use the SET command instead of export.
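
You can check the base64 round trip locally. This sketch creates a dummy certificate file in place of your-ca.crt (a placeholder name), encodes it with base64 -w 0, and verifies that decoding restores the original bytes:

```shell
# Encode a (dummy) CA certificate the way TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE expects.
printf -- '-----BEGIN CERTIFICATE-----' > dummy-ca.crt
encoded=$(base64 -w 0 dummy-ca.crt)              # single-line base64, no wrapping
decoded=$(printf '%s' "$encoded" | base64 -d)    # decoding must restore the original
rm -f dummy-ca.crt
echo "$encoded"
```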

  5. If your airgapped environment has a DNS server, check that it includes an entry for your private Docker registry.
    If your environment lacks a DNS server, modify overlay files as follows to add the registry into the /etc/hosts files of the TKr Controller and all control plane and worker nodes:

    • Add the following to the ytt overlay file for your infrastructure, ~/.config/tanzu/tkg/providers/infrastructure-IAAS/ytt/IAAS-overlay.yaml where IAAS is vsphere, aws, or azure.

      #@ load("@ytt:overlay", "overlay")
      #@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
      ---
      apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
      kind: KubeadmControlPlane
      spec:
        kubeadmConfigSpec:
          preKubeadmCommands:
          #! Add nameserver to all k8s nodes
          #@overlay/append
          - echo "PRIVATE-REGISTRY-IP   PRIVATE-REGISTRY-HOSTNAME" >> /etc/hosts
      #@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
      ---
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
      kind: KubeadmConfigTemplate
      spec:
        template:
          spec:
            preKubeadmCommands:
            #! Add nameserver to all k8s nodes
            #@overlay/append
            - echo "PRIVATE-REGISTRY-IP   PRIVATE-REGISTRY-HOSTNAME" >> /etc/hosts
      

      Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry.

    • In your TKr Controller customization overlay file, ~/.config/tanzu/tkg/providers/ytt/03_customizations/01_tkr/tkr_overlay.lib.yaml, add the following into the spec.template.spec section, before the containers block and at the same indent level:

      #@overlay/match missing_ok=True
      hostAliases:
      - ip: PRIVATE-REGISTRY-IP
        hostnames:
        - PRIVATE-REGISTRY-HOSTNAME
      

      Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry.

If you are using Harbor, you can use its Configure Proxy Cache feature and skip Step 2 and Step 3 below.

Step 2: Generate the publish-images Script

Note: In a physically airgapped environment, follow the procedure in Copy Images into an Airgapped Environment instead of this step and Step 3: Run the publish-images Script below.

  1. Copy and paste the following shell script in a text editor, and save it as gen-publish-images.sh.

    #!/bin/bash
    # Copyright 2021 VMware, Inc. All Rights Reserved.
    # SPDX-License-Identifier: Apache-2.0
    
    set -eo pipefail
    
    TANZU_BOM_DIR=${HOME}/.config/tanzu/tkg/bom
    INSTALL_INSTRUCTIONS='See https://github.com/mikefarah/yq#install for installation instructions'
    TKG_CUSTOM_IMAGE_REPOSITORY=${TKG_CUSTOM_IMAGE_REPOSITORY:-''}
    TKG_IMAGE_REPO=${TKG_IMAGE_REPO:-''}
    
    
    echodual() {
      echo "$@" 1>&2
      echo "#" "$@"
    }
    
    if [ -z "$TKG_CUSTOM_IMAGE_REPOSITORY" ]; then
      echo "TKG_CUSTOM_IMAGE_REPOSITORY variable is required but is not defined" >&2
      exit 1
    fi
    
    if [ -z "$TKG_IMAGE_REPO" ]; then
      echo "TKG_IMAGE_REPO variable is required but is not defined" >&2
      exit 2
    fi
    
    if ! [ -x "$(command -v imgpkg)" ]; then
      echo 'Error: imgpkg is not installed.' >&2
      exit 3
    fi
    
    if ! [ -x "$(command -v yq)" ]; then
      echo 'Error: yq is not installed.' >&2
      echo "${INSTALL_INSTRUCTIONS}" >&2
      exit 3
    fi
    
    function imgpkg_copy() {
        flags=$1
        src=$2
        dst=$3
        echo ""
        echo "imgpkg copy $flags $src --to-repo $dst"
    }
    
    if [ -n "$TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE" ]; then
      echo $TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE > /tmp/cacrtbase64
      base64 -d /tmp/cacrtbase64 > /tmp/cacrtbase64d.crt
      function imgpkg_copy() {
          flags=$1
          src=$2
          dst=$3
          echo ""
          echo "imgpkg copy $flags $src --to-repo $dst --registry-ca-cert-path /tmp/cacrtbase64d.crt"
      }
    fi
    
    echo "set -eo pipefail"
    echodual "Note that yq must be version above or equal to version 4.9.2 and below version 5."
    
    actualImageRepository="$TKG_IMAGE_REPO"
    
    # Iterate through TKG BoM file to create the complete Image name
    # and then pull, retag and push image to custom registry.
    list=$(imgpkg  tag  list -i "${actualImageRepository}"/tkg-bom)
    for imageTag in ${list}; do
      tanzucliversion=$(tanzu version | head -n 1 | cut -c10-15)
      if [[ ${imageTag} == ${tanzucliversion}* ]]; then
        TKG_BOM_FILE="tkg-bom-${imageTag//_/+}.yaml"
        imgpkg pull --image "${actualImageRepository}/tkg-bom:${imageTag}" --output "tmp" > /dev/null 2>&1
        echodual "Processing TKG BOM file ${TKG_BOM_FILE}"
    
        actualTKGImage=${actualImageRepository}/tkg-bom:${imageTag}
        customTKGImage=${TKG_CUSTOM_IMAGE_REPOSITORY}/tkg-bom
        imgpkg_copy "-i" $actualTKGImage $customTKGImage
    
        # Get components in the tkg-bom.
        # Remove the leading '[' and trailing ']' in the output of yq.
        components=(`yq e '.components | keys | .. style="flow"' "tmp/$TKG_BOM_FILE" | sed 's/^.//;s/.$//'`)
        for comp in "${components[@]}"
        do
        # remove: leading and trailing whitespace, and trailing comma
        comp=`echo $comp | sed -e 's/^[[:space:]]*//' | sed 's/,*$//g'`
        get_comp_images="yq e '.components[\"${comp}\"][]  | select(has(\"images\"))|.images[] | .imagePath + \":\" + .tag' "\"tmp/\"$TKG_BOM_FILE""
    
        flags="-i"
        if [ $comp = "tkg-standard-packages" ]; then
          flags="-b"
        fi
        eval $get_comp_images | while read -r image; do
            actualImage=${actualImageRepository}/${image}
            image2=$(echo "$image" | cut -f1 -d":")
            customImage=$TKG_CUSTOM_IMAGE_REPOSITORY/${image2}
            imgpkg_copy $flags $actualImage $customImage
          done
        done
    
        rm -rf tmp
        echodual "Finished processing TKG BOM file ${TKG_BOM_FILE}"
        echo ""
      fi
    done
    
    # Iterate through TKR BoM file to create the complete Image name
    # and then pull, retag and push image to custom registry.
    list=$(imgpkg  tag  list -i ${actualImageRepository}/tkr-bom)
    for imageTag in ${list}; do
      if [[ ${imageTag} == v* ]]; then
        TKR_BOM_FILE="tkr-bom-${imageTag//_/+}.yaml"
        echodual "Processing TKR BOM file ${TKR_BOM_FILE}"
    
        actualTKRImage=${actualImageRepository}/tkr-bom:${imageTag}
        customTKRImage=${TKG_CUSTOM_IMAGE_REPOSITORY}/tkr-bom
        imgpkg_copy "-i" $actualTKRImage $customTKRImage
        imgpkg pull --image ${actualImageRepository}/tkr-bom:${imageTag} --output "tmp" > /dev/null 2>&1
    
        # Get components in the tkr-bom.
        # Remove the leading '[' and trailing ']' in the output of yq.
        components=(`yq e '.components | keys | .. style="flow"' "tmp/$TKR_BOM_FILE" | sed 's/^.//;s/.$//'`)
        for comp in "${components[@]}"
        do
        # remove: leading and trailing whitespace, and trailing comma
        comp=`echo $comp | sed -e 's/^[[:space:]]*//' | sed 's/,*$//g'`
        get_comp_images="yq e '.components[\"${comp}\"][]  | select(has(\"images\"))|.images[] | .imagePath + \":\" + .tag' "\"tmp/\"$TKR_BOM_FILE""
    
        flags="-i"
        if [ $comp = "tkg-core-packages" ]; then
          flags="-b"
        fi
        eval $get_comp_images | while read -r image; do
            actualImage=${actualImageRepository}/${image}
            image2=$(echo "$image" | cut -f1 -d":")
            customImage=$TKG_CUSTOM_IMAGE_REPOSITORY/${image2}
            imgpkg_copy $flags $actualImage $customImage
          done
        done
    
        rm -rf tmp
        echodual "Finished processing TKR BOM file ${TKR_BOM_FILE}"
        echo ""
      fi
    done
    
    list=$(imgpkg  tag  list -i ${actualImageRepository}/tkr-compatibility)
    for imageTag in ${list}; do
      if [[ ${imageTag} == v* ]]; then
        echodual "Processing TKR compatibility image"
        actualImage=${actualImageRepository}/tkr-compatibility:${imageTag}
        customImage=$TKG_CUSTOM_IMAGE_REPOSITORY/tkr-compatibility
        imgpkg_copy "-i" $actualImage $customImage
        echo ""
        echodual "Finished processing TKR compatibility image"
      fi
    done
    
    list=$(imgpkg  tag  list -i ${actualImageRepository}/tkg-compatibility)
    for imageTag in ${list}; do
      if [[ ${imageTag} == v* ]]; then
        echodual "Processing TKG compatibility image"
        actualImage=${actualImageRepository}/tkg-compatibility:${imageTag}
        customImage=$TKG_CUSTOM_IMAGE_REPOSITORY/tkg-compatibility
        imgpkg_copy "-i" $actualImage $customImage
        echo ""
        echodual "Finished processing TKG compatibility image"
      fi
    done
    
  2. Make the gen-publish-images script executable.

    chmod +x gen-publish-images.sh
    
  3. Generate a publish-images shell script that is populated with the address of your private Docker registry.

    ./gen-publish-images.sh > publish-images.sh
    
  4. Verify that the generated script contains the correct registry address.

    cat publish-images.sh
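
Based on the echo statements in gen-publish-images.sh, the generated file consists of imgpkg copy commands that target your registry; for example (the image tag shown here is hypothetical):

```
set -eo pipefail
# Note that yq must be version above or equal to version 4.9.2 and below version 5.

imgpkg copy -i projects.registry.vmware.com/tkg/tkg-bom:v1.4.0 --to-repo custom-image-repository.io/yourproject/tkg-bom
```

If the registry address is wrong, re-check TKG_CUSTOM_IMAGE_REPOSITORY and re-run gen-publish-images.sh.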
    

Step 3: Run the publish-images Script

  1. Make the publish-images script executable.

    chmod +x publish-images.sh
    
  2. Log in to your local private registry.

    docker login ${TKG_CUSTOM_IMAGE_REPOSITORY}
    
  3. Run the publish-images script to pull the required images from the public Tanzu Kubernetes Grid registry and push them to your private registry.

    ./publish-images.sh
    

    If your registry lacks sufficient storage for all images in the publish-images script, re-generate and re-run the script after either:

  4. When the script finishes, do the following, depending on your infrastructure:

    • vSphere: Turn off your Internet connection.

Step 4: Set Environment Variables

As long as the TKG_CUSTOM_IMAGE_REPOSITORY variable remains set, Tanzu Kubernetes Grid pulls images from your local private registry rather than from the external public registry when you deploy clusters. To make sure that Tanzu Kubernetes Grid always pulls images from the local private registry, add TKG_CUSTOM_IMAGE_REPOSITORY to the global cluster configuration file, ~/.config/tanzu/tkg/config.yaml. If your Docker registry uses self-signed certificates, you can skip certificate verification:

TKG_CUSTOM_IMAGE_REPOSITORY: custom-image-repository.io/yourproject
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: true

Alternatively, to verify the registry's certificate, set TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY to false and specify TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE, providing the CA certificate in base64-encoded format by executing base64 -w 0 your-ca.crt:

TKG_CUSTOM_IMAGE_REPOSITORY: custom-image-repository.io/yourproject
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: LS0t[...]tLS0tLQ==

Step 5: Initialize Tanzu Kubernetes Grid

  1. If your offline bootstrap machine does not have a ~/.config/tanzu/tkg or ~/.config/tanzu/tkg/bom directory, run tanzu config init to create the directories. The tanzu CLI saves the BoM files and provider files from the image repository into the ~/.config/tanzu/tkg/bom and ~/.config/tanzu/tkg/providers directories, respectively.

  2. If your airgapped environment has a DNS server, check that it includes an entry for your private Docker registry.
    If your environment lacks a DNS server, modify overlay files as follows to add the registry into the /etc/hosts files of the TKr Controller and all control plane and worker nodes:

    • Add the following to the ytt overlay file for your infrastructure, ~/.config/tanzu/tkg/providers/infrastructure-IAAS/ytt/IAAS-overlay.yaml where IAAS is vsphere, aws, or azure.

      #@ load("@ytt:overlay", "overlay")
      
      #@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
      ---
      apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
      kind: KubeadmControlPlane
      spec:
        kubeadmConfigSpec:
          preKubeadmCommands:
          #! Add nameserver to all k8s nodes
          #@overlay/append
          - echo "PRIVATE-REGISTRY-IP   PRIVATE-REGISTRY-HOSTNAME" >> /etc/hosts
      
      #@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
      ---
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
      kind: KubeadmConfigTemplate
      spec:
        template:
          spec:
            preKubeadmCommands:
            #! Add nameserver to all k8s nodes
            #@overlay/append
            - echo "PRIVATE-REGISTRY-IP   PRIVATE-REGISTRY-HOSTNAME" >> /etc/hosts
      

      Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry.

    • In your TKr Controller customization overlay file, ~/.config/tanzu/tkg/providers/ytt/03_customizations/01_tkr/tkr_overlay.lib.yaml, add the following into the spec.template.spec section, before the containers block and at the same indent level:

      #@overlay/match missing_ok=True
      hostAliases:
      - ip: PRIVATE-REGISTRY-IP
        hostnames:
        - PRIVATE-REGISTRY-HOSTNAME


      Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry.

  3. (Amazon EC2) On the offline machine, customize your AWS management cluster's load balancer template to use an internal scheme, avoiding the need for a public-facing load balancer. To do this, add the following into the overlay file ~/.config/tanzu/tkg/providers/ytt/03_customizations/internal_lb.yaml:

    #@ load("@ytt:overlay", "overlay")
    #@ load("@ytt:data", "data")
    
    #@overlay/match by=overlay.subset({"kind":"AWSCluster"})
    ---
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    spec:
    #@overlay/match missing_ok=True
      controlPlaneLoadBalancer:
    #@overlay/match missing_ok=True
        scheme: "internal"
    

What to Do Next

Your Internet-restricted environment is now ready for you to deploy or upgrade Tanzu Kubernetes Grid management clusters and start deploying Tanzu Kubernetes clusters on vSphere or Amazon EC2.

You can also optionally deploy the Tanzu Kubernetes Grid packages and use the Harbor shared service instead of your private Docker registry.

Using the Harbor Shared Service in Internet-Restricted Environments

In Internet-restricted environments, you can set up Harbor as a shared service so that your Tanzu Kubernetes Grid instance uses it instead of an external registry. As described in the procedures above, to deploy Tanzu Kubernetes Grid in an Internet-restricted environment, you must have a private container registry running in your environment before you can deploy a management cluster. This private registry is a central registry that is part of your infrastructure and available to your whole environment, but is not necessarily based on Harbor or supported by VMware. This private registry is not a Tanzu Kubernetes Grid shared service; you deploy that registry later.

After you use this central registry to deploy a management cluster in an Internet-restricted environment, you configure Tanzu Kubernetes Grid so that Tanzu Kubernetes clusters pull images from the central registry rather than from the external Internet. If the central registry uses a trusted CA certificate, connections between Tanzu Kubernetes clusters and the registry are secure. If your central registry uses self-signed certificates, you can set TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY to false and specify the TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE option. Setting this option automatically injects your self-signed certificate into your Tanzu Kubernetes clusters.

In either case, after you use your central registry to deploy a management cluster in an Internet-restricted environment, VMware recommends deploying the Harbor shared service in your Tanzu Kubernetes Grid instance and then configuring Tanzu Kubernetes Grid so that Tanzu Kubernetes clusters pull images from the Harbor shared service managed by Tanzu Kubernetes Grid, rather than from the central registry.

On infrastructures with load balancing, VMware recommends installing the External DNS service alongside the Harbor service, as described in Harbor Registry and External DNS.

For information about how to deploy the Harbor shared service, see Deploying Harbor Registry as a Shared Service.
