Prepare an Internet-Restricted Environment

You can deploy Tanzu Kubernetes Grid management clusters and Tanzu Kubernetes (workload) clusters in environments that are not connected to the Internet, such as:

  • Proxied environments
  • Airgapped environments, with no physical connection to the Internet

This topic explains how to deploy management clusters to internet-restricted environments on vSphere or AWS.

You do not need to perform these procedures if you are using Tanzu Kubernetes Grid in a connected environment that can pull images over an external Internet connection.

General Prerequisites

Before you can deploy management clusters and Tanzu Kubernetes clusters in an Internet-restricted environment, you must have:

  • An Internet-connected Linux bootstrap machine that:
    • Is not inside the internet-restricted environment or can access the domains listed in Proxy Server Allowlist.
    • Has a minimum of 2 GB RAM, 2 vCPUs, and 30 GB of hard disk space.
    • Has the Docker client app installed.
    • Has the Tanzu CLI installed. See Install the Tanzu CLI and Other Tools to download, unpack, and install the Tanzu CLI binary on your Internet-connected system.
    • Has imgpkg installed. (Not required if you plan on using the Harbor registry.)
    • Has yq v4.9.2 or later installed.
    • Has the Carvel tools installed, if you intend to install one or more of the optional packages provided by Tanzu Kubernetes Grid, such as Harbor. For more information, see Install the Carvel Tools.
  • A way for cluster VMs to access images in the private registry:
    • Proxied environments: An egress proxy server that lets cluster VMs access the registry.
      • When you deploy a management cluster in this proxied environment, set the TKG_*_PROXY variables in the cluster configuration file to the proxy server’s address, and set TKG_PROXY_CA_CERT to the proxy server’s CA certificate if that certificate is self-signed. See Configure Proxies and the example after this list.
    • Airgapped environments: A USB thumb drive or other medium for bringing the private registry behind an airgap, after the registry is populated with images. See Copy Images into an Airgapped Environment.
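
For the proxied case mentioned above, the following is a minimal sketch of the relevant settings in a management cluster configuration file. The proxy address and port (proxy.example.com:3128) and the no-proxy list are placeholders for your environment; see Configure Proxies for the full set of options.

TKG_HTTP_PROXY: "http://proxy.example.com:3128"
TKG_HTTPS_PROXY: "http://proxy.example.com:3128"
TKG_NO_PROXY: "127.0.0.1,localhost,.svc,.svc.cluster.local,PRIVATE-REGISTRY-HOSTNAME"
TKG_PROXY_CA_CERT: "LS0t[...]tLS0tLQ=="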

vSphere Prerequisites and Architecture

vSphere Architecture

An internet-restricted Tanzu Kubernetes Grid installation on vSphere includes firewalls and communication paths between its major components, as shown in the following diagram.

Diagram: Airgapped TKG on vSphere

On vSphere, in addition to the general prerequisites above, you must:

  • Upload to vSphere the OVAs from which node VMs are created. See Import the Base Image Template into vSphere in Deploy Management Clusters to vSphere.

    After the VM is created, if you cannot log in with the default username and password, reset the password by using GNU GRUB. If the VM runs Photon OS, follow the procedure described in Resetting a Lost Root Password.

  • Log in to the jumpbox as root, and enable remote SSH access as follows. For a command-only alternative, see the sketch after this list.

    1. Open the file /etc/ssh/sshd_config in an editor, for example nano /etc/ssh/sshd_config.
    2. In the Authentication section of the file, add the line PermitRootLogin yes. If the line already exists but is commented out, remove the leading “#”.
    3. Save the updated /etc/ssh/sshd_config file.
    4. Restart the SSH server by running service sshd restart.
  • Install and configure a private Docker-compatible container registry such as Harbor, Docker, or Artifactory as follows. This registry runs outside of Tanzu Kubernetes Grid and is separate from any registry deployed as a shared service for clusters:

    • Install the registry within your firewall.
    • You can configure the container registry with SSL certificates signed by a trusted CA, or with self-signed certificates.
    • The registry must not implement user authentication. For example, if you use a Harbor registry, the project must be public, not private.
    • To install Harbor:
      1. Download the binaries for the latest Harbor release.
      2. Follow the Harbor Installation and Configuration instructions in the Harbor documentation.
  • Configure an offline subnet to use as the internet-restricted environment, and associate it with the jumpbox.

  • Set up the DHCP server to allocate private IP addresses to new instances.

  • Create a vSphere distributed switch on a data center to handle the networking configuration of multiple hosts at a time from a central place.
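
As an alternative to editing /etc/ssh/sshd_config by hand in the jumpbox steps above, the following sketch enables root SSH login non-interactively. It assumes the same file and service names as the steps above; adjust the restart command if your distribution manages SSH differently.

# Uncomment the PermitRootLogin setting if it is commented out, or set it to yes.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Append the setting if it was not present at all.
grep -q '^PermitRootLogin yes' /etc/ssh/sshd_config || echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
# Restart the SSH server so the change takes effect.
service sshd restart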

Amazon EC2 Prerequisites and Architecture

Amazon EC2 Architecture

A proxied Tanzu Kubernetes Grid installation on Amazon EC2 includes firewalls and communication paths between its major components, as shown in the following diagram. Security groups (SGs) are automatically created between the control plane and workload domains, and between the workload components and the control plane components.

Diagram: Airgapped TKG on AWS

For a proxied installation on Amazon EC2, in addition to the general prerequisites above, you also need:

  • An Amazon EC2 VPC with no internet gateway (“offline VPC”) configured as described below.
    • Your internet-connected bootstrap machine must be able to access IP addresses within this offline VPC. For more information, see VPC Peering.
  • A private Docker-compatible container registry such as Harbor, Docker, or Artifactory installed and configured as follows. This registry runs outside of Tanzu Kubernetes Grid and is separate from any registry deployed as a shared service for clusters:
    • Install the registry within your firewall.
    • You can configure the container registry with SSL certificates signed by a trusted CA, or with self-signed certificates.
    • The registry must not implement user authentication. For example, if you use a Harbor registry, the project must be public, not private.
    • To install Harbor:
      1. Download the binaries for the latest Harbor release.
      2. Follow the Harbor Installation and Configuration instructions in the Harbor documentation.
  • A Linux bootstrap VM running within your offline VPC, provisioned similarly to the internet-connected machine above.
    • The offline bootstrap VM must be able to reach cluster VMs created by Tanzu Kubernetes Grid directly, without a proxy.

After you create the offline VPC, you must add the following endpoints to it (a VPC endpoint enables private connections between your VPC and supported AWS services):

  • Service endpoints:
    • sts
    • ssm
    • ec2
    • ec2messages
    • elasticloadbalancing
    • secretsmanager
    • ssmmessages

To add the service endpoints to your VPC:

  1. In the AWS console, browse to VPC Dashboard > Endpoints.
  2. For each of the services listed above:
    1. Click Create Endpoint
    2. Search for the service and select it under Service Name
    3. Select your VPC and its Subnets
    4. Enable DNS Name for the endpoint
    5. Select a Security group that allows VMs in the VPC to access the endpoint
    6. Select Policy > Full Access
    7. Click Create endpoint
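
Alternatively, you can create the endpoints with the AWS CLI. The following sketch creates an interface endpoint for each of the services listed above; the VPC, subnet, security group, and region values are placeholders for your offline VPC's resources, and the default endpoint policy grants full access.

# Placeholders: replace with your offline VPC, subnet(s), security group, and region.
VPC_ID="vpc-0123456789abcdef0"
SUBNET_IDS="subnet-0123456789abcdef0"
SG_ID="sg-0123456789abcdef0"
REGION="us-west-2"

for SVC in sts ssm ec2 ec2messages elasticloadbalancing secretsmanager ssmmessages; do
  aws ec2 create-vpc-endpoint \
    --region "${REGION}" \
    --vpc-id "${VPC_ID}" \
    --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.${REGION}.${SVC}" \
    --subnet-ids ${SUBNET_IDS} \
    --security-group-ids "${SG_ID}" \
    --private-dns-enabled
done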

Step 1: Prepare Environment

The following procedures apply both for the initial deployment of Tanzu Kubernetes Grid in an internet-restricted environment and to upgrading an existing internet-restricted Tanzu Kubernetes Grid deployment.

  1. On the machine with an Internet connection on which you installed the Tanzu CLI, run the tanzu init and tanzu mc create commands.

    • The Tanzu CLI alias mc is short for management-cluster.
    • The tanzu mc create command does not need to complete.

    Running tanzu init and tanzu mc create for the first time installs the necessary Tanzu Kubernetes Grid configuration files in the ~/.config/tanzu/tkg folder on your system. The script that you create and run in subsequent steps requires the Bill of Materials (BoM) YAML files in the ~/.config/tanzu/tkg/bom folder to be present on your machine. The scripts in this procedure use the BoM files to identify the correct versions of the different Tanzu Kubernetes Grid component images to pull.

  2. Set the IP address or FQDN of your local registry as an environment variable:

    export TKG_CUSTOM_IMAGE_REPOSITORY="PRIVATE-REGISTRY"
    

    Where PRIVATE-REGISTRY is the IP address or FQDN of your private registry and the name of the project. For example, custom-image-repository.io/yourproject.

    On Windows platforms, use the SET command instead of export.

  3. Set the repository from which to fetch Bill of Materials (BoM) YAML files.

    export TKG_IMAGE_REPO="projects.registry.vmware.com/tkg"
    
  4. If your private Docker registry uses a self-signed certificate, provide the CA certificate in base64-encoded format, for example by running base64 -w 0 your-ca.crt. For a one-line alternative, see the sketch after this procedure.

    export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE=LS0t[...]tLS0tLQ==
    

    If you specify the CA certificate in this option, it is automatically injected into all Tanzu Kubernetes clusters that you create in this Tanzu Kubernetes Grid instance.

    On Windows platforms, use the SET command instead of export.

  5. If your airgapped environment has a DNS server, check that it includes an entry for your private Docker registry.
    If your environment lacks a DNS server, modify overlay files as follows to add the registry into the /etc/hosts files of the TKr Controller and all control plane and worker nodes:

    • Add the following to the ytt overlay file for your infrastructure, ~/.config/tanzu/tkg/providers/infrastructure-IAAS/ytt/IAAS-overlay.yaml, where IAAS is vsphere, aws, or azure.

      #@ load("@ytt:overlay", "overlay")
      #@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
      ---
      apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
      kind: KubeadmControlPlane
      spec:
        kubeadmConfigSpec:
          preKubeadmCommands:
          #! Add nameserver to all k8s nodes
          #@overlay/append
          - echo "PRIVATE-REGISTRY-IP   PRIVATE-REGISTRY-HOSTNAME" >> /etc/hosts
      #@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
      ---
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
      kind: KubeadmConfigTemplate
      spec:
        template:
          spec:
            preKubeadmCommands:
            #! Add nameserver to all k8s nodes
            #@overlay/append
            - echo "PRIVATE-REGISTRY-IP   PRIVATE-REGISTRY-HOSTNAME" >> /etc/hosts
      

      Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry.

    • In your TKr Controller customization overlay file, ~/.config/tanzu/tkg/providers/ytt/03_customizations/01_tkr/tkr_overlay.lib.yaml, add the following into the spec.template.spec section, before the containers block and at the same indent level:

      #@overlay/match missing_ok=True
      hostAliases:
      - ip: PRIVATE-REGISTRY-IP
        hostnames:
        - PRIVATE-REGISTRY-HOSTNAME
      

      Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry.
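
If you prefer not to paste the base64-encoded certificate string by hand in step 4 above, the following one-line sketch populates the variable directly. It assumes your registry's CA certificate is in a local file named your-ca.crt.

export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE="$(base64 -w 0 your-ca.crt)"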

Step 2: Generate the image-copy-list File

Note: In a physically airgapped environment, follow the procedure in Copy Images into an Airgapped Environment instead of this step and Step 3: Run the download-images.sh Script below.

  1. Set environment variables for:

    • The repository from which to fetch Bill of Materials (BoM) YAML files
    • The IP address or FQDN of your offline private registry
    • The CA certificate for the private registry (if your private image registry uses a self-signed certificate)
    • The Tanzu Kubernetes Grid version tag

    For example:

    export TKG_IMAGE_REPO="projects.registry.vmware.com/tkg"
    export TKG_CUSTOM_IMAGE_REPOSITORY="PRIVATE-REGISTRY-IP/PRIVATE-REGISTRY-HOSTNAME"
    export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE="LS0t[...]tLS0tLQ=="
    export TKG_BOM_IMAGE_TAG="v1.5.4"
    

    Where PRIVATE-REGISTRY is the IP address or FQDN of your private registry and the name of the project, for example, custom-image-repository.io/yourproject.

  2. Download the script named gen-publish-images.sh.

    wget https://raw.githubusercontent.com/vmware-tanzu/tanzu-framework/d5b1dc1f3861729e28cb2cd1c744d4dc494522e6/hack/gen-publish-images.sh
    
  3. Make the gen-publish-images script executable.

    chmod +x gen-publish-images.sh
    
  4. Generate the image-copy-list file that lists images with the address of your private Docker registry.

    ./gen-publish-images.sh > image-copy-list
    
  5. Verify that the generated list contains the correct registry address.

    cat image-copy-list
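
    Each line of image-copy-list is an imgpkg copy command that copies one image from the public Tanzu Kubernetes Grid registry into your private registry. The exact image names, tags, and flags are produced by the script from your BoM files; the following line is only an illustration of the expected form, with SOME-IMAGE and SOME-TAG as placeholders:

    imgpkg copy -i projects.registry.vmware.com/tkg/SOME-IMAGE:SOME-TAG --to-repo PRIVATE-REGISTRY/SOME-IMAGE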
    

Step 3: Run the download-images.sh Script

  1. Create the download-images.sh script.

    #!/bin/bash
    
    set -euo pipefail
    
    # The images list file (for example, image-copy-list) must be passed as the first argument.
    images_script=${1:-}
    if [ ! -f "$images_script" ]; then
      echo "You must pass your images list filename as an argument."
      echo "E.g. ./download-images.sh image-copy-list"
      exit 1
    fi
    
    # Extract the unique imgpkg commands from the list.
    commands="$(grep imgpkg "${images_script}" | sort | uniq)"
    
    # Run each command, retrying until it succeeds.
    while IFS= read -r cmd; do
      echo -e "\nrunning $cmd\n"
      until $cmd; do
         echo -e "\nDownload failed. Retrying....\n"
         sleep 1
      done
    done <<< "$commands"
    
  2. Make the download-images script executable.

    chmod +x download-images.sh
    
  3. Log in to your local private registry.

    docker login ${TKG_CUSTOM_IMAGE_REPOSITORY}
    
  4. Run the download-images script on the image-copy-list file to pull the required images from the public Tanzu Kubernetes Grid registry and push them to your private registry.

    ./download-images.sh image-copy-list
    

    If your registry lacks sufficient storage for all of the images listed in image-copy-list, either increase the registry's available storage or remove unneeded images from it, and then re-generate the list and re-run the script.

  5. When the script finishes, do the following, depending on your infrastructure:

    • vSphere: Turn off your Internet connection.

Step 4: Set Environment Variables

As long as the TKG_CUSTOM_IMAGE_REPOSITORY variable remains set, when you deploy clusters, Tanzu Kubernetes Grid will pull images from your local private registry rather than from the external public registry. To make sure that Tanzu Kubernetes Grid always pulls images from the local private registry, run tanzu config set to add TKG_CUSTOM_IMAGE_REPOSITORY to the global Tanzu CLI configuration file, ~/.config/tanzu/config.yaml.

tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY custom-image-repository.io/yourproject
tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY true

If your Docker registry uses self-signed certificates, also add TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE to the global Tanzu CLI configuration file. Provide the CA certificate in base64-encoded format by executing base64 -w 0 your-ca.crt. If you specify TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE, set TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY to false.

tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY custom-image-repository.io/yourproject
tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY false
tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE LS0t[...]tLS0tLQ==

(Amazon EC2) Set Management Cluster’s Load Balancer to Use Internal Scheme

To customize your AWS management cluster’s load balancer to use an internal scheme, which prevents its Kubernetes API server from being accessed and routed over the internet, set AWS_LOAD_BALANCER_SCHEME_INTERNAL to true in your environment:

tanzu config set env.AWS_LOAD_BALANCER_SCHEME_INTERNAL true

The Tanzu CLI persists these env. variables until you unset them by running tanzu config unset.

To temporarily override the value of a variable that you set by running tanzu config set, you can set it as a local environment variable by running export (on Linux and macOS) or SET (on Windows) on the command line, as shown below.
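
For example, the following sketch overrides the skip-TLS-verify setting for the current shell session only, and then removes the override so that the persisted value applies again:

export TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY=true
# ...run your tanzu commands...
unset TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY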

Step 5: Initialize Tanzu Kubernetes Grid

  1. If your offline bootstrap machine does not have a ~/.config/tanzu/tkg or ~/.config/tanzu/tkg/bom directory, run tanzu config init to create the directories. The Tanzu CLI saves the BoM files and provider files from the image repository into the ~/.config/tanzu/tkg/bom and ~/.config/tanzu/tkg/providers directories, respectively.

  2. If your airgapped environment has a DNS server, check that it includes an entry for your private Docker registry.
    If your environment lacks a DNS server, modify overlay files as follows to add the registry into the /etc/hosts files of the TKr Controller and all control plane and worker nodes:

    • Add the following to the ytt overlay file for your infrastructure, ~/.config/tanzu/tkg/providers/infrastructure-IAAS/ytt/IAAS-overlay.yaml, where IAAS is vsphere, aws, or azure.

      #@ load("@ytt:overlay", "overlay")
      
      #@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
      ---
      apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
      kind: KubeadmControlPlane
      spec:
        kubeadmConfigSpec:
          preKubeadmCommands:
          #! Add nameserver to all k8s nodes
          #@overlay/append
          - echo "PRIVATE-REGISTRY-IP   PRIVATE-REGISTRY-HOSTNAME" >> /etc/hosts
      
      #@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
      ---
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
      kind: KubeadmConfigTemplate
      spec:
        template:
          spec:
            preKubeadmCommands:
            #! Add nameserver to all k8s nodes
            #@overlay/append
            - echo "PRIVATE-REGISTRY-IP   PRIVATE-REGISTRY-HOSTNAME" >> /etc/hosts
      

      Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry.

    • In your TKr Controller customization overlay file, ~/.config/tanzu/tkg/providers/ytt/03_customizations/01_tkr/tkr_overlay.lib.yaml, add the following into the spec.template.spec section, before the containers block and at the same indent level:

      #@overlay/match missing_ok=True
      hostAliases:
      - ip: PRIVATE-REGISTRY-IP
        hostnames:
        - PRIVATE-REGISTRY-HOSTNAME
      

      Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry.

What to Do Next

Your Internet-restricted environment is now ready for you to deploy or upgrade Tanzu Kubernetes Grid management clusters and start deploying Tanzu Kubernetes clusters on vSphere or Amazon EC2.

You can also optionally deploy the Tanzu Kubernetes Grid packages and use the Harbor shared service instead of your private Docker registry.

Using the Harbor Shared Service in Internet-Restricted Environments

In Internet-restricted environments, you can set up Harbor as a shared service so that your Tanzu Kubernetes Grid instance uses it instead of an external registry. As described in the procedures above, to deploy Tanzu Kubernetes Grid in an Internet-restricted environment, you must have a private container registry running in your environment before you can deploy a management cluster. This private registry is a central registry that is part of your infrastructure and available to your whole environment, but is not necessarily based on Harbor or supported by VMware. This private registry is not a Tanzu Kubernetes Grid shared service; you deploy that registry later.

After you use this central registry to deploy a management cluster in an Internet-restricted environment, you configure Tanzu Kubernetes Grid so that Tanzu Kubernetes clusters pull images from the central registry rather than from the external Internet. If the central registry uses a trusted CA certificate, connections between Tanzu Kubernetes clusters and the registry are secure. If your central registry uses self-signed certificates, you can set TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY to false and specify the TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE option. Setting this option automatically injects your self-signed certificates into your Tanzu Kubernetes clusters.

In either case, after you use your central registry to deploy a management cluster in an Internet-restricted environment, VMware recommends deploying the Harbor shared service in your Tanzu Kubernetes Grid instance and then configuring Tanzu Kubernetes Grid so that Tanzu Kubernetes clusters pull images from the Harbor shared service managed by Tanzu Kubernetes Grid, rather than from the central registry.

On infrastructures with load balancing, VMware recommends installing the External DNS service alongside the Harbor service, as described in Harbor Registry and External DNS.

For information on how to deploy Harbor, see Deploy Harbor into a Workload or a Shared Services Cluster.
