
This topic describes how to set up your environment so that you can deploy Tanzu Kubernetes Grid management clusters and Tanzu Kubernetes clusters in Internet-restricted environments, that is, environments with no connection to the external Internet. The procedures described here apply only to deployments to vSphere.

If you are using Tanzu Kubernetes Grid to deploy clusters in a connected environment that can pull images over an external internet connection, you do not need to perform this procedure.

Prerequisites

Before you can deploy management clusters and Tanzu Kubernetes clusters in an Internet-restricted environment, you must perform the following actions.

  • Within your firewall, install and configure a private Docker registry. For example, install Harbor, which is the registry against which this procedure has been tested. For information about how to install Harbor, see Harbor Installation and Configuration.
  • Obtain a valid SSL certificate for the Docker registry, signed by a trusted CA. For information about how to obtain the Harbor registry certificate, see the Harbor documentation.
  • Obtain a system with an external internet connection, and follow the instructions in Download and Install the Tanzu Kubernetes Grid CLI to download, unpack, and install the Tanzu Kubernetes Grid CLI binary on that system.
  • Make sure that the internet-connected machine has Docker installed and running.
  • Make sure that you can connect to the private registry from the internet-connected machine.
  • Follow the instructions in Prepare to Deploy Management Clusters to vSphere to create SSH keys and to import into vSphere the OVAs from which node and load balancer VMs are created.

The next steps depend on whether you are using Tanzu Kubernetes Grid 1.1.0, or 1.1.2 and later.

Tanzu Kubernetes Grid 1.1.2 and Later

The procedure to set up an internet-restricted environment so that you can deploy management clusters and Tanzu Kubernetes clusters has been simplified in Tanzu Kubernetes Grid 1.1.2 and subsequent releases.

  1. On the machine with an internet connection on which you have performed the initial setup tasks and installed the Tanzu Kubernetes Grid CLI, install yq 2.x.

    NOTE: You must use yq version 2.x. Version 3.x does not work with this script.

  2. Run the tkg get management-cluster command.

    Running a tkg command for the first time installs the necessary Tanzu Kubernetes Grid configuration files in the ~/.tkg folder on your system. The script that you create and run in subsequent steps requires the files in the ~/.tkg/bom folder to be present on your machine.
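
    The script that you create in the following steps reads three fields from each BoM file: imageConfig.imageRepository, plus the imagePath and tag of every entry under images. The following fragment is illustrative only, but the entry shown corresponds to one of the images that Tanzu Kubernetes Grid uses:

    ```yaml
    imageConfig:
      imageRepository: registry.tkg.vmware.run
    images:
      - imagePath: coredns
        tag: v1.6.7_vmware.1
    ```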

  3. Set the IP address or FQDN of your local registry as an environment variable.

    In the following command example, replace custom-image-repository.io with the address of your private Docker registry.

    export TKG_CUSTOM_IMAGE_REPOSITORY="custom-image-repository.io"
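
    To see what the script in the next step does with this variable, here is its name rewriting reduced to pure shell string interpolation. The image path and tag are example values taken from the image lists in this topic; no docker commands are run:

    ```shell
    # Values the script reads from a BoM file (examples from this topic):
    TKG_CUSTOM_IMAGE_REPOSITORY="custom-image-repository.io"
    actualImageRepository="registry.tkg.vmware.run"   # from imageConfig.imageRepository
    imagePath="coredns"                               # one imagePath entry
    imageTag="v1.6.7_vmware.1"                        # its tag

    # The script composes the public name and its private-registry counterpart:
    actualImage=$actualImageRepository/$imagePath:$imageTag
    customImage=$TKG_CUSTOM_IMAGE_REPOSITORY/$imagePath:$imageTag

    echo "$actualImage"   # registry.tkg.vmware.run/coredns:v1.6.7_vmware.1
    echo "$customImage"   # custom-image-repository.io/coredns:v1.6.7_vmware.1
    ```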
    
  4. Copy and paste the following script in a text editor, and save it as gen-publish-images.sh.

    #!/usr/bin/env bash
    # Copyright 2020 The TKG Contributors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    BOM_DIR=${HOME}/.tkg/bom
    
    if [ -z "$TKG_CUSTOM_IMAGE_REPOSITORY" ]; then
        echo "TKG_CUSTOM_IMAGE_REPOSITORY variable is not defined"
        exit 1
    fi
    
    for TKG_BOM_FILE in "$BOM_DIR"/*.yaml; do
    
        # Get actual image repository from BoM file
        actualImageRepository=$(yq .imageConfig.imageRepository "$TKG_BOM_FILE" | tr -d '"')
    
        # Iterate through BoM file to create the complete Image name
        # and then pull, retag and push image to custom registry
        yq .images "$TKG_BOM_FILE" | jq -c '.[]' | while read -r i; do
    
            # Get imagePath and imageTag
            imagePath=$(jq .imagePath <<<"$i" | tr -d '"')
            imageTag=$(jq .tag <<<"$i" | tr -d '"')
    
            # create complete image names
            actualImage=$actualImageRepository/$imagePath:$imageTag
            customImage=$TKG_CUSTOM_IMAGE_REPOSITORY/$imagePath:$imageTag
    
            echo "docker pull $actualImage"
            echo "docker tag $actualImage $customImage"
            echo "docker push $customImage"
            echo ""
        done
    
    done    
    
  5. Make the script executable.

    chmod +x gen-publish-images.sh
    
  6. Run gen-publish-images.sh to generate a second script, publish-images.sh, that contains the docker pull, tag, and push commands for every required image, targeting your private Docker registry.

    ./gen-publish-images.sh > publish-images.sh
    
  7. Verify that the generated version of the script contains the correct registry address.

    cat publish-images.sh
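
    Instead of reading the whole file, you can also grep it. The following sketch uses a one-line stand-in for the generated file so that it is self-contained; run the same grep commands against your real publish-images.sh:

    ```shell
    # One-line stand-in for the generated script (illustrative only):
    sample=$(mktemp)
    printf 'docker push custom-image-repository.io/coredns:v1.6.7_vmware.1\n' > "$sample"

    # Every push command should target your registry, and no line should
    # still reference the public registry:
    grep -q 'custom-image-repository.io/' "$sample" && \
      ! grep -q 'registry.tkg.vmware.run' "$sample" && \
      echo "all push commands target the private registry"
    ```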
    
  8. Make the script executable.

    chmod +x publish-images.sh
    
  9. Log in to your local private registry.
    docker login ${TKG_CUSTOM_IMAGE_REPOSITORY}
    
  10. Run the script to pull the required images from the public Tanzu Kubernetes Grid registry, retag them, and push them to your private registry.

    ./publish-images.sh
    
  11. When the script finishes, turn off your internet connection.
  12. Run any Tanzu Kubernetes Grid CLI command, for example tkg init --ui.

    The Tanzu Kubernetes Grid installer interface should open.

Your Internet-restricted environment is now ready for you to deploy Tanzu Kubernetes Grid management clusters and Tanzu Kubernetes clusters to vSphere. As long as the TKG_CUSTOM_IMAGE_REPOSITORY variable remains set, when you deploy clusters, Tanzu Kubernetes Grid will pull images from your local private registry rather than from the external public registry.

Tanzu Kubernetes Grid 1.1.0

The procedure to set up an internet-restricted environment so that you can deploy management clusters and Tanzu Kubernetes clusters is manual in Tanzu Kubernetes Grid 1.1.0. For new deployments to internet-restricted environments, it is strongly recommended to use Tanzu Kubernetes Grid 1.1.2 or later.

  1. On the machine with an internet connection on which you have performed the initial setup tasks and installed the Tanzu Kubernetes Grid CLI, pull the following images into your local Docker image store.

    Copy and run the following command without changing it.

    xargs -n1 docker pull << EOF
    registry.tkg.vmware.run/kind/node:v1.18.2_vmware.1
    registry.tkg.vmware.run/calico-all/cni-plugin:v3.11.2_vmware.1
    registry.tkg.vmware.run/calico-all/kube-controllers:v3.11.2_vmware.1
    registry.tkg.vmware.run/calico-all/node:v3.11.2_vmware.1
    registry.tkg.vmware.run/calico-all/pod2daemon:v3.11.2_vmware.1
    registry.tkg.vmware.run/ccm/manager:v1.1.0_vmware.2
    registry.tkg.vmware.run/cluster-api/cluster-api-aws-controller:v0.5.3_vmware.1
    registry.tkg.vmware.run/cluster-api/cluster-api-controller:v0.3.5_vmware.1
    registry.tkg.vmware.run/cluster-api/cluster-api-vsphere-controller:v0.6.4_vmware.1
    registry.tkg.vmware.run/cluster-api/kube-rbac-proxy:v0.4.1_vmware.2
    registry.tkg.vmware.run/cluster-api/kubeadm-bootstrap-controller:v0.3.5_vmware.1
    registry.tkg.vmware.run/cluster-api/kubeadm-control-plane-controller:v0.3.5_vmware.1
    registry.tkg.vmware.run/csi/csi-attacher:v1.1.1_vmware.7
    registry.tkg.vmware.run/csi/csi-livenessprobe:v1.1.0_vmware.7
    registry.tkg.vmware.run/csi/csi-node-driver-registrar:v1.1.0_vmware.7
    registry.tkg.vmware.run/csi/csi-provisioner:v1.4.0_vmware.2
    registry.tkg.vmware.run/csi/volume-metadata-syncer:v1.0.2_vmware.1
    registry.tkg.vmware.run/csi/vsphere-block-csi-driver:v1.0.2_vmware.1
    registry.tkg.vmware.run/cert-manager/cert-manager-controller:v0.11.0_vmware.1
    registry.tkg.vmware.run/cert-manager/cert-manager-cainjector:v0.11.0_vmware.1
    registry.tkg.vmware.run/cert-manager/cert-manager-webhook:v0.11.0_vmware.1
    registry.tkg.vmware.run/coredns:v1.6.7_vmware.1
    registry.tkg.vmware.run/etcd:v3.4.3_vmware.5
    registry.tkg.vmware.run/kube-apiserver:v1.18.2_vmware.1
    registry.tkg.vmware.run/kube-controller-manager:v1.18.2_vmware.1
    registry.tkg.vmware.run/kube-proxy:v1.18.2_vmware.1
    registry.tkg.vmware.run/kube-scheduler:v1.18.2_vmware.1
    registry.tkg.vmware.run/pause:3.2
    EOF
    
  2. Set the IP address or FQDN of your local registry as an environment variable.

    For example, replace <local-registry-address> with my.harbor.example.com.

    LOCAL_REGISTRY=<local-registry-address>
    
  3. Log in to your local private registry.
    docker login ${LOCAL_REGISTRY}
    
  4. Tag all of the images in your image store so that you can push them to the local registry.

    Copy and run the following command without changing it.

    xargs -n2 docker tag << EOF
    registry.tkg.vmware.run/kind/node:v1.18.2_vmware.1 ${LOCAL_REGISTRY}/kind/node:v1.18.2_vmware.1
    registry.tkg.vmware.run/calico-all/cni-plugin:v3.11.2_vmware.1 ${LOCAL_REGISTRY}/calico-all/cni-plugin:v3.11.2_vmware.1
    registry.tkg.vmware.run/calico-all/kube-controllers:v3.11.2_vmware.1 ${LOCAL_REGISTRY}/calico-all/kube-controllers:v3.11.2_vmware.1
    registry.tkg.vmware.run/calico-all/node:v3.11.2_vmware.1 ${LOCAL_REGISTRY}/calico-all/node:v3.11.2_vmware.1
    registry.tkg.vmware.run/calico-all/pod2daemon:v3.11.2_vmware.1 ${LOCAL_REGISTRY}/calico-all/pod2daemon:v3.11.2_vmware.1
    registry.tkg.vmware.run/ccm/manager:v1.1.0_vmware.2 ${LOCAL_REGISTRY}/ccm/manager:v1.1.0_vmware.2
    registry.tkg.vmware.run/cluster-api/cluster-api-aws-controller:v0.5.3_vmware.1  ${LOCAL_REGISTRY}/cluster-api/cluster-api-aws-controller:v0.5.3_vmware.1
    registry.tkg.vmware.run/cluster-api/cluster-api-controller:v0.3.5_vmware.1 ${LOCAL_REGISTRY}/cluster-api/cluster-api-controller:v0.3.5_vmware.1
    registry.tkg.vmware.run/cluster-api/cluster-api-vsphere-controller:v0.6.4_vmware.1 ${LOCAL_REGISTRY}/cluster-api/cluster-api-vsphere-controller:v0.6.4_vmware.1
    registry.tkg.vmware.run/cluster-api/kube-rbac-proxy:v0.4.1_vmware.2 ${LOCAL_REGISTRY}/cluster-api/kube-rbac-proxy:v0.4.1_vmware.2
    registry.tkg.vmware.run/cluster-api/kubeadm-bootstrap-controller:v0.3.5_vmware.1 ${LOCAL_REGISTRY}/cluster-api/kubeadm-bootstrap-controller:v0.3.5_vmware.1
    registry.tkg.vmware.run/cluster-api/kubeadm-control-plane-controller:v0.3.5_vmware.1 ${LOCAL_REGISTRY}/cluster-api/kubeadm-control-plane-controller:v0.3.5_vmware.1
    registry.tkg.vmware.run/csi/csi-attacher:v1.1.1_vmware.7 ${LOCAL_REGISTRY}/csi/csi-attacher:v1.1.1_vmware.7
    registry.tkg.vmware.run/csi/csi-livenessprobe:v1.1.0_vmware.7 ${LOCAL_REGISTRY}/csi/csi-livenessprobe:v1.1.0_vmware.7
    registry.tkg.vmware.run/csi/csi-node-driver-registrar:v1.1.0_vmware.7 ${LOCAL_REGISTRY}/csi/csi-node-driver-registrar:v1.1.0_vmware.7
    registry.tkg.vmware.run/csi/csi-provisioner:v1.4.0_vmware.2 ${LOCAL_REGISTRY}/csi/csi-provisioner:v1.4.0_vmware.2
    registry.tkg.vmware.run/csi/volume-metadata-syncer:v1.0.2_vmware.1 ${LOCAL_REGISTRY}/csi/volume-metadata-syncer:v1.0.2_vmware.1
    registry.tkg.vmware.run/csi/vsphere-block-csi-driver:v1.0.2_vmware.1 ${LOCAL_REGISTRY}/csi/vsphere-block-csi-driver:v1.0.2_vmware.1
    registry.tkg.vmware.run/cert-manager/cert-manager-controller:v0.11.0_vmware.1 ${LOCAL_REGISTRY}/cert-manager/cert-manager-controller:v0.11.0_vmware.1
    registry.tkg.vmware.run/cert-manager/cert-manager-cainjector:v0.11.0_vmware.1 ${LOCAL_REGISTRY}/cert-manager/cert-manager-cainjector:v0.11.0_vmware.1 
    registry.tkg.vmware.run/cert-manager/cert-manager-webhook:v0.11.0_vmware.1 ${LOCAL_REGISTRY}/cert-manager/cert-manager-webhook:v0.11.0_vmware.1
    registry.tkg.vmware.run/coredns:v1.6.7_vmware.1 ${LOCAL_REGISTRY}/coredns:v1.6.7_vmware.1
    registry.tkg.vmware.run/etcd:v3.4.3_vmware.5 ${LOCAL_REGISTRY}/etcd:v3.4.3_vmware.5
    registry.tkg.vmware.run/kube-apiserver:v1.18.2_vmware.1 ${LOCAL_REGISTRY}/kube-apiserver:v1.18.2_vmware.1
    registry.tkg.vmware.run/kube-controller-manager:v1.18.2_vmware.1 ${LOCAL_REGISTRY}/kube-controller-manager:v1.18.2_vmware.1
    registry.tkg.vmware.run/kube-proxy:v1.18.2_vmware.1 ${LOCAL_REGISTRY}/kube-proxy:v1.18.2_vmware.1
    registry.tkg.vmware.run/kube-scheduler:v1.18.2_vmware.1 ${LOCAL_REGISTRY}/kube-scheduler:v1.18.2_vmware.1
    registry.tkg.vmware.run/pause:3.2 ${LOCAL_REGISTRY}/pause:3.2    
    EOF
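
    The xargs -n2 idiom above hands each line's two image names to docker tag as a source/target pair. The same mechanics can be seen with printf standing in for docker tag (the image names are examples from this topic):

    ```shell
    # Each invocation receives exactly two arguments: the public image name
    # and its private-registry counterpart.
    xargs -n2 printf 'tag %s as %s\n' << EOF
    registry.tkg.vmware.run/pause:3.2 my.harbor.example.com/pause:3.2
    EOF
    # prints: tag registry.tkg.vmware.run/pause:3.2 as my.harbor.example.com/pause:3.2
    ```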
    
    
  5. Push all of the images from your image store into the local registry.

    Copy and run the following command without changing it.

    xargs -n1 docker push << EOF
    ${LOCAL_REGISTRY}/kind/node:v1.18.2_vmware.1 
    ${LOCAL_REGISTRY}/calico-all/cni-plugin:v3.11.2_vmware.1 
    ${LOCAL_REGISTRY}/calico-all/kube-controllers:v3.11.2_vmware.1 
    ${LOCAL_REGISTRY}/calico-all/node:v3.11.2_vmware.1 
    ${LOCAL_REGISTRY}/calico-all/pod2daemon:v3.11.2_vmware.1 
    ${LOCAL_REGISTRY}/ccm/manager:v1.1.0_vmware.2 
    ${LOCAL_REGISTRY}/cluster-api/cluster-api-aws-controller:v0.5.3_vmware.1 
    ${LOCAL_REGISTRY}/cluster-api/cluster-api-controller:v0.3.5_vmware.1 
    ${LOCAL_REGISTRY}/cluster-api/cluster-api-vsphere-controller:v0.6.4_vmware.1 
    ${LOCAL_REGISTRY}/cluster-api/kube-rbac-proxy:v0.4.1_vmware.2 
    ${LOCAL_REGISTRY}/cluster-api/kubeadm-bootstrap-controller:v0.3.5_vmware.1 
    ${LOCAL_REGISTRY}/cluster-api/kubeadm-control-plane-controller:v0.3.5_vmware.1 
    ${LOCAL_REGISTRY}/csi/csi-attacher:v1.1.1_vmware.7 
    ${LOCAL_REGISTRY}/csi/csi-livenessprobe:v1.1.0_vmware.7 
    ${LOCAL_REGISTRY}/csi/csi-node-driver-registrar:v1.1.0_vmware.7 
    ${LOCAL_REGISTRY}/csi/csi-provisioner:v1.4.0_vmware.2 
    ${LOCAL_REGISTRY}/csi/volume-metadata-syncer:v1.0.2_vmware.1 
    ${LOCAL_REGISTRY}/csi/vsphere-block-csi-driver:v1.0.2_vmware.1 
    ${LOCAL_REGISTRY}/cert-manager/cert-manager-controller:v0.11.0_vmware.1 
    ${LOCAL_REGISTRY}/cert-manager/cert-manager-cainjector:v0.11.0_vmware.1 
    ${LOCAL_REGISTRY}/cert-manager/cert-manager-webhook:v0.11.0_vmware.1 
    ${LOCAL_REGISTRY}/coredns:v1.6.7_vmware.1
    ${LOCAL_REGISTRY}/etcd:v3.4.3_vmware.5
    ${LOCAL_REGISTRY}/kube-apiserver:v1.18.2_vmware.1
    ${LOCAL_REGISTRY}/kube-controller-manager:v1.18.2_vmware.1
    ${LOCAL_REGISTRY}/kube-proxy:v1.18.2_vmware.1
    ${LOCAL_REGISTRY}/kube-scheduler:v1.18.2_vmware.1
    ${LOCAL_REGISTRY}/pause:3.2
    EOF
    
    
  6. Run tkg get management-cluster to populate your local ~/.tkg folder.
  7. Use the search and replace utility of your choice to replace registry.tkg.vmware.run with <local-registry-address> recursively throughout the ~/.tkg folder.

    Make sure that the search and replace operation includes the ~/.tkg/providers folder and the ~/.tkg/config.yaml file.
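
    If you use grep and sed, one possible sketch of this step is the following. The replace_registry function name is ours, not part of Tanzu Kubernetes Grid, and it assumes file paths without spaces; LOCAL_REGISTRY is assumed to still hold the registry address that you set in step 2:

    ```shell
    # Recursively rewrite the public registry address in every file under a
    # directory. The -i.bak form keeps sed portable between GNU and BSD.
    replace_registry() {
      local dir=$1 registry=$2
      grep -rl 'registry.tkg.vmware.run' "$dir" | while read -r f; do
        sed -i.bak "s|registry.tkg.vmware.run|${registry}|g" "$f" && rm -f "${f}.bak"
      done
    }

    # For example:
    # replace_registry "${HOME}/.tkg" "${LOCAL_REGISTRY}"
    ```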

  8. Turn off your internet connection.
  9. Run any Tanzu Kubernetes Grid CLI command, for example tkg init --ui.

    The Tanzu Kubernetes Grid installer interface should open.

What to Do Next

Your Internet-restricted environment is now ready for you to deploy Tanzu Kubernetes Grid management clusters to vSphere.