You can deploy Tanzu Kubernetes Grid management clusters and Tanzu Kubernetes (workload) clusters in environments that are not connected to the Internet, such as proxied environments and physically airgapped environments.
This topic explains how to deploy management clusters to internet-restricted environments on vSphere or AWS.
You do not need to perform these procedures if you are using Tanzu Kubernetes Grid in a connected environment that can pull images over an external Internet connection.
Before you can deploy management clusters and Tanzu Kubernetes clusters in an Internet-restricted environment, you must have:

imgpkg installed. This is not required if you plan to use the Harbor registry.
yq v4.9.2 or later installed.
For proxied environments, the TKG_*_PROXY variables set in the cluster configuration file to the proxy server's address, and TKG_PROXY_CA_CERT set to the proxy server's CA certificate if that certificate is self-signed. See Configure Proxies; a sketch of these settings follows this list.
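For proxied deployments, the relevant settings in the cluster configuration file look broadly like the following minimal sketch. The proxy address, port, and no-proxy list are example values; replace them with your own.

TKG_HTTP_PROXY: "http://proxy.example.com:3128"
TKG_HTTPS_PROXY: "http://proxy.example.com:3128"
TKG_NO_PROXY: "10.0.0.0/8,127.0.0.1,169.254.0.0/16,.svc,.svc.cluster.local"
# Set only if the proxy server's certificate is self-signed:
TKG_PROXY_CA_CERT: "LS0t[...]tLS0tLQ=="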
An Internet-restricted Tanzu Kubernetes Grid installation on vSphere has firewalls and communication between major components as shown here.

On vSphere, in addition to the general prerequisites above, you must:
Upload to vSphere the OVAs from which node VMs are created. See Import the Base Image Template into vSphere in Deploy Management Clusters to vSphere.
After the VM is created, if you cannot log in with the default username and password, reset the password by using GNU GRUB. If the VM runs Photon OS, see Resetting a Lost Root Password.
Log in to the jumpbox as root and enable remote SSH access. In /etc/ssh/sshd_config, set:

PermitRootLogin yes

If the line already exists but is commented out, remove the leading "#". Then restart the SSH daemon:

service sshd restart
Install and configure a private Docker-compatible container registry such as Harbor, Docker, or Artifactory. This registry runs outside of Tanzu Kubernetes Grid and is separate from any registry deployed as a shared service for clusters.
Configure an offline subnet to use as the internet-restricted environment, and associate it with the jumpbox.
Set up the DHCP server to allocate private IP addresses to new instances.
Create a vSphere distributed switch on a data center to handle the networking configuration of multiple hosts at a time from a central place; a govc sketch follows.
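If you script your vSphere setup with the govc CLI, you can create the distributed switch and a port group for the offline subnet roughly as follows. The datacenter, switch, port-group, and VLAN values are placeholders, and govc must already be configured with GOVC_URL and credentials.

govc dvs.create -dc=/my-datacenter offline-dvs
govc dvs.portgroup.add -dc=/my-datacenter -dvs=offline-dvs -vlan=100 offline-pg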
A proxied Tanzu Kubernetes Grid installation on Amazon EC2 has firewalls and communication between major components as shown here. Security Groups (SG) are automatically created between the control plane and workload domains, and between the workload components and control plane components.
For a proxied installation on Amazon EC2, in addition to the general prerequisites above, you also need:
After you create the offline VPC, you must add the following endpoints to it. VPC endpoints enable private connections between your VPC and supported AWS services:
sts
ssm
ec2
ec2messages
elasticloadbalancing
secretsmanager
ssmmessages
To add the service endpoints to your VPC:
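You can add the endpoints through the AWS console or, as in the following minimal sketch, with the AWS CLI. The VPC, subnet, security group, and region values are placeholders; the endpoints are created as Interface type.

for svc in sts ssm ec2 ec2messages elasticloadbalancing secretsmanager ssmmessages; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.${svc} \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0
done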
The following procedures apply both to the initial deployment of Tanzu Kubernetes Grid in an Internet-restricted environment and to upgrading an existing Internet-restricted Tanzu Kubernetes Grid deployment.
On the machine with an Internet connection on which you installed the Tanzu CLI, run the tanzu init and tanzu mc create commands. mc is short for management-cluster. The tanzu mc create command does not need to complete.

Running tanzu init and tanzu mc create for the first time installs the necessary Tanzu Kubernetes Grid configuration files in the ~/.config/tanzu/tkg folder on your system. The script that you create and run in subsequent steps requires the Bill of Materials (BoM) YAML files in the ~/.config/tanzu/tkg/bom folder to be present on your machine. The scripts in this procedure use the BoM files to identify the correct versions of the different Tanzu Kubernetes Grid component images to pull.
Set the IP address or FQDN of your local registry as an environment variable:
export TKG_CUSTOM_IMAGE_REPOSITORY="PRIVATE-REGISTRY"
Where PRIVATE-REGISTRY is the IP address or FQDN of your private registry and the name of the project, for example, custom-image-repository.io/yourproject. On Windows platforms, use the SET command instead of export.
Set the repository from which to fetch Bill of Materials (BoM) YAML files.
export TKG_IMAGE_REPO="projects.registry.vmware.com/tkg"
If your private Docker registry uses a self-signed certificate, provide the CA certificate in base64-encoded format, for example by running base64 -w 0 your-ca.crt:
export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE=LS0t[...]tLS0tLQ==
If you specify the CA certificate in this option, it is automatically injected into all Tanzu Kubernetes clusters that you create in this Tanzu Kubernetes Grid instance.
On Windows platforms, use the SET command instead of export.
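On Linux, for example, you can populate the variable directly from your CA file; the filename your-ca.crt is a placeholder:

export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE="$(base64 -w 0 your-ca.crt)"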
If your airgapped environment has a DNS server, check that it includes an entry for your private Docker registry.
If your environment lacks a DNS server, modify overlay files as follows to add the registry into the /etc/hosts
files of the TKr Controller and all control plane and worker nodes:
Add the following to the ytt overlay file for your infrastructure, ~/.config/tanzu/tkg/providers/infrastructure-IAAS/ytt/IAAS-overlay.yaml, where IAAS is vsphere, aws, or azure.
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
kubeadmConfigSpec:
preKubeadmCommands:
#! Add nameserver to all k8s nodes
#@overlay/append
- echo "PRIVATE-REGISTRY-IP PRIVATE-REGISTRY-HOSTNAME" >> /etc/hosts
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
template:
spec:
preKubeadmCommands:
#! Add nameserver to all k8s nodes
#@overlay/append
- echo "PRIVATE-REGISTRY-IP PRIVATE-REGISTRY-HOSTNAME" >> /etc/hosts
Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry.
In your TKr Controller customization overlay file, ~/.config/tanzu/tkg/providers/ytt/03_customizations/01_tkr/tkr_overlay.lib.yaml, add the following into the spec.template.spec section, before the containers block and at the same indent level:
#@overlay/match missing_ok=True
hostAliases:
- ip: PRIVATE-REGISTRY-IP
  hostnames:
  - PRIVATE-REGISTRY-HOSTNAME
Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry, for example, custom-image-repository.io/yourproject.
Step 2: Generate the image-copy-list File

Note: In a physically airgapped environment, follow the procedure in Copy Images into an Airgapped Environment instead of this step and Step 3: Run the download-images Script below.
Set environment variables for the public Tanzu Kubernetes Grid image repository, your private registry and its CA certificate, and the BoM image tag. For example:
export TKG_IMAGE_REPO="projects.registry.vmware.com/tkg"
export TKG_CUSTOM_IMAGE_REPOSITORY="PRIVATE-REGISTRY-IP/PRIVATE-REGISTRY-HOSTNAME"
export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE="LS0t[...]tLS0tLQ=="
export TKG_BOM_IMAGE_TAG="v1.5.4"
Where PRIVATE-REGISTRY-IP and PRIVATE-REGISTRY-HOSTNAME are the IP address and name of your private Docker registry.
Download the script named gen-publish-images.sh.
wget https://raw.githubusercontent.com/vmware-tanzu/tanzu-framework/d5b1dc1f3861729e28cb2cd1c744d4dc494522e6/hack/gen-publish-images.sh
Make the gen-publish-images.sh script executable.
chmod +x gen-publish-images.sh
Generate the image-copy-list file, which lists the images with the address of your private Docker registry.
./gen-publish-images.sh > image-copy-list
Verify that the generated list contains the correct registry address.
cat image-copy-list
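Each line of the file is an imgpkg copy command. The entries should look broadly like the following sketch; the actual image names and tags depend on your BoM files:

imgpkg copy -i projects.registry.vmware.com/tkg/SOME-IMAGE:SOME-TAG --to-repo custom-image-repository.io/yourproject/SOME-IMAGE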
Step 3: Run the download-images Script

Create the download-images.sh script:
#!/bin/bash
set -euo pipefail

# Usage: ./download-images.sh image-copy-list
images_script=${1:-}
if [ ! -f "$images_script" ]; then
  echo "You must provide your images list filename as an argument."
  echo "E.g. ./download-images.sh image-copy-list"
  exit 1
fi

# Collect the unique imgpkg copy commands from the list.
commands="$(grep imgpkg "${images_script}" | sort | uniq)"

# Run each copy command, retrying until it succeeds.
while IFS= read -r cmd; do
  echo -e "\nrunning $cmd\n"
  until $cmd; do
    echo -e "\nDownload failed. Retrying....\n"
    sleep 1
  done
done <<< "$commands"
Make the download-images.sh script executable.
chmod +x download-images.sh
Log in to your local private registry.
docker login ${TKG_CUSTOM_IMAGE_REPOSITORY}
Run the download-images.sh script on the image-copy-list file to pull the required images from the public Tanzu Kubernetes Grid registry and push them to your private registry.
./download-images.sh image-copy-list
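To spot-check the upload, you can pull one of the copied images back from your private registry with imgpkg; the image path here is a placeholder:

imgpkg pull -i ${TKG_CUSTOM_IMAGE_REPOSITORY}/SOME-IMAGE:SOME-TAG -o /tmp/image-check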
If your registry lacks sufficient storage for all of the images listed in the image-copy-list file, re-generate and re-run the script after either:

Increasing the persistentVolumeClaim.registry.size value in your Harbor package configuration. See Deploy Harbor into a Workload or a Shared Services Cluster.
Deleting unneeded BoM files from the ~/.config/tanzu/tkg/bom directory, as described in Step 1: Prepare Environment.

When the script finishes, do the following, depending on your infrastructure:
As long as the TKG_CUSTOM_IMAGE_REPOSITORY variable remains set, when you deploy clusters, Tanzu Kubernetes Grid pulls images from your local private registry rather than from the external public registry. To make sure that Tanzu Kubernetes Grid always pulls images from the local private registry, run tanzu config set to add TKG_CUSTOM_IMAGE_REPOSITORY to the global Tanzu CLI configuration file, ~/.config/tanzu/config.yaml.
tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY custom-image-repository.io/yourproject
tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY true
If your Docker registry uses self-signed certificates, also add TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE to the global Tanzu CLI configuration file. Provide the CA certificate in base64-encoded format, for example by executing base64 -w 0 your-ca.crt. If you specify TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE, set TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY to false.
tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY custom-image-repository.io/yourproject
tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY false
tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE LS0t[...]tLS0tLQ==
To customize your AWS management cluster's load balancer to use an internal scheme, which prevents its Kubernetes API server from being accessed and routed over the Internet, set AWS_LOAD_BALANCER_SCHEME_INTERNAL to true in your environment:
tanzu config set env.AWS_LOAD_BALANCER_SCHEME_INTERNAL true
The Tanzu CLI persists these env. environment variables until you unset them by running tanzu config unset. To temporarily override the value of a variable set by running tanzu config set, you can set the variable as a local environment variable by running export (on Linux and macOS) or SET (on Windows) on the command line.
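For example, to override the persisted registry value for the current shell session only (the address shown is a placeholder):

export TKG_CUSTOM_IMAGE_REPOSITORY="other-registry.example.com/otherproject"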
If your offline bootstrap machine does not have a ~/.config/tanzu or ~/.config/tanzu/bom directory, run tanzu config init to create the directories. The tanzu CLI saves the BoM files and provider files from the image repository into the ~/.config/tanzu/tkg/bom and ~/.config/tanzu/tkg/providers directories, respectively.
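For example, you can create the directories and then confirm that the BoM files are present:

tanzu config init
ls ~/.config/tanzu/tkg/bom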
If your airgapped environment has a DNS server, check that it includes an entry for your private Docker registry. If your environment lacks a DNS server, repeat the overlay modifications described in Step 1: Prepare Environment above on the offline bootstrap machine, so that the registry is added to the /etc/hosts files of the TKr Controller and all control plane and worker nodes.
Your Internet-restricted environment is now ready for you to deploy or upgrade Tanzu Kubernetes Grid management clusters and start deploying Tanzu Kubernetes clusters on vSphere or Amazon EC2.
You can also optionally deploy the Tanzu Kubernetes Grid packages and use the Harbor shared service instead of your private Docker registry.
In Internet-restricted environments, you can set up Harbor as a shared service so that your Tanzu Kubernetes Grid instance uses it instead of an external registry. As described in the procedures above, to deploy Tanzu Kubernetes Grid in an Internet-restricted environment, you must have a private container registry running in your environment before you can deploy a management cluster. This private registry is a central registry that is part of your infrastructure and available to your whole environment, but is not necessarily based on Harbor or supported by VMware. This private registry is not a Tanzu Kubernetes Grid shared service; you deploy that registry later.
After you use this central registry to deploy a management cluster in an Internet-restricted environment, you configure Tanzu Kubernetes Grid so that Tanzu Kubernetes clusters pull images from the central registry rather than from the external Internet. If the central registry uses a trusted CA certificate, connections between Tanzu Kubernetes clusters and the registry are secure. If your central registry uses self-signed certificates, you can set TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY to false and specify the TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE option. Setting this option automatically injects your self-signed certificates into your Tanzu Kubernetes clusters.
In either case, after you use your central registry to deploy a management cluster in an Internet-restricted environment, VMware recommends deploying the Harbor shared service in your Tanzu Kubernetes Grid instance and then configuring Tanzu Kubernetes Grid so that Tanzu Kubernetes clusters pull images from the Harbor shared service managed by Tanzu Kubernetes Grid, rather than from the central registry.
On infrastructures with load balancing, VMware recommends installing the External DNS service alongside the Harbor service, as described in Harbor Registry and External DNS.
For information on how to deploy Harbor, see Deploy Harbor into a Workload or a Shared Services Cluster.