Build Machine Images

You can build custom Linux machine images for Tanzu Kubernetes Grid to use as a VM template for the management and Tanzu Kubernetes (workload) cluster nodes that it creates. Each custom machine image packages a base operating system (OS) version and a Kubernetes version, along with any additional customizations, into an image that runs on vSphere, Amazon EC2, or Microsoft Azure infrastructure. A custom image must be based on the OS versions that are supported by Tanzu Kubernetes Grid. The base OS can be an OS that VMware supports but does not distribute, for example, Red Hat Enterprise Linux (RHEL) v7. To view the list of supported OSes, see Target Operating Systems.

This topic provides background on custom images for Tanzu Kubernetes Grid, and explains how to build them.

Note: To use a custom machine image for management cluster nodes, you need to deploy the management cluster with the installer interface, not from a configuration file.

Overview: Kubernetes Image Builder

To build custom machine images for Tanzu Kubernetes Grid cluster nodes, you use the container image from the upstream Kubernetes Image Builder project. Kubernetes Image Builder runs on your local workstation and uses the following:

  • Ansible standardizes the process of configuring and provisioning machines across multiple target distribution families, such as Ubuntu and CentOS.
  • Packer automates and standardizes the image-building process for current and future CAPI providers, and packages the images for their target infrastructure once they are built.
  • Image Builder builds the images using native infrastructure for each provider:
    • Amazon EC2
      • Image Builder builds custom images from base AMIs that are published on Amazon EC2, such as official Ubuntu AMIs.
      • The custom image is built inside AWS and then stored in your AWS account in one or more regions.
      • See Building Images for AWS in the Image Builder documentation.
    • Azure:
      • You can store your custom image in an Azure Shared Image Gallery.
      • See Building Images for Azure in the Image Builder documentation.
    • vSphere:
      • Image Builder builds Open Virtualization Archive (OVA) images from the Linux distribution’s original installation ISO.
      • You import the resulting OVA into a vSphere cluster, take a snapshot for fast cloning, and then mark the machine image as a VM template.
      • See Building Images for vSphere in the Image Builder documentation.

Custom Images Replace Default Images

For common combinations of OS version, Kubernetes version, and target infrastructure, Tanzu Kubernetes Grid provides default machine images. For example, the ova-ubuntu-2004-v1.21.8+vmware.1-tkg image serves as the OVA image for Ubuntu v20.04 with Kubernetes v1.21.8 on vSphere.

For other combinations of OS version, Kubernetes version, and infrastructure, such as with the RHEL v7 OS, there are no default machine images, but you can build them.

If you build and use a custom image with the same OS version, Kubernetes version, and infrastructure that a default image already has, your custom image replaces the default. The Tanzu CLI then creates new clusters using your custom image, and no longer uses the default image, for that combination of OS version, Kubernetes version, and target infrastructure.

Cluster API

Cluster API (CAPI) is built on the principles of immutable infrastructure. All nodes that make up a cluster are derived from a common template or machine image.

When CAPI creates a cluster from a machine image, it expects several things to be configured, installed, and accessible or running, including:

  • The versions of kubeadm, kubelet, and kubectl specified in the cluster manifest.
  • A container runtime, most often containerd.
  • All container images required by kubeadm init and kubeadm join. You must include any images that are not publicly published and must be pulled locally, such as VMware-signed images.
  • cloud-init configured to accept bootstrap instructions.
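As an illustration, the checklist above can be sketched as a preflight script run from inside a built image. This is an example helper, not part of Image Builder or CAPI; it only checks that the expected binaries are on the PATH:

```shell
# Illustrative preflight sketch: verify that the binaries CAPI expects
# (from the list above) are installed in the image. Run inside the image.
for bin in kubeadm kubelet kubectl containerd; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: present"
  else
    echo "$bin: MISSING"
  fi
done > preflight.txt
cat preflight.txt
```

A fuller check would also confirm the binary versions against the cluster manifest and that cloud-init is enabled, which this sketch omits.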

Custom Machine Images

This procedure walks you through building a Linux custom machine image to use when creating clusters on AWS, Azure, or vSphere. It is divided into the sections below:

Linux Image Prerequisites

To build a Linux custom machine image, you need:

  • An account on your target infrastructure, AWS, Azure, or vSphere.
  • A macOS or Linux workstation with the following installed:
    • Docker Desktop
    • For AWS: The aws command-line interface (CLI)
    • For Azure: The az CLI
    • For vSphere: To build a RHEL 7 image you need a Linux workstation, not macOS.

Build a Linux Image

  1. On AWS and Azure, log in to your infrastructure CLI. Authenticate and specify your region, if prompted:

    • AWS: Run aws configure.
    • Azure: Run az login.
  2. On vSphere, create a credentials JSON file and fill in its values:

    {
      "cluster": "",
      "convert_to_template": "false",
      "create_snapshot": "true",
      "datacenter": "",
      "datastore": "",
      "folder": "",
      "insecure_connection": "false",
      "linked_clone": "true",
      "network": "",
      "password": "",
      "resource_pool": "",
      "template": "",
      "username": "",
      "vcenter_server": ""
    }
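Before running a build, you can sanity-check that the credentials file has no unfilled fields. This is an optional helper of my own, not part of Image Builder; a stand-in vsphere.json is created here for illustration, so point the check at your real file:

```shell
# Stand-in credentials file for illustration; in practice, use the file
# you created from the template above.
cat > vsphere.json <<'EOF'
{
  "cluster": "",
  "vcenter_server": "vc01.example.com",
  "username": "administrator@vsphere.local"
}
EOF
# List any fields still set to an empty string; silence means none remain.
grep -n '": ""' vsphere.json || echo "all fields filled"
```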
  3. Determine the Image Builder configuration version that you want to build from.

    • Search the VMware {code} Sample Exchange for TKG Image Builder to list the available versions.
    • Each version name indicates the Kubernetes version that Image Builder uses, and may also include the Tanzu Kubernetes Grid version. For example, a configuration version might build a Kubernetes v1.21.8 image for Tanzu Kubernetes Grid v1.4.3.
    • If you need to create a management cluster, which you must do when you first install Tanzu Kubernetes Grid, choose the default Kubernetes version of your Tanzu Kubernetes Grid version. For example, in Tanzu Kubernetes Grid v1.4.3, the default Kubernetes version is v1.21.8.
  4. The Image Builder configurations have two different architectures and build instructions, based on their Kubernetes versions:

    After creating a custom image file following the v1.2 procedure, continue with Use a Custom Machine Image below. Do not follow the Tanzu Kubernetes Grid v1.2 procedure to add a reference to the custom image to a Bill of Materials (BoM) file.

  5. Download the configuration code zip file, and unpack its contents.

  6. cd into the TKG-Image-Builder- directory, so that the tkg.json file is in your current directory.

  7. Collect the following parameter strings to plug into the command in the next step. Many of these specify docker run -v parameters that copy your current working directories into the /home/imagebuilder directory of the container used to build the image.

    • AUTHENTICATION: Copies your local CLI directory:
      • AWS: Use ~/.aws:/home/imagebuilder/.aws
      • Azure: Use ~/.azure:/home/imagebuilder/.azure
      • vSphere: Use /PATH/TO/CREDENTIALS.json:/home/imagebuilder/vsphere.json
    • SOURCES: Copies the repo’s tkg.json file, which lists download sources for versioned OS, Kubernetes, and container network interface (CNI) images:
      • Use /PATH/TO/tkg.json:/home/imagebuilder/tkg.json
    • ROLES: Copies the repo’s tkg directory, which contains Ansible roles required by Image Builder.
      • Use /PATH/TO/tkg:/home/imagebuilder/tkg
      • To add custom Ansible roles, edit the tkg.json file to reformat the custom_role_names setting with escaped quotes (\"), in order to make it a list with multiple roles. For example:
        "custom_role_names": "\"/home/imagebuilder/tkg /home/imagebuilder/mycustomrole\"",
    • TESTS: Copies a goss test directory designed for the image’s target infrastructure, OS, and Kubernetes version:
      • Use the filename of a file in the repo’s goss directory, for example amazon-ubuntu-1.21.8+vmware.1-goss-spec.yaml.
    • CUSTOMIZATIONS: Copies a customizations file in JSON format. See Customization in the Image Builder documentation. Before making any modifications, consult with VMware Customer Reliability Engineering (CRE) for best practices and recommendations.
    • PACKER_VAR_FILES: A space-delimited list of the JSON files above that contain variables for Packer.
    • (Azure) AZURE-CREDS: Path to an Azure credentials file, as described in the Image Builder documentation.
    • COMMAND: Use a command like one of the following, based on the custom image OS. The examples below are for AWS; for vSphere and Azure images, the commands start with build-node-ova- and build-azure-sig- instead:
      • build-ami-ubuntu-2004: Ubuntu v20.04
      • build-ami-ubuntu-1804: Ubuntu v18.04
      • build-ami-amazon-2: Amazon Linux 2
  8. Using the strings above, run Image Builder in a Docker container pulled from the VMware registry:

    docker run -it --rm \
        -v SOURCES \
        -v ROLES \
        -v /PATH/TO/goss/TESTS.yaml:/home/imagebuilder/goss/goss.yaml \
        -v /PATH/TO/CUSTOMIZATIONS.json:/home/imagebuilder/CUSTOMIZATIONS.json \
        --env PACKER_VAR_FILES="tkg.json CUSTOMIZATIONS.json" \
        --env-file AZURE-CREDS

    • Omit the --env-file option if you are not building an image for Azure.
    • This command may take several minutes to complete.

    For example, to create a custom image with Ubuntu v20.04 and Kubernetes v1.21.8 to run on AWS, run the following from the directory that contains tkg.json:

    docker run -it --rm \
        -v ~/.aws:/home/imagebuilder/.aws \
        -v $(pwd)/tkg.json:/home/imagebuilder/tkg.json \
        -v $(pwd)/tkg:/home/imagebuilder/tkg \
        -v $(pwd)/goss/amazon-ubuntu-1.21.8+vmware.1-goss-spec.yaml:/home/imagebuilder/goss/goss.yaml \
        -v /PATH/TO/CUSTOMIZATIONS.json:/home/imagebuilder/aws.json \
        --env PACKER_VAR_FILES="tkg.json aws.json"

    For vSphere, you must use the custom container image created above. You must also set a version string that matches what you pass in your custom TKr in the later steps. While VMware-published OVAs have a version string like v1.21.8+vmware.1-tkg.1, it is recommended that you replace -tkg.1 with a string meaningful to your organization. To set this version string, define it in a metadata.json file like the following:

      {
        "VERSION": "v1.21.8+vmware.1-myorg.0"
      }
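The metadata.json file can be written and validated from the shell; a minimal sketch, where v1.21.8+vmware.1-myorg.0 is the placeholder version string from above:

```shell
# Write the metadata.json that carries the OVA version string (myorg.0 is
# a placeholder suffix; use one meaningful to your organization).
cat > metadata.json <<'EOF'
{
  "VERSION": "v1.21.8+vmware.1-myorg.0"
}
EOF
# Fail early if the JSON is malformed before mounting it into the container.
python3 -m json.tool metadata.json > /dev/null && echo "metadata.json is valid JSON"
```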

    When building OVAs, the .ova file is saved to the local filesystem of your workstation. Mount whatever folder you want those OVAs saved in to /home/imagebuilder/output within the container. Then, create the OVA using the container image:

    docker run -it --rm \
      -v /PATH/TO/CREDENTIALS.json:/home/imagebuilder/vsphere.json \
      -v $(pwd)/tkg.json:/home/imagebuilder/tkg.json \
      -v $(pwd)/tkg:/home/imagebuilder/tkg \
      -v $(pwd)/goss/vsphere-ubuntu-1.21.8+vmware.1-goss-spec.yaml:/home/imagebuilder/goss/goss.yaml \
      -v $(pwd)/metadata.json:/home/imagebuilder/metadata.json \
      -v /PATH/TO/OVA/DIR:/home/imagebuilder/output \
      --env PACKER_VAR_FILES="tkg.json vsphere.json" \
      --env OVF_CUSTOM_PROPERTIES=/home/imagebuilder/metadata.json

    RHEL: To build a RHEL OVA you need to use a Linux machine, not macOS, because Docker on macOS does not support the --network host option.
    You must also include additional flags in the docker run command above, so that the container mounts your RHEL ISO rather than pulling from a public URL, and so that it can access Red Hat Subscription Manager credentials to connect to vCenter:

      -v $(pwd)/isos/rhel-server-7.7-x86-64-dvd.iso:/rhel-server-7.7-x86-64-dvd.iso \
      --network host \


    • RHSM_USER and RHSM_PASS are the username and password combination that registers your licensed usage of the OS with Red Hat Subscription Manager, to gain temporary access to RPM repositories.
    • You map your local RHEL ISO path, $(pwd)/isos/rhel-server-7.7-x86-64-dvd.iso in the example above, as an additional volume.

(Optional) Create a TKr for the Linux Image

To make your Linux image the default for future Kubernetes versions and manage it using all the options detailed in Deploy Tanzu Kubernetes Clusters with Different Kubernetes Versions, create a TKr based on it. Otherwise, skip to Use a Linux Image for a Workload Cluster below.

To create a TKr, you add it to the Bill of Materials (BoM) of the TKr for the image’s Kubernetes version. For example, to add a custom image that you built with Kubernetes v1.21.8, you modify the current ~/.config/tanzu/tkg/bom/tkr-bom-v1.21.8.yaml file.

  1. From your ~/.config/tanzu/tkg/bom/ directory, open the TKr BoM that corresponds to your custom image’s Kubernetes version. For example, for Kubernetes v1.21.8, the filename is like tkr-bom-v1.21.8+vmware.1-tkg.1.yaml.

  2. In the BoM file, find the image definition blocks for your infrastructure: ova for vSphere, ami for AWS, and azure for Azure.

  3. Determine whether an existing definition block applies to your image’s OS, as listed by its osinfo.name, .version, and .arch values.

  4. If no existing block applies to your image’s osinfo, add a new block as follows. If an existing block does apply, replace its values as follows:

    • vSphere:
      • name: a unique name for your OVA that includes the OS version, like my-ubuntu-2004
      • version: follow existing version value format, but use the unique VERSION assigned in metadata.json when you created the OVA, for example v1.21.8+vmware.1-myorg.0.
    • AWS - for each region that you plan to use the custom image in:
      • id: follow existing id value format, but use a unique hex string at the end, for example ami-693a5e2348b25e428
    • Azure:
      • sku: a unique SKU for your image that includes the OS version, like my-k8s-1dot21dot2-ubuntu-2004

    If the BoM file defines images under regions, your new or modified custom image definition block must be listed first in its region. Within each region, the cluster creation process picks the first suitable image listed.

  5. Save the BoM file. If its filename includes a plus (+) character, save the modified file under a new filename that replaces the + with a triple dash (---). For example, tkr-bom-v1.21.8---vmware.1-tkg.1.yaml.
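The rename rule can be expressed as a one-liner, shown here with the example filename from the step above:

```shell
# Replace every "+" in the BoM filename with "---" to produce the new name.
src='tkr-bom-v1.21.8+vmware.1-tkg.1.yaml'
dst=$(printf '%s' "$src" | sed 's/+/---/g')
echo "$dst"   # tkr-bom-v1.21.8---vmware.1-tkg.1.yaml
```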

  6. base64-encode the file contents into a single-line string, for example:

    cat tkr-bom-v1.21.8---vmware.1-tkg.1.yaml | base64 -w 0
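To confirm the encoding is lossless before pasting it into the ConfigMap, you can round-trip it. The sketch below uses a stand-in file for illustration; substitute your modified BoM file:

```shell
# Stand-in for the modified BoM file (use your real file in practice).
printf 'release:\n  version: v1.21.8+vmware.1-myorg.0\n' > sample-bom.yaml
# Encode as a single line, then decode and diff against the original.
base64 -w 0 sample-bom.yaml > sample-bom.b64
base64 -d sample-bom.b64 | diff - sample-bom.yaml && echo "round-trip OK"
```

Note that `-w 0` (no line wrapping) is a GNU coreutils option; on macOS, the default output of `base64` is already unwrapped.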
  7. Create a ConfigMap YAML file in the tkr-system namespace, also without a + in its filename, and fill in values as shown:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: CUSTOM-TKG-BOM
      labels:
        tanzuKubernetesRelease: CUSTOM-TKR
    data:
      bomContent: BOM-BINARY-CONTENT


    • CUSTOM-TKG-BOM is the name of the ConfigMap YAML file, without the .yaml extension, such as my-custom-tkr-bom-v1.21.8---vmware.1-tkg.1
    • CUSTOM-TKR is a name for your TKr, such as my-custom-tkr-v1.21.8---vmware.1-tkg.1

    • BOM-BINARY-CONTENT is the base64-encoded content of your customized BoM file.
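Putting the pieces together, the ConfigMap file can be generated from the encoded content in one step. This is a sketch: the my-custom-tkr-bom and my-custom-tkr names are placeholders, the BoM content is a stand-in, and the field layout follows the template above:

```shell
# Stand-in encoded BoM content; in practice, use the base64 output from
# the previous step.
BOM_B64=$(printf 'release:\n  version: v1.21.8+vmware.1-myorg.0\n' | base64 -w 0)
# Generate the ConfigMap YAML with the encoded content filled in.
cat > my-custom-tkr-bom.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-custom-tkr-bom
  labels:
    tanzuKubernetesRelease: my-custom-tkr
data:
  bomContent: "$BOM_B64"
EOF
grep -q "$BOM_B64" my-custom-tkr-bom.yaml && echo "ConfigMap written"
```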

  8. Save the ConfigMap file, set the kubectl context to the management cluster that you want to add the TKr to, and apply the file to the cluster, for example:

    kubectl -n tkr-system apply -f my-custom-tkr-bom-v1.21.8---vmware.1-tkg.1.yaml
    • Once the ConfigMap is created, the TKr Controller reconciles the new object by creating a TanzuKubernetesRelease.
      The default reconciliation period is 600 seconds. You can avoid this delay by deleting the TKr Controller pod, which causes a replacement pod to start and reconcile immediately:

      1. List pods in the tkr-system namespace:

        kubectl get pod -n tkr-system
      2. Retrieve the name of the TKr Controller pod, which looks like tkr-controller-manager-f7bbb4bd4-d5lfd

      3. Delete the pod:

        kubectl delete pod -n tkr-system TKG-CONTROLLER

      Where TKG-CONTROLLER is the name of the TKr Controller pod.

  9. To check that the custom TKr was added, run tanzu kubernetes-release get or kubectl get tkr and look for the CUSTOM-TKR value set above in the output.

Once your custom TKr is listed by the kubectl and tanzu CLIs, you can use it to create management or workload clusters as described below.

Use a Linux Image for a Management Cluster

To create a management cluster that uses your custom image as the base OS for its nodes:

  1. Upload the image to your cloud provider.

  2. When you run the installer interface, select the custom image in the OS Image pane, as described in Select the Base OS Image.

For more information, see How Base OS Image Choices are Generated.

Use a Linux Image for a Workload Cluster

The procedure for creating a workload cluster from your Linux image differs depending on whether you created a TKr in (Optional) Create a TKr for the Linux Image above.

  • If you created a TKr, pass the TKr name as listed by tanzu kubernetes-release get to the --tkr option of tanzu cluster create.

  • If you did not create a TKr, follow these steps:

    1. Copy your management cluster configuration file and save it with a new name by following the procedure in Create a Tanzu Kubernetes Cluster Configuration File.

    2. In the new configuration file, add or modify the following:


      Where LINUX-IMAGE is the name of the Linux image you created in Build a Linux Image.

      Remove CLUSTER_NAME and its setting, if it exists.

    3. In a terminal, run:

      tanzu cluster create LINUX-CLUSTER --file LINUX-CLUSTER.yaml -v 9

      Where LINUX-CLUSTER is the name of the cluster, and LINUX-CLUSTER.yaml is your new configuration file.

      The output is similar to:

      workload cluster LINUX-CLUSTER created
    4. Retrieve the kubeconfig for the workload cluster.

      tanzu cluster kubeconfig get LINUX-CLUSTER --admin
    5. Set the context of kubectl to your workload cluster.

      kubectl config use-context LINUX-CLUSTER-admin@LINUX-CLUSTER
    6. To ensure your new workload cluster is using the Linux image, look under OS-IMAGE in the output of the following:

      kubectl get nodes -o wide

      The output is similar to:

      NAME                                          STATUS   ROLES    ...   OS-IMAGE
      wc-md-0-ubuntu-containerd-5559f4885c-9xwlq    Ready    <none>   ...   Ubuntu 20.04.2 LTS   