Deploy Tanzu Kubernetes Grid on vSphere Networking in an Air-Gapped Environment

VMware Tanzu Kubernetes Grid (TKG) provides a consistent, upstream-compatible, regional Kubernetes substrate that is ready for end-user workloads and ecosystem integrations.

An air-gap installation method is used when the Tanzu Kubernetes Grid bootstrap machine and cluster nodes are unable to connect to the Internet to download the installation binaries from the public VMware registry during Tanzu Kubernetes Grid installation or upgrades.

The scope of this document is limited to providing deployment steps based on the reference design in Tanzu Kubernetes Grid on vSphere Networking. This document does not provide any deployment procedures for the underlying SDDC components.

Supported Component Matrix

The following table provides the component versions and interoperability matrix supported with the reference design:

Software Components         Version
Tanzu Kubernetes Grid       1.6.0
VMware vSphere ESXi         7.0 U3
VMware vCenter Server       7.0 U3
NSX Advanced Load Balancer  21.1.4

For up-to-date interoperability information about other VMware products and versions, see the VMware Interoperability Matrix.

Prepare your Environment for Deploying Tanzu Kubernetes Grid

Before deploying Tanzu Kubernetes Grid in the vSphere environment, ensure that your environment is set up as described in the following sections:

General Requirements

  • vSphere 7.0 U3 or greater instance with an Enterprise Plus license
  • Your SDDC environment has the following objects in place:
    • A vSphere cluster with at least 3 hosts, on which vSphere DRS is enabled. If you are using vSAN for shared storage, it is recommended that you use 4 ESXi hosts.
    • A distributed switch with port groups for Tanzu Kubernetes Grid components. Please refer to the Network Requirements section for the required port groups.
    • Dedicated resource pool in which to deploy the Tanzu Kubernetes Grid Instance.
    • VM folders in which to collect the Tanzu Kubernetes Grid VMs.
    • A shared datastore with sufficient capacity for the control plane and worker node VMs.
    • Network Time Protocol (NTP) service is running on all ESXi hosts and vCenter and time is synchronized from the centralized NTP servers.
    • A host/server/VM based on Linux that acts as your bastion host and is located outside the Internet-restricted environment (i.e. connected to the Internet). The installation binaries for Tanzu Kubernetes Grid and NSX Advanced Load Balancer will be downloaded on this machine. You will need to transfer files from this bastion host to your Internet-restricted environment (proxy connection, shared drive, USB drive, sneakernet, etc.).
    • A Linux- or Windows-based host/server/VM inside your Internet-restricted environment that acts as your bootstrap machine and has the Tanzu CLI, kubectl, and Docker installed. This document makes use of a virtual machine based on CentOS. An internal Harbor registry will be installed on the same machine.
  • vSphere account with permissions as described in Required Permissions for the vSphere Account.

Note: You can also download and import supported older versions of Kubernetes in order to deploy workload clusters on the intended Kubernetes versions.

Resource Pools and VM Folders

The sample entries of the resource pools and folders that need to be created are as follows.

Resource Type                  Sample Resource Pool Name   Sample Folder Name
NSX ALB Components             nsx-alb-components          nsx-alb-components
TKG Management Components      tkg-management-components   tkg-management-components
TKG Shared Service Components  tkg-sharedsvc-components    tkg-sharedsvc-components
TKG Workload Components        tkg-workload01-components   tkg-workload01-components

Network Requirements

Create Port groups on vSphere Distributed Switch for deploying Tanzu Kubernetes Grid components as defined in Network Requirements in the reference architecture.

Firewall Requirements

Ensure that the firewall is set up as described in Firewall Requirements.

Subnet and CIDR Examples

For the purpose of this demonstration, this document uses the following CIDRs for the Tanzu for Kubernetes Operations (TKO) deployment. Change the values to reflect your environment.

Network Type                Port Group Name        Gateway CIDR    DHCP Pool                      NSX ALB IP Pool
NSX ALB Management Network  nsx_alb_management_pg  172.16.10.1/24  N/A                            172.16.10.100 - 172.16.10.200
TKG Management Network      tkg_mgmt_pg            172.16.40.1/24  172.16.40.100 - 172.16.40.200  N/A
TKG Management VIP Network  tkg_mgmt_vip_pg        172.16.50.1/24  N/A                            172.16.50.100 - 172.16.50.200
TKG Cluster VIP Network     tkg_cluster_vip_pg     172.16.80.1/24  N/A                            172.16.80.100 - 172.16.80.200
TKG Workload VIP Network    tkg_workload_vip_pg    172.16.70.1/24  N/A                            172.16.70.100 - 172.16.70.200
TKG Workload Segment        tkg_workload_pg        172.16.60.1/24  172.16.60.100 - 172.16.60.200  N/A

Deployment Overview

Here are the high-level steps for deploying Tanzu Kubernetes Grid on vSphere VDS networking in an air-gap environment:

Deploy and Configure Bastion Host

The bastion host is the physical or virtual machine where you download the images and binaries required for Tanzu Kubernetes Grid installation from the Internet. You will then transfer the downloaded items to the bootstrap machine, which is located inside the air-gap environment.

Ensure the following:

  • A browser is available on the bastion host to download the binaries from the Internet.
  • The bastion host has the following hardware configuration:
    • CPU: 1
    • Memory: 4 GB
    • Storage (HDD): 200 GB or greater

Note: The following instructions are for CentOS 7. If you are using any other operating system for your bastion host, change the commands accordingly.

Prerequisites

  1. Download the binaries for Docker Engine and associated dependencies.

    ### Create a directory for collecting docker installation binaries
    
    mkdir docker-binaries && cd docker-binaries
    
    ### Add docker repository to the yum command
    
    yum install yum-utils -y
    
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    
    ### Download docker and associated dependencies 
    
    yumdownloader --resolve docker-ce docker-ce-cli containerd.io docker-compose-plugin
    

    The yumdownloader command downloads the following binaries:

    code screen captures displaying the binaries
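
    These RPMs are what you will later copy to the bootstrap machine. For reference, the following is a minimal sketch of installing them offline, assuming a CentOS-based bootstrap VM and that the docker-binaries directory has been copied across the air gap:

    ### Install the copied Docker RPMs locally (no Internet access required)
    cd docker-binaries
    sudo yum localinstall -y *.rpm

    ### Start the Docker service and enable it at boot
    sudo systemctl start docker
    sudo systemctl enable docker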

  2. Download installation binaries from the Harbor GitHub repository.

  3. Download the NSX Advanced Load Balancer OVA from VMware Customer Connect portal.

  4. Download Tanzu CLI, Kubectl, and the Kubernetes OVA images from the Tanzu Kubernetes Grid product download page. Tanzu CLI and Plugins need to be installed on the bastion host and the bootstrap machine.

  5. Download the yq installation binary from the mikefarah/yq GitHub repository.

  6. Download the gen-publish-images script for pulling Tanzu Kubernetes Grid installation binaries from the Internet.

Configure Bastion Host

  1. Install Tanzu CLI.

    tar -xvf tanzu-cli-bundle-linux-amd64.tar.gz
    cd ./cli/
    sudo install core/v0.25.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
    chmod +x /usr/local/bin/tanzu
    

    Run the tanzu version command to check that the correct version of tanzu is installed and executable.

    # tanzu version
    
    version: v0.25.0
    buildDate: 2022-08-25
    sha: 6288c751-dirty
    
  2. Install imgpkg.

    imgpkg is a tool that enables Kubernetes to store configurations and the associated container images as OCI images and to transfer these images.

    gunzip imgpkg-linux-amd64-v0.29.0+vmware.1.gz
    chmod ugo+x imgpkg-linux-amd64-v0.29.0+vmware.1 && mv ./imgpkg-linux-amd64-v0.29.0+vmware.1 /usr/local/bin/imgpkg
    
  3. Install the Tanzu CLI plugins.

    The Tanzu CLI plugins provide commands for Tanzu Kubernetes cluster management and feature operations.

    Running the tanzu init command for the first time installs the necessary Tanzu Kubernetes Grid configuration files in the ~/.config/tanzu/tkg folder on your system. The script that you create and run in subsequent steps requires the Bill of Materials (BoM) YAML files in the ~/.config/tanzu/tkg/bom folder to be present on your machine. The scripts in this procedure use the BoM files to identify the correct versions of the different Tanzu Kubernetes Grid component images to pull.

    # tanzu init
    
    Checking for required plugins...
    Installing plugin 'login:v0.25.0'
    Installing plugin 'management-cluster:v0.25.0'
    Installing plugin 'package:v0.25.0'
    Installing plugin 'pinniped-auth:v0.25.0'
    Installing plugin 'secret:v0.25.0'
    Installing plugin 'telemetry:v0.25.0'
    Successfully installed all required plugins
    ✔  Done
    
  4. After installing the Tanzu plugins, run the tanzu plugin list command to check the plugins version and installation status.

    # tanzu plugin list
    NAME                DESCRIPTION                                                        SCOPE       DISCOVERY  VERSION  STATUS
    login               Login to the platform                                              Standalone  default    v0.25.0  installed
    management-cluster  Kubernetes management-cluster operations                           Standalone  default    v0.25.0  installed
    package             Tanzu package management                                           Standalone  default    v0.25.0  installed
    pinniped-auth       Pinniped authentication operations (usually not directly invoked)  Standalone  default    v0.25.0  installed
    secret              Tanzu secret management                                            Standalone  default    v0.25.0  installed
    telemetry           Configure cluster-wide telemetry settings                          Standalone  default    v0.25.0  installed
    

    Validate the BOM files by listing the contents in the .config/tanzu/tkg/bom/ folder.

    ls .config/tanzu/tkg/bom/
    
    tkg-bom-v1.6.0.yaml  tkr-bom-v1.23.8+vmware.1-tkg.1.yaml
    
  5. Set the following environment variables.

    • IP address or FQDN of your local image registry.

      export TKG_CUSTOM_IMAGE_REPOSITORY="PRIVATE-REGISTRY"
      

      Where PRIVATE-REGISTRY is the IP address or FQDN of your private registry and the name of the project. For example, registry.example.com/library

    • The repository from which to fetch Bill of Materials (BoM) YAML files.

      export TKG_IMAGE_REPO="projects.registry.vmware.com/tkg"
      
      export TKG_BOM_IMAGE_TAG="v1.6.0"
      
    • If your private registry uses a self-signed certificate, provide the CA certificate of the registry in base64 encoded format. For example, base64 -w 0 your-ca.crt.

      export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE=LS0t[...]tLS0tLQ==
      

      This CA certificate is automatically injected into all Tanzu Kubernetes clusters that you create in this Tanzu Kubernetes Grid instance.

    • (Optional) Define the Tanzu Kubernetes releases (TKrs) to download. By default, the download script retrieves container images used in Tanzu Kubernetes Grid versions v1.3.0 and later.

      List all TKrs and their associations with Tanzu Kubernetes Grid releases:

      imgpkg pull -i ${TKG_IMAGE_REPO}/tkr-compatibility:v$(imgpkg tag list -i ${TKG_IMAGE_REPO}/tkr-compatibility |sed 's/v//' |sort -rn |head -1) --output "tkr-tmp"; cat tkr-tmp/tkr-compatibility.yaml; rm -rf tkr-tmp
      

      For your Tanzu Kubernetes Grid version, note the supported Kubernetes versions. The one with the latest minor version is used by the management cluster. For example, the Tanzu Kubernetes Grid v1.6.0 management cluster uses TKr v1.23.8_vmware.1-tkg.1.

      Export as DOWNLOAD_TKRS a space-separated string of the TKrs required for your management cluster and workloads. For example, to download the images for Kubernetes v1.23 and v1.22 versions supported by Tanzu Kubernetes Grid v1.6.0:

      export DOWNLOAD_TKRS="v1.23.8_vmware.2-tkg.1 v1.22.11_vmware.2-tkg.1"
      
  6. Prepare and execute the scripts for pulling Tanzu Kubernetes Grid installation binaries.

    • Create a folder to collect Tanzu Kubernetes Grid installation binaries.

      mkdir -p /root/tkg-images && cd /root/tkg-images
      
    • Download the script gen-publish-images-totar.sh.

      wget https://raw.githubusercontent.com/vmware-tanzu/tanzu-framework/f07a8853e2462d96ec184abdb0a0a63b00d38a9a/hack/gen-publish-images-totar.sh
      
    • Make the gen-publish-images-totar.sh script executable.

      chmod +x gen-publish-images-totar.sh
      
    • Generate the images-to-tar-list file.

      ./gen-publish-images-totar.sh > images-to-tar-list
      
  7. Run the download-images.sh script.

    • Create the script using the following code snippet to download the Tanzu Kubernetes Grid installation binaries.

      #!/bin/bash

      set -euo pipefail

      # The images list file (for example, images-to-tar-list) is passed as the first argument.
      images_script=${1:-}
      if [ ! -f "$images_script" ]; then
        echo "Provide your images list filename as an argument."
        echo "E.g. ./download-images.sh images-to-tar-list"
        exit 1
      fi

      # Extract the unique imgpkg commands from the generated list.
      commands="$(grep imgpkg "${images_script}" | sort | uniq)"

      # Run each imgpkg command, retrying until it succeeds.
      while IFS= read -r cmd; do
        echo -e "\nrunning $cmd\n"
        until $cmd; do
          echo -e "\nDownload failed. Retrying....\n"
          sleep 1
        done
      done <<< "$commands"
      
    • Make the download-images script executable.

      chmod +x download-images.sh
      
    • Run the download-images.sh script on the images-to-tar-list file to pull the required images from the public Tanzu Kubernetes Grid registry and save them as a TAR file.

      ./download-images.sh images-to-tar-list
      

      After the script has finished execution, you will see the required Tanzu Kubernetes Grid binaries in TAR format in the directory tkg-images. You will move the binaries to the bootstrap machine which is running inside the air-gap environment.
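
      To simplify the transfer, you can optionally bundle the tkg-images directory (which contains the TAR files and the images-to-tar-list file) into a single archive; the archive name below is only an example:

      ### Bundle the downloaded content for transfer to the bootstrap VM
      cd /root
      tar -czf tkg-images-bundle.tar.gz tkg-images/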

  8. Generate the publish-images-fromtar.sh script.

    Run the script on the bootstrap machine after you have copied the downloaded Tanzu Kubernetes Grid binaries onto the bootstrap VM. The script copies the binaries from the bootstrap VM into the project in your private registry.

    1. Download the script named gen-publish-images-fromtar.sh.

      wget https://raw.githubusercontent.com/vmware-tanzu/tanzu-framework/e3de5b1557d9879dc814d771f431ce8945681c48/hack/gen-publish-images-fromtar.sh
      
    2. Make the gen-publish-images-fromtar.sh script executable.

      chmod +x gen-publish-images-fromtar.sh
      
    3. Generate a publish-images-fromtar.sh shell script that is populated with the address of your private Docker registry.

      ./gen-publish-images-fromtar.sh > publish-images-fromtar.sh
      
    4. Verify that the generated script contains the correct registry address.

      cat publish-images-fromtar.sh
      
    5. Transfer the generated publish-images-fromtar.sh script file to the bootstrap machine.
  9. Move the binaries you downloaded to the bootstrap VM.
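
    If a network path from the bastion host to the bootstrap VM is permitted, scp is one option; otherwise use your approved transfer mechanism (shared drive, USB drive, and so on). The archive name and address below are placeholders:

    ### Copy the image bundle and the generated publish script to the bootstrap VM
    scp /root/tkg-images-bundle.tar.gz publish-images-fromtar.sh root@<bootstrap-vm-ip>:/root/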

Install Harbor Image Registry

Install Harbor only if you don’t have an existing image registry in your environment.

To install Harbor, deploy an operating system of your choice with the following hardware configuration:

  • vCPU: 4
  • Memory: 8 GB
  • Storage (HDD): 160 GB

Copy the Harbor binary from the bootstrap VM to the Harbor VM. Follow the instructions provided in Harbor Installation and Configuration to deploy and configure Harbor.
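
The following is a minimal outline of the Harbor offline installer flow on the Harbor VM, assuming Docker and Docker Compose are already available and that the offline installer bundle has been copied over; the version in the file name is a placeholder:

    ### Extract the Harbor offline installer
    tar -xzvf harbor-offline-installer-v2.x.x.tgz
    cd harbor

    ### Create harbor.yml from the template, then edit the hostname, HTTPS certificate paths,
    ### and the Harbor admin password to match your environment
    cp harbor.yml.tmpl harbor.yml

    ### Run the installer
    sudo ./install.sh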

Deploy and Configure Bootstrap VM

The deployment of the Tanzu Kubernetes Grid management and workload clusters is facilitated by setting up a bootstrap machine where you install the Tanzu CLI and Kubectl utilities which are used to create and manage the Tanzu Kubernetes Grid instance. This machine also keeps the Tanzu Kubernetes Grid and Kubernetes configuration files for your deployments. The bootstrap machine can be a laptop, host, or server running on Linux, macOS, or Windows that you deploy management and workload clusters from.

The bootstrap machine runs a local kind cluster when Tanzu Kubernetes Grid management cluster deployment is started. Once the kind cluster is fully initialized, the configuration is used to deploy the actual management cluster on the backend infrastructure. After the management cluster is fully configured, the local kind cluster is deleted and future configurations are performed with the Tanzu CLI.
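
While a management cluster deployment is in progress, you can observe this temporary kind cluster from the bootstrap machine; the container name typically contains tkg-kind, although the exact name varies per run:

    ### List the bootstrap kind cluster container (present only during management cluster creation)
    docker ps --filter "name=tkg-kind"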

For this deployment, a Photon-based virtual machine is used as the bootstrap machine. For information on how to configure a macOS or a Windows machine, see Install the Tanzu CLI and Other Tools.

The bootstrap machine must meet the following prerequisites:

  • A minimum of 6 GB of RAM and a 2-core CPU.
  • System time is synchronized with a Network Time Protocol (NTP) server.
  • Docker and containerd binaries are installed. For instructions on how to install Docker, see Docker documentation.
  • Ensure that the bootstrap VM is connected to the Tanzu Kubernetes Grid management network, tkg_mgmt_pg.

To install Tanzu CLI, Tanzu Plugins, and Kubectl utility on the bootstrap machine, follow the instructions below:

  1. Download and unpack the following Linux CLI packages from VMware Tanzu Kubernetes Grid Download Product page.

    • VMware Tanzu CLI 1.6.0 for Linux
    • kubectl cluster CLI v1.23.8 for Linux
  2. Execute the following commands to install Tanzu Kubernetes Grid CLI, kubectl CLIs, and Carvel tools.

    ## Install required packages
    tdnf install tar zip unzip wget -y
    
    ## Install Tanzu Kubernetes Grid CLI
    tar -xvf tanzu-cli-bundle-linux-amd64.tar.gz
    cd ./cli/
    sudo install core/v0.25.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
    chmod +x /usr/local/bin/tanzu
    
    ## Verify Tanzu CLI version
    
     [root@tkg160-bootstrap ~] # tanzu version
    
    version: v0.25.0
    buildDate: 2022-08-25
    sha: 6288c751-dirty
    
  3. Install the kubectl utility.

    gunzip kubectl-linux-v1.23.8+vmware.2.gz
    mv kubectl-linux-v1.23.8+vmware.2 /usr/local/bin/kubectl && chmod +x /usr/local/bin/kubectl
    

    Run the kubectl version --short=true command to check that the correct version of kubectl is installed and executable.

  4. Configure the environment variables.

    In an air-gap environment, if you run the tanzu init or tanzu plugin sync commands, the command hangs and times out after some time with the following error:

    [root@bootstrap ~]# tanzu init
    Checking for required plugins...
    unable to list plugin from discovery 'default': error while processing package: failed to get resource files from discovery: Checking if image is bundle: Fetching image: Get "https://projects.registry.vmware.com/v2/": dial tcp 10.188.25.227:443: i/o timeout
    All required plugins are already installed and up-to-date
    ✔  successfully initialized CLI
    
    [root@bootstrap ~]# tanzu plugin sync
    Checking for required plugins...
    unable to list plugin from discovery 'default': error while processing package: failed to get resource files from discovery: Checking if image is bundle: Fetching image: Get "https://projects.registry.vmware.com/v2/": dial tcp 10.188.25.227:443: i/o timeout
    All required plugins are already installed and up-to-date
    ✔  Done
    

    By default, the Tanzu global config file, config.yaml, which is created when you first run the tanzu init command, points to the repository URL https://projects.registry.vmware.com to fetch the Tanzu plugins for installation. Because there is no Internet connectivity in the environment, the command fails after some time.

    To ensure that Tanzu Kubernetes Grid always pulls images from the local private registry, run the tanzu config set command to add TKG_CUSTOM_IMAGE_REPOSITORY to the global Tanzu CLI configuration file, ~/.config/tanzu/config.yaml.

    If your image registry is configured with a public signed CA certificate, set the following environment variables.

    tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY custom-image-repository.io/yourproject
    
    tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY true
    

    If your registry solution uses self-signed certificates, also add TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE in base64-encoded format to the global Tanzu CLI configuration file. For self-signed certificates, set the following environment variables:

    tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY custom-image-repository.io/yourproject
    
    tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY false
    
    tanzu config set env.TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE LS0t[...]tLS0tLQ==
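
    As a quick check, you can confirm that the values were written to the global configuration file; the exact file layout may differ slightly between CLI versions:

    ### Inspect the env section of the Tanzu CLI global configuration
    grep -A 10 'env:' ~/.config/tanzu/config.yaml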
    
    
  5. Initialize Tanzu Kubernetes Grid and Install Tanzu CLI plugins.

    ### Initialize Tanzu Kubernetes Grid 
    
    tanzu config init 
    
    ## (Optional) Remove existing plugins from any previous CLI installations.
    
    tanzu plugin clean
    
    tanzu plugin sync
    

    After installing the tanzu plugins, run the tanzu plugin list command to check the plugins version and installation status.

  6. Install Carvel tools.

    Tanzu Kubernetes Grid uses the following tools from the Carvel open-source project:

    • ytt - a command-line tool for templating and patching YAML files. You can also use ytt to collect fragments and piles of YAML into modular chunks for easy re-use.
    • kapp - the application deployment CLI for Kubernetes. It allows you to install, upgrade, and delete multiple Kubernetes resources as one application.
    • kbld - an image-building and resolution tool.
    • imgpkg - a tool that enables Kubernetes to store configurations and the associated container images as OCI images, and to transfer these images.

    • Install ytt

      cd ./cli
      
      gunzip ytt-linux-amd64-v0.41.1+vmware.1.gz
      
      mv ytt-linux-amd64-v0.41.1+vmware.1 /usr/local/bin/ytt
      
      chmod +x /usr/local/bin/ytt
      
      

      Run ytt --version to check that the correct version of ytt is installed and executable.

    • Install kapp.

      gunzip kapp-linux-amd64-v0.49.0+vmware.1.gz
      
      mv kapp-linux-amd64-v0.49.0+vmware.1 /usr/local/bin/kapp
      
      chmod +x /usr/local/bin/kapp
      

      Run kapp --version to check that the correct version of kapp is installed and executable.

    • Install kbld.

      gunzip kbld-linux-amd64-v0.34.0+vmware.1.gz
      
      mv kbld-linux-amd64-v0.34.0+vmware.1 /usr/local/bin/kbld
      
      chmod +x /usr/local/bin/kbld
      

      Run kbld --version to check that the correct version of kbld is installed and executable.

    • Install imgpkg.

      gunzip imgpkg-linux-amd64-v0.29.0+vmware.1.gz
      mv imgpkg-linux-amd64-v0.29.0+vmware.1 /usr/local/bin/imgpkg
      chmod +x /usr/local/bin/imgpkg
      

      Run imgpkg --version to check that the correct version of imgpkg is installed and executable.
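
      As a quick sanity check, you can verify all four Carvel binaries in one pass:

      ### Print the version of each Carvel tool to confirm it is on the PATH and executable
      for tool in ytt kapp kbld imgpkg; do
        echo "== ${tool} =="
        "${tool}" --version
      done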

  7. Install yq.

    yq is a lightweight and portable command-line YAML processor.

    wget https://github.com/mikefarah/yq/releases/download/v4.25.2/yq_linux_amd64.tar.gz
    
    tar -zxvf yq_linux_amd64.tar.gz
    
    mv yq_linux_amd64 /usr/local/bin/
    

    Run the yq -V command to check that the correct version of yq is installed and executable.

  8. Run the following commands to start the Docker service and enable it to start at boot. Photon OS has Docker installed by default.

    ## Check Docker service status
    systemctl status docker
    
    ## Start Docker Service
    systemctl start docker
    
    ## To start Docker Service at boot
    systemctl enable docker
    
  9. Run the following command to verify that the bootstrap machine uses cgroup v1.

    docker info | grep -i cgroup
    
    ## You should see the following
    Cgroup Driver: cgroupfs
    
  10. Create an SSH key-pair.

    This is required for Tanzu CLI to connect to vSphere from the bootstrap machine. The public key part of the generated key will be passed during the Tanzu Kubernetes Grid management cluster deployment.

    ### Generate public/Private key pair.
    
    ssh-keygen -t rsa -b 4096 -C "[email protected]"
    
    ### Add the private key to the SSH agent running on your machine and enter the password you created in the previous step 
    
    ssh-add ~/.ssh/id_rsa 
    
    ### If the above command fails, execute "eval $(ssh-agent)" and then rerun the command.
    

    Make a note of the public key from the file $HOME/.ssh/id_rsa.pub. You need this while creating the configuration file for deploying the Tanzu Kubernetes Grid management cluster.
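
    For example, to print the public key so that it can be copied into the cluster configuration file:

    cat $HOME/.ssh/id_rsa.pub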

  11. If your bootstrap machine runs Linux or Windows Subsystem for Linux, and it has a Linux kernel built after the May 2021 Linux security patch, for example Linux 5.11 and 5.12 with Fedora, run the following command.

    sudo sysctl net/netfilter/nf_conntrack_max=131072
    
  12. Push Tanzu Kubernetes Grid installation binaries to your private image registry.

    Navigate to the folder which contains all Tanzu Kubernetes Grid binaries and the publish-images-fromtar.sh file that you have copied from the bastion host and then execute the following command to push the binaries to your private image registry.

    ### Make the publish-images-fromtar.sh script executable.
    
    chmod +x publish-images-fromtar.sh
    
    ### Execute the publish-images-fromtar.sh script
    
    sh publish-images-fromtar.sh
    

Now all the required packages are installed and required configurations are in place on the bootstrap virtual machine.

Import the Base Image Template in vCenter Server

Before you proceed with the management cluster creation, ensure that the base image template is imported into vSphere and is available as a template. To import a base image template into vSphere:

  1. Go to the Tanzu Kubernetes Grid downloads page and download a Tanzu Kubernetes Grid OVA for the cluster nodes.

  2. For the management cluster, this must be either a Photon-based or Ubuntu-based Kubernetes v1.23.8 OVA.

    Note: Custom OVA with a custom Tanzu Kubernetes release (TKr) is also supported, as described in Build Machine Images.

  3. For workload clusters, OVA can have any supported combination of OS and Kubernetes version, as packaged in a Tanzu Kubernetes release.

    Note: Make sure you download the most recent OVA base image templates in the event of security patch releases. You can find updated base image templates that include security patches on the Tanzu Kubernetes Grid product download page.

  4. In the vSphere client, right-click an object in the vCenter Server inventory and select Deploy OVF template.

  5. Select Local file, click the button to upload files, and select the downloaded OVA file on your local machine.

  6. Follow the installer prompts to deploy a VM from the OVA.

  7. Click Finish to deploy the VM. When the OVA deployment finishes, right-click the VM and select Template > Convert to Template.

    Note: Do not power on the VM before you convert it to a template.

  8. If using a non-administrator SSO account: in the VMs and Templates view, right-click the new template, select Add Permission, and assign the tkg-user to the template with the TKG role.

For information about how to create the user and role for Tanzu Kubernetes Grid, see Required Permissions for the vSphere Account.
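
Alternatively, the import can be scripted with the govc utility (not otherwise used in this document). The following is only a sketch; the connection variables, OVA file name, and inventory names are placeholders for your environment:

    ### Connection details for govc (placeholders)
    export GOVC_URL=vcenter.lab.vmw
    export GOVC_USERNAME='<vsphere-username>'
    export GOVC_PASSWORD='<vsphere-password>'
    export GOVC_INSECURE=true

    ### Import the downloaded node OVA and convert the resulting VM to a template
    govc import.ova -name photon-3-kube-v1.23.8 -ds vsanDatastore -pool '<resource-pool>' ./photon-3-kube-v1.23.8.ova
    govc vm.markastemplate photon-3-kube-v1.23.8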

Import NSX Advanced Load Balancer in Content Library

Create a content library following the instructions provided in Create a Library in VMware vSphere documentation. You will store the NSX Advanced Load Balancer OVA in the library.

To import the OVA into the content library, see Import Items to a Content Library.

Deploy and Configure NSX Advanced Load Balancer

NSX Advanced Load Balancer (ALB) is an enterprise-grade integrated load balancer that provides L4-L7 load balancing. It is recommended for vSphere deployments without NSX-T, or when there are unique scaling requirements.

NSX Advanced Load Balancer is deployed in Write Access Mode in the vSphere environment. This mode grants the NSX Advanced Load Balancer controllers full write access to vCenter, which allows them to automatically create, modify, and remove service engines (SEs) and other resources as needed to adapt to changing traffic needs.

For a production-grade deployment, it is recommended to deploy three instances of the NSX Advanced Load Balancer controller for high availability and resiliency.

The following table provides a sample IP address and FQDN set for the NSX Advanced Load Balancer controllers:

Controller Node     IP Address    FQDN
Node 1 (Primary)    172.16.10.11  alb-ctlr01.lab.vmw
Node 2 (Secondary)  172.16.10.12  alb-ctlr02.lab.vmw
Node 3 (Secondary)  172.16.10.13  alb-ctlr03.lab.vmw
HA Address          172.16.10.10  alb-ha.lab.vmw

Follow these steps to deploy and configure NSX Advanced Load Balancer:

  1. Deploy NSX Advanced Load Balancer
  2. NSX Advanced Load Balancer: Initial setup
  3. NSX Advanced Load Balancer: Licensing
  4. NSX Advanced Load Balancer: Controller High Availability
  5. NSX Advanced Load Balancer: Certificate Management
  6. NSX Advanced Load Balancer: Create vCenter Cloud and SE Groups
  7. NSX Advanced Load Balancer: Configure Network and IPAM & DNS Profiles

Deploy NSX Advanced Load Balancer

As part of the prerequisites, you must have the NSX Advanced Load Balancer 21.1.4 OVA downloaded and imported to the content library. Deploy the NSX Advanced Load Balancer under the resource pool “nsx-alb-components” and place it under the folder “nsx-alb-components”.

To deploy NSX Advanced Load Balancer, complete the following steps.

  1. Log in to vCenter and go to Home > Content Libraries.
  2. Select the content library under which the NSX Advanced Load Balancer OVA is placed.
  3. Click on OVA & OVF Templates.
  4. Right-click the NSX Advanced Load Balancer image and select New VM from this Template.
  5. On the Select name and folder page, enter a name for the NSX Advanced Load Balancer VM and select the folder nsx-alb-components.
  6. On the Select a compute resource page, select the resource pool nsx-alb-components.
  7. On the Review details page, verify the template details and click Next.
  8. On the Select storage page, select a storage policy from the VM Storage Policy drop-down menu and choose the datastore location where you want to store the virtual machine files.
  9. On the Select networks page, select the network nsx_alb_management_pg and click Next.
  10. On the Customize template page, provide the NSX Advanced Load Balancer management network details such as IP address, subnet mask, and gateway, and click Next.
  11. On the Ready to complete page, review the page and click Finish.

    Deployment of NSX Advanced Load Balancer

A new task for creating the virtual machine appears in the Recent Tasks pane. After the task is complete, the NSX Advanced Load Balancer virtual machine is created on the selected resource. Power on the virtual machine and give it a few minutes for the system to boot. Upon successful boot-up, open NSX Advanced Load Balancer in your browser.
Note: While the system is booting up, a blank web page or a 503 status code may appear.

NSX Advanced Load Balancer: Initial Setup

After NSX Advanced Load Balancer is successfully deployed and running, open NSX Advanced Load Balancer in your browser using the URL https://<IP/FQDN> and configure the basic system settings:

  1. Set admin password and click on Create Account.

    Set admin password and create account

  2. On the Welcome page, under System Settings, set backup passphrase and provide DNS information, and click Next.

    Set backup passphrase and provide DNS information

  3. Under Email/SMTP, provide email and SMTP information, and click Next.

    Provide email and SMTP information

  4. Under Multi-Tenant, configure settings as follows and click Save.

    • IP Route Domain: Share IP route domain across tenants
    • Service Engines are managed within the: Provider (Shared across tenants)
    • Tenant Access to Service Engine: Read Access

    Configure multi-tenant settings

If you did not select the Setup Cloud After option before saving, the initial configuration wizard exits. The Cloud configuration window does not automatically launch and you are directed to a dashboard view on the controller.

NSX Advanced Load Balancer: NTP Configuration

To configure NTP, go to Administration > Settings > DNS/NTP > Edit and add your NTP server details and click Save.

Note: You may also delete the default NTP servers.

NTP server configuration

NSX Advanced Load Balancer: Licensing

This document focuses on enabling NSX Advanced Load Balancer using the license model: Enterprise License (VMware NSX ALB Enterprise).

  1. To configure licensing, go to Administration > Settings > Licensing and click on the gear icon to change the license type to Enterprise.

    License configuration - change licensing type

  2. Select Enterprise Tier as the license type and click Save

    License configuration - select Enterprise tier

  3. Once the license tier is changed, apply the NSX Advanced Load Balancer Enterprise license key. If you have a license file instead of a license key, apply the license by clicking on the Upload a License File(.lic) option.

    License configuration - apply license

NSX Advanced Load Balancer: Controller High Availability

In a production environment, it is recommended to deploy additional controller nodes and configure the controller cluster for high availability and disaster recovery. Adding 2 additional nodes to create a 3-node cluster provides node-level redundancy for the controller and also maximizes performance for CPU-intensive analytics functions.

To run a 3-node controller cluster, you deploy the first node, perform the initial configuration, and set the cluster IP address. After that, you deploy and power on two more controller VMs, but you must not run the initial configuration wizard or change the admin password for these controller VMs. The configuration of the first controller VM is assigned to the two new controller VMs.

The first controller of the cluster receives the Leader role. The second and third controllers work as Followers.

Complete the following steps to configure the NSX Advanced Load Balancer cluster.

  1. Log in to the primary NSX Advanced Load Balancer controller, go to Administration > Controller > Nodes, and click Edit.

    Configure NSX Advanced Load Balancer controller node

  2. Specify Name and Controller Cluster IP, and click Save. This IP address must be from the NSX Advanced Load Balancer management network.

    Specify NSX Advanced Load Balancer controller node name and IP address

  3. Deploy the 2nd and 3rd NSX Advanced Load Balancer controller nodes by using steps in Deploy NSX Advanced Load Balancer.

  4. Log in to the primary NSX Advanced Load Balancer controller using the Controller Cluster IP/FQDN, go to Administration > Controller > Nodes, and click Edit. The Edit Controller Configuration popup appears.

  5. In the Cluster Nodes field, enter the IP address for the 2nd and 3rd controller, and click Save.

    Cluster node IP address in controller configuration

    After you complete these steps, the primary NSX Advanced Load Balancer controller becomes the leader for the cluster and invites the other controllers to the cluster as members.

    NSX Advanced Load Balancer then performs a warm reboot of the cluster. This process can take approximately 10-15 minutes. You are automatically logged out of the controller node where you are currently logged in. Enter the cluster IP address in the browser to see details about the cluster formation task.

    Controller initialization

The configuration of the primary (leader) controller is synchronized to the new member nodes when the cluster comes online following the reboot. After the cluster is successfully formed, you can see the following status:

Controller status

Note: In the following tasks, all NSX Advanced Load Balancer configurations are done by connecting to the NSX ALB Controller Cluster IP/FQDN.

NSX Advanced Load Balancer: Certificate Management

The default system-generated controller certificate used for SSL/TLS connections does not have the required subject alternative name (SAN) entries. Complete the following steps to create a controller certificate:

  1. Log in to the NSX Advanced Load Balancer controller and go to Templates > Security > SSL/TLS Certificates.

  2. Click Create and select Controller Certificate. You can either generate a self-signed certificate, generate a CSR, or import a certificate. For the purpose of this document, a self-signed certificate is generated.

  3. Provide all required details as per your infrastructure requirements. In the Subject Alternate Name (SAN) field, provide the IP addresses and FQDNs of all NSX Advanced Load Balancer controllers, including the NSX Advanced Load Balancer cluster IP and FQDN, and click Save.

    Self-signed certificate - General tab Self-signed certificate - Certificate tab

  4. After the certificate is created, capture the certificate contents as this is required while deploying the Tanzu Kubernetes Grid management cluster. To capture the certificate content, click on the Download icon next to the certificate, and click Copy to clipboard under Certificate.

    Copy certificate contents

  5. To replace the certificate, go to Administration > Settings > Access Settings, and click the pencil icon at the top right to edit the system access settings. Replace the SSL/TLS certificate and click Save.

    Replace certificate

  6. Log out and log in to NSX Advanced Load Balancer.
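
From the bootstrap machine, you can optionally verify that the controller now presents the new certificate and that the SAN entries are in place. The commands below assume the cluster FQDN from the earlier table and that you saved the copied certificate locally as alb-controller.crt:

    ### Inspect the SAN entries presented by the NSX ALB controller cluster VIP
    echo | openssl s_client -connect alb-ha.lab.vmw:443 -servername alb-ha.lab.vmw 2>/dev/null \
      | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

    ### Base64-encode the saved certificate for later use as AVI_CA_DATA_B64 in the cluster configuration file
    base64 -w 0 alb-controller.crt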

NSX Advanced Load Balancer: Create vCenter Cloud and SE Groups

NSX Advanced Load Balancer can be deployed in multiple environments for the same system. Each environment is called a cloud. The following procedure provides steps to create a VMware vCenter cloud. As shown in the architecture, two service engine (SE) groups are created.

Service Engine Group 1: Service engines that are part of this service engine group host:

  • Virtual services that load balance the control plane nodes of the management cluster and the shared services cluster.
  • Virtual services for all load balancer functionalities requested by the Tanzu Kubernetes Grid management cluster and the shared services cluster.

Service Engine Group 2: Service engines that are part of this service engine group host the virtual services that load balance the control plane nodes and the virtual services for all load balancer functionalities requested by the workload clusters mapped to this SE group.

Note:

  • Based on your requirements, you can create additional SE groups for the workload clusters.
  • Multiple workload clusters can be mapped to a single SE group.
  • A Tanzu Kubernetes Grid cluster can be mapped to only one SE group for application load balancer services.
  • Control plane VIP for the workload clusters is placed on the respective SE group assigned through AKO Deployment Config (ADC) during cluster creation.

For information about mapping a specific service engine group to Tanzu Kubernetes Grid workload cluster, see Configure NSX Advanced Load Balancer in Tanzu Kubernetes Grid Workload Cluster.

The following components are created in NSX Advanced Load Balancer.

Object                  Sample Name
vCenter Cloud           tanzu-vcenter01
Service Engine Group 1  tanzu-mgmt-segroup-01
Service Engine Group 2  tanzu-wkld-segroup-01

  1. Log in to NSX Advanced Load Balancer and go to Infrastructure > Clouds > Create > VMware vCenter/vSphere ESX.

    Create vCenter Cloud

  2. Enter cloud name and click Next.

    Enter cloud name

  3. Under the Infrastructure pane, enter vCenter address, username, and password, set Access Permission to Write and click Next.

    Enter infrastructure information for vCenter Cloud

  4. Under the Data Center pane, choose the data center for NSX Advanced Load Balancer to discover infrastructure resources.

    Select data center

  5. Under the Network pane, choose the NSX Advanced Load Balancer management network for service engines, enter a static IP address pool for SEs and VIPs, and click Complete.

    Enter network information

  6. Wait for the cloud to get configured and the status to turn green.

    Wait for completion

  7. To create a service engine group for Tanzu Kubernetes Grid management clusters, navigate to Infrastructure > Cloud Resources > Service Engine Group tab, under Select Cloud choose the cloud created in the previous step, and click Create.

  8. Enter a name for the Tanzu Kubernetes Grid management service engine group and set the following parameters:

    Parameter                  Value
    High availability mode     N+M (buffer)
    Memory per Service Engine  4 GB
    vCPU per Service Engine    2

    Use the default values for the rest of the parameters.

    Create service engine group - basic settings

    For advanced configuration, click on the Advanced tab, specify a specific cluster and datastore for service engine placement, change the AVI SE folder name, and service engine name prefix, and click Save.

    Create service engine group - advanced settings

  9. Repeat steps 7 and 8 to create another service engine group for Tanzu Kubernetes Grid workload clusters. After completing this step, you will have created two service engine groups.
    Service engine groups created

NSX Advanced Load Balancer: Configure Network and IPAM Profile

Configure Tanzu Kubernetes Grid Networks in NSX Advanced Load Balancer

As part of the cloud creation in NSX Advanced Load Balancer, only the management network is configured. Complete the following steps to configure these networks:

  • TKG Management Network
  • TKG Workload Network
  • TKG Cluster VIP/Data Network
  • TKG Management VIP/Data Network
  • TKG Workload VIP/Data Network

  • Log in to NSX Advanced Load Balancer and go to Infrastructure > Cloud Resources > Networks.

  • Select the desired cloud. All the networks available in vCenter are listed.

  • Click on the edit icon next to the network and configure it as follows. Change the provided details as per your SDDC configuration.

    Note: Not all networks are auto-discovered. For those networks, manually add the subnet.

    Network Name           DHCP  Subnet          Static IP Pool
    tkg_mgmt_pg            Yes   172.16.40.0/24  NA
    tkg_workload_pg        Yes   172.16.60.0/24  NA
    nsx_alb_management_pg  No    172.16.10.0/24  172.16.10.100 - 172.16.10.200
    tkg_cluster_vip_pg     No    172.16.80.0/24  172.16.80.100 - 172.16.80.200
    tkg_mgmt_vip_pg        No    172.16.50.0/24  172.16.50.100 - 172.16.50.200
    tkg_workload_vip_pg    No    172.16.70.0/24  172.16.70.100 - 172.16.70.200

    The following snippet shows an example network configuration: tkg_cluster_vip_pg

    Change network settings

    After the networks are configured, the configuration must look like the following image.

    Network list after configuration

Create IPAM and DNS Profile in NSX Advanced Load Balancer and Attach it to Cloud

At this point, all the required networks related to Tanzu functionality are configured in NSX Advanced Load Balancer, except for the Tanzu Kubernetes Grid management and workload networks, which use DHCP. NSX Advanced Load Balancer provides the IPAM service for the Tanzu Kubernetes Grid cluster VIP network, management VIP network, and workload VIP network.

Complete the following steps to create an IPAM profile and attach it to the vCenter cloud created earlier.

  1. Log in to NSX Advanced Load Balancer and go to Templates > Profiles > IPAM/DNS Profiles > Create > IPAM Profile, provide the following details, and click Save.

    Parameter                  Value
    Name                       tanzu-vcenter-ipam-01
    Type                       Avi Vantage IPAM
    Cloud for Usable Networks  tanzu-vcenter-01 (created earlier in this deployment)
    Usable Networks            tkg_cluster_vip_pg, tkg_mgmt_vip_pg, tkg_workload_vip_pg

    Create IPAM profile

  2. Click Create > DNS Profile and provide the domain name.

    Enter domain name

  3. Attach the IPAM and DNS profiles to the tanzu-vcenter-01 cloud.

    1. Navigate to Infrastructure > Clouds.
    2. Edit the tanzu-vcenter-01 cloud.
    3. Under the IPAM/DNS section, choose the IPAM and DNS profiles created earlier and save the updated configuration.
      Select IPAM and DNS profiles

This completes the NSX Advanced Load Balancer configuration. The next step is to deploy the Tanzu Kubernetes Grid management cluster from the bootstrap machine.

Deploy Tanzu Kubernetes Grid Management Cluster

The management cluster is a Kubernetes cluster that runs Cluster API operations on a specific cloud provider to create and manage workload clusters on that provider.

The management cluster is also where you configure the shared and in-cluster services that the workload clusters use.

You may deploy management clusters in the following ways:

  • Run the Tanzu Kubernetes Grid installer, a wizard interface that guides you through the process of deploying a management cluster.

  • Create and edit YAML configuration files, and use them with Tanzu CLI commands to deploy a management cluster.

Before creating a management cluster using the Tanzu CLI, you must define its configuration in a YAML configuration file that provides the base configuration for the cluster. When you deploy the management cluster from the CLI, you specify the YAML file by using the --file option of the tanzu mc create command.

In an air-gap environment, we recommend deploying a management cluster using a YAML configuration file. You can use the templates provided in the following section to deploy management clusters on vSphere.

Management Cluster Configuration Template

The following template includes all of the options that are relevant to deploying management clusters on vSphere. You can copy this template and use it to deploy management clusters to vSphere.

Important: The environment variables that you have set override values from a cluster configuration file. To use all settings from a cluster configuration file, remove any conflicting environment variables before you deploy the management cluster from the CLI.
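
Before deploying, you can list any such variables currently set in your shell; this is only a convenience check, and the pattern below simply matches the variable prefixes used in this template:

    ### List shell environment variables that could override values in the cluster configuration file
    env | grep -E '^(VSPHERE_|AVI_|TKG_|CLUSTER_|SERVICE_|ENABLE_|DEPLOY_)' || true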

#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------

CLUSTER_NAME:
CLUSTER_PLAN: <dev/prod>
INFRASTRUCTURE_PROVIDER: vsphere
ENABLE_CEIP_PARTICIPATION: <true/false>
ENABLE_AUDIT_LOGGING: <true/false>
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
# CAPBK_BOOTSTRAP_TOKEN_TTL: 30m

#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------

VSPHERE_SERVER:
VSPHERE_USERNAME:
VSPHERE_PASSWORD:
VSPHERE_DATACENTER:
VSPHERE_RESOURCE_POOL:
VSPHERE_DATASTORE:
VSPHERE_FOLDER:
VSPHERE_NETWORK: <tkg-management-network>
VSPHERE_CONTROL_PLANE_ENDPOINT: #Leave blank as VIP network is configured in NSX ALB and IPAM is configured with VIP network

# VSPHERE_TEMPLATE:

VSPHERE_SSH_AUTHORIZED_KEY:
VSPHERE_TLS_THUMBPRINT:
VSPHERE_INSECURE: <true/false>
DEPLOY_TKG_ON_VSPHERE7: true

#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------

# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:
# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""
# VSPHERE_NUM_CPUS: 2
# VSPHERE_DISK_GIB: 40
# VSPHERE_MEM_MIB: 4096
# VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
# VSPHERE_CONTROL_PLANE_DISK_GIB: 40
# VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
# VSPHERE_WORKER_NUM_CPUS: 2
# VSPHERE_WORKER_DISK_GIB: 40
# VSPHERE_WORKER_MEM_MIB: 4096

#! ---------------------------------------------------------------------
#! NSX Advanced Load Balancer configuration
#! ---------------------------------------------------------------------

AVI_CA_DATA_B64: 
AVI_CLOUD_NAME: 
AVI_CONTROL_PLANE_HA_PROVIDER: <true/false>
AVI_CONTROL_PLANE_NETWORK: 
AVI_CONTROL_PLANE_NETWORK_CIDR: 
AVI_CONTROLLER: 
AVI_DATA_NETWORK: 
AVI_DATA_NETWORK_CIDR: 
AVI_ENABLE: <true/false>
AVI_LABELS: 
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_CIDR: 
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME: 
AVI_MANAGEMENT_CLUSTER_SERVICE_ENGINE_GROUP: 
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: 
AVI_PASSWORD: <base 64 encoded AVI password>
AVI_SERVICE_ENGINE_GROUP: 
AVI_USERNAME: 


#! ---------------------------------------------------------------------
#! Image repository configuration
#! ---------------------------------------------------------------------

TKG_CUSTOM_IMAGE_REPOSITORY: ""
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""

#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------

ENABLE_MHC:
# ENABLE_MHC_CONTROL_PLANE: <true/false>
# ENABLE_MHC_WORKER_NODE: <true/false>

#! ---------------------------------------------------------------------
#! Identity management configuration
#! ---------------------------------------------------------------------

IDENTITY_MANAGEMENT_TYPE: "none"

#! Settings for IDENTITY_MANAGEMENT_TYPE: "oidc"
# CERT_DURATION: 2160h
# CERT_RENEW_BEFORE: 360h
# OIDC_IDENTITY_PROVIDER_CLIENT_ID:
# OIDC_IDENTITY_PROVIDER_CLIENT_SECRET:
# OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups
# OIDC_IDENTITY_PROVIDER_ISSUER_URL:
# OIDC_IDENTITY_PROVIDER_SCOPES: "email,profile,groups"
# OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email

#! Settings for IDENTITY_MANAGEMENT_TYPE: "ldap"
# LDAP_BIND_DN:
# LDAP_BIND_PASSWORD:
# LDAP_HOST:
# LDAP_USER_SEARCH_BASE_DN:
# LDAP_USER_SEARCH_FILTER:
# LDAP_USER_SEARCH_USERNAME: userPrincipalName
# LDAP_USER_SEARCH_ID_ATTRIBUTE: DN
# LDAP_USER_SEARCH_EMAIL_ATTRIBUTE: DN
# LDAP_USER_SEARCH_NAME_ATTRIBUTE:
# LDAP_GROUP_SEARCH_BASE_DN:
# LDAP_GROUP_SEARCH_FILTER:
# LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
# LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE:
# LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
# LDAP_ROOT_CA_DATA_B64:

For a full list of configurable values and to learn more about the fields present in the template file, see Tanzu Configuration File Variable Reference.

Create a file using the values provided in the template and save the file with a .yaml extension. See Appendix Section for a sample YAML file to use for deploying a management cluster.

After you have created or updated the cluster configuration file, you can deploy a management cluster by running the tanzu mc create --file CONFIG-FILE command, where CONFIG-FILE is the name of the configuration file. Below is the sample config file for deploying the TKG Management cluster in an air-gapped environment.

#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------

CLUSTER_NAME: tkg160-mgmt-airgap
CLUSTER_PLAN: prod
INFRASTRUCTURE_PROVIDER: vsphere
ENABLE_CEIP_PARTICIPATION: "true"
ENABLE_AUDIT_LOGGING: "true"
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
# CAPBK_BOOTSTRAP_TOKEN_TTL: 30m

#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------

VSPHERE_SERVER: vcenter.lab.vmw
VSPHERE_USERNAME: [email protected]
VSPHERE_PASSWORD: <encoded:Vk13YXJlMSE=>
VSPHERE_DATACENTER: /tkgm-internet-dc1
VSPHERE_RESOURCE_POOL: /tkgm-internet-dc1/host/tkgm-internet-c1/Resources/tkg-management-components
VSPHERE_DATASTORE: /tkgm-internet-dc1/datastore/vsanDatastore
VSPHERE_FOLDER: /tkgm-internet-dc1/vm/tkg-management-components
VSPHERE_NETWORK: /tkgm-internet-dc1/network/tkg_mgmt_pg
VSPHERE_CONTROL_PLANE_ENDPOINT: #Leave blank as VIP network is configured in NSX ALB and IPAM is configured with VIP network

# VSPHERE_TEMPLATE:

VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAA[...]== [email protected]
VSPHERE_TLS_THUMBPRINT: DC:FA:81:1D:CA:08:21:AB:4E:15:BD:2B:AE:12:2C:6B:CA:65:49:B8
VSPHERE_INSECURE: "false"
DEPLOY_TKG_ON_VSPHERE7: true

#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------

OS_NAME: photon
OS_VERSION: "3"
OS_ARCH: amd64
VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
VSPHERE_CONTROL_PLANE_DISK_GIB: 40
VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
VSPHERE_WORKER_NUM_CPUS: 2
VSPHERE_WORKER_DISK_GIB: 40
VSPHERE_WORKER_MEM_MIB: 8192

#! ---------------------------------------------------------------------
#! NSX Advanced Load Balancer configuration
#! ---------------------------------------------------------------------

AVI_CA_DATA_B64: LS0t[...]tLS0tLQ==
AVI_CLOUD_NAME: tanzu-vcenter01
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
AVI_CONTROL_PLANE_NETWORK: tkg_cluster_vip_pg
AVI_CONTROL_PLANE_NETWORK_CIDR: 172.16.80.0/24
AVI_CONTROLLER: alb-ha.lab.vmw
AVI_DATA_NETWORK: tkg_workload_vip_pg
AVI_DATA_NETWORK_CIDR: 172.16.70.0/24
AVI_ENABLE: "true"
AVI_LABELS: 
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_CIDR: 172.16.80.0/24
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME: tkg_cluster_vip_pg
AVI_MANAGEMENT_CLUSTER_SERVICE_ENGINE_GROUP: tanzu-mgmt-segroup-01
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 172.16.50.0/24
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: tkg_mgmt_vip_pg
AVI_PASSWORD: <encoded:Vk13YXJlMSE=>
AVI_SERVICE_ENGINE_GROUP: tanzu-wkld-segroup-01
AVI_USERNAME: admin


#! ---------------------------------------------------------------------
#! Image repository configuration
#! ---------------------------------------------------------------------

TKG_CUSTOM_IMAGE_REPOSITORY: "harbor-sa.lab.vmw/tkg-160"
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: LS0t[...]tLS0tLQ==

#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------

ENABLE_MHC: true

#! ---------------------------------------------------------------------
#! Identity management configuration
#! ---------------------------------------------------------------------

IDENTITY_MANAGEMENT_TYPE: "none"

#! ---------------------------------------------------------------------
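
With the configuration saved to a file (the file name below is only an example), start the management cluster deployment:

    tanzu mc create --file ./tkg160-mgmt-airgap.yaml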

The cluster deployment logs are streamed in the terminal when you run the tanzu mc create command. The first run of tanzu mc create takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap machine. Subsequent runs do not require this step, and thus the process is faster.

While the cluster is being deployed, you will find that a virtual service is created in NSX Advanced Load Balancer and new service engines are deployed in vCenter by NSX Advanced Load Balancer. The service engines are mapped to the SE group tanzu-mgmt-segroup-01.

Now you can access the Tanzu Kubernetes Grid management cluster from the bootstrap machine and perform additional tasks such as verifying the management cluster health and deploying the workload clusters.

To get the status of the Tanzu Kubernetes Grid management cluster, execute the following command:

tanzu management-cluster get

TKG management cluster status

To interact with the management cluster using the kubectl command, retrieve the management cluster kubeconfig and switch to the cluster context to run kubectl commands.

# tanzu mc kubeconfig get --admin
Credentials of cluster 'tkg160-mgmt-airgap' have been saved
You can now access the cluster by running 'kubectl config use-context tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap'


]# kubectl config use-context tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap
Switched to context "tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap".

]# kubectl get nodes
NAME                                       STATUS   ROLES                  AGE   VERSION
tkg160-mgmt-airgap-control-plane-k7t7m     Ready    control-plane,master   44m   v1.23.8+vmware.2
tkg160-mgmt-airgap-control-plane-q568p     Ready    control-plane,master   26m   v1.23.8+vmware.2
tkg160-mgmt-airgap-control-plane-xh2zd     Ready    control-plane,master   17m   v1.23.8+vmware.2
tkg160-mgmt-airgap-md-0-5c46d59854-9f9zp   Ready    <none>                 32m   v1.23.8+vmware.2
tkg160-mgmt-airgap-md-1-d969dbf8b-j49mg    Ready    <none>                 32m   v1.23.8+vmware.2
tkg160-mgmt-airgap-md-2-c67f865d8-4sj8p    Ready    <none>                 32m   v1.23.8+vmware.2

# kubectl get apps -A
NAMESPACE    NAME                                DESCRIPTION           SINCE-DEPLOY   AGE
tkg-system   ako-operator                        Reconcile succeeded   2m42s          15m
tkg-system   antrea                              Reconcile succeeded   22s            15m
tkg-system   load-balancer-and-ingress-service   Reconcile succeeded   47s            11m
tkg-system   metrics-server                      Reconcile succeeded   2m58s          15m
tkg-system   secretgen-controller                Reconcile succeeded   2m8s           15m
tkg-system   tanzu-addons-manager                Reconcile succeeded   2m59s          33m
tkg-system   tanzu-core-management-plugins       Reconcile succeeded   12m            32m
tkg-system   tanzu-featuregates                  Reconcile succeeded   92s            32m
tkg-system   vsphere-cpi                         Reconcile succeeded   86s            15m
tkg-system   vsphere-csi                         Reconcile succeeded   77s            15m

The Tanzu Kubernetes Grid management cluster is successfully deployed. You can now proceed with configuring custom ADCs and creating the shared services and workload clusters.

Configure AKO Deployment Config (ADC) for Workload Clusters

Tanzu Kubernetes Grid v1.6.x management clusters with NSX Advanced Load Balancer are deployed with 2 AKODeploymentConfigs.

  • install-ako-for-management-cluster: default configuration for management cluster
  • install-ako-for-all: default configuration for all workload clusters. By default, all workload clusters reference this ADC for their virtual IP networks and service engine (SE) groups. This ADC does not enable NSX ALB L7 ingress. You can inspect this default ADC as shown below.
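
If you want to review the defaults that workload clusters inherit, you can inspect this ADC from the management cluster context. For example:

kubectl get adc install-ako-for-all -o yaml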

For this Tanzu deployment, create two additional ADCs:

  • tanzu-ako-for-shared: Used by the shared services cluster to deploy the virtual services in the TKG Mgmt SE Group and the load balancer applications in the TKG Management VIP Network.

  • tanzu-ako-for-workload-L7-ingress: Use this ADC only if you want to enable NSX Advanced Load Balancer L7 ingress on the workload cluster. Otherwise, leave the cluster labels empty so that the network configuration from the default ADC install-ako-for-all is applied.

Configure AKO Deployment Config (ADC) for Shared Services Cluster

As per the defined architecture, the shared services cluster uses the same control plane and data plane networks as the management cluster. The shared services cluster control plane endpoint uses the TKG Cluster VIP Network, application load balancing uses the TKG Management Data VIP network, and the virtual services are deployed in the tanzu-mgmt-segroup-01 SE group. This configuration is enforced by creating a custom AKODeploymentConfig (ADC) and applying the respective AVI_LABELS while deploying the shared services cluster.

The format of the AKODeploymentConfig YAML file is as follows.

apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  finalizers:
     - ako-operator.networking.tkg.tanzu.vmware.com
  generation: 2
  name: <Unique name of AKODeploymentConfig>
spec:
  adminCredentialRef:
    name: nsx-alb-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: nsx-alb-controller-ca
    namespace: tkg-system-networking
  cloudName: <NAME OF THE CLOUD in ALB>
  clusterSelector:
    matchLabels:
      <KEY>: <VALUE>
  controlPlaneNetwork:
    cidr: <TKG-Cluster-VIP-CIDR>
    name: <TKG-Cluster-VIP-Network>
  controller: <NSX ALB CONTROLLER IP/FQDN>
  dataNetwork:
    cidr: <TKG-Mgmt-Data-VIP-CIDR>
    name: <TKG-Mgmt-Data-VIP-Name>
  extraConfigs:
   cniPlugin: antrea
   disableStaticRouteSync: true
   ingress:
      defaultIngressController: false
      disableIngressClass: true
      nodeNetworkList:
      - networkName: <TKG-Mgmt-Network>     
  serviceEngineGroup: <Mgmt-Cluster-SEG>

The following AKODeploymentConfig has the sample values in place. Add the AVI label type=shared-services while deploying the shared services cluster to enforce this network configuration.

  • cloud: tanzu-vcenter-01
  • service engine group: tanzu-mgmt-segroup-01
  • control plane network: tkg_cluster_vip_pg
  • VIP/data network: tkg_mgmt_vip_pg
  • node network: tkg_mgmt_pg

apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  finalizers:
  - ako-operator.networking.tkg.tanzu.vmware.com
  generation: 2
  labels:
  name: tanzu-ako-for-shared
spec:
  adminCredentialRef:
    name: avi-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: avi-controller-ca
    namespace: tkg-system-networking
  cloudName: tanzu-vcenter01
  clusterSelector:
    matchLabels:
      type: shared-services
  controlPlaneNetwork:
    cidr: 172.16.80.0/24
    name: tkg_cluster_vip_pg
  controller: alb-ha.lab.vmw
  dataNetwork:
    cidr: 172.16.50.0/24
    name: tkg_mgmt_vip_pg
  extraConfigs:
    cniPlugin: antrea
    disableStaticRouteSync: true
    ingress:
      defaultIngressController: false
      disableIngressClass: true
      nodeNetworkList:
      - networkName: tkg_mgmt_pg
  serviceEngineGroup: tanzu-mgmt-segroup-01

After you have the AKO configuration file ready, use the kubectl command to set the context to Tanzu Kubernetes Grid management cluster and create the ADC:

# kubectl config use-context tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap
Switched to context "tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap".


# kubectl apply -f ako-shared-services.yaml
akodeploymentconfig.networking.tkg.tanzu.vmware.com/tanzu-ako-for-shared created

Use the following command to list all AKODeploymentConfigs created under the management cluster:

# kubectl get adc
NAME                                 AGE
install-ako-for-all                  16m
install-ako-for-management-cluster   16m
tanzu-ako-for-shared                 13s

Configure AKODeploymentConfig (ADC) for Workload Cluster to Enable NSX ALB L7 Ingress with NodePortLocal Mode

VMware recommends using NSX Advanced Load Balancer L7 ingress with NodePortLocal mode for the L7 application load balancing. This is enabled by creating a custom ADC with ingress settings enabled, and then applying the AVI_LABEL while deploying the workload cluster.

As per the defined architecture, the workload cluster control plane endpoint uses the TKG Cluster VIP Network, application load balancing uses the TKG Workload Data VIP network, and the virtual services are deployed in the tanzu-wkld-segroup-01 SE group.

The following are the changes in the ADC ingress section when compared with the default ADC:

  • disableIngressClass: Set to false to enable NSX ALB L7 ingress.

  • nodeNetworkList: Provide the TKG workload network name and CIDR.

  • serviceType: L7 ingress type; NodePortLocal is recommended.

  • shardVSSize: Virtual service size.

The format of the AKODeploymentConfig YAML file for enabling NSX ALB L7 Ingress is as follows.

apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: <unique-name-for-adc>
spec:
  adminCredentialRef:
    name: avi-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: avi-controller-ca
    namespace: tkg-system-networking
  cloudName: <cloud name configured in nsx alb>
  clusterSelector:
    matchLabels:
      <KEY>: <value>
  controller: <ALB-Controller-IP/FQDN>
  controlPlaneNetwork:
    cidr: <TKG-Cluster-VIP-Network-CIDR>
    name: <TKG-Cluster-VIP-Network-Name>
  dataNetwork:
    cidr: <TKG-Workload-VIP-network-CIDR>
    name: <TKG-Workload-VIP-network-Name>
  extraConfigs:
    cniPlugin: antrea
    disableStaticRouteSync: false                               # required
    ingress:
      disableIngressClass: false                                # required
      nodeNetworkList:                                          # required
        - networkName: <TKG-Workload-Network>
          cidrs:
            - <TKG-Workload-Network-CIDR>
      serviceType: NodePortLocal                                # required
      shardVSSize: MEDIUM                                       # required
  serviceEngineGroup: <Workload-Cluster-SEG>


The following AKODeploymentConfig has the sample values in place. Add the AVI label workload-l7-enabled=true while deploying the workload cluster to enforce this network configuration.

  • cloud: tanzu-vcenter-01
  • service engine group: tanzu-wkld-segroup-01
  • control plane network: tkg_cluster_vip_pg
  • VIP/data network: tkg_workload_vip_pg
  • node network: tkg_workload_pg

apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: tanzu-ako-for-workload-l7-ingress
spec:
  adminCredentialRef:
    name: avi-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: avi-controller-ca
    namespace: tkg-system-networking
  cloudName: tanzu-vcenter01
  clusterSelector:
    matchLabels:
      workload-l7-enabled: "true"
  controller: 172.16.10.10
  controlPlaneNetwork:
    cidr: 172.16.80.0/24
    name: tkg_cluster_vip_pg
  dataNetwork:
    cidr: 172.16.70.0/24
    name: tkg_workload_vip_pg
  extraConfigs:
    cniPlugin: antrea
    disableStaticRouteSync: false                               # required
    ingress:
      disableIngressClass: false                                # required
      nodeNetworkList:                                          # required
        - networkName: tkg_workload_pg
          cidrs:
            - 172.16.60.0/24
      serviceType: NodePortLocal                                # required
      shardVSSize: MEDIUM                                       # required
  serviceEngineGroup: tanzu-wkld-segroup-01

Use the kubectl command to set the context to Tanzu Kubernetes Grid management cluster and create the ADC:

# kubectl config use-context tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap
Switched to context "tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap".


# kubectl apply -f workload-adc-l7.yaml
akodeploymentconfig.networking.tkg.tanzu.vmware.com/tanzu-ako-for-workload-l7-ingress created

Use the following command to list all AKODeploymentConfigs created under the management cluster:

# kubectl get adc
NAME                                 AGE
install-ako-for-all                  17m
install-ako-for-management-cluster   17m
tanzu-ako-for-shared                 79s
tanzu-ako-for-workload-l7-ingress    12s

Now that you have successfully created the AKO deployment config, you need to apply the cluster labels while deploying the workload clusters to enable NSX Advanced Load Balancer L7 Ingress with NodePortLocal mode.
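
In the workload cluster configuration file, the label is supplied through the AVI_LABELS variable, as in the following excerpt from the sample configuration later in this document:

AVI_LABELS: |
    'workload-l7-enabled': 'true'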

Deploy Tanzu Kubernetes Grid Shared Services Cluster

Each Tanzu Kubernetes Grid instance can have only one shared services cluster. Create a shared services cluster if you intend to deploy Harbor.

The procedures for deploying a shared services cluster and a workload cluster are almost the same. A key difference is that you add the tanzu-services label to the shared services cluster as its cluster role. This label identifies the shared services cluster to the management cluster and workload clusters.

The shared services cluster uses the custom ADC tanzu-ako-for-shared created earlier to apply network settings similar to those of the management cluster. This is enforced by applying the AVI_LABEL type:shared-services while deploying the shared services cluster.

Deployment of the shared services cluster is done by creating a cluster configuration YAML file and invoking the tanzu cluster create -f <file-name> command. The YAML file used for the shared services deployment is usually smaller than the YAML used for the management cluster deployment because you do not need to define any AVI fields except AVI_CONTROL_PLANE_HA_PROVIDER and AVI_LABELS.

The following is a sample YAML for deploying a shared services cluster:

CLUSTER_NAME: tkg160-shared-services-airgap
CLUSTER_PLAN: prod
INFRASTRUCTURE_PROVIDER: vsphere
ENABLE_CEIP_PARTICIPATION: "true"
ENABLE_AUDIT_LOGGING: "true"
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
VSPHERE_SERVER: vcenter.lab.vmw
VSPHERE_USERNAME: [email protected]
VSPHERE_PASSWORD: <encoded:Vk13YXJlMSE=>
VSPHERE_DATACENTER: /tkgm-internet-dc1
VSPHERE_RESOURCE_POOL: /tkgm-internet-dc1/host/tkgm-internet-c1/Resources/tkg-sharedsvc-components
VSPHERE_DATASTORE: /tkgm-internet-dc1/datastore/vsanDatastore
VSPHERE_FOLDER: /tkgm-internet-dc1/vm/tkg-sharedsvc-components
VSPHERE_NETWORK: /tkgm-internet-dc1/network/tkg_mgmt_pg
VSPHERE_CONTROL_PLANE_ENDPOINT: #Leave blank as VIP network is configured in NSX ALB and IPAM is configured with VIP network
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAA[...]== [email protected]
VSPHERE_TLS_THUMBPRINT: DC:FA:81:1D:CA:08:21:AB:4E:15:BD:2B:AE:12:2C:6B:CA:65:49:B8
VSPHERE_INSECURE: "false"
OS_NAME: photon
OS_VERSION: "3"
OS_ARCH: amd64
VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
VSPHERE_CONTROL_PLANE_DISK_GIB: 40
VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
VSPHERE_WORKER_NUM_CPUS: 2
VSPHERE_WORKER_DISK_GIB: 40
VSPHERE_WORKER_MEM_MIB: 8192
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
AVI_LABELS: |
    'type': 'shared-services'
TKG_CUSTOM_IMAGE_REPOSITORY: "harbor-sa.lab.vmw/tkg-160"
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: LS0t[...]tLS0tLQ==
ENABLE_MHC: true
IDENTITY_MANAGEMENT_TYPE: "none"
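
After saving the file, create the shared services cluster by invoking the tanzu cluster create command. The file name below is illustrative:

tanzu cluster create -f tkg160-shared-services-airgap.yaml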

Cluster creation takes approximately 15-20 minutes to complete. Verify the health of the cluster and validate the cluster labels applied.

  1. After the cluster deployment completes, connect to the Tanzu Management Cluster context and verify the cluster labels.

    ## Connect to tkg management cluster
    
    # kubectl config use-context tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap
    Switched to context "tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap".
    
    ## verify the shared service cluster creation
    
    # tanzu cluster list
    NAME                           NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   PLAN  TKR
    tkg160-shared-services-airgap  default    running  3/3           3/3      v1.23.8+vmware.2  <none>  prod  v1.23.8---vmware.2-tkg.1
    
    ## Add the tanzu-services label to the shared services cluster as its cluster role. In the following command, "tkg160-shared-services-airgap" is the name of the shared services cluster
    
    # kubectl label cluster.cluster.x-k8s.io/tkg160-shared-services-airgap cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
    cluster.cluster.x-k8s.io/tkg160-shared-services-airgap labeled
    
    
    ## Validate that the AVI_LABEL has been applied and cluster is using the tanzu-ako-for-shared ADC.
    
    # kubectl get cluster tkg160-shared-services-airgap --show-labels
    NAME                            PHASE         AGE   VERSION   LABELS
    tkg160-shared-services-airgap   Provisioned   49m             cluster-role.tkg.tanzu.vmware.com/tanzu-services=,networking.tkg.tanzu.vmware.com/avi=tanzu-ako-for-shared,tanzuKubernetesRelease=v1.23.8---vmware.2-tkg.1,tkg.tanzu.vmware.com/cluster-name=tkg160-shared-services-airgap,type=shared-services
    

  2. Connect to the admin context of the shared service cluster using the following commands and verify the ako pod status.

    ## Use the following command to get the admin context of the shared services cluster. In the following command, tkg160-shared-services-airgap is the name of the shared services cluster
    
    # tanzu cluster kubeconfig get tkg160-shared-services-airgap --admin
    Credentials of cluster 'tkg160-shared-services-airgap' have been saved
    You can now access the cluster by running 'kubectl config use-context tkg160-shared-services-airgap-admin@tkg160-shared-services-airgap'
    
    
    
    ## Use the following command to use the context of Shared Service Cluster
    
    # kubectl config use-context tkg160-shared-services-airgap-admin@tkg160-shared-services-airgap
    Switched to context "tkg160-shared-services-airgap-admin@tkg160-shared-services-airgap".
    
    
    # Verify that ako pod gets deployed in avi-system namespace
    
    kubectl get pods -n avi-system
    NAME    READY   STATUS    RESTARTS   AGE
    ako-0   1/1     Running   0          8m55s
    

The shared services cluster is now successfully created, and you can proceed to deploying the workload clusters.

Deploy Tanzu Kubernetes Grid Workload Cluster

Deployment of the workload cluster is done using a YAML file similar to the shared services cluster YAML but customized for the workload cluster placement objects.

The following is a sample YAML for deploying the workload cluster.

CLUSTER_NAME: tkg160-workload-l7-airgap
CLUSTER_PLAN: prod
INFRASTRUCTURE_PROVIDER: vsphere
ENABLE_CEIP_PARTICIPATION: "true"
ENABLE_AUDIT_LOGGING: "true"
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
VSPHERE_SERVER: vcenter.lab.vmw
VSPHERE_USERNAME: [email protected]
VSPHERE_PASSWORD: <encoded:Vk13YXJlMSE=>
VSPHERE_DATACENTER: /tkgm-internet-dc1
VSPHERE_RESOURCE_POOL: /tkgm-internet-dc1/host/tkgm-internet-c1/Resources/tkg-workload01-components
VSPHERE_DATASTORE: /tkgm-internet-dc1/datastore/vsanDatastore
VSPHERE_FOLDER: /tkgm-internet-dc1/vm/tkg-workload01-components
VSPHERE_NETWORK: /tkgm-internet-dc1/network/tkg_workload_pg
VSPHERE_CONTROL_PLANE_ENDPOINT: #Leave blank as VIP network is configured in NSX ALB and IPAM is configured with VIP network
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAA[...]== [email protected]
VSPHERE_TLS_THUMBPRINT: DC:FA:81:1D:CA:08:21:AB:4E:15:BD:2B:AE:12:2C:6B:CA:65:49:B8
VSPHERE_INSECURE: "false"
OS_NAME: photon
OS_VERSION: "3"
OS_ARCH: amd64
VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
VSPHERE_CONTROL_PLANE_DISK_GIB: 40
VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
VSPHERE_WORKER_NUM_CPUS: 2
VSPHERE_WORKER_DISK_GIB: 40
VSPHERE_WORKER_MEM_MIB: 8192
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
AVI_LABELS: |
    'workload-l7-enabled': 'true'
TKG_CUSTOM_IMAGE_REPOSITORY: "harbor-sa.lab.vmw/tkg-160"
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: LS0t[...]tLS0tLQ==
ENABLE_MHC: true
IDENTITY_MANAGEMENT_TYPE: "none"
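
After saving the file, create the workload cluster by invoking the tanzu cluster create command. The file name below is illustrative:

tanzu cluster create -f tkg160-workload-l7-airgap.yaml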

Cluster creation takes roughly 15-20 minutes to complete. Verify the health of the cluster and validate the cluster labels applied.

  1. Connect to the Tanzu Management Cluster context and verify the cluster labels for the workload cluster.

    ## Connect to tkg management cluster
    
    # kubectl config use-context tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap
    Switched to context "tkg160-mgmt-airgap-admin@tkg160-mgmt-airgap".
    
    
    ## verify the workload cluster creation
    
    # tanzu cluster list
    NAME                           NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   PLAN  TKR
    tkg160-shared-services-airgap  default    running  3/3           3/3      v1.23.8+vmware.2  <none>  prod  v1.23.8---vmware.2-tkg.1
    tkg160-workload-l7-airgap      default    running  3/3           3/3      v1.23.8+vmware.2  <none>  prod  v1.23.8---vmware.2-tkg.1
    
    
    ## Validate the cluster labels applied and AKO Deployment Config (ADC) used by the workload cluster
    
    # kubectl get cluster tkg160-workload-l7-airgap --show-labels
    NAME                        PHASE         AGE   VERSION   LABELS
    tkg160-workload-l7-airgap   Provisioned   66m             networking.tkg.tanzu.vmware.com/avi=tanzu-ako-for-workload-l7-ingress,tanzuKubernetesRelease=v1.23.8---vmware.2-tkg.1,tkg.tanzu.vmware.com/cluster-name=tkg160-workload-l7-airgap,workload-l7-enabled=true
    
    

  2. Connect to admin context of the workload cluster using the following commands and validate the ako pod status.

    ## Use the following command to get the admin context of workload Cluster.
    
    # tanzu cluster kubeconfig get tkg160-workload-l7-airgap --admin
    Credentials of cluster 'tkg160-workload-l7-airgap' have been saved
    You can now access the cluster by running 'kubectl config use-context tkg160-workload-l7-airgap-admin@tkg160-workload-l7-airgap'
    
    
    ## Use the following command to use the context of workload Cluster
    
    # kubectl config use-context tkg160-workload-l7-airgap-admin@tkg160-workload-l7-airgap
    Switched to context "tkg160-workload-l7-airgap-admin@tkg160-workload-l7-airgap".
    
    
    # Verify that ako pod gets deployed in avi-system namespace
    
    # kubectl get pods -n avi-system
    NAME    READY   STATUS    RESTARTS      AGE
    ako-0   1/1     Running   0             40m
    
    
    # verify the nodes and pods status by running the command:
    kubectl get nodes -o wide
    
    kubectl get pods -A 
    

The workload cluster is successfully deployed and the AKO pod is running on the cluster. You can now deploy user-managed packages on this cluster.

Deploy User-Managed Packages

User-managed packages are installed after workload cluster creation. These packages extend the core functionality of Kubernetes clusters created by Tanzu Kubernetes Grid.

Tanzu Kubernetes Grid includes the following user-managed packages. These packages provide in-cluster and shared services to the Kubernetes clusters that are running in your Tanzu Kubernetes Grid environment.

Function Package Location
Certificate Management Cert Manager Workload and shared services cluster
Container networking Multus Workload cluster
Container registry Harbor Shared services cluster
Ingress control Contour Workload and shared services cluster
Log forwarding Fluent Bit Workload cluster
Monitoring Grafana, Prometheus Workload cluster

User-managed packages can be installed via CLI by invoking the tanzu package install command. Before installing the user-managed packages, ensure that you have switched to the context of the cluster where you want to install the packages.

Also, ensure that the tanzu-standard repository is configured on the cluster where you want to install the packages. By default, the newly deployed clusters should have the tanzu-standard repository configured.

You can run the command tanzu package repository list -n tanzu-package-repo-global to verify this. Also, ensure that the repository status is Reconcile succeeded.

# tanzu package repository list -n tanzu-package-repo-global

NAME            REPOSITORY                                        TAG     STATUS               DETAILS
tanzu-standard  harbor-sa.lab.vmw/tkg-160/packages/standard/repo  v1.6.0  Reconcile succeeded
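
If the repository is missing or still points to the public VMware registry, you can add or update it to reference your internal registry. The following is a sketch based on the repository path and tag shown above:

tanzu package repository add tanzu-standard --url harbor-sa.lab.vmw/tkg-160/packages/standard/repo:v1.6.0 --namespace tanzu-package-repo-global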

Install Cert Manager

The first package that you should install on your cluster is the cert-manager package, which adds certificates and certificate issuers as resource types in Kubernetes clusters and simplifies the process of obtaining, renewing, and using those certificates.

  1. Capture the available Cert Manager package versions.

    # tanzu package available list cert-manager.tanzu.vmware.com -n tanzu-package-repo-global
    
    NAME                           VERSION               RELEASED-AT
    cert-manager.tanzu.vmware.com  1.1.0+vmware.1-tkg.2  2020-11-24 18:00:00 +0000 UTC
    cert-manager.tanzu.vmware.com  1.1.0+vmware.2-tkg.1  2020-11-24 18:00:00 +0000 UTC
    cert-manager.tanzu.vmware.com  1.5.3+vmware.2-tkg.1  2021-08-23 17:22:51 +0000 UTC
    cert-manager.tanzu.vmware.com  1.5.3+vmware.4-tkg.1  2021-08-23 17:22:51 +0000 UTC
    cert-manager.tanzu.vmware.com  1.7.2+vmware.1-tkg.1  2021-10-29 12:00:00 +0000 UTC
    
  2. Install the cert-manager package.

    Capture the latest version from the previous command. If there are multiple versions available, check the RELEASED-AT column to identify the latest one. This document uses version 1.7.2+vmware.1-tkg.1 for installation.

    The following command installs the cert-manager package:

    tanzu package install cert-manager --package-name cert-manager.tanzu.vmware.com --namespace cert-manager --version <AVAILABLE-PACKAGE-VERSION> --create-namespace
    
    # tanzu package install cert-manager --package-name cert-manager.tanzu.vmware.com --namespace cert-manager --version 1.7.2+vmware.1-tkg.1 --create-namespace
    Installing package 'cert-manager.tanzu.vmware.com'
    Creating namespace 'cert-manager'
    Getting package metadata for 'cert-manager.tanzu.vmware.com'
    Creating service account 'cert-manager-cert-manager-sa'
    Creating cluster admin role 'cert-manager-cert-manager-cluster-role'
    Creating cluster role binding 'cert-manager-cert-manager-cluster-rolebinding'
    Creating package resource
    Waiting for 'PackageInstall' reconciliation for 'cert-manager'
    'PackageInstall' resource install status: Reconciling
    'PackageInstall' resource install status: ReconcileSucceeded
    'PackageInstall' resource successfully reconciled
    
    Added installed package 'cert-manager'
    
  3. Confirm that the cert-manager package has been installed successfully and the status is Reconcile succeeded.

    # tanzu package installed get cert-manager -n cert-manager
    NAME:                    cert-manager
    PACKAGE-NAME:            cert-manager.tanzu.vmware.com
    PACKAGE-VERSION:         1.7.2+vmware.1-tkg.1
    STATUS:                  Reconcile succeeded
    CONDITIONS:              [{ReconcileSucceeded True  }]
    USEFUL-ERROR-MESSAGE:
    

Install Contour

Contour is an open-source Kubernetes ingress controller providing the control plane for the Envoy edge and service proxy. Tanzu Kubernetes Grid includes signed binaries for Contour and Envoy, which you can deploy into workload clusters to provide ingress control services in those clusters.

After you have set up the cluster, you must first create the configuration file that is used when you install the Contour package and then install the package.

Package installation can be customized by entering the user-configurable values in YAML format. Following is an example YAML for customizing Contour installation.

---
infrastructure_provider: vsphere
namespace: tanzu-system-ingress
contour:
 configFileContents: {}
 useProxyProtocol: false
 replicas: 2
 pspNames: "vmware-system-restricted"
 logLevel: info
envoy:
 service:
   type: LoadBalancer
   annotations: {}
   nodePorts:
     http: null
     https: null
   externalTrafficPolicy: Cluster
   disableWait: false
 hostPorts:
   enable: true
   http: 80
   https: 443
 hostNetwork: false
 terminationGracePeriodSeconds: 300
 logLevel: info
 pspNames: null
certificates:
 duration: 8760h
 renewBefore: 360h

For a full list of user-configurable values, see Configure the Contour Extension.

  1. Capture the available Contour package versions.

    # tanzu package available list contour.tanzu.vmware.com -n tanzu-package-repo-global
    
    NAME                      VERSION                RELEASED-AT
    contour.tanzu.vmware.com  1.17.1+vmware.1-tkg.1  2021-07-23 18:00:00 +0000 UTC
    contour.tanzu.vmware.com  1.17.2+vmware.1-tkg.2  2021-07-23 18:00:00 +0000 UTC
    contour.tanzu.vmware.com  1.17.2+vmware.1-tkg.3  2021-07-23 18:00:00 +0000 UTC
    contour.tanzu.vmware.com  1.18.2+vmware.1-tkg.1  2021-10-05 00:00:00 +0000 UTC
    contour.tanzu.vmware.com  1.20.2+vmware.1-tkg.1  2022-06-14 00:00:00 +0000 UTC
    

    Capture the latest version from the previous command. If there are multiple versions available, check the RELEASED-AT column to identify the latest one. This document uses version 1.20.2+vmware.1-tkg.1 for installation.

  2. Install the Contour package.

    tanzu package install contour --package-name contour.tanzu.vmware.com --version <AVAILABLE-PACKAGE-VERSION> --values-file <Path_to_contour-data-values.yaml_file> --namespace tanzu-system-contour --create-namespace
    
    # tanzu package install contour --package-name contour.tanzu.vmware.com --version 1.20.2+vmware.1-tkg.1 --values-file ./contour-data-values.yaml --namespace tanzu-system-ingress --create-namespace
    
    Installing package 'contour.tanzu.vmware.com'
    Creating namespace 'tanzu-system-ingress'
    Getting package metadata for 'contour.tanzu.vmware.com'
    Creating service account 'contour-tanzu-system-ingress-sa'
    Creating cluster admin role 'contour-tanzu-system-ingress-cluster-role'
    Creating cluster role binding 'contour-tanzu-system-ingress-cluster-rolebinding'
    Creating secret 'contour-tanzu-system-ingress-values'
    Creating package resource
    Waiting for 'PackageInstall' reconciliation for 'contour'
    'PackageInstall' resource install status: Reconciling
    'PackageInstall' resource install status: ReconcileSucceeded
    
    Added installed package 'contour'
    
  3. Confirm that the Contour package has been installed and the status is Reconcile succeeded.

    # tanzu package installed get contour --namespace tanzu-system-ingress
    
    NAME:                    contour
    PACKAGE-NAME:            contour.tanzu.vmware.com
    PACKAGE-VERSION:         1.20.2+vmware.1-tkg.1
    STATUS:                  Reconcile succeeded
    CONDITIONS:              [{ReconcileSucceeded True  }]
    USEFUL-ERROR-MESSAGE:
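
    Optionally, verify that the Envoy service created by the Contour package has received a load balancer IP address from NSX Advanced Load Balancer. The service name and namespace below correspond to the values file used above:

    # kubectl get service envoy -n tanzu-system-ingress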
    

Install Harbor

Harbor is an open-source container registry. Harbor Registry may be used as a private registry for container images that you want to deploy to Tanzu Kubernetes clusters.

Tanzu Kubernetes Grid includes signed binaries for Harbor, which you can deploy into:

  • A workload cluster to provide container registry services for that cluster
  • A shared services cluster to provide container registry services for other Tanzu Kubernetes (workload) clusters.

When deployed as a shared service, Harbor is available to all of the workload clusters in a given Tanzu Kubernetes Grid instance.

Follow this procedure to deploy Harbor into a workload cluster or a shared services cluster.

  1. Confirm that the Harbor package is available in the cluster and retrieve the version of the available package.

    # tanzu package available list harbor.tanzu.vmware.com -A
    
    - Retrieving package versions for harbor.tanzu.vmware.com...
    
    NAME                     VERSION               RELEASED-AT                    NAMESPACE
    harbor.tanzu.vmware.com  2.2.3+vmware.1-tkg.1  2021-07-07 18:00:00 +0000 UTC  tanzu-package-repo-global
    harbor.tanzu.vmware.com  2.2.3+vmware.1-tkg.2  2021-07-07 18:00:00 +0000 UTC  tanzu-package-repo-global
    harbor.tanzu.vmware.com  2.3.3+vmware.1-tkg.1  2021-09-28 06:05:00 +0000 UTC  tanzu-package-repo-global
    harbor.tanzu.vmware.com  2.5.3+vmware.1-tkg.1  2021-09-28 06:05:00 +0000 UTC  tanzu-package-repo-global
    
  2. Create a configuration file named harbor-data-values.yaml by executing the following commands:

    image_url=$(kubectl -n tanzu-package-repo-global get packages harbor.tanzu.vmware.com.2.5.3+vmware.1-tkg.1 -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')
    
    imgpkg pull -b $image_url -o /tmp/harbor-package
    
    cp /tmp/harbor-package/config/values.yaml harbor-data-values.yaml
    
  3. Set the mandatory passwords and secrets in the harbor-data-values.yaml file:

    bash /tmp/harbor-package/config/scripts/generate-passwords.sh harbor-data-values.yaml
    
  4. Edit the harbor-data-values.yaml file and configure the values for the following mandatory parameters.

    • namespace
    • port
    • harborAdminPassword
    • secretKey

    You can also change the values for other parameters to meet the requirements for your deployment. For the full list of the user-configurable values, see Deploy Harbor into a Cluster.
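
    The following is an illustrative excerpt of harbor-data-values.yaml showing these parameters with placeholder values; the exact keys and layout come from the values.yaml file that you copied in step 2:

    namespace: tanzu-system-registry
    port:
      https: 443
    harborAdminPassword: <admin-password-set-by-generate-passwords.sh-or-your-own>
    secretKey: <16-character-secret>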

  5. Remove the comments in the harbor-data-values.yaml file:

    yq -i eval '... comments=""' harbor-data-values.yaml
    
  6. Install the Harbor package by executing the following command:

    # tanzu package install harbor --package-name harbor.tanzu.vmware.com --version 2.5.3+vmware.1-tkg.1 --values-file ./harbor-data-values.yaml --namespace tanzu-system-registry --create-namespace
    
    Installing package 'harbor.tanzu.vmware.com'
    Creating namespace 'tanzu-system-registry'
    Getting package metadata for 'harbor.tanzu.vmware.com'
    Creating service account 'harbor-tanzu-system-registry-sa'
    Creating cluster admin role 'harbor-tanzu-system-registry-cluster-role'
    Creating cluster role binding 'harbor-tanzu-system-registry-cluster-rolebinding'
    Creating secret 'harbor-tanzu-system-registry-values'
    Creating package resource
    Waiting for 'PackageInstall' reconciliation for 'harbor'
    'PackageInstall' resource install status: Reconciling
    'PackageInstall' resource install status: ReconcileSucceeded
    'PackageInstall' resource successfully reconciled
    
    Added installed package 'harbor'
    
  7. Confirm that the Harbor package has been installed and the status is Reconcile succeeded.

    # tanzu package installed get harbor --namespace tanzu-system-registry
    
    NAME:                    harbor
    PACKAGE-NAME:            harbor.tanzu.vmware.com
    PACKAGE-VERSION:         2.5.3+vmware.1-tkg.1
    STATUS:                  Reconcile succeeded
    CONDITIONS:              [{ReconcileSucceeded True  }]
    USEFUL-ERROR-MESSAGE:
    

Install Prometheus

Prometheus is a system and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true. Alertmanager handles alerts generated by Prometheus and routes them to their receiving endpoints.

Do the following to deploy Prometheus into a workload cluster:

  1. Capture the available Prometheus version.

    # tanzu package available list prometheus.tanzu.vmware.com -n tanzu-package-repo-global
    
    NAME                         VERSION                RELEASED-AT
    prometheus.tanzu.vmware.com  2.27.0+vmware.1-tkg.1  2021-05-12 18:00:00 +0000 UTC
    prometheus.tanzu.vmware.com  2.27.0+vmware.2-tkg.1  2021-05-12 18:00:00 +0000 UTC
    prometheus.tanzu.vmware.com  2.36.2+vmware.1-tkg.1  2022-06-23 18:00:00 +0000 UTC
    

    Capture the latest version from the previous command. If there are multiple versions available, check the RELEASED-AT column to identify the latest one. This document uses version 2.36.2+vmware.1-tkg.1 for installation.

  2. Retrieve the template of the Prometheus package’s default configuration:

    image_url=$(kubectl -n tanzu-package-repo-global get packages prometheus.tanzu.vmware.com.2.36.2+vmware.1-tkg.1 -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')
    
    imgpkg pull -b $image_url -o /tmp/prometheus-package-2.36.2+vmware.1-tkg.1
    
    cp /tmp/prometheus-package-2.36.2+vmware.1-tkg.1/config/values.yaml prometheus-data-values.yaml
    

    This creates a configuration file named prometheus-data-values.yaml that you can modify.

  3. To customize the Prometheus installation, modify the following values.

    Key Default Value Modified Value
    ingress.tlsCertificate.tls.crt Null <Cert provided in input file> (optional)
    ingress.tlsCertificate.tls.key Null <Cert key provided in input file> (optional)
    ingress.enabled false true
    ingress.virtual_host_fqdn prometheus.system.tanzu prometheus.<your-domain>

    To see a full list of user-configurable configuration parameters, see Prometheus Package Configuration Parameters.
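
    The corresponding section of prometheus-data-values.yaml would look roughly like the following sketch; the key names follow the table above, and the FQDN and certificate values are placeholders:

    ingress:
      enabled: true
      virtual_host_fqdn: "prometheus.<your-domain>"
      tlsCertificate:
        tls.crt: |
          <full certificate chain>
        tls.key: |
          <certificate private key>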

  4. After you make any changes needed to your prometheus-data-values.yaml file, remove all comments in the file:

    yq -i eval '... comments=""' prometheus-data-values.yaml
    
  5. Install Prometheus package.

    # tanzu package install prometheus --package-name prometheus.tanzu.vmware.com --version 2.36.2+vmware.1-tkg.1 --values-file ./prometheus-data-values.yaml --namespace tanzu-system-monitoring --create-namespace
    
    Installing package 'prometheus.tanzu.vmware.com'
    Creating namespace 'tanzu-system-monitoring'
    Getting package metadata for 'prometheus.tanzu.vmware.com'
    Creating service account 'prometheus-tanzu-system-monitoring-sa'
    Creating cluster admin role 'prometheus-tanzu-system-monitoring-cluster-role'
    Creating cluster role binding 'prometheus-tanzu-system-monitoring-cluster-rolebinding'
    Creating secret 'prometheus-tanzu-system-monitoring-values'
    Creating package resource
    Waiting for 'PackageInstall' reconciliation for 'prometheus'
    'PackageInstall' resource install status: Reconciling
    'PackageInstall' resource install status: ReconcileSucceeded
    
    Added installed package 'prometheus'
    
  6. Confirm that the Prometheus package has been installed successfully and the status is Reconcile succeeded.

    # tanzu package installed get prometheus -n tanzu-system-monitoring
    
    NAME:                    prometheus
    PACKAGE-NAME:            prometheus.tanzu.vmware.com
    PACKAGE-VERSION:         2.36.2+vmware.1-tkg.1
    STATUS:                  Reconcile succeeded
    CONDITIONS:              [{ReconcileSucceeded True  }]
    USEFUL-ERROR-MESSAGE:
    

Install Grafana

Grafana allows you to query, visualize, alert on, and explore metrics no matter where they are stored. Grafana provides tools to form graphs and visualizations from application data.

Note: Grafana is configured with Prometheus as a default data source. If you have customized the Prometheus deployment namespace and it is not deployed in the default namespace, tanzu-system-monitoring, you need to change the Grafana data source configuration as described in step 4 below.

  1. Retrieve the version of the available package.

    # tanzu package available list grafana.tanzu.vmware.com -A
    
    NAME                      VERSION                RELEASED-AT                    NAMESPACE
    grafana.tanzu.vmware.com  7.5.16+vmware.1-tkg.1  2022-05-19 18:00:00 +0000 UTC  tanzu-package-repo-global
    grafana.tanzu.vmware.com  7.5.7+vmware.1-tkg.1   2021-05-19 18:00:00 +0000 UTC  tanzu-package-repo-global
    grafana.tanzu.vmware.com  7.5.7+vmware.2-tkg.1   2021-05-19 18:00:00 +0000 UTC  tanzu-package-repo-global
    

    Capture the latest version from the previous command. If there are multiple versions available, check the RELEASED-AT column to identify the latest one. This document uses version 7.5.16+vmware.1-tkg.1 for installation.

  2. Retrieve the template of the Grafana package’s default configuration.

    image_url=$(kubectl -n tanzu-package-repo-global get packages grafana.tanzu.vmware.com.7.5.16+vmware.1-tkg.1 -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')
    
    imgpkg pull -b $image_url -o /tmp/grafana-package-7.5.16+vmware.1-tkg.1
    
    cp /tmp/grafana-package-7.5.16+vmware.1-tkg.1/config/values.yaml grafana-data-values.yaml
    

    This creates a configuration file named grafana-data-values.yaml that you can modify. For a full list of user-configurable values, see Grafana Package Configuration Parameters.

  3. Edit grafana-data-values.yaml and replace the following with your custom values.

    Key Default Value Modified Value
    virtual_host_fqdn grafana.system.tanzu grafana.<your-domain>
    secret.admin_password Null Your password in Base64-encoded format
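
    To generate the Base64-encoded password value, you can use a standard command such as the following; the plain-text password shown is a placeholder:

    # echo -n 'Your_Grafana_Password' | base64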
  4. (Optional) Modify the Grafana data source configuration.

    Grafana is configured with Prometheus as a default data source. If you have customized the Prometheus deployment namespace and it is not deployed in the default namespace, tanzu-system-monitoring, you need to change the Grafana data source configuration in grafana-data-values.yaml.

    datasources:
            - name: Prometheus
              type: prometheus
              url: prometheus-server.<change-to-prometheus-namespace>.svc.cluster.local
    
  5. Remove all comments from the grafana-data-values.yaml file:

    yq -i eval '... comments=""' grafana-data-values.yaml
    
  6. Install Grafana.

    # tanzu package install grafana --package-name grafana.tanzu.vmware.com --version 7.5.16+vmware.1-tkg.1 --values-file grafana-data-values.yaml --namespace tanzu-system-dashboards --create-namespace
    
    Installing package 'grafana.tanzu.vmware.com'
    Creating namespace 'tanzu-system-dashboards'
    Getting package metadata for 'grafana.tanzu.vmware.com'
    Creating service account 'grafana-tanzu-system-dashboards-sa'
    Creating cluster admin role 'grafana-tanzu-system-dashboards-cluster-role'
    Creating cluster role binding 'grafana-tanzu-system-dashboards-cluster-rolebinding'
    Creating secret 'grafana-tanzu-system-dashboards-values'
    Creating package resource
    Waiting for 'PackageInstall' reconciliation for 'grafana'
    'PackageInstall' resource install status: Reconciling
    'PackageInstall' resource install status: ReconcileSucceeded
    'PackageInstall' resource successfully reconciled
    
    Added installed package 'grafana'
    
  7. Confirm that the Grafana package has been installed and the status is Reconcile succeeded.

    # tanzu package installed get grafana -n tanzu-system-dashboards
    
    NAME:                    grafana
    PACKAGE-NAME:            grafana.tanzu.vmware.com
    PACKAGE-VERSION:         7.5.16+vmware.1-tkg.1
    STATUS:                  Reconcile succeeded
    CONDITIONS:              [{ReconcileSucceeded True  }]
    USEFUL-ERROR-MESSAGE:
    

Install Fluent Bit

Fluent Bit is a lightweight log processor and forwarder that allows you to collect data and logs from different sources, unify them, and send them to multiple destinations.

The current release of Fluent Bit allows you to gather logs from management clusters or Tanzu Kubernetes clusters running in vSphere, Amazon EC2, and Azure. You can then forward them to a log storage provider such as Elasticsearch, Kafka, Splunk, or an HTTP endpoint.

The example shown in this document forwards logs from Tanzu Kubernetes clusters to vRealize Log Insight using the syslog output plugin.

  1. Retrieve the version of the available package.

    # tanzu package available list fluent-bit.tanzu.vmware.com -A
    
    NAME                         VERSION                RELEASED-AT                    NAMESPACE
    fluent-bit.tanzu.vmware.com  1.7.5+vmware.1-tkg.1   2021-05-13 18:00:00 +0000 UTC  tanzu-package-repo-global
    fluent-bit.tanzu.vmware.com  1.7.5+vmware.2-tkg.1   2021-05-13 18:00:00 +0000 UTC  tanzu-package-repo-global
    fluent-bit.tanzu.vmware.com  1.8.15+vmware.1-tkg.1  2022-05-24 18:00:00 +0000 UTC  tanzu-package-repo-global
    

    Capture the latest version from the previous command. If there are multiple versions available, check the RELEASED-AT column to identify the latest one. This document uses version 1.8.15+vmware.1-tkg.1 for installation.

  2. Retrieve the template of the Fluent Bit package’s default configuration.

    image_url=$(kubectl -n tanzu-package-repo-global get packages fluent-bit.tanzu.vmware.com.1.8.15+vmware.1-tkg.1 -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')
    
    imgpkg pull -b $image_url -o /tmp/fluent-bit-1.8.15+vmware.1-tkg.1
    
    cp /tmp/fluent-bit-1.8.15+vmware.1-tkg.1/config/values.yaml fluentbit-data-values.yaml
    
  3. Modify the resulting fluentbit-data-values.yaml file and configure the output endpoint of your choice. A sample output configuration for sending logs to vRealize Log Insight over syslog is shown in the following example.

    [OUTPUT]
            Name                 syslog
            Match                *
            Host                 vrli.lab.vmw
            Port                 514
            Mode                 udp
            Syslog_Format        rfc5424
            Syslog_Hostname_key  tkg_cluster
            Syslog_Appname_key   pod_name
            Syslog_Procid_key    container_name
            Syslog_Message_key   message
            Syslog_SD_key        k8s
            Syslog_SD_key        labels
            Syslog_SD_key        annotations
            Syslog_SD_key        tkg
    
  4. Deploy Fluent Bit.

    # tanzu package install fluent-bit --package-name fluent-bit.tanzu.vmware.com --version 1.8.15+vmware.1-tkg.1 --values-file ./fluentbit-data-values.yaml --namespace tanzu-system-logging --create-namespace
    
    Installing package 'fluent-bit.tanzu.vmware.com'
    Creating namespace 'tanzu-system-logging'
    Getting package metadata for 'fluent-bit.tanzu.vmware.com'
    Creating service account 'fluent-bit-tanzu-system-logging-sa'
    Creating cluster admin role 'fluent-bit-tanzu-system-logging-cluster-role'
    Creating cluster role binding 'fluent-bit-tanzu-system-logging-cluster-rolebinding'
    Creating package resource
    Waiting for 'PackageInstall' reconciliation for 'fluent-bit'
    'PackageInstall' resource install status: Reconciling
    'PackageInstall' resource install status: ReconcileSucceeded
    
    Added installed package 'fluent-bit'
    
  5. Confirm that the Fluent Bit package has been installed and the status is Reconcile succeeded.

    # tanzu package installed get fluent-bit -n tanzu-system-logging
    
    NAME:                    fluent-bit
    PACKAGE-NAME:            fluent-bit.tanzu.vmware.com
    PACKAGE-VERSION:         1.8.15+vmware.1-tkg.1
    STATUS:                  Reconcile succeeded
    CONDITIONS:              [{ReconcileSucceeded True  }]
    USEFUL-ERROR-MESSAGE:
    