Deploy Tanzu for Kubernetes Operations on VMware Cloud on AWS

This document provides step-by-step instructions for deploying Tanzu for Kubernetes Operations on VMware Cloud on AWS.

The scope of this document is limited to the deployment steps based on the reference design in VMware Tanzu Standard on VMware Cloud on AWS Reference Design.


The instructions in this document assume that you have the following setup:

  • VMware Cloud subscription
  • SDDC deployment
  • Access to vCenter over HTTPS
  • NTP configured on all ESXi hosts and vCenter server


The following are the high-level steps for deploying Tanzu Standard on VMware Cloud on AWS:

  1. Create and Configure Network Segment.
  2. Create Inventory Groups and Firewall Configuration.
  3. Request Public IP for Tanzu Kubernetes Nodes.
  4. Configure Resource pools and VM Folders in vCenter.
  5. Deploy and Configure NSX Advanced Load Balancer.
  6. Configure Bootstrap Environment.
  7. Deploy Management Cluster.
  8. Set up a Shared Services Workload Cluster.

Create and Configure Network Segment

  1. In the VMware Cloud Console, open the SDDC pane and click Networking & Security > Network > Segments.

  2. Click Add Segment to create a new network segment with a unique subnet for the Tanzu Kubernetes Grid management network.

    • For this deployment, we will create a new segment for each one of the following: management cluster, workload cluster, and NSX Advanced Load Balancer.
    • Ensure that the new subnet CIDR does not overlap with sddc-cgw-network-1 or any other existing segments.
    • The bootstrap VM and the Tanzu Kubernetes Grid Management cluster nodes will be attached to this segment.
    • For network isolation, we recommend creating new segments for each workload cluster.

      | Configuration for          | Segment Name     | Type   |
      |----------------------------|------------------|--------|
      | TKG Management Cluster     | m01tkg01-seg01   | Routed |
      | TKG Workload Cluster       | w01tkg01-seg01   | Routed |
      | NSX ALB Management Network | m01avimgmt-seg01 | Routed |

  3. For the management and workload cluster segments, click Edit DHCP Config. A Set DHCP Config pane appears.

  4. In the Set DHCP Config pane:

    • Set DHCP Config to Enabled.
    • Set DHCP Ranges to an IP address range or CIDR within the segment’s subnet, which leaves a pool of addresses free to serve as static IP addresses for Tanzu Kubernetes clusters. Each management cluster and workload cluster that Tanzu Kubernetes Grid creates will require a unique static IP address from this pool.

    For this deployment, set the DHCP range within the segment's subnet so that a pool of addresses remains available as static IPs. We will use the first available static IP for cluster IP addressing and the rest for the NSX Advanced Load Balancer VIP network.

    • Set the DNS server details for your environment.

    The following shows the DHCP configuration for a management and a workload cluster.

Create Inventory Groups and Firewall Configuration

Set up the following firewall rules. You will first create management and compute inventory groups. Then, you will configure the firewall rules for the inventory groups.

| Source | Destination | Protocol and Port | Description | Configured on |
|---|---|---|---|---|
| TKG Management and Workload Network | DNS Server | UDP:53 | Name resolution | Compute Gateway |
| TKG Management and Workload Network | NTP Server | UDP:123 | Time synchronization | Compute Gateway |
| TKG Management and Workload Network | vCenter Server | TCP:443 | To access vCenter to create VMs and storage volumes | Compute and Management Gateway |
| TKG Management and Workload Network | Internet | TCP:443 | Allow components to retrieve container images required for cluster building from the repositories listed under ~/.tanzu/tkg/bom/ | Compute Gateway |
| TKG Management Cluster Network | TKG Workload Cluster HAProxy | TCP:6443, 5556 | Allow the management cluster to configure the workload cluster | Compute Gateway |
| TKG Workload Cluster Network | TKG Management Cluster HAProxy | TCP:6443 | Allow the workload cluster to register with the management cluster | Compute Gateway |
| AVI Management Network | vCenter Server | TCP:443 | Allow AVI to read vCenter and port group information | Compute and Management Gateway |
| TKG Management and Workload Network | AVI Management Network | TCP:443 | Allow TKG clusters to communicate with AVI for load balancer and ingress configuration | Compute Gateway |

  1. Create and configure the following inventory groups in Networking & Security > Inventory > Groups > Compute Groups.

    | Group Name | Members |
    |---|---|
    | TKG_Management_Network | IP range of the TKG Management Cluster |
    | TKG_Workload_Networks | IP range of the TKG Workload Cluster |
    | TKG_Management_ControlPlaneIPs | IP address of the TKG Management Control Plane |
    | TKG_Workload_ControlPlaneIPs | IP address of the TKG Workload Control Plane |
    | AVI_Management_Network | IP range of the AVI Management Cluster |
    | vCenter_IP | IP of the Management vCenter |
    | DNS_IPs | IPs of the DNS server |
    | NTP_IPs | IPs of the NTP server |

  2. Create and configure the following inventory groups in Networking & Security > Inventory > Groups > Management Groups.

    Note: Because a vCenter group is already created by the system, we do not need to create a separate group for vCenter.

    | Group Name | Members |
    |---|---|
    | TKG_Workload_Networks | IP range of the TKG Workload Cluster |
    | TKG_Management_Network | IP range of the TKG Management Cluster |
    | AVI_Management_Network | IP range of the AVI Management Cluster |

  3. Create the following firewall rules in Networking & Security > Security > Gateway Firewall > Compute Groups.

    | Rule Name | Source Group Name | Destination Group Name | Protocol and Port |
    |---|---|---|---|
    | TKG_AVI_to_DNS | TKG_Management_Network, TKG_Workload_Networks, AVI_Management_Network | DNS_IPs | UDP:53 |
    | TKG_AVI_to_NTP | TKG_Management_Network, TKG_Workload_Networks, AVI_Management_Network | NTP_IPs | UDP:123 |
    | TKG_AVI_to_vCenter | TKG_Management_Network, TKG_Workload_Networks, AVI_Management_Network | vCenter_IP | TCP:443 |
    | TKG_to_Internet | TKG_Management_Network, TKG_Workload_Networks | Image repositories listed under ~/.tanzu/tkg/bom/ | TCP:443 |
    | TKGMgmt_to_TKGWorkloadVIP | TKG_Management_Network | TKG_Workload_ControlPlaneIPs | TCP:6443, 5556 |
    | TKGWorkload_to_TKGMgmtVIP | TKG_Workload_Networks | TKG_Management_ControlPlaneIPs | TCP:6443 |
    | TKG_to_AVI | TKG_Management_Network, TKG_Workload_Networks | AVI_Management_Network | TCP:443 |

    Optionally, you can also add the following firewall rules:

    • External to Bootstrap VM over Port 22 (Configure required SNAT)
    • External to AVI Controller over Port 22 (Configure required SNAT)
    • External to Tanzu Kubernetes Grid Management and Workload Cluster KubeVIP over port 6443 (Configure required SNAT)
  4. Create the following firewall rules in Networking & Security > Security > Gateway Firewall > Management Groups.

    | Rule Name | Source Group Name | Destination Group Name | Protocol and Port |
    |---|---|---|---|
    | TKG_AVI_to_vCenter | TKG_Management_Network, TKG_Workload_Networks, AVI_Management_Network | vCenter_IP | TCP:443 |
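
With the groups and rules in place, you can optionally spot-check reachability from a VM attached to the TKG management segment. This is a hypothetical sanity check, assuming the nc and curl utilities are available; the placeholders are the values used in your inventory groups. Note that UDP checks with nc only confirm that a packet was sent, not that the service answered.

```shell
# Optional reachability checks from a VM on the TKG management segment.
# Replace <vCenter_IP>, <DNS_IP>, and <NTP_IP> with your environment's values.
nc -zv -w 3 <vCenter_IP> 443      # vCenter over TCP:443
nc -zvu -w 3 <DNS_IP> 53          # DNS over UDP:53
nc -zvu -w 3 <NTP_IP> 123         # NTP over UDP:123
# Outbound HTTPS to a public image registry (for example, the VMware registry):
curl -s -o /dev/null -w '%{http_code}\n' https://projects.registry.vmware.com/v2/
```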

Request Public IP for Tanzu Kubernetes Nodes

Request a public IP so that the Tanzu Kubernetes nodes can communicate with the Internet. Request the public IP in Networking & Security > System > Public IPs > REQUEST NEW IP.

Source NAT (SNAT) is automatically applied to all workloads in the SDDC to enable Internet access, and the firewall rules configured earlier allow the Tanzu Kubernetes Grid components to reach the Internet.

Configure VM Folders and Resource Pools in vCenter

  1. Create the required VM folders to organize the Tanzu Kubernetes Grid VMs and AVI components. We recommend creating a new folder for each TKG cluster.

  2. Create the required resource pools to deploy the Tanzu Kubernetes Grid and NSX Advanced Load Balancer components. We recommend deploying the Tanzu Kubernetes Grid and NSX Advanced Load Balancer components in separate resource pools.

  3. Download and import the base OS templates to vCenter. Download link.

    • Download and import all required Kubernetes version OVAs. Ensure that the latest version, “Photon v3 Kubernetes v1.20.5 vmware.2 OVA”, is imported into vCenter.
    • As of version 1.3.1, the TKG management cluster uses the Kubernetes version in “Photon v3 Kubernetes v1.20.5 vmware.2 OVA”.
    • To create Tanzu Kubernetes Grid workload clusters on other required versions, import the additional OVAs available at the download link.
    • For the purpose of automation, you can use the Marketplace to push these images to vCenter.
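
As an illustration of the automation point above, the OVA import can also be scripted. The following is a hypothetical sketch using the govc CLI (not part of this deployment's required tooling); the credentials, paths, and names are placeholders for your environment.

```shell
# Hypothetical govc-based import of a base OS OVA; adjust names and paths.
export GOVC_URL='https://<vCenter_FQDN>'
export GOVC_USERNAME='cloudadmin@vmc.local'
export GOVC_PASSWORD='<password>'
export GOVC_DATASTORE='WorkloadDatastore'
export GOVC_RESOURCE_POOL='/SDDC-Datacenter/host/Cluster-1/Resources/TKG-Mgmt'

# Import the OVA and convert the resulting VM to a template.
govc import.ova -name photon-3-kube-v1.20.5 ./photon-3-kube-v1.20.5+vmware.2.ova
govc vm.markastemplate photon-3-kube-v1.20.5
```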

Deploy and Configure NSX Advanced Load Balancer

The following is an overview of the steps for deploying and configuring NSX Advanced Load Balancer:

  1. Deploy AVI Controller
  2. AVI Controller Initial Setup
  3. Create Certificate for AVI using AVI Controller IP
  4. Create a VIP network in NSX Load Balancer
  5. Create IPAM Profile and Attach it to Default-Cloud
  6. Deploy AVI Service Engines

Deploy AVI Controller

We will deploy NSX Advanced Load Balancer as a cluster of three nodes. We will deploy the first node, complete the required configuration, and then deploy two more nodes to form the cluster. We will reserve the following IP addresses for deploying NSX Advanced Load Balancer:

| Node | IP Address (Gateway CIDR) |
|---|---|
| 1st Node (Leader) | |
| 2nd Node | |
| 3rd Node | |
| Cluster IP | |

  1. Download the NSX Advanced Load Balancer OVA and deploy it in the resource pool created for NSX Advanced Load Balancer components. For this deployment, we will use the NSX Advanced Load Balancer version 20.1.5.
  2. During deployment select the network segment, m01avimgmt-seg01, created for AVI Management.

  3. (Optional) To access NSX Advanced Load Balancer from the Internet, request a new public IP from Networking & Security > System > Public IPs > REQUEST NEW IP.

  4. Create the required NAT rule in Networking & Security > Network > NAT > ADD NAT RULE.

  5. Create the required firewall rules in Networking & Security > Security > Gateway Firewall > Compute Groups.

AVI Controller Initial Setup

  1. Go to https://AVI_Controller_IP.
  2. Create a new administrator account.

  3. On the next page, enter the following parameters and click Save.

    | Parameters | Settings | Sample Value |
    |---|---|---|
    | System Settings | Passphrase, Confirm Passphrase, DNS Name | VMware123!, VMware123! |
    | Email/SMTP | Local Host (default) | |
    | Multi-Tenant | IP Route Domain: Per tenant IP route domain; Service Engines are managed within the: Tenant (Not shared across tenants) | |
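
Before moving on, you can confirm that the controller answers over HTTPS. A minimal check, assuming curl is available on the bootstrap VM and <AVI_Controller_IP> is your controller address:

```shell
# Expect an HTTP status code (for example 200) once the controller UI is up.
curl -ks -o /dev/null -w '%{http_code}\n' https://<AVI_Controller_IP>/
```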

Create Certificate Using the AVI Controller IP

  1. Log in to AVI Controller > Click the Menu tile > Templates > Security > SSL/TLS Certificates > Create.

  2. Provide the details as shown in the following screenshot and Save. The values provided in the screen capture are sample values. Change the values for your environment. Ensure that you provide all the IPs under SAN details.

  3. After the certificate is created, click the download icon and copy the certificate string.

    The certificate is required when you set up Tanzu Kubernetes Grid.

  4. Go to Administration > Settings > Access Settings, and click the pencil icon at the top right to edit the System Access Settings and replace the certificate.
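
To confirm that the controller now presents the new certificate, you can inspect it from the bootstrap VM. A quick check, assuming OpenSSL 1.1.1 or later (for the -ext option) and the <AVI_Controller_IP> placeholder:

```shell
# Print the subject and SAN entries of the certificate served on port 443.
# The SAN list should contain the controller node IPs and the cluster IP.
echo | openssl s_client -connect <AVI_Controller_IP>:443 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
```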

Create a VIP network in NSX Load Balancer

In NSX Advanced Load Balancer, create a network for VIP interfaces.

  1. Click the Menu tile > Infrastructure > Networks > Create.
  2. Provide the required values as shown in the following screen capture and click Save.

    Note: For this deployment use the network in m01tkg01-seg01.

Create IPAM Profile and Attach it to Default-Cloud

  1. Log in to AVI Controller.
  2. Click on Menu Tile > Template > Profiles > IPAM/DNS Profiles > Create > IPAM Profile.

  3. Enter the values provided in the following table and click Save.

    | Key | Value |
    |---|---|
    | Name | Profile_Name |
    | Type | Avi Vantage IPAM |
    | Cloud for Usable Network | Default-Cloud |
    | Usable Network | VIP network created in the previous step |

  4. Click Menu Tile > Infrastructure > Clouds > Default-Cloud > Create.

  5. In DHCP Settings, click the Edit icon, select the IPAM profile we created, and click Save.

Deploy NSX Advanced Load Balancer Service Engines

To deploy the NSX Advanced Load Balancer service engines, download the OVA from the AVI Controller and deploy it in the SDDC.

  1. In the AVI Controller, go to   Menu tile > Infrastructure > Clouds > Default-Cloud.
  2. Click the download icon and select OVA.
  3. Click the key icon and copy the UUID and Token.
  4. Deploy the OVA in SDDC in the resource pool AVI_components and configure the interfaces:

    • Select Networks
      • 1st Interface: AVI Management
      • 2nd to 10th Interface: Data Networks

        Note: Do not connect the Tanzu Kubernetes workload Segment (w01tkg01-seg01) if you intend to use separate service engines for the workload cluster.
    • Customize template
      • IP Address of the Avi Controller: Cluster IP of AVI
      • Avi Service Engine Type: NETWORK_ADMIN
      • Authentication token for Avi Controller: token from the previous step
      • Controller Cluster UUID for Avi Controller: UUID from the previous step
      • Management Interface IP Address: Management IP for SE01
      • Management Interface Subnet Mask
      • Default Gateway
      • DNS details
      • Sysadmin login authentication key
  5. Power on the VM to verify the deployment. The service engine becomes visible in the AVI Controller UI.

  6. In vCenter, navigate to Summary of the SE > VM Hardware > expand Network adaptor 2 and copy the MAC address.

  7. In NSX Advanced Load Balancer, navigate to Infrastructure > Service Engine and edit the service engine.

  8. Find the interface that matches the MAC address obtained from vCenter, enable IPv4 DHCP for it, and click Save.


  9. Repeat the steps to deploy the second service engine.
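
If govc is configured against the SDDC vCenter, the MAC address lookup in step 6 can also be done from the command line. A hypothetical sketch; the VM name Avi-se-01 is a placeholder, and govc is assumed to already have GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD set.

```shell
# Show the second network adapter (ethernet-1) of the service engine VM,
# including its MAC address, instead of browsing the vCenter UI.
govc device.info -vm Avi-se-01 ethernet-1
```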

Configure Bootstrap Environment

The bootstrap machine is the laptop, host, or server on which you download and run the Tanzu CLI. This is where the initial bootstrapping of a management cluster occurs, before it is pushed to the platform where it will run.

For the purpose of this deployment, we will use CentOS 8. (From an automation point of view, we can have all the required dependencies installed on Photon OS, package it as an OVA, and push the OVA to vCenter from the Marketplace.)

  1. Deploy a VM (under the resource pools created for Tanzu Kubernetes Grid management) and install CentOS 8.

  2. To connect to the bootstrap VM over SSH from the Internet, create the following Inventory Group and Firewall rules:

    Create Inventory Group:

    | Group Name | Members |
    |---|---|
    | Bootstrap_IP | IP of the bootstrap machine VM |

    Create a firewall rule to allow SSH access to the bootstrap machine VM from the Internet:

    | Rule Name | Source Group Name | Destination Group Name | Protocol and Port |
    |---|---|---|---|
    | Ext_to_Bootstrap_IP | Any | Bootstrap_IP | TCP:22 |

    Optional: Create the following DNAT rule in Networking & Security > Network > NAT > ADD NAT RULE. The DNAT rule allows you to access the bootstrap machine VM from the Internet. The bootstrap machine VM in this deployment is connected to the network segment m01tkg01-seg01.

  3. Ensure that NTP is configured on the bootstrap machine VM.

  4. Install Tanzu CLI, Docker, and kubectl on the bootstrap machine VM. The following steps are for CentOS.

    1. Install Tanzu CLI:

      • Download the Tanzu CLI bundle, tanzu-cli-bundle-v1.3.1-linux-amd64.tar, from here.
      • Import the Tanzu CLI bundle to the bootstrap VM (you may use SCP) and execute the following commands to install it.

        # Install TKG CLI
        tar -xvf tanzu-cli-bundle-v1.3.1-linux-amd64.tar
        cd ./cli/
        sudo install core/v1.3.1/tanzu-core-linux_amd64 /usr/local/bin/tanzu
        # Install TKG CLI Plugins
        cd ..
        tanzu plugin install --local cli all
        rm -rf ~/.tanzu/tkg/bom
        export TKG_BOM_CUSTOM_IMAGE_TAG="v1.3.1-patch1"
        tanzu management-cluster create   # This command produces an error but results in the BOM files being downloaded to ~/.tanzu/tkg/bom.
        # Install Carvel Tools
        cd ./cli
        # Install ytt
        gunzip ytt-linux-amd64-v0.31.0+vmware.1.gz
        chmod ugo+x ytt-linux-amd64-v0.31.0+vmware.1 && mv ./ytt-linux-amd64-v0.31.0+vmware.1 /usr/local/bin/ytt
        # Install kapp
        gunzip kapp-linux-amd64-v0.36.0+vmware.1.gz
        chmod ugo+x kapp-linux-amd64-v0.36.0+vmware.1 && mv ./kapp-linux-amd64-v0.36.0+vmware.1 /usr/local/bin/kapp
        # Install kbld
        gunzip kbld-linux-amd64-v0.28.0+vmware.1.gz
        chmod ugo+x kbld-linux-amd64-v0.28.0+vmware.1 && mv ./kbld-linux-amd64-v0.28.0+vmware.1 /usr/local/bin/kbld
        # Install imgpkg
        gunzip imgpkg-linux-amd64-v0.5.0+vmware.1.gz
        chmod ugo+x imgpkg-linux-amd64-v0.5.0+vmware.1 && mv ./imgpkg-linux-amd64-v0.5.0+vmware.1 /usr/local/bin/imgpkg
        # Install yq
        gunzip yq_linux_amd64.tar.gz
        tar -xvf yq_linux_amd64.tar
        chmod ugo+x yq_linux_amd64 && mv yq_linux_amd64 /usr/local/bin/yq
    2. Install Docker:

      sudo yum install -y yum-utils
      sudo yum-config-manager \
          --add-repo \

      sudo yum install -y docker-ce docker-ce-cli
      systemctl enable docker
      systemctl start docker
    3. Install kubectl:

      • Download the “kubectl cluster cli v1.20.5 for Linux” (for TKG 1.3.1) from here.

      • Import it to the Bootstrap VM and execute the following commands.

        gunzip kubectl-linux-v1.20.5-vmware.1.gz
        mv kubectl-linux-v1.20.5-vmware.1 /usr/local/bin/kubectl
        chmod +x /usr/local/bin/kubectl
  5. Create an SSH key pair. This is required for Tanzu CLI to connect to vSphere from the bootstrap machine. The public key part of the generated key will be passed during the Tanzu Kubernetes Grid management cluster deployment.

    1. Execute the following command.

      ssh-keygen -t rsa -b 4096 -C ""
    2. At the prompt, enter the file name to save the key (/root/.ssh/id_rsa).
    3. At the prompt, press Enter to accept the default.
    4. Enter and repeat a password for the key pair.
    5. Add the private key to the SSH agent running on your machine, and enter the password you created in the previous step.

      ssh-add ~/.ssh/id_rsa

      If the above command fails, execute eval $(ssh-agent) and then rerun it.

    6. Open .ssh/ and copy the public key contents. You will use it to create the config file for deploying the Tanzu Kubernetes Grid management cluster.
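
Before launching the installer, it is worth confirming that all of the bootstrap tooling installed above is on the PATH and responding. A quick sanity check:

```shell
# Verify the CLIs installed in the previous steps respond.
tanzu version
kubectl version --client
docker --version
ytt version
kapp version
kbld version
imgpkg version
yq --version
# Docker must be running for the kind-based bootstrap cluster:
systemctl is-active docker
```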

Deploy Management Cluster

You will deploy the management cluster from the Tanzu Kubernetes Grid Installer UI.

  1. To access the installer UI from external machines, execute the following command on the bootstrap VM:

    tanzu management-cluster create --ui --bind <IP_Of_BootstrapVM>:8080 --browser none

    With firewall rules in place, you should be able to access the UI from the Internet.

  2. On the VMware vSphere tile, click Deploy.

  3. For IaaS Provider, enter the following information and click Next. For SSH Public Key, copy and paste the contents of the public key file under .ssh/ on the bootstrap machine VM.

  4. For Management Cluster Settings, enter the following and click Next.

    Type: Prod
    Instance Type: Large

  5. For VMware NSX Advanced Load Balancer:

    1. Obtain the AVI Controller certificate using:
      echo -n | openssl s_client -connect <AVI_Controller_IP>:443
    2. Enter the following information and click Next.

      For Cluster Labels, enter
      • Key: type
      • Value: tkg-mgmt-cluster

    Note: Ensure that the Cluster Label is set. This is required because the Tanzu Kubernetes Grid workload clusters will not make use of this AKO configuration.

  6. For Metadata, specify labels for the management cluster and click Next. Use the same label provided in the VMware NSX Advanced Load Balancer settings.

  7. For Resource Settings, enter the following and click Next:

  8. For Kubernetes Network Settings, enter the following and click Next.

  9. Disable Identity Management and click Next.

  10. For OS Image, select the OS image which you imported earlier and click Next.

  11. For Register with Tanzu Mission Control, enter the Registration URL and click Next.

    To get the Registration URL,

    1. Log in to Tanzu Mission Control from the CSP portal.
    2. Go to Administration > Management Clusters > Register Management Cluster > Tanzu Kubernetes Grid.

    3. Under the Register Management Cluster pane, enter Name, select the cluster group, click Next, and copy the registration link.

  12. Accept the EULA and click Next.

  13. Review the configuration and copy the CLI command. You will use it to initiate the deployment from the bootstrap machine VM.

    Alternatively, use the following sample configuration file to deploy the management cluster.

    Sample mgmtconfig.yaml  

    AVI_CLOUD_NAME: tkgvmc-cloud01
    AVI_DATA_NETWORK: tkgvmc-tkgmgmt-data-network01
    AVI_ENABLE: "true"
    AVI_LABELS: |
        'type': 'management'
    AVI_PASSWORD: <encoded:Vk13YXJlMTIzIQ==>
    AVI_SERVICE_ENGINE_GROUP: tkgvmc-tkgmgmt-group01
    AVI_USERNAME: admin
    CLUSTER_NAME: vmc-tkg-mgmt-01
    ENABLE_MHC: "true"
    VSPHERE_DATASTORE: /SDDC-Datacenter/datastore/WorkloadDatastore
    VSPHERE_PASSWORD: <encoded:dioyU1I3ck5DSmRGZXAt>
    VSPHERE_RESOURCE_POOL: /SDDC-Datacenter/host/Cluster-1/Resources/TKGVMC-TKG-Mgmt
    VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDGxNWes6xJO6o/OzW7uE7eH4ndKFy717dbHuv5Z7WKmqz5igw/SnY3VK+nPtGK4NonnFlVfNSRpjTy/aWhl2EfM0pPwOEdglqa0HivxbsgjSHG8dxDzYmh8/ekTJwhgmqJgLrkxvPpyYxKCY+/IoG5Y3I73yfVJxpIWrtTlZXJsMYOcQZQQhwkJp3UyfwRwi0ZEN7JvmGFWeKetQLQfJrfkLKcH/nsO+HXteQFsOvIdNjwN3QG475DpO6epTQaXMPiVGfBabo/lPgVj7NLwbDPTuLVWryrv+FJQgXJb/D1xvEPhlHICqOyvJilKfmuuYnQST8VCU7Kpem8qD+YrK0iiCS31Ea9Y9b+wD21q4acjCN2vAIsWfNtLmmtrEXSR9pyypv0SRLOAnDkatpF6PxMUZZgm+iMsjbOQ0r/DD5c40nYcse65ioi5HQTGUhwFv8HcA/QgXiQQnTdN35NHNTQlyKj/zXugJP7Pe4jASQA7MGEuH4SxvHm7tQ6lYCGq7/yI+d2Fl67101cemKw2U5UcWuhBgWIdZ8434pSSQn776c3y73SsPGhN0RkoGwj82NGIPFkDLXet98JO4DP4M78S1qscQccBDt0qnmMQ9ViD4Pn3NLck7uuXwMb9jIp3BJj1WtajaC0ZXPPVDa9Kxt7fF/CjDnWGMP32qnCYbx0iQ== cloudadmin@vmc.local
    VSPHERE_USERNAME: cloudadmin@vmc.local

    For Automation, use the following:

    export DEPLOY_TKG_ON_VSPHERE7=true
    tanzu management-cluster create -y --file /path_to/config.yaml -v 6
    tanzu management-cluster kubeconfig get m01tkg01 --admin --export-file /path_to/kubeconfig.yaml
    export TMC_API_TOKEN="zeQHS8pVk5Y1ub9htejsYt3AyMY8022Hg3VzJGv3A2qfv7dZbxw1fM5tNgXS2ssd"
    tmc login --no-configure --name demo
    tmc managementcluster register vmc-m01tkg01 -c default -p TKG -k /path_to/kubeconfig.yaml

  14. On the bootstrap machine VM, execute the following command to start the cluster creation:

  15. After the Tanzu Kubernetes Grid Management Cluster is deployed, you can verify the cluster on the bootstrap machine VM, vCenter, and Tanzu Mission Control.

    On the Bootstrap Machine VM:

    Execute the following commands to check the status of the management cluster:

    tanzu management-cluster get
    kubectl config use-context <mgmt_cluster_name>-admin@<mgmt_cluster_name>
    kubectl get nodes
    kubectl get pods -A

    On vCenter:

    On Tanzu Mission Control

    You can now create Tanzu Kubernetes Grid workload clusters and make use of the backup services from Tanzu Mission Control. To enable data protection, refer to Attach an existing TKG Guest Cluster to TMC, Enable and test Data Protection.

Set up a Shared Services Workload Cluster

Follow these steps to set up a shared services workload cluster:

  1. Create a Shared Services Workload Cluster
  2. Deploy Contour on the Shared Services Cluster
  3. Deploy Harbor on the Shared Services Cluster

Create a Shared Services Workload Cluster

  1. Create the shared services cluster using the TMC CLI:

    tmc cluster create -t tkg-vsphere -n tkg-shared -m vmc-tkg-mgmt01 -p default --cluster-group default --control-plane-endpoint --ssh-key "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDH3HkX/32YEWigW6wcez65KFxjkkYP1Qn8NWLK7rs+/CXmVfmV8RnVcfpFu9VERe1j5UQEQXW9p15KMeCZ2s+omoo2dCsakKVIU7OlQcEko2gSYKmSNcnxwcFr7BWho9E278iIaHsDnV+N1CpjqUeWPzDxuLuuAc0EPHzgnz2lWQKknR68N9SWmWP108jnkHQP+ATybKeop57+mP9k5wNo1OOSbSooiPMdBGPlfZIQ+WaGSdNLPUUuzfic2fONJdE5OWRezPuCWRGR8rFsYZQ/O6zf7Y3zdv9ZU6NYnpGRkKVdDUDhusvaD58HlbW4nJ4PmP7hpsmEKH3QqH8DOpIA8ZxLR7YCqdPHRJEKLUuBtaUmb3NC3cDgwMiDWVF0s3OspDUYso+OpX8lk1etiLnSeCcpwC68GP17G/dmu9dEKAynfma7blfSETVCboY/FPCAllAqtfR/zohoE8iFHyRwW26O4wtMX0jhhXvl/1HgJlykycvHdoBKv2UEP2NGh4uaLSPaSLuh3IZZaceQWm3yKqPFhZwYqFM7Kp2OJBC2ilweNd4oG65ocfWPznngqBkVu65j+Z0pOsXF+xLtxVxZsqtQI+pE+Wi21VS+hR8Qzy0NW+glZ8m63LdCDSkESN8iYdUQBgbDmtdYw0o6HusGMNbCjie6fqIU4suZYlECjw==" --version v1.20.5+vmware.2-tkg.1 --datacenter /SDDC-Datacenter --datastore /SDDC-Datacenter/datastore/WorkloadDatastore --folder /SDDC-Datacenter/vm/m01tkg01 --resource-pool /SDDC-Datacenter/host/Cluster-1/Resources/m01tkg01 --workspace-network /SDDC-Datacenter/network/TKG-SharedService-Segment --control-plane-cpu 8 --control-plane-disk-gib 80 --control-plane-memory-mib 16384 --worker-node-count 1  --worker-cpu 4 --worker-disk-gib 40 --worker-memory-mib 32768

  2. Obtain admin credentials for the Shared Cluster:

    tanzu cluster kubeconfig get <Shared_Cluster_Name> --admin
    # Run command:
    tanzu cluster kubeconfig get tkg-shared --admin
    # Sample output:
    #  Credentials of cluster 'tkg-shared' have been saved
    #  You can now access the cluster by running 'kubectl config use-context tkg-shared-admin@tkg-shared'

  3. Connect to the management cluster using TKG CLI and add the following tags:

    kubectl config use-context tkg-mgmt01-admin@tkg-mgmt01    # Connect to TKG Management Cluster
    kubectl label cluster <Shared_Cluster_Name> cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
    kubectl label cluster <Shared_Cluster_Name> type=workload  # Based on the match labels provided in the AKO config file
    # Run command:
    tanzu cluster list --include-management-cluster
    # Sample output:
    #  tkg-shared  default     running  1/1           1/1      v1.20.5+vmware.2  tanzu-services  dev   
    #  tkg-mgmt01  tkg-system  running  1/1           1/1      v1.20.5+vmware.2  management      dev

  4. Download VMware Tanzu Kubernetes Grid Extensions Manifest 1.3.1 from here.

  5. Unpack the manifest using the following command.

    tar -xzf tkg-extensions-manifests-v1.3.1-vmware.1.tar.gz
  6. Connect to the shared services cluster using the credentials obtained in step 2 and install cert-manager.

    kubectl config use-context tkg-shared-admin@tkg-shared    ##Connect to the Shared Cluster
    cd ./tkg-extensions-v1.3.1+vmware.1/
    kubectl apply -f cert-manager/
    # Ensure required pods are running
    # Sample output:
    #   [root@bootstrap tkg-extensions-v1.3.1+vmware.1]# kubectl get pods -A | grep cert-manager
    #   cert-manager        cert-manager-7c58cb795-b8n4b                                   1/1     Running     0          42s
    #   cert-manager        cert-manager-cainjector-765684c9d6-mzdqs                       1/1     Running     0          42s
    #   cert-manager        cert-manager-webhook-ccc946479-dxlcw                           1/1     Running     0          42s
    #  [root@bootstrap tkg-extensions-v1.3.1+vmware.1]# kubectl get pods -A | grep kapp
    #  tkg-system          kapp-controller-6d7855d4dd-zn4rs                               1/1     Running     0          106m

Deploy Contour on the Shared Services Cluster

Execute the following commands to deploy Contour on the shared services cluster.

cd ./tkg-extensions-v1.3.1+vmware.1/extensions/ingress/contour
kubectl apply -f namespace-role.yaml
cp ./vsphere/contour-data-values-lb.yaml.example ./vsphere/contour-data-values.yaml
kubectl create secret generic contour-data-values --from-file=values.yaml=vsphere/contour-data-values.yaml -n tanzu-system-ingress
kubectl apply -f contour-extension.yaml

# Validate
kubectl get app contour -n tanzu-system-ingress

# Note: Once the Contour app is deployed successfully, the status should change from Reconciling to Reconcile Succeeded

# Sample output:
#  kubectl get app contour -n tanzu-system-ingress
#  contour   Reconciling   2m40s          2m40s

# Wait till we see "Reconciling succeeded" (can take 3-5mins)
#  contour   Reconcile succeeded   112s           5m46s

# Capture the envoy external IP:
kubectl get svc -A | grep envoy

# Sample output:
#   kubectl get svc -A | grep envoy
#   tanzu-system-ingress    envoy    LoadBalancer   80:31343/TCP,443:31065/TCP   12h

To access the Envoy administration interface:

ENVOY_POD=$(kubectl -n tanzu-system-ingress get pod -l app=envoy -o name | head -1)
kubectl -n tanzu-system-ingress port-forward --address 0.0.0.0 $ENVOY_POD 80:9001   # adjust the listen address for your environment

When you have started running workloads in your Tanzu Kubernetes cluster, you can visualize the traffic information in Contour.

CONTOUR_POD=$(kubectl -n tanzu-system-ingress get pod -l app=contour -o name | head -1)  
kubectl -n tanzu-system-ingress port-forward $CONTOUR_POD 6060  
curl localhost:6060/debug/dag | dot -T png > contour-dag.png
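
To exercise the ingress path end to end, you can publish a throwaway test application through Contour. This is a hypothetical example; the deployment name, FQDN, and Envoy external IP are placeholders, not part of this deployment.

```shell
# Deploy a test echo server and expose it through a Contour HTTPProxy.
kubectl create deployment echo --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment echo --port=8080
kubectl apply -f - <<'EOF'
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: echo-proxy
  namespace: default
spec:
  virtualhost:
    fqdn: echo.example.com
  routes:
    - services:
        - name: echo
          port: 8080
EOF
# Send a request through Envoy using the external IP captured above.
curl -H 'Host: echo.example.com' http://<ENVOY_EXTERNAL_IP>/
```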

Deploy Harbor on the Shared Services Cluster

  1. Execute the following commands to deploy Harbor on the shared services cluster.

    cd ./tkg-extensions-v1.3.1+vmware.1/extensions/registry/harbor
    kubectl apply -f namespace-role.yaml
    cp harbor-data-values.yaml.example harbor-data-values.yaml
    ./ harbor-data-values.yaml           ## Generates Random Passwords for "harborAdminPassword", "secretKey", "database.password", "core.secret", "core.xsrfKey", "jobservice.secret", and "registry.secret" ##
    # Update the "hostname" value in "harbor-data-values.yaml" file with the FQDN for accessing Harbor
    # (Optional) If using custom or CA certs: before executing the following steps, update "harbor-data-values.yaml" with the certs; refer to step 2. (Updating certs is optional; if certs are not provided, cert-manager generates the required certs.)
    kubectl create secret generic harbor-data-values --from-file=values.yaml=harbor-data-values.yaml -n tanzu-system-registry
    kubectl apply -f harbor-extension.yaml
    # Validate
    kubectl get app harbor -n tanzu-system-registry
    # Note: Once the Harbor app is deployed successfully, the status should change from Reconciling to Reconcile Succeeded
    # Sample output:
    #  kubectl get app harbor -n tanzu-system-registry
    #  harbor   Reconciling           1m50s          1m50s
    # Wait until we see "Reconciling succeeded" (can take 3-5mins)
    #  harbor   Reconcile succeeded   5m45s          81m

  2. (Optional) Update the harbor-data-values.yaml file with the hostname and certificates. Following is an example of the YAML file.

    Sample harbor-data-values.yaml

    #@overlay/match-child-defaults missing_ok=True
    # Docker images setting
    image:
      tag: v2.1.3_vmware.1
      pullPolicy: IfNotPresent
    # The namespace to install Harbor
    namespace: tanzu-system-registry
    # The FQDN for accessing Harbor admin UI and Registry service.
    hostname: harbor.yourdomain.com
    # The network port of the Envoy service in Contour or other Ingress Controller.
    port:
      https: 443
    # [Optional] The certificate for the ingress if you want to use your own TLS certificate.
    # The certificate is issued by cert-manager when this is left empty.
    tlsCertificate:
      # [Required] the certificate
      tls.crt: |
            -----BEGIN CERTIFICATE-----
            -----END CERTIFICATE-----
            -----BEGIN CERTIFICATE-----
            -----END CERTIFICATE-----
            -----BEGIN CERTIFICATE-----
            -----END CERTIFICATE-----
      # [Required] the private key
      tls.key: |
            -----BEGIN PRIVATE KEY-----
            -----END PRIVATE KEY-----
      # [Optional] the certificate of CA, this enables the download
      # link on portal to download the certificate of CA
      ca.crt:
    # Use contour http proxy instead of the ingress when it's true
    enableContourHttpProxy: true
    # [Required] The initial password of Harbor admin.
    harborAdminPassword: VMware123!
    # [Required] The secret key used for encryption. Must be a string of 16 chars.
    secretKey: 44z5mmTRiDAd3r7o
    database:
      # [Required] The initial password of the postgres database.
      password: L92Lwf92x4nkh2XB
    core:
      replicas: 1
      # [Required] Secret is used when core server communicates with other components.
      secret: VmMoXdxVJ00PLmoD
      # [Required] The XSRF key. Must be a string of 32 chars.
      xsrfKey: DnvQN508M97mGmtK9248sCQ0pFD82BhV
    jobservice:
      replicas: 1
      # [Required] Secret is used when job service communicates with other components.
      secret: HtRDVOswYgsOoSV7
    registry:
      replicas: 1
      # [Required] Secret is used to secure the upload state from client
      # and registry storage backend.
      # See:
      secret: r9MYJfjMVRrzpkiT
    notary:
      # Whether to install Notary
      enabled: true
    clair:
      # Whether to install Clair scanner
      enabled: true
      replicas: 1
      # The interval of clair updaters, the unit is hour, set to 0 to
      # disable the updaters
      updatersInterval: 12
    trivy:
      # enabled the flag to enable Trivy scanner
      enabled: true
      replicas: 1
      # gitHubToken the GitHub access token to download Trivy DB
      gitHubToken: ""
      # skipUpdate the flag to disable Trivy DB downloads from GitHub
      # You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
      # If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
      # `/home/scanner/.cache/trivy/db/trivy.db` path.
      skipUpdate: false
    # The persistence is always enabled and a default StorageClass
    # is needed in the k8s cluster to provision volumes dynamically.
    # Specify another StorageClass in the "storageClass" or set "existingClaim"
    # if you have existing persistent volumes to use.
    # For storing images and charts, you can also use "azure", "gcs", "s3",
    # "swift" or "oss". Set it in the "imageChartStorage" section.
    persistence:
      persistentVolumeClaim:
        registry:
          # Use the existing PVC which must be created manually before bound,
          # and specify the "subPath" if the PVC is shared with other components
          existingClaim: ""
          # Specify the "storageClass" used to provision the volume, or the default
          # StorageClass will be used (the default).
          # Set it to "-" to disable dynamic provisioning
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 10Gi
        jobservice:
          existingClaim: ""
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 1Gi
        database:
          existingClaim: ""
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 1Gi
        redis:
          existingClaim: ""
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 1Gi
        trivy:
          existingClaim: ""
          storageClass: ""
          subPath: ""
          accessMode: ReadWriteOnce
          size: 5Gi
      # Define which storage backend is used for registry and chartmuseum to store
      # images and charts. Refer to
      # for the detail.
      imageChartStorage:
        # Specify whether to disable `redirect` for images and chart storage, for
        # backends which do not support it (such as using minio for `s3` storage type),
        # please disable it. To disable redirects, simply set `disableredirect` to `true`.
        # Refer to
        # for the detail.
        disableredirect: false
        # Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
        # The secret must contain a key named "ca.crt" which will be injected into the trust store
        # of registry's and chartmuseum's containers.
        # caBundleSecretName:
        # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
        # "oss" and fill the information needed in the corresponding section. The type
        # must be "filesystem" if you want to use persistent volumes for registry
        # and chartmuseum
        type: filesystem
        filesystem:
          rootdirectory: /storage
          #maxthreads: 100
        azure:
          accountname: accountname # required
          accountkey: base64encodedaccountkey # required
          container: containername # required
          realm: # optional
        gcs:
          bucket: bucketname # required
          # The base64 encoded json file which contains the key
          encodedkey: base64-encoded-json-key-file # optional
          rootdirectory: null # optional
          chunksize: 5242880 # optional
        s3:
          region: us-west-1 # required
          bucket: bucketname # required
          accesskey: null # eg, awsaccesskey
          secretkey: null # eg, awssecretkey
          regionendpoint: null # optional, eg, http://myobjects.local
          encrypt: false # optional
          keyid: null # eg, mykeyid
          secure: true # optional
          v4auth: true # optional
          chunksize: null # optional
          rootdirectory: null # optional
          storageclass: STANDARD # optional
        swift:
          username: username
          password: password
          container: containername
          region: null # eg, fr
          tenant: null # eg, tenantname
          tenantid: null # eg, tenantid
          domain: null # eg, domainname
          domainid: null # eg, domainid
          trustid: null # eg, trustid
          insecureskipverify: null # bool eg, false
          chunksize: null # eg, 5M
          prefix: null # eg
          secretkey: null # eg, secretkey
          accesskey: null # eg, accesskey
          authversion: null # eg, 3
          endpointtype: null # eg, public
          tempurlcontainerkey: null # eg, false
          tempurlmethods: null # eg
        oss:
          accesskeyid: accesskeyid
          accesskeysecret: accesskeysecret
          region: regionname
          bucket: bucketname
          endpoint: null # eg, endpoint
          internal: null # eg, false
          encrypt: null # eg, false
          secure: null # eg, true
          chunksize: null # eg, 10M
          rootdirectory: null # eg, rootdirectory
    # The http/https network proxy for clair, core, jobservice, trivy
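
    The template above calls for several random secrets of exact lengths (`secretKey` must be 16 characters, `core.xsrfKey` must be 32). One way to generate suitable values is with `openssl`; the variable names below are illustrative only and are not consumed by any Tanzu tooling:

    ```shell
    # openssl rand -hex N prints 2*N hexadecimal characters.
    SECRET_KEY=$(openssl rand -hex 8)       # 16 chars -> secretKey
    XSRF_KEY=$(openssl rand -hex 16)        # 32 chars -> core.xsrfKey
    CORE_SECRET=$(openssl rand -hex 8)      # core.secret
    JOB_SECRET=$(openssl rand -hex 8)       # jobservice.secret
    REG_SECRET=$(openssl rand -hex 8)       # registry.secret
    DB_PASSWORD=$(openssl rand -base64 12)  # database.password

    printf 'secretKey: %s\nxsrfKey: %s\n' "$SECRET_KEY" "$XSRF_KEY"
    ```

    Paste the generated values into the corresponding fields of harbor-data-values.yaml before deploying the package.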
