AWS Cluster Configuration Files

This topic explains how to use a flat configuration file or Kubernetes-style object spec to configure a Tanzu Kubernetes Grid (TKG) workload cluster before deploying it to Amazon Web Services (AWS) with a standalone management cluster.

For general information about how to configure workload clusters by using configuration files and object specs, see Configuration Files and Object Specs.

To use AWS-specific workload cluster features that require some configuration outside of the cluster’s configuration file or object spec, see Clusters on AWS.

Important

Tanzu Kubernetes Grid v2.4.x is the last version of TKG that supports the creation of TKG workload clusters on AWS. The ability to create TKG workload clusters on AWS will be removed in the Tanzu Kubernetes Grid v2.5 release.

VMware now recommends that you use Tanzu Mission Control to create native AWS EKS clusters instead of creating new TKG workload clusters on AWS. For information about how to create native AWS EKS clusters with Tanzu Mission Control, see Managing the Lifecycle of AWS EKS Clusters in the Tanzu Mission Control documentation.

For more information, see Deprecation of TKG Management and Workload Clusters on AWS and Azure in the VMware Tanzu Kubernetes Grid v2.4 Release Notes.

Overview

To configure a workload cluster before deploying it to AWS, you create a cluster configuration file or a Kubernetes-style object spec file. When you pass either of these files to the -f option of tanzu cluster create, the Tanzu CLI uses the configuration information defined in the file to connect to your AWS account and create the resources that the cluster will use. For example, you can specify the sizes for the control plane and worker node VMs, distribute nodes across availability zones, and share VPCs between clusters.
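
For example, after you save your settings in a configuration file, you pass that file to the -f option of tanzu cluster create; the cluster and file names here are placeholders:

    tanzu cluster create my-cluster -f my-cluster-config.yaml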

For the full list of options that you must specify when deploying workload clusters to AWS, see the Configuration File Variable Reference.

Note

To improve security in a multi-tenant environment, deploy workload clusters to an AWS account that is different from the one used to deploy the management cluster. To deploy workload clusters across multiple AWS accounts, see Clusters on Different AWS Accounts.
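
For example, a cluster configuration file can point to the other account through the identity variables that appear in the template below. This is a minimal sketch: the AWSClusterRoleIdentity kind and the name workload-account-role are assumptions, and creating that identity object is described in Clusters on Different AWS Accounts.

    # Assumed: an AWSClusterRoleIdentity named workload-account-role already exists
    # in the management cluster and grants access to the other AWS account.
    AWS_IDENTITY_REF_KIND: AWSClusterRoleIdentity
    AWS_IDENTITY_REF_NAME: workload-account-role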

Create a Configuration File

To create a cluster configuration file, you can use the template in Workload Cluster Template below. After creating the configuration file, proceed to Create Workload Clusters.

Workload Cluster Template

The template below includes all of the options that are relevant to deploying workload clusters on AWS. You can copy this template and update it to deploy workload clusters to AWS.

Mandatory options are uncommented. Optional settings are commented out. Default values are included where applicable.

With the exception of the options described in the sections below the template, you configure the AWS-specific variables in the same way for workload clusters as for management clusters. For information about how to configure the variables, see Deploy Management Clusters from a Configuration File and Management Cluster Configuration for AWS.

#! ---------------------------------------------------------------------
#! Cluster creation basic configuration
#! ---------------------------------------------------------------------

#! CLUSTER_NAME:
CLUSTER_PLAN: dev
NAMESPACE: default
# CLUSTER_API_SERVER_PORT:
CNI: antrea

#! ---------------------------------------------------------------------
#! Node configuration
#! AWS-only MACHINE_TYPE settings override cloud-agnostic SIZE settings.
#! ---------------------------------------------------------------------

# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:
CONTROL_PLANE_MACHINE_TYPE: t3.large
NODE_MACHINE_TYPE: m5.large
# NODE_MACHINE_TYPE_1: ""
# NODE_MACHINE_TYPE_2: ""
# CONTROL_PLANE_MACHINE_COUNT: 1
# WORKER_MACHINE_COUNT: 1
# WORKER_MACHINE_COUNT_0:
# WORKER_MACHINE_COUNT_1:
# WORKER_MACHINE_COUNT_2:

#! ---------------------------------------------------------------------
#! AWS Configuration
#! ---------------------------------------------------------------------

AWS_REGION:
# AWS_LOAD_BALANCER_SCHEME_INTERNAL: false
AWS_NODE_AZ: ""
# AWS_NODE_AZ_1: ""
# AWS_NODE_AZ_2: ""
# AWS_VPC_ID: ""
# AWS_PRIVATE_SUBNET_ID: ""
# AWS_PUBLIC_SUBNET_ID: ""
# AWS_PUBLIC_SUBNET_ID_1: ""
# AWS_PRIVATE_SUBNET_ID_1: ""
# AWS_PUBLIC_SUBNET_ID_2: ""
# AWS_PRIVATE_SUBNET_ID_2: ""
# AWS_VPC_CIDR: 10.0.0.0/16
# AWS_PRIVATE_NODE_CIDR: 10.0.0.0/24
# AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24
# AWS_PRIVATE_NODE_CIDR_1: 10.0.2.0/24
# AWS_PUBLIC_NODE_CIDR_1: 10.0.3.0/24
# AWS_PRIVATE_NODE_CIDR_2: 10.0.4.0/24
# AWS_PUBLIC_NODE_CIDR_2: 10.0.5.0/24
# AWS_SECURITY_GROUP_APISERVER_LB: ""
# AWS_SECURITY_GROUP_BASTION: ""
# AWS_SECURITY_GROUP_CONTROLPLANE: ""
# AWS_SECURITY_GROUP_LB: ""
# AWS_SECURITY_GROUP_NODE: ""
# AWS_IDENTITY_REF_KIND: ""
# AWS_IDENTITY_REF_NAME: ""
# AWS_CONTROL_PLANE_OS_DISK_SIZE_GIB: 80
# AWS_NODE_OS_DISK_SIZE_GIB: 80
AWS_SSH_KEY_NAME:
BASTION_HOST_ENABLED: true

#! ---------------------------------------------------------------------
#! Common configuration
#! ---------------------------------------------------------------------

# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""

# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""
# TKG_PROXY_CA_CERT: ""

ENABLE_AUDIT_LOGGING: false
ENABLE_DEFAULT_STORAGE_CLASS: true

CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13

# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""

#! ---------------------------------------------------------------------
#! Autoscaler configuration
#! ---------------------------------------------------------------------

ENABLE_AUTOSCALER: false
# AUTOSCALER_MAX_NODES_TOTAL: "0"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: "10m"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: "10s"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: "3m"
# AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: "10m"
# AUTOSCALER_MAX_NODE_PROVISION_TIME: "15m"
# AUTOSCALER_MIN_SIZE_0:
# AUTOSCALER_MAX_SIZE_0:
# AUTOSCALER_MIN_SIZE_1:
# AUTOSCALER_MAX_SIZE_1:
# AUTOSCALER_MIN_SIZE_2:
# AUTOSCALER_MAX_SIZE_2:

#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------

# ANTREA_NO_SNAT: false
# ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_EGRESS_EXCEPT_CIDRS: ""
# ANTREA_NODEPORTLOCAL_ENABLED: true
# ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000
# ANTREA_PROXY: true
# ANTREA_PROXY_ALL: false
# ANTREA_PROXY_NODEPORT_ADDRS: ""
# ANTREA_PROXY_SKIP_SERVICES: ""
# ANTREA_PROXY_LOAD_BALANCER_IPS: false
# ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.flow-aggregator.svc:4739:tls"
# ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s"
# ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "30s"
# ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s"
# ANTREA_KUBE_APISERVER_OVERRIDE:
# ANTREA_TRANSPORT_INTERFACE:
# ANTREA_TRANSPORT_INTERFACE_CIDRS: ""
# ANTREA_MULTICAST_INTERFACES: ""
# ANTREA_MULTICAST_IGMPQUERY_INTERVAL: "125s"
# ANTREA_TUNNEL_TYPE: geneve
# ANTREA_ENABLE_USAGE_REPORTING: false
# ANTREA_ENABLE_BRIDGING_MODE: false
# ANTREA_DISABLE_TXCHECKSUM_OFFLOAD: false
# ANTREA_DNS_SERVER_OVERRIDE: ""
# ANTREA_MULTICLUSTER_ENABLE: false
# ANTREA_MULTICLUSTER_NAMESPACE: ""

Specifying the Region

When you deploy a cluster to AWS, set AWS_REGION to the region in which you want to deploy the cluster. You can set AWS_REGION in the cluster configuration file, in a local environment variable, or in a credential profile, as described in Configure AWS Account Credentials.
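
For example, either of the following sets the region; the region name us-west-2 is only an illustration:

    # In the cluster configuration file:
    AWS_REGION: us-west-2

    # Or as a local environment variable before you run the Tanzu CLI:
    export AWS_REGION=us-west-2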

Distributing Workers Across AZs

When you create a multi-AZ prod cluster on AWS, Tanzu Kubernetes Grid evenly distributes its control plane and worker nodes across the Availability Zones (AZs) that you specified in your cluster configuration. This includes workload clusters that are configured with any of the following:

  • The default number of control plane nodes
  • A CONTROL_PLANE_MACHINE_COUNT setting that is greater than the default number of control plane nodes
  • The default number of worker nodes
  • A WORKER_MACHINE_COUNT setting that is greater than the default number of worker nodes

For example, if you specify WORKER_MACHINE_COUNT: 5, Tanzu Kubernetes Grid deploys two worker nodes in the first AZ, two worker nodes in the second AZ, and one worker node in the third AZ. You can optionally customize this default AZ placement mechanism for worker nodes by following the instructions in Configuring AZ Placement Settings for Worker Nodes below. You cannot customize the default AZ placement mechanism for control plane nodes.
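
The following excerpt sketches that example; the AZ names are illustrative and must all belong to the region set in AWS_REGION:

    CLUSTER_PLAN: prod
    AWS_NODE_AZ: "us-west-2a"
    AWS_NODE_AZ_1: "us-west-2b"
    AWS_NODE_AZ_2: "us-west-2c"
    WORKER_MACHINE_COUNT: 5
    # Result: two workers in us-west-2a, two in us-west-2b, one in us-west-2c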

Configuring AZ Placement Settings for Worker Nodes

When creating a multi-AZ prod cluster on AWS, you can optionally specify how many worker nodes the tanzu cluster create command creates in each of the three AZs.

To do so:

  1. Include the following variables in your cluster configuration file:

    • WORKER_MACHINE_COUNT_0: Sets the number of worker nodes in the first AZ, AWS_NODE_AZ.
    • WORKER_MACHINE_COUNT_1: Sets the number of worker nodes in the second AZ, AWS_NODE_AZ_1.
    • WORKER_MACHINE_COUNT_2: Sets the number of worker nodes in the third AZ, AWS_NODE_AZ_2.

    You set these variables in addition to other mandatory and optional settings, as described in Workload Cluster Template above; an example excerpt follows these steps.

  2. Create the cluster. For example:

    tanzu cluster create my-prod-cluster -f my-prod-cluster-config.yaml
    
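For example, the following excerpt places two workers in each of the first and second AZs and one worker in the third; the AZ names are illustrative:

    AWS_NODE_AZ: "us-west-2a"
    AWS_NODE_AZ_1: "us-west-2b"
    AWS_NODE_AZ_2: "us-west-2c"
    WORKER_MACHINE_COUNT_0: 2
    WORKER_MACHINE_COUNT_1: 2
    WORKER_MACHINE_COUNT_2: 1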

Deploying a Prod Cluster from a Dev Management Cluster

When you create a prod workload cluster from a dev management cluster that is running on AWS, you must define additional variables in the cluster configuration file before running the tanzu cluster create command. These variables enable Tanzu Kubernetes Grid to create the cluster and spread its control plane and worker nodes across AZs.

To create a prod workload cluster from a dev management cluster on AWS, perform the steps below:

  1. Set the following variables in the cluster configuration file (an example excerpt follows these steps):

    • Set CLUSTER_PLAN to prod.
    • AWS_NODE_AZ variables: AWS_NODE_AZ was set when you deployed your dev management cluster. For the prod workload cluster, add AWS_NODE_AZ_1 and AWS_NODE_AZ_2.
    • AWS_PUBLIC_SUBNET_ID (existing VPC) variables: AWS_PUBLIC_NODE_CIDR or AWS_PUBLIC_SUBNET_ID was set when you deployed your dev management cluster. For the prod workload cluster, add one of the following:
      • AWS_PUBLIC_NODE_CIDR_1 and AWS_PUBLIC_NODE_CIDR_2
      • AWS_PUBLIC_SUBNET_ID_1 and AWS_PUBLIC_SUBNET_ID_2
    • AWS_PRIVATE_SUBNET_ID (existing VPC) variables: AWS_PRIVATE_NODE_CIDR or AWS_PRIVATE_SUBNET_ID was set when you deployed your dev management cluster. For the prod workload cluster, add one of the following:
      • AWS_PRIVATE_NODE_CIDR_1 and AWS_PRIVATE_NODE_CIDR_2
      • AWS_PRIVATE_SUBNET_ID_1 and AWS_PRIVATE_SUBNET_ID_2
  2. (Optional) Customize the default AZ placement mechanism for the worker nodes that you intend to deploy by following the instructions in Configuring AZ Placement Settings for Worker Nodes. By default, Tanzu Kubernetes Grid distributes prod worker nodes evenly across the AZs.

  3. Deploy the cluster by running the tanzu cluster create command. For example:

    tanzu cluster create my-cluster -f my-cluster-config.yaml
    
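For example, the following excerpt shows the variables for a prod workload cluster that uses a new VPC (the CIDR-style settings). The AZ names are illustrative, the CIDR ranges reuse the defaults from the template above, and the base values correspond to the ones used for the dev management cluster:

    CLUSTER_PLAN: prod
    # Base AZ and CIDR values, as used for the dev management cluster:
    AWS_NODE_AZ: "us-west-2a"
    AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24
    AWS_PRIVATE_NODE_CIDR: 10.0.0.0/24
    # Additional AZs and CIDRs for the prod workload cluster:
    AWS_NODE_AZ_1: "us-west-2b"
    AWS_NODE_AZ_2: "us-west-2c"
    AWS_PUBLIC_NODE_CIDR_1: 10.0.3.0/24
    AWS_PUBLIC_NODE_CIDR_2: 10.0.5.0/24
    AWS_PRIVATE_NODE_CIDR_1: 10.0.2.0/24
    AWS_PRIVATE_NODE_CIDR_2: 10.0.4.0/24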

Create an Object Spec File

You can use the Tanzu CLI to convert a cluster configuration file into a Kubernetes-style object spec file for a class-based workload cluster without deploying the cluster:

  • To generate an object spec file for every class-based cluster that you create with tanzu cluster create, ensure that the auto-apply-generated-clusterclass-based-configuration feature is set to false in the Tanzu CLI configuration. This feature is set to false by default. When the feature is set to false and you run tanzu cluster create with the --file flag, the command converts your cluster configuration file into an object spec file and exits without creating the cluster. After reviewing the configuration, you re-run tanzu cluster create with the object spec file that the Tanzu CLI generated; an example sequence follows this list. If you have changed the default configuration, run the following command to set it back to false:

    tanzu config set features.cluster.auto-apply-generated-clusterclass-based-configuration false
    
  • To generate an object spec file for a single cluster, pass the --dry-run option to tanzu cluster create and save the output to a file. Use the same options and the same --file configuration file that you would use to create the cluster, for example:

    tanzu cluster create my-cluster --file my-cluster-config.yaml --dry-run > my-cluster-spec.yaml
    

    You can then use this object spec to deploy a cluster as described in Create a Class-Based Cluster from the Object Spec.
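
For the first approach above, a typical two-pass sequence looks like the following sketch; the file name my-cluster-spec.yaml is a stand-in for the object spec file that the Tanzu CLI generates:

    # Ensure the feature is set to false (this is the default):
    tanzu config set features.cluster.auto-apply-generated-clusterclass-based-configuration false

    # First pass: converts the configuration file into an object spec file and exits:
    tanzu cluster create my-cluster --file my-cluster-config.yaml

    # Second pass: after reviewing the generated spec, create the cluster from it:
    tanzu cluster create my-cluster --file my-cluster-spec.yaml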

What to Do Next

Proceed to Create Workload Clusters.
