Before you can use the Tanzu Kubernetes Grid CLI or installer interface to deploy a management cluster, you must prepare the machine on which you run the Tanzu Kubernetes Grid CLI, and set up your Amazon EC2 account.
The AWS CLI uses jq to process JSON when creating SSH key pairs. It is also used to prepare the environment or configuration variables when you deploy Tanzu Kubernetes Grid by using the CLI.
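As an illustration of how jq is used in the commands later in this procedure, the -r flag extracts a raw string value from a JSON payload (the sample JSON here is a stand-in for AWS CLI output):

```shell
# jq's -r flag prints the raw string value of a field, without JSON quoting.
echo '{"KeyMaterial": "-----BEGIN RSA PRIVATE KEY-----"}' | jq .KeyMaterial -r
# Prints: -----BEGIN RSA PRIVATE KEY-----
```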
For each cluster that you create, Tanzu Kubernetes Grid provisions a set of resources in your Amazon Web Services account.
For development clusters that are not configured for high availability, Tanzu Kubernetes Grid provisions the following resources:
For production clusters that are configured for high availability, Tanzu Kubernetes Grid provisions the resources listed above and the following additional resources to support replication in two additional availability zones:
Amazon Web Services implements a set of default limits or quotas on these types of resources, and allows you to modify the limits. The default limits are typically sufficient to get started creating clusters with Tanzu Kubernetes Grid. However, as you increase the number of clusters that you are running or the workloads on your clusters, you will encroach on these limits. When you reach a limit imposed by Amazon Web Services, any attempt to provision that type of resource fails. As a result, Tanzu Kubernetes Grid is unable to create a new cluster, or you might be unable to create additional deployments on your existing clusters. Therefore, regularly assess the limits that you have specified in your Amazon Web Services account, and adjust them as necessary to fit your business needs.
If you create a new Virtual Private Cloud (VPC) when you deploy a management cluster, Tanzu Kubernetes Grid also creates a dedicated NAT gateway for the management cluster. In this case, by default, Tanzu Kubernetes Grid creates a new VPC and NAT gateway for each Tanzu Kubernetes cluster that you deploy from that management cluster. Amazon EC2 only allows 5 NAT gateways per availability zone per account. Consequently, if you always create a new VPC for each cluster, you can only create 5 clusters in a single availability zone. If you already have five NAT gateways in use, Tanzu Kubernetes Grid is unable to provision the necessary resources when you attempt to create a new cluster. To create more than 5 clusters in a given availability zone, you must share existing VPCs, and therefore their NAT gateways, between multiple clusters.
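One way to see how close you are to the NAT gateway limit is to count the gateways that the AWS CLI reports. The example below pipes a sample of `aws ec2 describe-nat-gateways` output (abbreviated here for illustration) through jq; in practice, you would run `aws ec2 describe-nat-gateways --filter Name=state,Values=available` against your account:

```shell
# Sample describe-nat-gateways output, trimmed to the fields of interest.
# In practice, replace the echo with the aws ec2 describe-nat-gateways call.
echo '{"NatGateways": [{"NatGatewayId": "nat-0a1"}, {"NatGatewayId": "nat-0b2"}]}' \
  | jq '.NatGateways | length'
# Prints: 2
```

Compare the count against the limit of five NAT gateways per availability zone described above.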
There are three possible scenarios for VPC and NAT gateway usage when you deploy management clusters and Tanzu Kubernetes clusters.
Create a new VPC and NAT gateway for every management cluster and Tanzu Kubernetes cluster
If you deploy a management cluster and use the option to create a new VPC, and if you make no modifications to the configuration when you deploy Tanzu Kubernetes clusters from that management cluster, the deployment of each of the Tanzu Kubernetes clusters will also create a VPC and a NAT gateway. In this scenario, you can deploy one management cluster and up to 4 Tanzu Kubernetes clusters, due to the limit of 5 NAT gateways per availability zone.
Reuse a VPC and NAT gateway that already exist in your availability zone
If a VPC already exists in the availability zone in which you are deploying a management cluster, for example a VPC that you created manually or by using tools such as CloudFormation or Terraform, you can specify that the management cluster should use this VPC. In this case, all of the Tanzu Kubernetes clusters that you deploy from that management cluster will also use the specified VPC and its NAT gateway. Because all of the Tanzu Kubernetes clusters share a single NAT gateway, the limit of 5 NAT gateways per availability zone is not exceeded and you can deploy more than 5 clusters in the availability zone.
An existing VPC must be configured with the following networking:
Create a new VPC and NAT gateway for the management cluster and deploy Tanzu Kubernetes clusters that share that VPC and NAT gateway
If you are starting with an empty availability zone, you can deploy a management cluster and use the option to create a new VPC. If you want the Tanzu Kubernetes clusters to share the VPC that Tanzu Kubernetes Grid created, you must modify the cluster configuration when you deploy Tanzu Kubernetes clusters from this management cluster.
For information about how to deploy management clusters that either create or reuse a VPC, see Deploy Management Clusters to Amazon EC2 with the Installer Interface and Deploy Management Clusters to Amazon EC2 with the CLI.
For information about how to deploy Tanzu Kubernetes clusters that share a VPC that Tanzu Kubernetes Grid created when you deployed the management cluster, see Deploy a Cluster that Shares a VPC with the Management Cluster.
Install the clusterawsadm Utility and Set Up a CloudFormation Stack
Tanzu Kubernetes Grid uses Cluster API Provider AWS to deploy clusters to Amazon EC2. Cluster API Provider AWS requires the clusterawsadm command line utility to be present on your system. The clusterawsadm utility assists with identity and access management (IAM) for Cluster API Provider AWS. It takes the credentials that you set as environment variables and uses them to create a CloudFormation stack in your AWS account with the correct IAM resources. Tanzu Kubernetes Grid uses the resources of the CloudFormation stack to create management and Tanzu Kubernetes clusters. The IAM resources are added to the control plane and node roles when they are created during cluster deployment. For more information about CloudFormation stacks, see Working with Stacks in the AWS documentation.
NOTE: This procedure assumes that you are installing Tanzu Kubernetes Grid 1.1.3. In version 1.1.3, the version of clusterawsadm is v0.5.4-vmware.2. In version 1.1.2, it is v0.5.4-vmware.1.
Create the following environment variables for your AWS account.
Your AWS access key.
Your AWS access key secret.
If you use multi-factor authentication, your AWS session token.
The AWS region in which to deploy the cluster.
For example, set the region to
For the full list of AWS regions, see AWS Service Endpoints. In Tanzu Kubernetes Grid 1.1.2 and later, in addition to the regular AWS regions, you can also specify the us-gov-east and us-gov-west regions in AWS GovCloud.
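A minimal sketch of setting these variables follows, using the standard AWS environment variable names; all of the values shown are placeholders that you must replace with your own credentials and region:

```shell
# Placeholder values for illustration only; substitute your own credentials.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret"
# Only needed if you use multi-factor authentication:
export AWS_SESSION_TOKEN="example-session-token"
# The region in which to deploy the cluster, for example:
export AWS_REGION="us-west-2"
```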
Scroll to the clusterawsadm Account Preparation Tool entries and click the Download Now button for the executable for your platform.
Use either the gunzip command or the extraction tool of your choice to unpack the binary that corresponds to the OS of your bootstrap environment:
The resulting files are clusterawsadm-linux-amd64-v0.5.4-vmware.2, clusterawsadm-darwin-amd64-v0.5.4-vmware.2, and clusterawsadm-windows-amd64-v0.5.4-vmware.2.
Rename the binary for your platform to clusterawsadm, make sure that it is executable, and add it to your PATH.
Mac OS and Linux platforms: move the binary into the /usr/local/bin folder, rename it to clusterawsadm, and make it executable. For Linux:
mv ./clusterawsadm-linux-amd64-v0.5.4-vmware.2 /usr/local/bin/clusterawsadm
For Mac OS:
mv ./clusterawsadm-darwin-amd64-v0.5.4-vmware.2 /usr/local/bin/clusterawsadm
Then make the binary executable:
chmod +x /usr/local/bin/clusterawsadm
Windows platforms: create a Program Files\clusterawsadm folder and copy the clusterawsadm-windows-amd64-v0.5.4-vmware.2 binary into it. Right-click the clusterawsadm folder, select Properties > Security, and make sure that your user account has the Full Control permission. Then add the folder to your Path environment variable: select the Path row under System variables, and click Edit.
Run the clusterawsadm command to create a CloudFormation stack.
clusterawsadm alpha bootstrap create-stack
You only need to run clusterawsadm once per account. The CloudFormation stack that is created is not specific to any region.
In order for Tanzu Kubernetes Grid VMs to launch on Amazon EC2, you must provide the public key part of an SSH key pair to Amazon EC2 for every region in which you would like to deploy a management cluster.
NOTE: AWS only supports RSA keys. The keys required by AWS are of a different format to those required by vSphere. You cannot use the same key pair for both vSphere and AWS deployments.
If you do not already have an SSH key pair, you can use the AWS CLI to create one, by performing the steps below.
Create a key pair named default and save it as default.pem:
aws ec2 create-key-pair --key-name default --output json | jq .KeyMaterial -r > default.pem
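A common follow-up step, not shown in the original procedure, is to restrict the saved private key so that only your user can read it. The example below uses a stand-in file; in practice you would run chmod on the default.pem created by the command above:

```shell
# Stand-in for the default.pem saved by create-key-pair.
touch default.pem
# Restrict the private key so that only your user can read it.
chmod 400 default.pem
```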
After you have created the CloudFormation stack, you must set your AWS credentials as environment variables. Cluster API Provider AWS needs these variables so that it can write the credentials into cluster manifests when it creates clusters. You must perform the steps in Install the clusterawsadm Utility and Set Up a CloudFormation Stack before you perform these steps.
export AWS_CREDENTIALS=$(aws iam create-access-key --user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io --output json)
export AWS_ACCESS_KEY_ID=$(echo $AWS_CREDENTIALS | jq .AccessKey.AccessKeyId -r)
export AWS_SECRET_ACCESS_KEY=$(echo $AWS_CREDENTIALS | jq .AccessKey.SecretAccessKey -r)
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm alpha bootstrap encode-aws-credentials)
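The jq extractions in the export commands above work as follows; the JSON here is an abbreviated stand-in for the output of `aws iam create-access-key`, with placeholder key values:

```shell
# Stand-in for `aws iam create-access-key` output (values are placeholders).
AWS_CREDENTIALS='{"AccessKey": {"AccessKeyId": "AKIAEXAMPLE", "SecretAccessKey": "secretExample"}}'
# The jq filter pulls out an individual field as a raw string.
echo "$AWS_CREDENTIALS" | jq .AccessKey.AccessKeyId -r
# Prints: AKIAEXAMPLE
```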
Your environment is now ready for you to deploy the Tanzu Kubernetes Grid management cluster to Amazon EC2.