Before you can use the Tanzu Kubernetes Grid CLI or installer interface to deploy a management cluster, you must prepare the machine on which you run the Tanzu Kubernetes Grid CLI and set up your Amazon Web Services (AWS) account.
If you are installing Tanzu Kubernetes Grid on VMware Cloud on AWS, you are installing to a vSphere environment. See Preparing VMware Cloud on AWS in Prepare a vSphere Management as a Service Infrastructure to prepare your environment, and Deploy Management Clusters to vSphere to deploy management clusters.
The Bill of Materials (BoM) file is downloaded to ~/.tkg/bom/ and its name includes the Tanzu Kubernetes Grid version. Check the imageRepository values in this file to find their CNAMEs. The machine on which you run the Tanzu Kubernetes Grid CLI requires network access to registry.tkg.vmware.run.
jq installed locally. The AWS CLI uses jq to process JSON when creating SSH key pairs. It is also used to prepare the environment or configuration variables when you deploy Tanzu Kubernetes Grid by using the CLI.
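To illustrate the kind of JSON processing involved, the sketch below extracts a single field from a JSON document with jq and prints it raw. The JSON document here is a simplified, made-up stand-in for real AWS CLI output, not an actual API response:

```shell
# Extract the KeyMaterial field from a JSON document with jq.
# The JSON is a simplified stand-in for AWS CLI output.
json='{"KeyName":"default","KeyMaterial":"-----BEGIN RSA PRIVATE KEY-----"}'
echo "$json" | jq -r .KeyMaterial
```

The -r flag makes jq print the raw string value without surrounding quotes, which is what allows the key-pair creation command later in this topic to redirect the output directly into a .pem file.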
Alternatively, see Deploying Tanzu Kubernetes Grid in an Internet-Restricted Environment to install without external network access.
For each cluster that you create, Tanzu Kubernetes Grid provisions a set of resources in your AWS account.
For development management clusters that are not configured for high availability, Tanzu Kubernetes Grid provisions the following resources:
For production management clusters, which are configured for high availability, Tanzu Kubernetes Grid provisions the following resources to support distribution across three availability zones:
AWS implements a set of default limits or quotas on these types of resources and allows you to modify the limits. Typically, the default limits are sufficient to get started creating clusters from Tanzu Kubernetes Grid. However, as you increase the number of clusters you are running or the workloads on your clusters, you will encroach on these limits. When you reach the limits imposed by AWS, any attempts to provision that type of resource fail. As a result, Tanzu Kubernetes Grid will be unable to create a new cluster, or you might be unable to create additional deployments on your existing clusters. Therefore, regularly assess the limits you have specified in your AWS account and adjust them as necessary to fit your business needs.
For information about the sizes of cluster node instances, see Amazon EC2 Instance Types in the AWS documentation.
If you create a new Virtual Private Cloud (VPC) when you deploy a management cluster, Tanzu Kubernetes Grid also creates a dedicated NAT gateway for the management cluster, or, if you deploy a production management cluster, three NAT gateways, one in each of the availability zones. In this case, by default, Tanzu Kubernetes Grid creates a new VPC and one or three NAT gateways for each Tanzu Kubernetes cluster that you deploy from that management cluster. By default, AWS allows five NAT gateways per availability zone per account. Consequently, if you always create a new VPC for each cluster, you can create only five development clusters in a single availability zone. If you already have five NAT gateways in use, Tanzu Kubernetes Grid cannot provision the necessary resources when you attempt to create a new cluster. To create more than five development clusters in a given availability zone without changing the default quotas, you must share existing VPCs, and therefore their NAT gateways, between multiple clusters.
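Before creating a cluster, you can check how close you are to the NAT gateway quota by counting the gateways already provisioned in your account. The sketch below uses the AWS CLI's describe-nat-gateways command with a JMESPath length() query; because it requires valid AWS credentials, it is wrapped in a function rather than run directly, and the function name is made up for illustration:

```shell
# Sketch: count the NAT gateways currently available in the active region.
# Requires valid AWS credentials; call the function to run it.
count_nat_gateways() {
  aws ec2 describe-nat-gateways \
    --filter "Name=state,Values=available" \
    --query 'length(NatGateways)' --output text
}
# count_nat_gateways   # uncomment to run against your account
```

Comparing the result against the default quota of five per availability zone tells you whether a new cluster that creates its own VPC will be able to provision its NAT gateways.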
There are three possible scenarios for VPC and NAT gateway usage when you deploy management clusters and Tanzu Kubernetes clusters.
Create a new VPC and NAT gateway(s) for every management cluster and Tanzu Kubernetes cluster
If you deploy a management cluster with the option to create a new VPC, and you make no modifications to the configuration when you deploy Tanzu Kubernetes clusters from that management cluster, the deployment of each Tanzu Kubernetes cluster also creates a VPC and one or three NAT gateways. In this scenario, you can deploy one development management cluster and up to four development Tanzu Kubernetes clusters, due to the default limit of five NAT gateways per availability zone.
Reuse a VPC and NAT gateway(s) that already exist in your availability zone(s)
If a VPC already exists in the availability zone(s) in which you are deploying a management cluster, for example a VPC that you created manually or by using tools such as CloudFormation or Terraform, you can specify that the management cluster should use this VPC. In this case, all of the Tanzu Kubernetes clusters that you deploy from that management cluster also use the specified VPC and its NAT gateway(s).
An existing VPC must be configured with the following networking:
Create a new VPC and NAT gateway(s) for the management cluster and deploy Tanzu Kubernetes clusters that share that VPC and NAT gateway(s)
If you are starting with empty availability zones, you can deploy a management cluster with the option to create a new VPC. If you want the Tanzu Kubernetes clusters to share a VPC that Tanzu Kubernetes Grid created, you must modify the cluster configuration when you deploy Tanzu Kubernetes clusters from this management cluster.
For information about how to deploy management clusters that either create or reuse a VPC, see Deploy Management Clusters to Amazon EC2 with the Installer Interface and Deploy Management Clusters to Amazon EC2 with the CLI.
For information about how to deploy Tanzu Kubernetes clusters that share a VPC that Tanzu Kubernetes Grid created when you deployed the management cluster, see Deploy a Cluster that Shares a VPC with the Management Cluster.
To enable Tanzu Kubernetes Grid VMs to launch on Amazon EC2, you must provide the public key part of an SSH key pair to Amazon EC2 for every region in which you plan to deploy management clusters.
NOTE: AWS supports only RSA keys. The keys required by AWS are of a different format from those required by vSphere. You cannot use the same key pair for both vSphere and AWS deployments.
If you do not already have an SSH key pair, you can create one by performing the steps below:
Set the following environment variables for your AWS account:
export AWS_ACCESS_KEY_ID=aws_access_key, where aws_access_key is your AWS access key.
export AWS_SECRET_ACCESS_KEY=aws_access_key_secret, where aws_access_key_secret is your AWS secret access key.
(Multi-factor authentication only) export AWS_SESSION_TOKEN=aws_session_token, where aws_session_token is your AWS session token. Set this variable if you use multi-factor authentication.
export AWS_REGION=aws_region, where aws_region is the AWS region in which you intend to deploy the cluster.
For the full list of AWS regions, see AWS Service Endpoints. In addition to the regular AWS regions, you can also specify regions in AWS GovCloud, such as us-gov-west.
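Taken together, the exports above look like the following. The values shown are placeholders (the access key ID is AWS's documented example value), not real credentials; substitute your own:

```shell
# Placeholder credentials and region; substitute your own values.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_REGION=us-west-2
# Multi-factor authentication only:
# export AWS_SESSION_TOKEN=your_session_token
echo "Deploying to region: $AWS_REGION"
```

Because these are environment variables, they apply only to the current shell session; re-export them in any new terminal before running the CLI.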
For each region that you plan to use with Tanzu Kubernetes Grid, create a key pair named default and save it as default.pem by running the following command:
aws ec2 create-key-pair --key-name default --output json | jq .KeyMaterial -r > default.pem
To create a key pair for a region that is not the default in your profile, or set locally as AWS_DEFAULT_REGION, include the --region option in the command.
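A sketch of that per-region variant follows. It is wrapped in a function so it does not run without credentials; the key name default and the jq pipeline match the command above, while the function name and the per-region output file name are illustrative choices, not part of the product:

```shell
# Sketch: create the "default" key pair in an explicitly chosen region
# by adding the AWS CLI's global --region option.
create_default_keypair() {
  region="$1"
  aws ec2 create-key-pair --key-name default --region "$region" \
    --output json | jq .KeyMaterial -r > "default-${region}.pem"
}
# create_default_keypair us-east-1   # uncomment to run against your account
```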
Log in to your Amazon EC2 dashboard and go to Network & Security > Key Pairs to verify that the created key pair is registered with your account.
If you deploy the management cluster to an existing VPC and you intend to create services of type LoadBalancer in the management cluster, you must add the kubernetes.io/cluster/YOUR-CLUSTER-NAME=shared tag to the public subnet or subnets that you intend to use for the management cluster. Adding this tag to the public subnet or subnets enables you to create services of type LoadBalancer in the management cluster. To add the tag, follow the steps below:
Gather the ID or IDs of the public subnet or subnets within your existing VPC that you want to use for the management cluster. To deploy a prod management cluster, you must provide three subnets.
Create the required tag by running the following command:
aws ec2 create-tags --resources YOUR-PUBLIC-SUBNET-ID-OR-IDS --tags Key=kubernetes.io/cluster/YOUR-CLUSTER-NAME,Value=shared
YOUR-PUBLIC-SUBNET-ID-OR-IDS is the ID or IDs of the public subnet or subnets that you gathered in the previous step.
YOUR-CLUSTER-NAME is the name of the management cluster that you want to deploy.
For example:
aws ec2 create-tags --resources subnet-00bd5d8c88a5305c6 subnet-0b93f0fdbae3436e8 subnet-06b29d20291797698 --tags Key=kubernetes.io/cluster/my-management-cluster,Value=shared
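To confirm that the tag was applied, you can query it back with the AWS CLI's describe-tags command. The sketch below is wrapped in a function because it requires valid AWS credentials; the function name is made up for illustration, and the commented call reuses a subnet ID and cluster name from the example above:

```shell
# Sketch: verify that the shared tag exists on a subnet.
# Requires valid AWS credentials; call the function to run it.
verify_subnet_tag() {
  aws ec2 describe-tags \
    --filters "Name=resource-id,Values=$1" \
              "Name=key,Values=kubernetes.io/cluster/$2" \
    --query 'Tags[].Value' --output text
}
# verify_subnet_tag subnet-00bd5d8c88a5305c6 my-management-cluster   # expect: shared
```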
If you want to use services of type LoadBalancer in a Tanzu Kubernetes cluster after you deploy the cluster to a VPC that was not created by Tanzu Kubernetes Grid, follow the tagging instructions in Deploy a Cluster to an Existing VPC and Add Subnet Tags (Amazon EC2).
Your environment is now ready for you to deploy the management cluster to Amazon EC2.
Deploy Management Clusters to Amazon EC2 with the CLI. This method is more complicated but allows greater flexibility of configuration.
NOTE: If in Tanzu Kubernetes Grid v1.1 you set AWS_B64ENCODED_CREDENTIALS as an environment variable, unset the variable before deploying management clusters with v1.2 of the CLI. In v1.2 and later, Tanzu Kubernetes Grid calculates the value of AWS_B64ENCODED_CREDENTIALS automatically. To enable Tanzu Kubernetes Grid to calculate this value, you must set the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION variables in .tkg/config.yaml or as environment variables. See Create the Cluster Configuration File in Deploy Management Clusters to Amazon EC2 with the CLI.
If you want to deploy clusters to vSphere and Azure as well as to Amazon EC2, see Deploy Management Clusters to vSphere and Deploy Management Clusters to Microsoft Azure for the required setup for those platforms.