This topic explains how to prepare Amazon Web Services (AWS) for running Tanzu Kubernetes Grid.
Before you can use the Tanzu CLI or installer interface to deploy a management cluster, you must prepare the bootstrap machine on which you run the Tanzu CLI and set up your Amazon Web Services (AWS) account.
If you are installing Tanzu Kubernetes Grid on VMware Cloud on AWS, you are installing to a vSphere environment. See Preparing VMware Cloud on AWS in Prepare to Deploy Management Clusters to a VMware Cloud Environment to prepare your environment, and Prepare to Deploy Management Clusters to vSphere to deploy management clusters.
Important: Tanzu Kubernetes Grid v2.4.x is the last version of TKG that supports the creation of standalone TKG management clusters on AWS. The ability to create standalone TKG management clusters on AWS will be removed in the Tanzu Kubernetes Grid v2.5 release.
Going forward, VMware recommends that you use Tanzu Mission Control to create native AWS EKS clusters instead of creating new TKG management clusters on AWS. For information about how to create native AWS EKS clusters with Tanzu Mission Control, see Managing the Lifecycle of AWS EKS Clusters in the Tanzu Mission Control documentation.
For more information, see Deprecation of TKG Management and Workload Clusters on AWS and Azure in the VMware Tanzu Kubernetes Grid v2.4 Release Notes.
Before you deploy a management cluster, ensure that:

- Traffic is allowed between your local bootstrap machine and port 6443 of all VMs in the clusters you create. Port 6443 is where the Kubernetes API is exposed by default. To change this port, set the `CLUSTER_API_SERVER_PORT` variable when deploying the cluster.
- Traffic is allowed between your local bootstrap machine and the image repositories listed in the management cluster Bill of Materials (BOM) file, over port 443, for TCP.*
  - The BOM file is located under `~/.config/tanzu/tkg/bom/` and its name includes the Tanzu Kubernetes Grid version, for example `tkg-bom-v2.3.1+vmware.1.yaml` for v2.3.1.
  - Run a DNS lookup on all `imageRepository` values, for example `projects.registry.vmware.com/tkg`, to find the CNAMEs to allow access to.
- The AWS CLI is installed locally.
- `jq` is installed locally. The AWS CLI guide uses `jq` to process JSON when creating SSH key pairs. It is also used to prepare the environment or configuration variables when you deploy Tanzu Kubernetes Grid by using the CLI.
- To work around a known issue with CAPA, `EXP_EXTERNAL_RESOURCE_GC=false` is set in your local environment or in the management cluster configuration file.
*Or see Prepare an Internet-Restricted Environment for installing without external network access.
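As a rough sketch of the BOM lookup above, assuming the v2.3.1 BOM filename from the example (adjust to your installed version):

```sh
# List the imageRepository values defined in the management cluster BOM file
grep imageRepository ~/.config/tanzu/tkg/bom/tkg-bom-v2.3.1+vmware.1.yaml

# Resolve one of the repository hostnames to find the CNAMEs to allow
nslookup projects.registry.vmware.com
```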
For each cluster that you create, Tanzu Kubernetes Grid uses a set of resources in your AWS account.
Resource | Tanzu Kubernetes Grid Uses |
---|---|
VPC | 1 |
Elastic IP Addresses | 1 per Availability Zone* (AZ) |
Subnets | 2 per AZ for internet-facing, 1 per AZ for internal* |
EC2 Security Groups (VPC) | 4 |
Internet Gateways | 1 |
NAT Gateway | 1 per AZ for default* deployments |
Control plane EC2 instances | 1 per AZ* |
*Development clusters use one Availability Zone, and Production clusters use 3.
AWS implements default limits or quotas on these resources, and allows you to modify the limits. The default limits are typically sufficient to let you start creating and using clusters, but as you increase the cluster or workload count, you may exceed these AWS limits, which prevents Tanzu Kubernetes Grid from creating new clusters or deploying new workloads.
VMware recommends that you regularly assess the limits you have specified in your AWS account and request service quota increases as necessary to fit your business needs.
The most relevant service quotas are:
Service Code | Quota Name | Quota Code |
---|---|---|
vpc | Internet gateways per region | L-A4707A72 |
vpc | NAT gateways per Availability Zone | L-FE5A380F |
vpc | VPCs per Region | L-F678F1CE |
ec2 | Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances | L-1216C47A |
ec2 | EC2-VPC Elastic IPs | L-0263D0A3 |
elasticloadbalancing | Classic Load Balancers per Region | L-E9E9831D |
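As a sketch, you can check the current value of one of these quotas with the AWS CLI; the `vpc` service code and `L-F678F1CE` quota code come from the table above, while the region and desired value are illustrative:

```sh
# Check the "VPCs per Region" quota in us-west-2
aws service-quotas get-service-quota \
  --service-code vpc --quota-code L-F678F1CE --region us-west-2

# Request an increase if the default no longer fits your needs
aws service-quotas request-service-quota-increase \
  --service-code vpc --quota-code L-F678F1CE --desired-value 10 --region us-west-2
```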
You can optionally share VPCs rather than creating new ones, such as a workload cluster sharing a VPC with its management cluster.
For information about the sizes of cluster node instances, see Amazon EC2 Instance Types in the AWS documentation.
Tanzu Kubernetes Grid can be deployed to VPCs without NAT gateways attached, provided that the VPC has another means of outbound internet connectivity. Tanzu Kubernetes Grid does not check for the existence of specific routes in routing tables; internet connectivity via VPC NAT gateways, EC2 instances or appliances, internet gateways, transit gateways, or DirectConnect is supported.
The table below describes sizing examples for management clusters on AWS. Use this data as guidance to ensure your management cluster is scaled to handle the number of workload clusters that you plan to deploy. The Workload cluster VM size column lists the VM sizes that were used for the examples in the Can manage… column.
Management cluster plan | Management cluster VM size | Can manage… | Workload cluster VM size |
---|---|---|---|
3 control plane nodes and 3 worker nodes | | Examples: | |
3 control plane nodes and 3 worker nodes | | Example: one workload cluster, deployed with 3 control plane and 500 worker nodes | |
3 control plane nodes and 3 worker nodes | | Example: 200 workload clusters, each cluster deployed with 3 control plane and 5 worker nodes | |
The following sections list the permissions that Tanzu Kubernetes Grid needs to deploy and manage clusters on AWS. Tanzu Kubernetes Grid sets these permissions when you run the `tanzu mc permissions aws set` command or when you select the Automate creation of AWS CloudFormation Stack checkbox in the installer interface.

Note: `mc` is short for `management-cluster`.

To enable Tanzu Kubernetes Grid to deploy clusters to AWS, you must create a CloudFormation stack, `tkg-cloud-vmware-com`, in your AWS account. This CloudFormation stack defines the identity and access management (IAM) resources and permissions that Tanzu Kubernetes Grid needs to deploy and manage clusters on AWS.
To create the stack, after completing the steps in this topic, you run the `tanzu mc permissions aws set` command before deploying the cluster or select the Automate creation of AWS CloudFormation Stack checkbox in the installer interface, as described in the Create IAM Resources section of Deploy Management Clusters from a Configuration File or Deploy Management Clusters with the Installer Interface. You need to perform this operation only once per AWS account, regardless of whether you intend to use a single AWS region or multiple regions for your Tanzu Kubernetes Grid environment.

Running `tanzu mc permissions aws set` or selecting Automate creation of AWS CloudFormation Stack creates the following IAM profiles, roles, and policies. The user whose credentials you provide to Tanzu Kubernetes Grid when creating the stack must have administrator permissions to create these IAM resources in your AWS account.
Profiles / `AWS::IAM::InstanceProfile`:

- `control-plane.tkg.cloud.vmware.com`: used by EC2 instances running Kubernetes control plane components.
- `nodes.tkg.cloud.vmware.com`: used by all non-control-plane EC2 instances in Tanzu Kubernetes Grid clusters.
- `controllers.tkg.cloud.vmware.com`: can be used for bootstrapping Tanzu Kubernetes Grid from a jump box in EC2 with the instance profile attached.

Roles / `AWS::IAM::Role`, which map 1:1 to the AWS IAM instance profiles above:

- `control-plane.tkg.cloud.vmware.com`
- `nodes.tkg.cloud.vmware.com`
- `controllers.tkg.cloud.vmware.com`

Policies / `AWS::IAM::ManagedPolicy`:

- `arn:aws:iam::YOUR-ACCOUNT-ID:policy/control-plane.tkg.cloud.vmware.com`: attached to the Control Plane IAM role above.
- `arn:aws:iam::YOUR-ACCOUNT-ID:policy/nodes.tkg.cloud.vmware.com`: attached to the Control Plane and Nodes IAM roles above.
- `arn:aws:iam::YOUR-ACCOUNT-ID:policy/controllers.tkg.cloud.vmware.com`: attached to the Control Plane and Controllers IAM roles above.

After Tanzu Kubernetes Grid creates the CloudFormation stack in your AWS account, you can retrieve its template by navigating to CloudFormation > Stacks in the AWS console. For more information about CloudFormation stacks, see Working with Stacks in the AWS documentation.
Alternatively, rather than running `tanzu mc permissions aws set` or selecting Automate creation of AWS CloudFormation Stack, you can retrieve the template that Tanzu Kubernetes Grid uses to create the above IAM resources and create the `tkg-cloud-vmware-com` stack directly in the AWS account. To retrieve the template, run `tanzu mc permissions aws generate-cloudformation-template`.
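For example, a minimal sketch of this manual path; the template file name is illustrative, and the IAM capability flags are required because the stack creates named IAM resources:

```sh
# Generate the CloudFormation template that defines the TKG IAM resources
tanzu mc permissions aws generate-cloudformation-template > tkg-iam-template.yaml

# Create the stack directly with the AWS CLI
aws cloudformation create-stack \
  --stack-name tkg-cloud-vmware-com \
  --template-body file://tkg-iam-template.yaml \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM
```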
The IAM permissions below are required only if you intend to register your management cluster with Tanzu Mission Control. They are added to the `nodes.tkg.cloud.vmware.com` IAM role automatically when you run the `tanzu mc permissions aws set` command.
```json
{
  "Action": [
    "servicequotas:ListServiceQuotas",
    "ec2:DescribeKeyPairs",
    "ec2:DescribeInstanceTypeOfferings",
    "ec2:DescribeInstanceTypes",
    "ec2:DescribeAvailabilityZones",
    "ec2:DescribeRegions",
    "ec2:DescribeSubnets",
    "ec2:DescribeRouteTables",
    "ec2:DescribeVpcs",
    "ec2:DescribeNatGateways",
    "ec2:DescribeAddresses",
    "elasticloadbalancing:DescribeLoadBalancers"
  ],
  "Resource": [
    "*"
  ],
  "Effect": "Allow"
}
```
You can deactivate these permissions by setting `DISABLE_TMC_CLOUD_PERMISSIONS` to `true` before running `tanzu mc permissions aws set`. For more information, see Register Your Management Cluster with Tanzu Mission Control.
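For example, a minimal sketch:

```sh
# Skip adding the Tanzu Mission Control permissions to the nodes IAM role
export DISABLE_TMC_CLOUD_PERMISSIONS=true
tanzu mc permissions aws set
```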
You must set the following permissions manually, once per AWS account.
If you intend to deploy the management cluster from the installer interface, the user whose credentials you provide to Tanzu Kubernetes Grid when deploying the cluster must have the "ec2:DescribeInstanceTypeOfferings"
and "ec2:DescribeInstanceTypes"
permissions. If your user does not currently have these permissions, you can create a custom policy that includes the permissions and attach it to your user.
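A sketch of doing this with the AWS CLI; the policy name, file name, user name, and account ID are illustrative:

```sh
# Write a policy containing the two Describe permissions
cat > tkg-installer-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstanceTypeOfferings",
        "ec2:DescribeInstanceTypes"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Create the policy and attach it to the deploying user
aws iam create-policy --policy-name tkg-installer-describe \
  --policy-document file://tkg-installer-policy.json
aws iam attach-user-policy --user-name YOUR-TKG-USER \
  --policy-arn arn:aws:iam::YOUR-ACCOUNT-ID:policy/tkg-installer-describe
```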
To enable Tanzu Kubernetes Grid to create and manage VMs on Amazon EC2, it must have an AWS account. You must set your account credentials and create an SSH key pair for the region where you plan to deploy Tanzu Kubernetes Grid clusters.
You have several options for configuring the AWS account credentials used to access EC2. The subsections below explain the different options.
VMware recommends the Credential Profiles option, managed with the `aws configure` command. These profiles are saved in a local shared configuration file, typically `~/.aws/config`.
In descending order of precedence when more than one is used, possible credential configuration options are:
Workload cluster configuration file variables (only for workload cluster creation). See Cluster Configuration File, below.
Local, static environment variables. See Local Environment Variables, below.
(Recommended) Credential Profiles, which are updated when you run `aws configure`. These are saved in a file located at `~/.aws/config` on Linux or macOS, or at `C:\Users\USERNAME\.aws\config` on Windows; the file can also be named `credentials`. See Credential Profiles, below.
Instance profile credentials. You can associate an IAM role with each of your Amazon Elastic Compute Cloud (Amazon EC2) instances. Temporary credentials for that role are then available to code running in the instance. The credentials are delivered through the Amazon EC2 metadata service. For more information, see IAM Roles for AWS in the Amazon EC2 User Guide for Linux Instances and Using Instance Profiles in the IAM User Guide.
One option for configuring AWS credentials is to set local environment variables on your bootstrap machine. To use local environment variables, set the following environment variables for your AWS account:
- `export AWS_ACCESS_KEY_ID=aws_access_key`, where `aws_access_key` is your AWS access key.
- `export AWS_SECRET_ACCESS_KEY=aws_access_key_secret`, where `aws_access_key_secret` is your AWS access key secret.
- `export AWS_SESSION_TOKEN=aws_session_token`, where `aws_session_token` is the AWS session token granted to your account. You only need to specify this variable if you are required to use a temporary access key. For more information about using temporary access keys, see Understanding and getting your AWS credentials.
- `export AWS_REGION=aws_region`, where `aws_region` is the AWS region in which you intend to deploy the cluster, for example `us-west-2`.
For the full list of AWS regions, see AWS Service Endpoints. In addition to the regular AWS regions, you can also specify the us-gov-east
and us-gov-west
regions in AWS GovCloud.
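Put together, a hypothetical session on the bootstrap machine might look like this; all values are placeholders modeled on AWS's documentation examples:

```sh
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_SESSION_TOKEN=AQoEXAMPLEH4aoAH0gNCAPyJxz4BlCFFxWNE1OP  # only for temporary credentials
export AWS_REGION=us-west-2
```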
Tanzu Kubernetes Grid supports the following AWS CLI environment variables:
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_SESSION_TOKEN`
- `AWS_SHARED_CREDENTIALS_FILE`
- `AWS_CONFIG_FILE`
- `AWS_REGION`
- `AWS_PROFILE`

Further information on these variables is available in the AWS Command Line Interface User Guide.
The recommended method for working with Tanzu Kubernetes Grid is to use the aws configure
command to store AWS credentials in a local credential or configuration file. An AWS configuration file can support a range of authentication mechanisms, ranging from static credentials to SSO via external credential helpers.
An AWS credential file can serve multiple accounts, including both a default profile and additional named profiles. The credential files and profiles are applied after local environment variables as part of the credential precedence above.
To set up credentials files and profiles for your AWS account on the bootstrap machine, you can use the aws configure CLI command.
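For example, assuming an illustrative profile name of `tkg-deploy`:

```sh
# Prompts for access key ID, secret access key, default region, and
# output format, then writes them to the shared credentials/config files
aws configure --profile tkg-deploy
```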
To customize which AWS credential files and profiles to use, you can set the following environment variables:
- `export AWS_PROFILE=profile_name`, where `profile_name` is the name of the profile that contains the AWS access key you want to use. If you do not specify a value for this variable, the profile name `default` is used. For more information about using named profiles, see Named profiles in the AWS documentation.
- `export AWS_SHARED_CREDENTIALS_FILE=path_to_credentials_file`, where `path_to_credentials_file` is the location and name of the credentials file that contains your AWS access key information. If you do not define this environment variable, the default location and filename is `$HOME/.aws/credentials`.
- `export AWS_CONFIG_FILE=path_to_config_file`, where `path_to_config_file` is the location and name of the config file that contains profile configuration. If you do not define this environment variable, the default location and filename is `$HOME/.aws/config`. If you are specifying explicit credential sources or using external credential processes, use this file.
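As a sketch, the following appends a named profile to the default config file and selects it; the profile name and region are illustrative:

```sh
mkdir -p ~/.aws
cat >> ~/.aws/config <<'EOF'
[profile tkg-deploy]
region = us-west-2
output = json
EOF
export AWS_PROFILE=tkg-deploy
```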
Note: Any named profiles that you create in your AWS credentials or config files appear as selectable options in the AWS Credential Profile drop-down in the Tanzu Kubernetes Grid Installer UI for AWS.
For more information about working with AWS credentials and the default AWS credential provider chain, see Best practices for managing AWS access keys in the AWS documentation.
You can specify AWS credentials in the configuration file used to create a cluster, by setting the following variables:

- `AWS_ACCESS_KEY_ID` (base64 encoded)
- `AWS_SECRET_ACCESS_KEY` (base64 encoded)
- `AWS_SESSION_TOKEN` (optional)

`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` must be passed in the form `<encoded:(base64 encoded value)>`. For example, if your AWS access key ID is `AKIAIOSFODNN7EXAMPLE`, your secret access key is `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`, and your session token is `AQoEXAMPLEH4aoAH0gNCAPyJxz4BlCFFxWNE1OP`, set these in the configuration file as:

```
AWS_ACCESS_KEY_ID: <encoded:QUtJQUlPU0ZPRE5ON0VYQU1QTEU=>
AWS_SECRET_ACCESS_KEY: <encoded:d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQ==>
AWS_SESSION_TOKEN: AQoEXAMPLEH4aoAH0gNCAPyJxz4BlCFFxWNE1OP
```
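To produce the encoded values, you can pipe each credential through the `base64` utility; for example, with the sample access key ID above:

```sh
printf '%s' 'AKIAIOSFODNN7EXAMPLE' | base64
# QUtJQUlPU0ZPRE5ON0VYQU1QTEU=
```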
Earlier versions of Tanzu Kubernetes Grid saved credentials in the cluster configuration file. Tanzu Kubernetes Grid v1.4 and later does not save AWS credentials in configuration files by default.
After you have set your AWS account credentials using either local environment variables or in a credentials file and profile, you must generate an EC2 key pair for your AWS account. Tanzu Kubernetes Grid passes the public key part of this key pair to AWS in order to authenticate within each region.
Note: AWS supports only RSA keys. The keys required by AWS are of a different format from those required by vSphere. You cannot use the same key pair for both vSphere and AWS deployments.
If you do not already have an EC2 key pair for the account and region you are using to deploy the management cluster, create one by performing the steps below:
For each region that you plan to use with Tanzu Kubernetes Grid, create a named key pair and output a `.pem` file that includes the name. For example, the following command uses the name `default` and saves the file as `default.pem`.

```sh
aws ec2 create-key-pair --key-name default --output json | jq .KeyMaterial -r > default.pem
```

To create a key pair for a region that is not the default in your profile, or set locally as `AWS_DEFAULT_REGION`, include the `--region` option.
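For example, a variant of the command above that targets `us-east-1` explicitly (the region is illustrative):

```sh
aws ec2 create-key-pair --key-name default --region us-east-1 --output json \
  | jq .KeyMaterial -r > default.pem
chmod 400 default.pem  # restrict permissions on the private key file
```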
Log in to your Amazon EC2 dashboard and go to Network & Security > Key Pairs to verify that the created key pair is registered with your account.
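Alternatively, you can verify from the CLI, assuming the key name `default` used above:

```sh
aws ec2 describe-key-pairs --key-names default
```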
For production deployments, it is strongly recommended that you enable identity management for your clusters.
If you are using Tanzu Kubernetes Grid in an environment with an external internet connection, once you have set up identity management, you are ready to deploy management clusters to AWS.