Prepare to Deploy Management Clusters to AWS

This topic explains how to prepare Amazon Web Services (AWS) for running Tanzu Kubernetes Grid.

Before you can use the Tanzu CLI or installer interface to deploy a management cluster, you must prepare the bootstrap machine on which you run the Tanzu CLI and set up your Amazon Web Services (AWS) account.

If you are installing Tanzu Kubernetes Grid on VMware Cloud on AWS, you are installing to a vSphere environment. See Preparing VMware Cloud on AWS in Prepare a vSphere Management as a Service Infrastructure to prepare your environment, and Prepare to Deploy Management Clusters to vSphere to deploy management clusters.

General Requirements

  • The Tanzu CLI installed locally. See Install the Tanzu CLI and Other Tools.
  • You have an active AWS account with:
    • Access key and access key secret. For more information about access keys, see AWS Account and Access Keys in the AWS documentation.
    • Permissions described in Required AWS Permissions. These permissions enable Tanzu Kubernetes Grid to create and manage clusters on AWS.
  • Your AWS account has sufficient resource quotas for the following. For more information, see Amazon VPC Quotas in the AWS documentation and Resource Usage in Your AWS Account below:
    • Virtual Private Cloud (VPC) instances. By default, each management cluster that you deploy creates one VPC and one or three NAT gateways. The default NAT gateway quota is 5 instances per availability zone, per account.
    • Elastic IP (EIP) addresses. The default EIP quota is 5 EIP addresses per region, per account.
  • Traffic is allowed between your local bootstrap machine and port 6443 of all VMs in the clusters you create. Port 6443 is where the Kubernetes API is exposed by default. To change this port for a management or a workload cluster, set the CLUSTER_API_SERVER_PORT variable when deploying the cluster.
  • Traffic is allowed between your local bootstrap machine and the image repositories listed in the management cluster Bill of Materials (BOM) file, over port 443, for TCP.*

    • The BOM file is under ~/.config/tanzu/tkg/bom/ and its name includes the Tanzu Kubernetes Grid version, for example tkg-bom-1.6.1+vmware.1.yaml for v1.6.1.
    • Run a DNS lookup on all imageRepository values, for example projects.registry.vmware.com/tkg, to find the CNAMEs that you must allow access to. A connectivity check sketch follows this list.
  • The AWS CLI installed locally.

  • jq installed locally.

    The AWS CLI guide uses jq to process JSON when creating SSH key pairs. It is also used to prepare the environment or configuration variables when you deploy Tanzu Kubernetes Grid by using the CLI.

*Or see Prepare an Internet-Restricted Environment for installing without external network access.
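As a quick preflight check, you can confirm DNS resolution and port 443 reachability for the image repositories from the bootstrap machine. This is a minimal sketch; the hostname shown is the default imageRepository value, so substitute the values from your own BOM file:

# Resolve the repository hostname and list any CNAMEs to allow.
nslookup projects.registry.vmware.com
dig +short CNAME projects.registry.vmware.com

# Confirm that TCP 443 is reachable from the bootstrap machine.
curl -sSf -o /dev/null https://projects.registry.vmware.com && echo "port 443 reachable"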

Resource Usage in Your AWS Account

For each cluster that you create, Tanzu Kubernetes Grid provisions a set of resources in your AWS account.

| Resource | Tanzu Kubernetes Grid Uses |
| --- | --- |
| VPC | 1 |
| Elastic IP Addresses | 1 per Availability Zone (AZ)* |
| Subnets | 2 per AZ for internet-facing, 1 per AZ for internal* |
| EC2 Security Groups (VPC) | 4 |
| Internet Gateways | 1 |
| NAT Gateways | 1 per AZ for default* deployments |
| Control plane EC2 instances | 1 per AZ* |

*Development clusters use one Availability Zone, and Production clusters use 3.

AWS implements default limits or quotas on these resources, and allows you to modify the limits. The default limits are typically sufficient to let you start creating and using clusters, but as you increase the cluster or workload count, you may exceed these AWS limits, which prevents Tanzu Kubernetes Grid from creating new clusters or deploying new workloads.

VMware recommends that you regularly assess the limits you have specified in your AWS account and request service quota increases as necessary to fit your business needs.

The most relevant service quotas are:

| Service Code | Quota Name | Quota Code |
| --- | --- | --- |
| vpc | Internet gateways per region | L-A4707A72 |
| vpc | NAT gateways per Availability Zone | L-FE5A380F |
| vpc | VPCs per Region | L-F678F1CE |
| ec2 | Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances | L-1216C47A |
| ec2 | EC2-VPC Elastic IPs | L-0263D0A3 |
| elasticloadbalancing | Classic Load Balancers per Region | L-E9E9831D |
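If you want to inspect or raise these quotas from the bootstrap machine, the Service Quotas API uses the service and quota codes listed above. A minimal sketch, assuming the AWS CLI is configured with sufficient permissions; the desired value is only an example:

# Check the current "NAT gateways per Availability Zone" quota.
aws service-quotas get-service-quota --service-code vpc --quota-code L-FE5A380F

# Request an increase if your planned cluster count needs it.
aws service-quotas request-service-quota-increase --service-code vpc --quota-code L-FE5A380F --desired-value 10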

You can optionally share VPCs rather than creating new ones, such as a workload cluster sharing a VPC with its management cluster.

For information about the sizes of cluster node instances, see Amazon EC2 Instance Types in the AWS documentation.

Amazon VPCs (Virtual Private Clouds) and NAT Gateway Limits

If you create a new Virtual Private Cloud (VPC) when you deploy a development management cluster, Tanzu Kubernetes Grid creates a dedicated NAT gateway for the management cluster. If you deploy a production management cluster, Tanzu Kubernetes Grid creates three NAT gateways, one in each availability zone. In both cases, by default, Tanzu Kubernetes Grid also creates a new VPC and one or three NAT gateways for each workload cluster that you deploy from that management cluster.

By default, AWS allows five NAT gateways per availability zone per account. Consequently, if you always create a new VPC for each cluster, you can create only five development clusters in a single availability zone. If you already have five NAT gateways in use, Tanzu Kubernetes Grid cannot provision the necessary resources when you attempt to create a new cluster. To create more than five development clusters in a given availability zone without changing the default quotas, you must share existing VPCs, and therefore their NAT gateways, between multiple clusters.

There are three possible scenarios regarding VPCs and NAT gateway usage when you deploy management clusters and Tanzu Kubernetes clusters.

  • Create a new VPC and NAT gateway(s) for every management cluster and workload cluster

    If you deploy a management cluster and use the option to create a new VPC and if you make no modifications to the configuration when you deploy workload clusters from that management cluster, the deployment of each of the Tanzu Kubernetes clusters also creates a VPC and one or three NAT gateways. In this scenario, you can deploy one development management cluster and up to 4 development workload clusters, due to the default limit of 5 NAT gateways per availability zone.

  • Reuse a VPC and NAT gateway(s) that already exist in your availability zone(s)

    If a VPC already exists in the availability zone(s) in which you are deploying a management cluster, for example a VPC that you created manually or by using tools such as AWS CloudFormation or Terraform, you can specify that the management cluster should use this VPC. In this case, all of the workload clusters that you deploy from that management cluster also use the specified VPC and its NAT gateway(s).

    An existing VPC must be configured with the following networking. A verification sketch using the AWS CLI follows this list:

    • Two subnets for development clusters or six subnets for production clusters
    • One NAT gateway for development clusters or three NAT gateways for production clusters
    • One internet gateway and corresponding routing tables
    • (Optional) Security groups configured with appropriate rules
  • Create a new VPC and NAT gateway(s) for the management cluster and deploy workload clusters that share that VPC and NAT gateway(s)

    If you are starting with an empty availability zone(s), you can deploy a management cluster and use the option to create a new VPC. If you want the workload clusters to share a VPC that Tanzu Kubernetes Grid created, you must modify the cluster configuration when you deploy workload clusters from this management cluster.
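To confirm that an existing VPC meets the requirements listed above before you point a management cluster at it, you can list its subnets, NAT gateways, and internet gateway with the AWS CLI. This is a sketch only; the VPC ID is a placeholder:

# Substitute the ID of the VPC you plan to reuse.
VPC_ID=vpc-0123456789abcdef0

aws ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID --query 'Subnets[].SubnetId'
aws ec2 describe-nat-gateways --filter Name=vpc-id,Values=$VPC_ID --query 'NatGateways[].NatGatewayId'
aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=$VPC_ID --query 'InternetGateways[].InternetGatewayId'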

For information about how to deploy management clusters that either create or reuse a VPC, see Deploy Management Clusters with the Installer Interface and Deploy Management Clusters from a Configuration File.

For information about how to deploy workload clusters that share a VPC that Tanzu Kubernetes Grid created when you deployed the management cluster, see Deploy a Cluster that Shares a VPC with the Management Cluster.

Amazon VPCs without NAT gateways

Tanzu Kubernetes Grid can be deployed to VPCs that do not have NAT gateways attached, as long as the following conditions are met:

  • You follow the workflow for deployment into an existing VPC, and either:
    • The subnets into which workload clusters are deployed have route tables with 0.0.0.0/0 routes to the internet, or
    • You follow the workflow for internet-restricted deployment.

When deploying into an existing VPC, Tanzu Kubernetes Grid does not check for the existence of specific routes in routing tables; therefore, internet connectivity through VPC NAT gateways, EC2 instances or appliances, internet gateways, transit gateways, or AWS Direct Connect is supported.
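To check whether the subnets you plan to use already have default routes to the internet, you can query their route tables. A minimal sketch; the VPC ID variable is a placeholder, as in the earlier example:

# List routes whose destination is 0.0.0.0/0 in the VPC's route tables.
aws ec2 describe-route-tables --filters Name=vpc-id,Values=$VPC_ID \
  --query 'RouteTables[].Routes[?DestinationCidrBlock==`0.0.0.0/0`]'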

Management Cluster Sizing Examples

The table below describes sizing examples for management clusters on AWS. Use this data as guidance to ensure your management cluster is scaled to handle the number of workload clusters that you plan to deploy. The Workload cluster VM size column lists the VM sizes that were used for the examples in the Can manage… column.

| Management cluster plan | Management cluster VM size | Can manage… | Workload cluster VM size |
| --- | --- | --- | --- |
| 3 control plane nodes and 3 worker nodes | Control plane nodes: m4.large (CPU: 2; memory: 8 GB); Worker nodes: m4.large (CPU: 2; memory: 8 GB) | Examples: 5 workload clusters, each cluster deployed with 3 control plane and 200 worker nodes; or 10 workload clusters, each cluster deployed with 3 control plane and 50 worker nodes | Control plane nodes: c4.large (CPU: 2; memory: 3.75 GB); Worker nodes: c4.large (CPU: 2; memory: 3.75 GB) |
| 3 control plane nodes and 3 worker nodes | Control plane nodes: m4.large (CPU: 2; memory: 8 GB); Worker nodes: m4.large (CPU: 2; memory: 8 GB) | Example: One workload cluster, deployed with 3 control plane and 500 worker nodes | Control plane nodes: c5.4xlarge (CPU: 16; memory: 32 GB); Worker nodes: c5.xlarge (CPU: 4; memory: 8 GB) |
| 3 control plane nodes and 3 worker nodes | Control plane nodes: m4.xlarge (CPU: 4; memory: 16 GB); Worker nodes: m4.xlarge (CPU: 4; memory: 16 GB) | Example: 200 workload clusters, each cluster deployed with 3 control plane and 5 worker nodes | Control plane nodes: c4.large (CPU: 2; memory: 3.75 GB); Worker nodes: c4.large (CPU: 2; memory: 3.75 GB) |

Required AWS Permissions

The following sections list the permissions that Tanzu Kubernetes Grid needs to deploy and manage clusters on AWS:

  • Permissions Set by Tanzu Kubernetes Grid: Tanzu Kubernetes Grid sets these permissions automatically, when you run the tanzu mc permissions aws set command or select the Automate creation of AWS CloudFormation Stack checkbox in the installer interface.
    • The Tanzu CLI alias mc is short for management-cluster.
  • Permissions Set by You: You add these permissions manually.

Permissions Set by Tanzu Kubernetes Grid

To enable Tanzu Kubernetes Grid to deploy clusters to AWS, you must create a CloudFormation stack, tkg-cloud-vmware-com, in your AWS account. This CloudFormation stack defines the identity and access management (IAM) resources and permissions that Tanzu Kubernetes Grid needs to deploy and manage clusters on AWS.

To create the stack, after completing the steps in this topic, you run the tanzu mc permissions aws set command before deploying the cluster or select the Automate creation of AWS CloudFormation Stack checkbox in the installer interface, as described in the Create IAM Resources section of Deploy Management Clusters from a Configuration File or Deploy Management Clusters with the Installer Interface. You need to perform this operation only once per AWS account, regardless of whether you intend to use a single or multiple AWS regions for your Tanzu Kubernetes Grid environment.

Running tanzu mc permissions aws set or selecting Automate creation of AWS CloudFormation Stack creates the following IAM profiles, roles, and policies. The user whose credentials you provide to Tanzu Kubernetes Grid when creating the stack must have administrator permissions to create these IAM resources in your AWS account.

  • Profiles / AWS::IAM::InstanceProfile:

    • Control Plane (control-plane.tkg.cloud.vmware.com): used by EC2 instances running Kubernetes control plane components.
    • Nodes (nodes.tkg.cloud.vmware.com): used by all non-control-plane EC2 instances in Tanzu Kubernetes Grid clusters.
    • Controllers (controllers.tkg.cloud.vmware.com): can be used for bootstrapping Tanzu Kubernetes Grid from a jump box in EC2 with the instance profile attached.
  • Roles / AWS::IAM::Role: these map 1:1 to the AWS IAM instance profiles above:

    • Control Plane (control-plane.tkg.cloud.vmware.com)
    • Nodes (nodes.tkg.cloud.vmware.com)
    • Controllers (controllers.tkg.cloud.vmware.com)
  • Policies / AWS::IAM::ManagedPolicy:

    • arn:aws:iam::YOUR-ACCOUNT-ID:policy/control-plane.tkg.cloud.vmware.com: attached to the Control Plane IAM role above.
    • arn:aws:iam::YOUR-ACCOUNT-ID:policy/nodes.tkg.cloud.vmware.com: attached to the Control Plane and Nodes IAM roles above.
    • arn:aws:iam::YOUR-ACCOUNT-ID:policy/controllers.tkg.cloud.vmware.com: attached to the Control Plane and Controllers IAM roles above.

After Tanzu Kubernetes Grid creates the CloudFormation stack in your AWS account, you can retrieve its template by navigating to CloudFormation > Stacks in the AWS console. For more information about CloudFormation stacks, see Working with Stacks in the AWS documentation.

Alternatively, rather than running tanzu mc permissions aws set or selecting Automate creation of AWS CloudFormation Stack, you can retrieve the template that Tanzu Kubernetes Grid uses to create the above IAM resources and create the tkg-cloud-vmware-com stack directly in the AWS account. To retrieve the template, run tanzu mc permissions aws generate-cloudformation-template.
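For example, the two approaches look roughly like the following on the bootstrap machine. This is a sketch, assuming that generate-cloudformation-template writes the template to standard output; the local file name is arbitrary:

# Option A: let Tanzu Kubernetes Grid create the tkg-cloud-vmware-com stack.
tanzu mc permissions aws set

# Option B: retrieve the template and create the stack yourself.
tanzu mc permissions aws generate-cloudformation-template > tkg-cloudformation.yaml
aws cloudformation create-stack --stack-name tkg-cloud-vmware-com \
  --template-body file://tkg-cloudformation.yaml \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM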

Permissions Required by Tanzu Mission Control

The IAM permissions below are required only if you intend to register your management cluster with Tanzu Mission Control. These permissions are added to the nodes.tkg.cloud.vmware.com IAM role automatically when you run the tanzu mc permissions aws set command.

{
  "Action": [
    "servicequotas:ListServiceQuotas",
    "ec2:DescribeKeyPairs",
    "ec2:DescribeInstanceTypeOfferings",
    "ec2:DescribeInstanceTypes",
    "ec2:DescribeAvailabiilityZones",
    "ec2:DescribeRegions",
    "ec2:DescribeSubnets",
    "ec2:DescribeRouteTables",
    "ec2:DescribeVpcs",
    "ec2:DescribeNatGateways",
    "ec2:DescribeAddresses",
    "elasticloadbalancing:DescribeLoadBalancers"
  ],
  "Resource": [
    "*"
  ],
  "Effect": "Allow"
}

You can deactivate these permissions by setting DISABLE_TMC_CLOUD_PERMISSIONS to true before running tanzu mc permissions aws set. For more information, see Register Your Management Cluster with Tanzu Mission Control.
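For example, to create the CloudFormation stack without the Tanzu Mission Control permissions, a minimal sketch on the bootstrap machine looks like this:

# Skip adding the Tanzu Mission Control permissions to nodes.tkg.cloud.vmware.com.
export DISABLE_TMC_CLOUD_PERMISSIONS=true
tanzu mc permissions aws set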

Permissions Set by You

You must set the following permissions manually, once per AWS account.

Installer Interface

If you intend to deploy the management cluster from the installer interface, the user whose credentials you provide to Tanzu Kubernetes Grid when deploying the cluster must have the "ec2:DescribeInstanceTypeOfferings" and "ec2:DescribeInstanceTypes" permissions. If your user does not currently have these permissions, you can create a custom policy that includes the permissions and attach it to your user.
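If you need to add these two permissions, one option is to create a small customer managed policy and attach it to your user with the AWS CLI. The policy and user names below are hypothetical examples:

# Create a policy that grants only the two Describe permissions.
aws iam create-policy --policy-name tkg-installer-describe \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["ec2:DescribeInstanceTypeOfferings","ec2:DescribeInstanceTypes"],"Resource":"*"}]}'

# Attach it to the user whose credentials you provide to the installer.
aws iam attach-user-policy --user-name tkg-user \
  --policy-arn arn:aws:iam::YOUR-ACCOUNT-ID:policy/tkg-installer-describe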

Configure AWS Account Credentials

To enable Tanzu Kubernetes Grid to create and manage VMs on Amazon EC2, you must provide it with:

  • Credentials for the AWS account
  • An SSH key pair registered with the account for every AWS region in which you plan to deploy management clusters

You must set your account credentials before you can create an SSH key pair for the region where you plan to deploy Tanzu Kubernetes Grid clusters.

You have several options for configuring the AWS account credentials used to access EC2. The subsections below explain the different options.

VMware recommends the Credential Profiles option, managed with the aws configure command. These profiles save to a local shared configuration file, typically in ~/.aws/config.

In descending order of precedence when more than one is used, possible credential configuration options are:

  1. Workload cluster configuration file variables (only for workload cluster creation). See Cluster Configuration File, below.

  2. Local, static environment variables. See Local Environment Variables, below.

  3. (Recommended) Credential Profiles, which are updated when you run aws configure. These save to a file located at ~/.aws/config on Linux or macOS, or at C:\Users\USERNAME\.aws\config on Windows; the file can also be named credentials. See Credential Profiles, below.

  4. Instance profile credentials. You can associate an IAM role with each of your Amazon Elastic Compute Cloud (Amazon EC2) instances. Temporary credentials for that role are then available to code running in the instance. The credentials are delivered through the Amazon EC2 metadata service. For more information, see IAM Roles for AWS in the Amazon EC2 User Guide for Linux Instances and Using Instance Profiles in the IAM User Guide.

Local Environment Variables

One option for configuring AWS credentials is to set local environment variables on your bootstrap machine. To use local environment variables, set the following environment variables for your AWS account:

  • export AWS_ACCESS_KEY_ID=aws_access_key, where aws_access_key is your AWS access key.

  • export AWS_SECRET_ACCESS_KEY=aws_access_key_secret, where aws_access_key_secret is your AWS access key secret.

  • export AWS_SESSION_TOKEN=aws_session_token, where aws_session_token is the AWS session token granted to your account. You only need to specify this variable if you are required to use a temporary access key. For more information about using temporary access keys, see Understanding and getting your AWS credentials.

  • export AWS_REGION=aws_region, where aws_region is the AWS region in which you intend to deploy the cluster. For example, us-west-2.

    For the full list of AWS regions, see AWS Service Endpoints. In addition to the regular AWS regions, you can also specify the us-gov-east and us-gov-west regions in AWS GovCloud.

Tanzu Kubernetes Grid supports the following AWS CLI environment variables:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN
  • AWS_SHARED_CREDENTIALS_FILE
  • AWS_CONFIG_FILE
  • AWS_REGION
  • AWS_PROFILE

Further information on these variables is available in the AWS Command Line Interface User Guide.

Credential Profiles

The recommended method for working with Tanzu Kubernetes Grid is to use the aws configure command to store AWS credentials in a local credential or configuration file. An AWS configuration file supports a range of authentication mechanisms, from static credentials to SSO via external credential helpers.

An AWS credential file can serve multiple accounts, including both a default profile and additional named profiles. The credential files and profiles are applied after local environment variables as part of the credential precedence above.

To set up credentials files and profiles for your AWS account on the bootstrap machine, you can use the aws configure CLI command.
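For example, to create a named profile and tell both the AWS CLI and Tanzu Kubernetes Grid to use it, a minimal sketch follows. The profile name tkg is only an example:

# Prompts for the access key ID, secret access key, default region, and output format.
aws configure --profile tkg

# Use this profile for subsequent AWS CLI and Tanzu CLI operations.
export AWS_PROFILE=tkg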

To customize which AWS credential files and profiles to use, you can set the following environment variables:

  • export AWS_PROFILE=profile_name where profile_name is the profile name that contains the AWS access key you want to use. If you do not specify a value for this variable, the profile name default is used. For more information about using named profiles, see Named profiles in AWS documentation.

  • export AWS_SHARED_CREDENTIALS_FILE=path_to_credentials_file where path_to_credentials_file is the location and name of the credentials file that contains your AWS access key information. If you do not define this environment variable, the default location and filename is $HOME/.aws/credentials.

  • export AWS_CONFIG_FILE=path_to_config_file where path_to_config_file is the location and name of the config file that contains profile configuration. If you do not define this environment variable, the default location and filename is $HOME/.aws/config. If you specify explicit credential sources or use external credential processes, use this file.

Note

Any named profiles that you create in your AWS credentials or config files appear as selectable options in the AWS Credential Profile drop-down in the Tanzu Kubernetes Grid Installer UI for AWS.

For more information about working with AWS credentials and the default AWS credential provider chain, see Best practices for managing AWS access keys in the AWS documentation.

Cluster Configuration File

You can specify AWS credentials in the configuration file used to create a cluster, by setting the following variables:

  • AWS_ACCESS_KEY_ID (base 64 encoded)
  • AWS_SECRET_ACCESS_KEY (base 64 encoded)
  • AWS_SESSION_TOKEN (optional)

AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY must be passed in the form <encoded:(base64 encoded value)>. For example, if your AWS access key ID is AKIAIOSFODNN7EXAMPLE, your secret access key is wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY, and your session token is AQoEXAMPLEH4aoAH0gNCAPyJxz4BlCFFxWNE1OP, set the following in the configuration file:

AWS_ACCESS_KEY_ID: <encoded:QUtJQUlPU0ZPRE5ON0VYQU1QTEU=>
AWS_SECRET_ACCESS_KEY: <encoded:d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQ==>
AWS_SESSION_TOKEN: AQoEXAMPLEH4aoAH0gNCAPyJxz4BlCFFxWNE1OP
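To produce the <encoded:...> values, you can base64-encode the plain values on the bootstrap machine. A minimal sketch, using AWS's documentation example keys rather than real credentials:

# -n prevents a trailing newline from being included in the encoded value.
echo -n 'AKIAIOSFODNN7EXAMPLE' | base64
echo -n 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' | base64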

Earlier versions of Tanzu Kubernetes Grid saved the credentials in a cluster configuration file. Tanzu Kubernetes Grid v1.4 and later do not save AWS credentials in configuration files by default.

Register an EC2 Key Pair

After you have set your AWS account credentials using either local environment variables or in a credentials file and profile, you must generate an EC2 key pair for your AWS account. Tanzu Kubernetes Grid passes the public key part of this key pair to AWS in order to authenticate within each region.

Note

AWS supports only RSA keys. The keys required by AWS are of a different format from those required by vSphere. You cannot use the same key pair for both vSphere and AWS deployments.

If you do not already have an EC2 key pair for the account and region you are using to deploy the management cluster, create one by performing the steps below:

  1. For each region that you plan to use with Tanzu Kubernetes Grid, create a named key pair, and output a .pem file that includes the name. For example, the following command uses default and saves the file as default.pem.

    aws ec2 create-key-pair --key-name default --output json | jq .KeyMaterial -r > default.pem
    

    To create a key pair for a region that is not the default in your profile, or set locally as AWS_DEFAULT_REGION, include the --region option, as shown in the sketch after these steps.

  2. Log in to your Amazon EC2 dashboard and go to Network & Security > Key Pairs to verify that the created key pair is registered with your account.
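For example, a minimal sketch that creates the key pair in an explicit region and then verifies it from the CLI rather than the console; the region us-east-1 is only an example:

# Create the key pair in a specific region and save the private key locally.
aws ec2 create-key-pair --key-name default --region us-east-1 --output json | jq .KeyMaterial -r > default.pem

# Confirm that the key pair is registered in that region.
aws ec2 describe-key-pairs --key-names default --region us-east-1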

What to Do Next

For production deployments, VMware strongly recommends that you enable identity management for your clusters.

If you are using Tanzu Kubernetes Grid in an environment with an external internet connection, once you have set up identity management, you are ready to deploy management clusters to AWS.
