Before you can use the Tanzu Kubernetes Grid CLI or installer interface to deploy a management cluster, you must prepare the machine on which you run the Tanzu Kubernetes Grid CLI, and set up your Amazon EC2 account.

General Requirements

  • Perform the steps described in Set Up Tanzu Kubernetes Grid.
  • You have the access key and access key secret for an active Amazon Web Services account.
  • Your AWS account must have Administrator privileges.
  • Your AWS account has a sufficient quota of Virtual Private Cloud (VPC) instances. By default, each management cluster that you deploy creates one VPC and one NAT gateway. The default NAT gateway quota is 5 instances per availability zone, per account. For more information, see Amazon VPC Quotas in the AWS documentation and Resource Usage in Your Amazon Web Services Account below.
  • Install the AWS CLI.
  • Install jq.

    You use jq to process the JSON output of AWS CLI commands, for example when you create SSH key pairs. jq is also used to prepare environment or configuration variables when you deploy Tanzu Kubernetes Grid by using the CLI.
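
To confirm that the AWS CLI and jq are available before you continue, you can run a quick check such as the following. The version numbers in the output depend on your installation.

    aws --version
    jq --version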

Resource Usage in Your Amazon Web Services Account

For each cluster that you create, Tanzu Kubernetes Grid provisions a set of resources in your Amazon Web Services account.

For development clusters that are not configured for high availability, Tanzu Kubernetes Grid provisions the following resources:

  • 3 VMs, including a control plane node, a worker node (to run the cluster agent extensions) and, by default, a bastion host. If you specify additional VMs in your node pool, those are provisioned as well.
  • 4 security groups, one for the load balancer and one for each of the initial VMs.
  • 1 private subnet and 1 public subnet in the specified availability zone.
  • 1 public and 1 private route table in the specified availability zone.
  • 1 classic load balancer.
  • 1 internet gateway.
  • 1 NAT gateway in the specified availability zone.
  • By default, 2 VPC Elastic IPs, one for the NAT gateway, and one for the Elastic Load Balancer. You can optionally use existing VPCs, rather than creating new ones.

For production clusters that are configured for high availability, Tanzu Kubernetes Grid provisions the resources listed above and the following additional resources to support replication in two additional availability zones:

  • 2 additional control plane VMs
  • 2 additional private and public subnets
  • 2 additional private and public route tables
  • 2 additional NAT gateways
  • By default, 2 additional VPC Elastic IPs. You can optionally use existing VPCs, rather than creating new ones.
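
After a cluster is running, one way to see the resources that were provisioned for it is to query them by tag. The following is a minimal sketch; it assumes that the resources carry the ownership tag that Cluster API Provider AWS normally applies, sigs.k8s.io/cluster-api-provider-aws/cluster/<cluster-name>, and my-cluster is a placeholder for your cluster name.

    aws resourcegroupstaggingapi get-resources \
      --tag-filters Key=sigs.k8s.io/cluster-api-provider-aws/cluster/my-cluster,Values=owned \
      --query 'ResourceTagMappingList[].ResourceARN'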

Amazon Web Services implements a set of default limits or quotas on these types of resources, and allows you to modify the limits. Typically, the default limits are sufficient for you to start creating clusters with Tanzu Kubernetes Grid. However, as you increase the number of clusters that you run or the workloads on your clusters, you will encroach on these limits. When you reach a limit imposed by Amazon Web Services, any attempt to provision that type of resource fails. As a result, Tanzu Kubernetes Grid is unable to create a new cluster, or you might be unable to create additional deployments on your existing clusters. Therefore, regularly assess the limits that you have specified in your Amazon Web Services account, and adjust them as necessary to fit your business needs.
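
One way to review the current limits is through the Service Quotas API. The following is a minimal sketch, assuming the Service Quotas service is available to your account; it lists the VPC-related quotas for the region that is configured for the AWS CLI.

    aws service-quotas list-service-quotas --service-code vpc --output json \
      | jq -r '.Quotas[] | "\(.QuotaName): \(.Value)"'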

Virtual Private Clouds and NAT Gateway Limits

If you create a new Virtual Private Cloud (VPC) when you deploy a management cluster, Tanzu Kubernetes Grid also creates a dedicated NAT gateway for the management cluster. In this case, by default, Tanzu Kubernetes Grid creates a new VPC and NAT gateway for each Tanzu Kubernetes cluster that you deploy from that management cluster. By default, Amazon EC2 allows only 5 NAT gateways per availability zone, per account. Consequently, if you always create a new VPC for each cluster, you can create at most 5 clusters in a single availability zone. If you already have 5 NAT gateways in use, Tanzu Kubernetes Grid is unable to provision the necessary resources when you attempt to create a new cluster. To create more than 5 clusters in a given availability zone, you must share existing VPCs, and therefore their NAT gateways, between multiple clusters.
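
Before you deploy, you can check how many NAT gateways already exist in the target region. The following is a quick sketch that uses the AWS CLI and jq; each line of output shows a NAT gateway ID, its VPC, and the subnet, and therefore the availability zone, that it occupies.

    aws ec2 describe-nat-gateways --output json \
      | jq -r '.NatGateways[] | select(.State == "available") | "\(.NatGatewayId) \(.VpcId) \(.SubnetId)"'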

There are 3 possible scenarios regarding VPCs and NAT gateway usage when you deploy management clusters and Tanzu Kubernetes clusters.

  • Create a new VPC and NAT gateway for every management cluster and Tanzu Kubernetes cluster

    If you deploy a management cluster and use the option to create a new VPC, and if you make no modifications to the configuration when you deploy Tanzu Kubernetes clusters from that management cluster, the deployment of each of the Tanzu Kubernetes clusters will also create a VPC and a NAT gateway. In this scenario, you can deploy one management cluster and up to 4 Tanzu Kubernetes clusters, due to the limit of 5 NAT gateways per availability zone.

  • Reuse a VPC and NAT gateway that already exist in your availability zone

    If a VPC already exists in the availability zone in which you are deploying a management cluster, for example a VPC that you created manually or by using tools such as CloudFormation or Terraform, you can specify that the management cluster should use this VPC. In this case, all of the Tanzu Kubernetes clusters that you deploy from that management cluster will also use the specified VPC and its NAT gateway. Because all of the Tanzu Kubernetes clusters share a single NAT gateway, the limit of 5 NAT gateways per availability zone is not exceeded and you can deploy more than 5 clusters in the availability zone.

    An existing VPC must be configured with the following networking, which you can verify with the AWS CLI as shown in the sketch after this list of scenarios:

    • Two subnets
    • One NAT gateway
    • One internet gateway and corresponding routing tables.
  • Create a new VPC and NAT gateway for the management cluster and deploy Tanzu Kubernetes clusters that share that VPC and NAT gateway

    If you are starting with an empty availability zone, you can deploy a management cluster and use the option to create a new VPC. If you want the Tanzu Kubernetes clusters to share a VPC that Tanzu Kubernetes Grid created, you must modify the cluster configuration when you deploy Tanzu Kubernetes clusters from this management cluster.
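
If you plan to reuse an existing VPC, as in the second scenario, the following AWS CLI sketch is one way to confirm that the VPC has the required subnets, NAT gateway, and internet gateway. The ID vpc-EXAMPLE is a placeholder for the ID of your existing VPC.

    aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-EXAMPLE \
      --query 'Subnets[].{Id:SubnetId,AZ:AvailabilityZone,Public:MapPublicIpOnLaunch}'
    aws ec2 describe-nat-gateways --filter Name=vpc-id,Values=vpc-EXAMPLE \
      --query 'NatGateways[].{Id:NatGatewayId,State:State}'
    aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=vpc-EXAMPLE \
      --query 'InternetGateways[].InternetGatewayId'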

For information about how to deploy management clusters that either create or reuse a VPC, see Deploy Management Clusters to Amazon EC2 with the Installer Interface and Deploy Management Clusters to Amazon EC2 with the CLI.

For information about how to deploy Tanzu Kubernetes clusters that share a VPC that Tanzu Kubernetes Grid created when you deployed the management cluster, see Deploy a Cluster that Shares a VPC with the Management Cluster.

Install the clusterawsadm Utility and Set Up a CloudFormation Stack

Tanzu Kubernetes Grid uses Cluster API Provider AWS to deploy clusters to Amazon EC2. Cluster API Provider AWS requires the clusterawsadm command line utility to be present on your system.

The clusterawsadm command line utility assists with identity and access management (IAM) for Cluster API Provider AWS.

The clusterawsadm utility takes the credentials that you set as environment variables and uses them to create a CloudFormation stack in your AWS account with the correct IAM resources. Tanzu Kubernetes Grid uses the resources of the CloudFormation stack to create management and Tanzu Kubernetes clusters. The IAM resources are added to the control plane and node roles when they are created during cluster deployment. For more information about CloudFormation stacks, see Working with Stacks in the AWS documentation.

NOTE: This procedure assumes that you are installing Tanzu Kubernetes Grid 1.1.3. In version 1.1.3, the version of clusterawsadm is v0.5.4-vmware.2. In 1.1.2 it is v0.5.4-vmware.1 and in 1.1.0 it is v0.5.3-vmware.1.

  1. Create the following environment variables for your AWS account.

    • Your AWS access key:

      export AWS_ACCESS_KEY_ID=aws_access_key

    • Your AWS access key secret:

      export AWS_SECRET_ACCESS_KEY=aws_access_key_secret

    • If you use multi-factor authentication, your AWS session token.

      export AWS_SESSION_TOKEN=aws_session_token

    • The AWS region in which to deploy the cluster.

      For example, set the region to us-west-2.

      export AWS_REGION=us-west-2

      For the full list of AWS regions, see AWS Service Endpoints. In Tanzu Kubernetes Grid 1.1.2 and later, in addition to the regular AWS regions, you can also specify the us-gov-east and us-gov-west regions in AWS GovCloud.

  2. Go to https://www.vmware.com/go/get-tkg and log in with your My VMware credentials.
  3. Under Product Downloads, click Go to Downloads.
  4. Scroll to the clusterawsadm Account Preparation Tool entries and click the Download Now button for the executable for your platform.

    • Linux: ClusterAdmin AWS v0.5.4 Linux
    • Mac OS: ClusterAdmin AWS v0.5.4 Mac
    • Windows: ClusterAdmin AWS v0.5.4 Windows
  5. Use either the gunzip command or the extraction tool of your choice to unpack the binary that corresponds to the OS of your bootstrap environment:

    gunzip clusterawsadm-darwin-amd64-v0.5.4-vmware.2.gz
    gunzip clusterawsadm-linux-amd64-v0.5.4-vmware.2.gz
    gunzip clusterawsadm-windows-amd64-v0.5.4-vmware.2.gz

    The resulting file is clusterawsadm-darwin-amd64-v0.5.4-vmware.2, clusterawsadm-linux-amd64-v0.5.4-vmware.2, or clusterawsadm-windows-amd64-v0.5.4-vmware.2, depending on your platform.

  6. Rename the binary for your platform to clusterawsadm, make sure that it is executable, and add it to your Path.

    • Mac OS and Linux platforms:

      1. Move the binary into the /usr/local/bin folder and rename it to clusterawsadm.
        • Linux:
          mv ./clusterawsadm-linux-amd64-v0.5.4-vmware.2 /usr/local/bin/clusterawsadm
        • Mac OS:
          mv ./clusterawsadm-darwin-amd64-v0.5.4-vmware.2 /usr/local/bin/clusterawsadm
      2. Make the file executable.
        chmod +x /usr/local/bin/clusterawsadm
    • Windows platforms:

      1. Create a new Program Files\clusterawsadm folder and copy the clusterawsadm-windows-amd64-v0.5.4-vmware.2 binary into it.
      2. Rename clusterawsadm-windows-amd64-v0.5.4-vmware.2 to clusterawsadm.exe.
      3. Right-click the clusterawsadm folder, select Properties > Security, and make sure that your user account has the Full Control permission.
      4. Use Windows Search to search for env.
      5. Select Edit the system environment variables and click the Environment Variables button.
      6. Select the Path row under System variables, and click Edit.
      7. Click New to add a new row and enter the path to the clusterawsadm binary.
  7. Run the following clusterawsadm command to create a CloudFormation stack.
    clusterawsadm alpha bootstrap create-stack

You only need to run clusterawsadm once per account. The CloudFormation stack that is created is not specific to any region.
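
To confirm that the stack was created, you can query CloudFormation from the AWS CLI. This is a sketch; cluster-api-provider-aws-sigs-k8s-io is the stack name that clusterawsadm typically creates, so adjust it if your stack is named differently. A status of CREATE_COMPLETE indicates that the stack is ready.

    aws cloudformation describe-stacks \
      --stack-name cluster-api-provider-aws-sigs-k8s-io \
      --query 'Stacks[0].StackStatus'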

Register an SSH Public Key with Your AWS Account

For Tanzu Kubernetes Grid VMs to launch on Amazon EC2, you must provide the public key part of an SSH key pair to Amazon EC2 for every region in which you intend to deploy a management cluster.

NOTE: AWS only supports RSA keys. The keys required by AWS are of a different format to those required by vSphere. You cannot use the same key pair for both vSphere and AWS deployments.

If you do not already have an SSH key pair, you can use the AWS CLI to create one, by performing the steps below.

  1. Create a key pair named default and save it as default.pem.

    aws ec2 create-key-pair --key-name default --output json | jq .KeyMaterial -r > default.pem
    
  2. Log in to your Amazon EC2 dashboard, and go to Network & Security > Key Pairs to verify that the created key pair is registered with your account.
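
Alternatively, you can verify the key pair from the command line with the AWS CLI:

    aws ec2 describe-key-pairs --key-names default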

Set Your AWS Credentials as Environment Variables for Use by Cluster API

After you have created the CloudFormation stack, you must set your AWS credentials as environment variables. Cluster API Provider AWS needs these variables so that it can write the credentials into cluster manifests when it creates clusters. You must perform the steps in Install the clusterawsadm Utility and Set Up a CloudFormation Stack before you perform these steps.

  1. Set a new environment variable for your AWS credentials.
    export AWS_CREDENTIALS=$(aws iam create-access-key --user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io --output json)
  2. Replace the environment variable that you created for your AWS access key ID.
    export AWS_ACCESS_KEY_ID=$(echo $AWS_CREDENTIALS | jq .AccessKey.AccessKeyId -r)
  3. Replace the environment variable that you created for your secret access key.
    export AWS_SECRET_ACCESS_KEY=$(echo $AWS_CREDENTIALS | jq .AccessKey.SecretAccessKey -r)
  4. Set a new environment variable to encode your AWS credentials.
    export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm alpha bootstrap encode-aws-credentials)
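
Before you continue, you can confirm that the variables are populated. The following sketch prints the names of the AWS-related variables that are set in your shell, without echoing their values:

    env | grep -E '^AWS_' | cut -d= -f1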

What to Do Next

Your environment is now ready for you to deploy the Tanzu Kubernetes Grid management cluster to Amazon EC2.