This topic describes how to use the Tanzu Kubernetes Grid CLI to deploy a management cluster to Amazon Elastic Compute Cloud (Amazon EC2).

Prerequisites

  • Make sure that you have met all of the requirements listed in Set Up Tanzu Kubernetes Grid and Prepare to Deploy the Management Cluster to Amazon EC2.
  • It is strongly recommended that you use the Tanzu Kubernetes Grid installer interface, rather than the CLI, to deploy your first management cluster to Amazon EC2. When you deploy a management cluster by using the installer interface, it populates the config.yaml file for the management cluster with the required parameters. You can use the resulting config.yaml as a model for future deployments from the CLI.

    If this is the first time that you are running Tanzu Kubernetes Grid commands on this machine, and you have not already deployed a management cluster to Amazon EC2 by using the Tanzu Kubernetes Grid installer interface, open a terminal and run the tkg get management-cluster command.

    tkg get management-cluster
    

    Running a tkg command for the first time creates the $HOME/.tkg folder, which contains the management cluster configuration file config.yaml.

Procedure

IMPORTANT:

Do not run multiple management cluster deployments on the same bootstrap environment machine at the same time. Do not change context or edit the kubeconfig file while Tanzu Kubernetes Grid operations are running.

Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.

  1. Open the .tkg/config.yaml file in a text editor.

    If you have already deployed a management cluster to Amazon EC2 from the installer interface, you will see variables that describe your previous deployment.

    If you have not already deployed a management cluster to Amazon EC2 from the installer interface, copy and paste the following rows into the configuration file, after the end of the images section.

    AWS_REGION: 
    AWS_NODE_AZ:
    AWS_PUBLIC_NODE_CIDR:
    AWS_PRIVATE_NODE_CIDR:
    AWS_VPC_CIDR:
    CLUSTER_CIDR:
    AWS_SSH_KEY_NAME:
    CONTROL_PLANE_MACHINE_TYPE:
    NODE_MACHINE_TYPE:
    

    The table below describes all of the variables that you must set for deployment to Amazon EC2. Leave a space between the colon (:) and the variable value. For example:

    AWS_NODE_AZ: us-west-2a
    

    IMPORTANT: Any environment variables that have the same key as variables that you set in config.yaml override the values in config.yaml. You must unset those environment variables before you deploy the management cluster from the CLI.
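    Before you deploy from the CLI, you can check your shell for variables that would override config.yaml. The following is a minimal sketch; AWS_REGION is used only as an example of a variable that might conflict.

```shell
# List any AWS_* environment variables set in the current shell.
# Any of these override the corresponding values in config.yaml.
env | grep '^AWS_' || echo "No AWS_* environment variables are set."

# Unset a conflicting variable before deploying from the CLI.
# AWS_REGION is just an example; unset whichever variables the
# check above reported.
unset AWS_REGION
```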

    Option | Value | Description
    AWS_REGION | us-west-2, ap-northeast-2, etc. | The name of the AWS region in which to deploy the management cluster. If you have already set a different region as an environment variable, for example in Prepare to Deploy the Management Cluster to Amazon EC2, you must unset that environment variable.
    AWS_NODE_AZ | us-west-2a, ap-northeast-2b, etc. | The name of the AWS availability zone in your chosen region, to use as the availability zone for the nodes of this management cluster. Availability zone names are the AWS region name with a single lower-case letter suffix, such as a, b, or c.
    AWS_PUBLIC_NODE_CIDR | 10.0.1.0/24 | If the recommended range of 10.0.1.0/24 is not available, enter a different IP range in CIDR format for public nodes to use.
    AWS_PRIVATE_NODE_CIDR | 10.0.0.0/24 | If the recommended range of 10.0.0.0/24 is not available, enter a different IP range in CIDR format for private nodes to use.
    AWS_VPC_CIDR | 10.0.0.0/16 | If the recommended range of 10.0.0.0/16 is not available, enter a different IP range in CIDR format for the management cluster to use.
    CLUSTER_CIDR | 100.96.0.0/11 | If the recommended range of 100.96.0.0/11 is not available, enter a different IP range in CIDR format for pods to use.
    AWS_SSH_KEY_NAME | Your SSH key pair name | Enter the name of the SSH key pair that you registered with your Amazon EC2 account in Register an SSH Public Key with Your AWS Account.
    CONTROL_PLANE_MACHINE_TYPE | t3.small, t3.medium, t3.large, or t3.xlarge | Enter the instance type for the control plane node VMs, depending on the expected workloads that you will run in the cluster. For information about the configuration of the different sizes of T3 instances, see Amazon EC2 Instance Types.
    NODE_MACHINE_TYPE | t3.small, t3.medium, t3.large, or t3.xlarge | Enter the instance type for the worker node VMs, depending on the expected workloads that you will run in the cluster.
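    As an illustration, a completed set of variables might look like the following. All of the values shown are examples only; substitute your own region, availability zone, CIDR ranges, key pair name, and instance types.

```yaml
AWS_REGION: us-west-2
AWS_NODE_AZ: us-west-2a
AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24
AWS_PRIVATE_NODE_CIDR: 10.0.0.0/24
AWS_VPC_CIDR: 10.0.0.0/16
CLUSTER_CIDR: 100.96.0.0/11
AWS_SSH_KEY_NAME: my-aws-key-pair
CONTROL_PLANE_MACHINE_TYPE: t3.medium
NODE_MACHINE_TYPE: t3.medium
```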

  2. Run the tkg init command.

    If the $HOME/.tkg folder does not already exist, running tkg init creates it, along with the template configuration file config.yaml from which the management cluster is deployed.

    • You must specify at least the --infrastructure=aws option.
      tkg init --infrastructure=aws
    • You can optionally specify a name for the management cluster in the --name option.
      tkg init --infrastructure=aws --name=management_cluster_name
    • To deploy a management cluster with a single control plane node, add the --plan=dev option. If you do not specify --plan, the dev plan is used by default.
      tkg init --infrastructure=aws --name=management_cluster_name --plan=dev
    • To deploy a highly available management cluster with three control plane nodes, specify the --plan=prod option.
      tkg init --infrastructure=aws --name=management_cluster_name --plan=prod
    • By default, Tanzu Kubernetes Grid creates the $HOME/.tkg folder and creates the cluster configuration file, config.yaml, in that folder. To create config.yaml in a different location or with a different name, specify the --config option. If you specify the --config option, Tanzu Kubernetes Grid creates only the YAML file in the specified location. Other files are still created in the $HOME/.tkg folder.
      tkg init --infrastructure=aws --name=management_cluster_name --config path_to_file/my-config.yaml
  3. Follow the progress of the deployment of the management cluster in the terminal.

    Deployment of the management cluster can take several minutes. The first run of tkg init takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap environment. Subsequent runs do not require this step, so they are faster.

What to Do Next

For information about what happened during the deployment of the management cluster, how to connect kubectl to the management cluster, and how to create namespaces, see Examine the Management Cluster Deployment.
