This topic describes how to use the Tanzu Kubernetes Grid installer interface to deploy a management cluster to Amazon Elastic Compute Cloud (Amazon EC2). The installer interface guides you through the deployment of the management cluster and provides different configuration options to choose from. If this is the first time that you are deploying a management cluster, it is recommended that you use the installer interface.

Prerequisites

Procedure

The values that you set as environment variables in Deploy Management Clusters to Amazon EC2 are prepopulated in the relevant fields of the installer interface.
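
For example, a minimal sketch of the environment-variable approach on a Linux or macOS bootstrap machine is shown below. The variable names follow the Deploy Management Clusters to Amazon EC2 topic, so verify them against that topic for your version; the key, region, and key pair values are placeholders.

    export AWS_ACCESS_KEY_ID="<your-access-key-id>"
    export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
    export AWS_REGION="us-east-1"                            # placeholder region
    export AWS_SSH_KEY_NAME="<registered-ec2-key-pair-name>"
    tkg init --ui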

Warning: The tkg init command takes time to complete. While tkg init is running, do not run additional invocations of tkg init on the same bootstrap machine to deploy multiple management clusters, do not run tkg set management-cluster to change context, and do not edit ~/.kube-tkg/config.

  1. On the machine on which you downloaded and installed the Tanzu Kubernetes Grid CLI, run the tkg init command with the --ui option.

    tkg init --ui
    

    The installer interface launches in a browser and takes you through steps to configure the management cluster.

    • If you are SSH-tunneling in to the bootstrap machine or X11-forwarding its display, you may need to run tkg init --ui with the --browser none option described in Installer Interface Options below to make the installer interface appear locally.

    The tkg init command uses and modifies settings in a cluster configuration file, which defaults to $HOME/.tkg/config.yaml. The command may overwrite values from previous invocations of tkg init unless you specify a file with a different name or location by using the --config option. For more information, see Management Clusters and config.yaml in the Manage Your Management Clusters topic.

    tkg init --ui --config /path/my-config.yaml
    

    By default, Tanzu Kubernetes Grid saves the kubeconfig for all management clusters in the $HOME/.kube-tkg/config.yaml file. If you want to keep the kubeconfig file for a management cluster separate from the kubeconfig file for other management clusters, for example so that you can share it, specify the --kubeconfig option.

    tkg init --ui --kubeconfig /path/my-kubeconfig.yaml
    

    Specifying the --kubeconfig flag does not modify the location of the kubeconfig file of the bootstrap cluster created by kind. The default location of the kubeconfig file for the temporary cluster is a uniquely generated filename under $HOME/.kube-tkg/tmp/. If you want to use an existing bootstrap cluster to create a management cluster, see Use an Existing Bootstrap Cluster.

    When you run the tkg init --ui command, it validates that your system meets the following prerequisites, which you can also check manually as shown after this list:

    • NTP is running on the bootstrap machine on which you are running tkg init and on the hypervisor.
    • The CLI can connect to the location from which it pulls the required images.
    • Docker is running.
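
    You can spot-check the Docker and time-synchronization prerequisites on the bootstrap machine before launching the installer. This is a quick manual sketch; timedatectl assumes a systemd-based Linux distribution, so use the equivalent check on other operating systems.

    docker info --format '{{.ServerVersion}}'    # succeeds only if the Docker daemon is running
    timedatectl status | grep -i 'synchronized'  # reports whether the system clock is NTP-synchronized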

    By default, tkg init --ui opens the installer interface locally, at http://127.0.0.1:8080 in your default browser. The Installer Interface Options section below explains how you can change where the installer interface runs, including running it on a different machine from the tkg CLI.

    Tanzu Kubernetes Grid installer interface welcome page with Deploy to AWS button

  2. Click the Deploy button for Amazon EC2.

  3. In the IaaS Provider section, enter the access key ID and secret access key for your Amazon EC2 account, and the name of an SSH key that is already registered with your Amazon EC2 account.
  4. Select the AWS region in which to deploy the management cluster. If you intend to deploy a production management cluster, this region must have at least three availability zones.
  5. If this is the first time that you are deploying a management cluster using Tanzu Kubernetes Grid v1.2, select the Automate creation of AWS CloudFormation Stack checkbox and click Connect.

    This CloudFormation stack provides the identity and access management (IAM) resources that Tanzu Kubernetes Grid needs to create management clusters and Tanzu Kubernetes clusters in Amazon EC2. The IAM resources are added to the control plane and node roles when they are created during cluster deployment.

    You need to create only one CloudFormation stack per AWS account. The IAM resources that the CloudFormation stack provides are global, meaning they are not specific to any region. For more information about CloudFormation stacks, see Working with Stacks in the AWS documentation.

    IMPORTANT: In Tanzu Kubernetes Grid v1.2 and later, the Automate creation of AWS CloudFormation Stack checkbox and the tkg config permissions aws command replace the clusterawsadm command line utility. For existing management and Tanzu Kubernetes clusters, initially deployed with v1.1.x or earlier, continue to use the CloudFormation stack that you created by running the clusterawsadm alpha bootstrap create-stack command. If you want to use the same AWS account for your existing clusters and Tanzu Kubernetes Grid v1.2 and later, both stacks must be present in the account. For more information, see Prepare to Upgrade Clusters on Amazon EC2.
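
    If you prefer the command line to the checkbox, a sketch like the following uses the tkg config permissions aws command mentioned above and then verifies the result with the AWS CLI. The stack name in the second command is an assumption; confirm the actual name in the CloudFormation console.

    # Create the IAM resources without using the installer checkbox.
    tkg config permissions aws

    # Assumption: default stack name; confirm it in the AWS CloudFormation console.
    aws cloudformation describe-stacks --stack-name tkg-cloud-vmware-com --query 'Stacks[0].StackStatus'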

    Configure the connection to AWS

  6. If the connection is successful, click Next.
  7. In the VPC for AWS section, do one of the following:

    • To create a new VPC, select Create new VPC on AWS, check that the pre-filled CIDR block is available, and click Next. If the recommended CIDR block is not available, enter a new IP range in CIDR format for the management cluster to use. The recommended VPC CIDR block is 10.0.0.0/16.

      Create a new VPC

    • To use an existing VPC, select Select an existing VPC and select the VPC ID from the drop-down menu. The VPC CIDR block is filled in automatically when you select the VPC. To see which VPCs are available in your account, you can list them from the AWS CLI, as shown after this step.

      Use an existing VPC
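
    If you are unsure which VPCs and CIDR blocks already exist in your account, you can list them with the AWS CLI before choosing an option. This assumes the AWS CLI is installed and configured for the same account and region.

    aws ec2 describe-vpcs --query 'Vpcs[].{ID:VpcId,CIDR:CidrBlock}' --output table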

  8. In the Management Cluster Settings section, select the Development or Production tile.

    • If you select Development, the installer deploys a single control plane node.
    • If you select Production, the installer deploys three control plane nodes.
  9. In either the Development or Production tile, use the Instance type drop-down menu to select the configuration for the control plane node VM or VMs.

    Select the size of instance to use for the control plane node VMs, depending on the expected workloads that you will run in the cluster. The drop-down menu lists choices alphabetically, not by size. For information about the configuration of the different sizes of instances, see Amazon EC2 Instance Types.

    Select the control plane node configuration
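
    To compare candidate sizes without leaving the terminal, you can query the AWS CLI as in the sketch below. The instance types shown are examples only, not recommendations.

    aws ec2 describe-instance-types --instance-types t3.large m5.large m5.xlarge \
      --query 'InstanceTypes[].{Type:InstanceType,vCPUs:VCpuInfo.DefaultVCpus,MemMiB:MemoryInfo.SizeInMiB}' \
      --output table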

  10. Optionally, enter a name for your management cluster.

    If you do not specify a name, Tanzu Kubernetes Grid generates one automatically. If you do specify a name, that name must be compliant with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123.
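
    As a quick sanity check, the following shell snippet tests a candidate name against the RFC 1123 label pattern; the name shown is a placeholder.

    echo "my-aws-mgmt-cluster" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$' && echo "valid" || echo "invalid"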

  11. Use the Worker Node Instance Type drop-down menu to select the VM instance type for the worker nodes for the management cluster.

    Select an instance size for the worker nodes depending on the expected CPU, memory, and storage consumption of the workloads that the cluster will run. The drop-down menu lists choices alphabetically, not by size.

  12. Optionally, deselect the Bastion Host checkbox if a bastion host already exists in the availability zone(s) in which you are deploying the management cluster.

    If you leave this option enabled, Tanzu Kubernetes Grid creates a bastion host for you.

  13. Deselect the Machine Health Checks checkbox if you want to disable MachineHealthCheck.

    MachineHealthCheck provides node health monitoring and node auto-repair on the clusters that you deploy with this management cluster. You can enable or disable MachineHealthCheck on clusters after deployment by using the CLI. For instructions, see Configure Machine Health Checks for Tanzu Kubernetes Clusters.
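
    After the management cluster is deployed and your kubectl context points to it, you can confirm which MachineHealthCheck objects exist by using a standard Cluster API query. This is a generic kubectl check, not a Tanzu-specific command.

    kubectl get machinehealthchecks --all-namespaces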

  14. From the Availability Zone 1 drop-down menu, select an availability zone for the management cluster. You can select only one availability zone in the Development tile. See the image below.

    Configure the cluster

    If you selected the Production tile above, use the Availability Zone 1, Availability Zone 2, and Availability Zone 3 drop-down menus to select three unique availability zones for the management cluster. When Tanzu Kubernetes Grid deploys the management cluster, which includes three control plane nodes, it distributes the control plane nodes across these availability zones.
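
    To confirm that your chosen region has at least three availability zones before selecting the Production tile, you can query the AWS CLI; the region shown here is a placeholder.

    aws ec2 describe-availability-zones --region us-west-2 --query 'AvailabilityZones[].ZoneName' --output table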

  15. To complete the configuration of the Management Cluster Settings section, do one of the following:

    • If you created a new VPC in the VPC for AWS section, click Next.
    • If you selected an existing VPC in the VPC for AWS section, use the VPC public subnet and VPC private subnet drop-down menus to select existing subnets on the VPC and click Next. The image below shows the Development tile.

    Set the VPC subnets

  16. In the Metadata section, optionally provide descriptive information about this management cluster.

    Any metadata that you specify here applies to the management cluster and to the Tanzu Kubernetes clusters that it manages, and can be accessed by using the cluster management tool of your choice.

    • Location: The geographical location in which the clusters run.
    • Description: A description of this management cluster. The description has a maximum length of 63 characters and must start and end with a letter. It can contain only lower case letters, numbers, and hyphens, with no spaces.
    • Labels: Key/value pairs to help users identify clusters, for example release : beta, environment : staging, or environment : production. For more information, see Labels and Selectors in the Kubernetes documentation.
      You can click Add to apply multiple labels to the clusters.

    Add cluster metadata

  17. In the Kubernetes Network section, review the Cluster Service CIDR and Cluster Pod CIDR ranges. If the recommended CIDR ranges of 100.64.0.0/13 and 100.96.0.0/11 are unavailable, update the values under Cluster Service CIDR and Cluster Pod CIDR.

    Set the Kubernetes network
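
    For reference, these values correspond to entries in the cluster configuration file. The variable names below are an assumption based on the Tanzu Kubernetes Grid configuration file reference; verify them for your version before editing the file directly.

    SERVICE_CIDR: 100.64.0.0/13
    CLUSTER_CIDR: 100.96.0.0/11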

  18. In the CEIP Participation section, optionally deselect the checkbox to opt out of the VMware Customer Experience Improvement Program.

    You can also opt in or out of the program after the deployment of the management cluster. For information about the CEIP, see Opt in or Out of the VMware CEIP and https://www.vmware.com/solutions/trustvmware/ceip.html.

  19. Click Review Configuration to see the details of the management cluster that you have configured.

    When you click Review Configuration, Tanzu Kubernetes Grid populates the cluster configuration file, $HOME/.tkg/config.yaml by default, with the settings that you specified in the interface. Without completing the deployment, you can optionally copy this cluster configuration file to another bootstrap machine and deploy the management cluster from that machine, for example so that you can deploy from a bootstrap machine that does not have a Web browser. In earlier versions of Tanzu Kubernetes Grid, the .tkg/config.yaml file is populated only when you deploy the management cluster.

    Review the management cluster configuration

  20. (Optional) Under CLI Command Equivalent, click the Copy button to copy the CLI command for the configuration that you specified.

    Copy CLI command

    Copying the CLI command allows you to reuse the command at the command line to deploy management clusters with the configuration that you specified in the interface. This can be useful if you want to automate management cluster deployment.
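
    As an illustration only, a copied command typically has a shape like the following. The cluster name and plan here are placeholders, and the exact flags that the installer emits depend on the options you selected.

    tkg init --infrastructure aws --name my-aws-mgmt-cluster --plan dev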

  21. (Optional) Click Edit Configuration to return to the installer wizard to modify your configuration.
  22. Click Deploy Management Cluster and follow the progress of the deployment of the management cluster in the installer interface.

    Deployment of the management cluster can take several minutes. The first run of tkg init takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap machine. Subsequent runs do not require this step, so are faster. You can follow the progress of the deployment of the management cluster in the installer interface or in the terminal in which you ran tkg init --ui. If the machine on which you run tkg init shuts down or restarts before the local operations finish, the deployment will fail. If you inadvertently close the browser or browser tab in which the deployment is running before it finishes, the deployment continues in the terminal.

    Monitor the management cluster deployment
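
    While the deployment is running, the temporary bootstrap cluster appears as a kind container on the bootstrap machine. The name filter below is an assumption about the container naming prefix; adjust or omit it if your containers are named differently.

    docker ps --filter name=tkg-kind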

Installer Interface Options

By default, tkg init --ui opens the installer interface locally, at http://127.0.0.1:8080 in your default browser. You can use the --browser and --bind options to control where the installer interface runs:

  • --browser specifies the local browser to open the interface in.
    • Supported values are chrome, firefox, safari, ie, edge, or none.
    • Use none with --bind to run the interface on a different machine, as described below.
  • --bind specifies the IP address and port to serve the interface from.

Warning: Serving the installer interface from a non-default IP address and port could expose the tkg CLI to a potential security risk while the interface is running. VMware recommends that you pass an IP address and port on a secure network to the --bind option.

Use cases for --browser and --bind include:

  • If another process is already using http://127.0.0.1:8080, use --bind to serve the interface from a different local port.
  • To make the installer interface appear locally if you are SSH-tunneling in to the bootstrap machine or X11-forwarding its display, you may need to use --browser none; see the tunneling example at the end of this section.
  • To run the tkg CLI and create management clusters on a remote machine, and run the installer interface locally or elsewhere:
    1. On the remote, bootstrap machine, run tkg init --ui with the following options and values:
      • --bind: an IP address and port for the remote machine
      • --browser: none
      tkg init --ui --bind 192.168.1.87:5555 --browser none
      
    2. On the local UI machine, browse to the remote machine's IP address to access the installer interface.
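
For the SSH-tunneling case mentioned above, a minimal sketch is to forward local port 8080 to the interface running on the bootstrap machine and then browse to http://127.0.0.1:8080 locally; the user name and host are placeholders.

    ssh -L 8080:127.0.0.1:8080 ubuntu@bootstrap-machine.example.com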

What to Do Next

  • For information about what happened during the deployment of the management cluster and how to connect kubectl to the management cluster, see Examine the Management Cluster Deployment.
  • For information about how to create namespaces in the management cluster, see Create Namespaces in the Management Cluster.
  • If you need to deploy more than one management cluster, on any or all of vSphere, Azure, and Amazon EC2, see Manage Your Management Clusters. This topic also provides information about how to add existing management clusters to your CLI instance, obtain credentials, scale and delete management clusters, and how to opt in or out of the CEIP.