This topic describes how to use the Tanzu Kubernetes Grid installer interface to deploy a management cluster to a vSphere instance. The Tanzu Kubernetes Grid installer interface guides you through the deployment of the management cluster, and provides different configurations for you to select or reconfigure.
Make sure that you have met all of the requirements listed in Download and Install the Tanzu Kubernetes Grid CLI and Prepare to Deploy Management Clusters to vSphere. If you are deploying clusters in an internet-restricted environment, you must also perform the steps described in Deploying Management Clusters to vSphere in an Internet-Restricted Environment.
Do not run multiple management cluster deployments on the same bootstrap environment machine at the same time. Do not change context or edit the
.kube-tkg/config file while Tanzu Kubernetes Grid operations are running.
Tanzu Kubernetes Grid does not support IPv6 addresses, because upstream Kubernetes provides only alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.
The images in this topic reflect the installer interface in Tanzu Kubernetes Grid 1.1.2 and later.
On the machine on which you downloaded and installed the Tanzu Kubernetes Grid CLI, run the tkg init command with the --ui option:

tkg init --ui
By default, Tanzu Kubernetes Grid creates a folder called $HOME/.tkg and creates the cluster configuration file, config.yaml, in that folder. To create config.yaml in a different location or with a different name, specify the --config option. It might be useful to do this if you want to use different management clusters to deploy Tanzu Kubernetes clusters with different configurations. If you specify the --config option, Tanzu Kubernetes Grid only creates the YAML file in the specified location. Other files are still created in the $HOME/.tkg folder.

tkg init --ui --config /path/my-config.yaml
By default, Tanzu Kubernetes Grid saves the kubeconfig for all management clusters in the $HOME/.kube-tkg/config.yaml file. If you want to keep the kubeconfig file for a management cluster separate from the kubeconfig files for other management clusters, for example so that you can share it, specify the --kubeconfig option.

tkg init --ui --kubeconfig /path/my-kubeconfig.yaml
When you run the tkg init --ui command, it validates that the prerequisites are met, both on the machine on which you run tkg init and on the hypervisor. If the prerequisites are met, tkg init opens http://127.0.0.1:8080 in your default browser to display the Tanzu Kubernetes Grid installer interface.
Click the Deploy button for vSphere.
Enter the vCenter Single Sign On username and password for a user account that has the required privileges for Tanzu Kubernetes Grid operation, and click Connect.
Select the datacenter in which to deploy the management cluster from the Datacenter drop-down menu.
Paste the contents of your SSH public key into the text box and click Next.
In the Management Cluster Settings section, select the Development or Production tile.
In either the Development or Production tile, use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs.
Choose the configuration for the control plane node VMs depending on the expected CPU, memory, and storage consumption of the workloads that they will run. For example, some workloads might require large compute capacity but relatively little storage, while others might require a large amount of storage and less compute capacity. If you select an instance type in the Production tile, the instance type that you select is automatically applied to the Worker Node Instance Type and Load Balancer Instance Type settings. If necessary, you can change these.
Optionally enter a name for your management cluster.
If you do not specify a name, Tanzu Kubernetes Grid automatically generates a unique name. If you do specify a name, that name must be compliant with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123.
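If you want to check a candidate name against the RFC 1123 label rules before you enter it, a small shell function can do so. This is an illustrative sketch, not part of Tanzu Kubernetes Grid:

```shell
# Validate a candidate cluster name against the RFC 1123 DNS label rules:
# lowercase alphanumeric characters and hyphens only, starting and ending
# with an alphanumeric character, and at most 63 characters long.
is_valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
}

is_valid_cluster_name "my-mgmt-cluster" && echo "valid name"
is_valid_cluster_name "My_Cluster" || echo "invalid name"
```

The function returns a zero exit status for a compliant name, so it can be used directly in a shell conditional.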
Use the API Server Load Balancer drop-down menu to select the VM template for the API Server Load Balancer.
The drop-down menu includes VM templates that are present in your vSphere instance that meet the criteria for use as API Server Load Balancer VMs. If you have not already imported a suitable VM template to vSphere, you can do so now without quitting the installer, and then use the Refresh button to make it available in the drop-down menu.
If your vSphere inventory includes API Server Load Balancer VM templates for both Tanzu Kubernetes Grid v1.0.0 and v1.1.x, both templates are available for selection. Select the template for Tanzu Kubernetes Grid 1.1.x.
Use the Worker Node Instance Type and Load Balancer Instance Type drop-down menus to select VM instance types for the worker nodes and load balancer VMs for the management cluster, and click Next.
Select the configuration for the worker node and load balancer VMs depending on the expected CPU, memory, and storage consumption of the workloads that the cluster will run.
In the Resources section, select vSphere resources for the management cluster to use, and click Next.
If appropriate resources do not already exist in vSphere, you can create them in vSphere without quitting the Tanzu Kubernetes Grid installer. Then, click the Refresh button so that the new resources can be selected.
In the Kubernetes Network section, configure the networking for Kubernetes services, and click Next.
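The network settings that you configure in this section are written into the cluster configuration file as CIDR ranges for pods and services. As a rough, hypothetical illustration only (the key names and CIDR values below are assumptions for illustration, not taken from this topic):

```yaml
# Illustrative excerpt of cluster networking settings; key names and
# values are assumptions. Use the values that you entered in the
# Kubernetes Network section of the installer.
CLUSTER_CIDR: 100.96.0.0/11   # CIDR range for pods
SERVICE_CIDR: 100.64.0.0/13   # CIDR range for Kubernetes services
```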
In the OS Image section, use the drop-down menu to select the OS image template to use for deploying Tanzu Kubernetes Grid VMs, and click Next.
The drop-down menu includes all of the OS image templates that are present in your vSphere instance that meet the criteria for use as Tanzu Kubernetes Grid base OS images. The OS image template must include the correct version of Kubernetes for this release of Tanzu Kubernetes Grid. If you have not already imported a suitable OS image template to vSphere, you can do so now without quitting the Tanzu Kubernetes Grid installer. After you import it, use the Refresh button to make it available in the drop-down menu.
Click Review Configuration to see the details of the management cluster that you have configured.
In Tanzu Kubernetes Grid 1.1.2 and later, when you click Review Configuration, Tanzu Kubernetes Grid populates the
.tkg/config.yaml file with the settings that you specified in the interface. You can optionally copy the
.tkg/config.yaml file without completing the deployment. You can copy
.tkg/config.yaml to another bootstrap environment machine and deploy the management cluster from that machine. For example, you might do this so that you can deploy the management cluster from a bootstrap environment machine that does not have a Web browser. In earlier versions of Tanzu Kubernetes Grid, the
.tkg/config.yaml file is populated when you deploy the management cluster.
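The populated file is a flat YAML file of key-value settings. A hypothetical excerpt (the key names and values here are illustrative assumptions, not a definitive schema) might look like this:

```yaml
# Illustrative excerpt of a populated .tkg/config.yaml; key names and
# values are assumptions for illustration only.
VSPHERE_SERVER: vcenter.example.com
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_DATACENTER: /dc01
VSPHERE_DATASTORE: /dc01/datastore/ds01
VSPHERE_NETWORK: VM Network
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAA... user@example.com
```

Because the file contains everything the CLI needs, copying it to another bootstrap environment machine is sufficient to run the deployment from there.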
(Optional) Under CLI Command Equivalent, click the Copy button to copy the CLI command for the configuration that you specified.
Copying the CLI command allows you to reuse the command at the command line to deploy management clusters with the configuration that you specified in the interface. This can be useful if you want to automate management cluster deployment. This option is available in Tanzu Kubernetes Grid 1.1.2 and later.
(Optional) Click Edit Configuration to return to the installer wizard to modify your configuration.
Click Deploy Management Cluster.
Deployment of the management cluster can take several minutes. The first run of tkg init takes longer than subsequent runs because it must pull the required Docker images into the image store on your bootstrap environment machine. Subsequent runs do not require this step and are therefore faster. You can follow the progress of the deployment in the installer interface or in the terminal in which you ran tkg init --ui. If you inadvertently close the browser or the browser tab in which the deployment is running before it finishes, the deployment continues in the terminal.
For information about how to connect kubectl to the management cluster and how to create namespaces, see Examine the Management Cluster Deployment.