This topic describes how to use the Tanzu Kubernetes Grid CLI to deploy a management cluster to vSphere from a YAML configuration file.

Prerequisites

  • Ensure that you have met all of the requirements listed in Install the Tanzu Kubernetes Grid CLI and Deploy Management Clusters to vSphere. If you are deploying clusters in an internet-restricted environment, you must also perform the steps in Deploy Tanzu Kubernetes Grid to an Offline Environment.
  • It is strongly recommended to use the Tanzu Kubernetes Grid installer interface, rather than the CLI, to deploy your first management cluster to vSphere. When you deploy a management cluster by using the installer interface, the installer populates the config.yaml file for the management cluster with the required parameters. You can use the resulting config.yaml as a model for future deployments from the CLI.

Configure the config.yaml File

The config.yaml file provides the base configuration for management clusters and Tanzu Kubernetes clusters. When you deploy a management cluster from the CLI, the tkg init command uses this configuration.

To configure the config.yaml file, follow the steps below:

  1. If this is the first time that you are running Tanzu Kubernetes Grid commands on this machine, and you have not already deployed a management cluster to vSphere by using the Tanzu Kubernetes Grid installer interface, open a terminal and run the tkg get management-cluster command:

    tkg get management-cluster
    

    Running a tkg command for the first time creates the $HOME/.tkg folder, which contains the management cluster configuration file config.yaml and other configuration files.

  2. Open the .tkg/config.yaml file in a text editor.

    • If you have already deployed a management cluster to vSphere from the installer interface, you will see variables that describe your previous deployment.

    • If you have not already deployed a management cluster to vSphere from the installer interface, copy and paste the following rows at the end of the configuration file.

      VSPHERE_SERVER:
      VSPHERE_USERNAME:
      VSPHERE_PASSWORD:
      VSPHERE_DATACENTER:
      VSPHERE_DATASTORE:
      VSPHERE_NETWORK:
      VSPHERE_RESOURCE_POOL:
      VSPHERE_FOLDER:
      VSPHERE_SSH_AUTHORIZED_KEY:
      SERVICE_CIDR:
      CLUSTER_CIDR:
      VSPHERE_WORKER_DISK_GIB:
      VSPHERE_WORKER_NUM_CPUS:
      VSPHERE_WORKER_MEM_MIB:
      VSPHERE_CONTROL_PLANE_DISK_GIB:
      VSPHERE_CONTROL_PLANE_NUM_CPUS:
      VSPHERE_CONTROL_PLANE_MEM_MIB:
      
  3. Edit the configuration file to update the information about the target vSphere environment and the configuration of the management cluster to deploy.

    The table in Configuration Parameter Reference describes all of the configuration options that you can provide for deployment of a management cluster to vSphere. Line order in the configuration file does not matter.

    The following example shows the configuration for a management cluster that corresponds to one that you would deploy from the installer interface by selecting the options for small control plane and worker nodes.

    VSPHERE_SERVER: <vcenter_server_address>
    VSPHERE_USERNAME: tkg-user@vsphere.local
    VSPHERE_PASSWORD: <vcenter_sso_password>
    VSPHERE_DATACENTER: /MY-DATACENTER
    VSPHERE_DATASTORE: /MY-DATACENTER/datastore/MyDatastore
    VSPHERE_NETWORK: VM Network
    VSPHERE_RESOURCE_POOL: /MY-DATACENTER/host/MY-CLUSTER/Resources
    VSPHERE_FOLDER: /MY-DATACENTER/vm/TKG-FOLDER
    VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa NzaC1yc2EA [...] hnng2OYYSl+8ZyNz3fmRGX8uPYqw== email@example.com
    SERVICE_CIDR: 100.64.0.0/13
    CLUSTER_CIDR: 100.96.0.0/11
    VSPHERE_WORKER_DISK_GIB: "20"
    VSPHERE_WORKER_NUM_CPUS: "2"
    VSPHERE_WORKER_MEM_MIB: "2048" 
    VSPHERE_CONTROL_PLANE_DISK_GIB: "20"
    VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
    VSPHERE_CONTROL_PLANE_MEM_MIB: "2048"
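Before running tkg init, it can help to sanity-check that the required connection keys in the configuration file have non-empty values. The following is a minimal POSIX shell sketch, not part of Tanzu Kubernetes Grid; for illustration it writes a sample file to a temporary path, but you can point CONFIG at $HOME/.tkg/config.yaml to check your real configuration.

```shell
# Check that each required vSphere key in a TKG config file has a value.
# For illustration, write a sample config to a temporary file; point
# CONFIG at $HOME/.tkg/config.yaml to check a real configuration instead.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
VSPHERE_SERVER: vcenter.example.com
VSPHERE_USERNAME: tkg-user@vsphere.local
VSPHERE_PASSWORD: My_P@ssword!
VSPHERE_DATACENTER: /MY-DATACENTER
VSPHERE_DATASTORE: /MY-DATACENTER/datastore/MyDatastore
VSPHERE_NETWORK: VM Network
VSPHERE_RESOURCE_POOL: /MY-DATACENTER/host/MY-CLUSTER/Resources
EOF

missing=0
for key in VSPHERE_SERVER VSPHERE_USERNAME VSPHERE_PASSWORD \
           VSPHERE_DATACENTER VSPHERE_DATASTORE VSPHERE_NETWORK \
           VSPHERE_RESOURCE_POOL; do
  # A key counts as set when "KEY:" is followed by a non-empty value.
  grep -Eq "^${key}: +[^ ]" "$CONFIG" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required keys set"
```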
    

Configuration Parameter Reference

The table below describes all of the variables that you must set for deployment to vSphere, along with some vSphere-specific optional variables. To set them in a configuration file, leave a space between the colon (:) and the variable value. For example:

VSPHERE_USERNAME: tkg-user@vsphere.local

IMPORTANT:

  • As described in Configuring the Management Cluster, environment variables override values from a configuration file. To use all settings from a config.yaml, unset any conflicting environment variables before you deploy the management cluster from the CLI.
  • Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.
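Because environment variables take precedence over config.yaml, it can be useful to list and clear any conflicting VSPHERE_ variables in your shell before you run tkg init. A minimal sketch, assuming a POSIX shell; the variable value shown is illustrative:

```shell
# List any VSPHERE_* variables currently set in the environment;
# these would override the corresponding values in config.yaml.
env | grep '^VSPHERE_' || echo "no conflicting VSPHERE_ variables"

# Example: a variable set earlier in the session would shadow config.yaml,
# so unset it to make sure the file's value is used.
export VSPHERE_PASSWORD='temporary-override'
unset VSPHERE_PASSWORD
[ -z "${VSPHERE_PASSWORD:-}" ] && echo "VSPHERE_PASSWORD cleared"
```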

Option Value Description
VSPHERE_SERVER: vCenter_Server_address The IP address or FQDN of the vCenter Server instance on which to deploy the management cluster.
VSPHERE_USERNAME: tkg-user@vsphere.local A vSphere user account with the required privileges for Tanzu Kubernetes Grid operation.
VSPHERE_PASSWORD: My_P@ssword! The password for the vSphere user account. This value is base64-encoded when you run tkg init.
VSPHERE_DATACENTER: datacenter_name The name of the datacenter in which to deploy the management cluster, as it appears in the vSphere inventory. For example, /MY-DATACENTER.
VSPHERE_DATASTORE: datastore_name The name of the vSphere datastore for the management cluster to use, as it appears in the vSphere inventory. For example, /MY-DATACENTER/datastore/MyDatastore.
VSPHERE_NETWORK: VM Network The name of an existing vSphere network to use as the Kubernetes service network, as it appears in the vSphere inventory. For example, VM Network.
VSPHERE_RESOURCE_POOL: resource_pool_name The name of an existing resource pool in which to place this Tanzu Kubernetes Grid instance, as it appears in the vSphere inventory. To use the root resource pool for a cluster, enter the full path. For example, for a cluster named cluster0 in datacenter MY-DATACENTER, the full path is /MY-DATACENTER/host/cluster0/Resources.
VSPHERE_FOLDER: VM_folder_name The name of an existing VM folder in which to place Tanzu Kubernetes Grid VMs, as it appears in the vSphere inventory. For example, if you created a folder named TKG, the path is /MY-DATACENTER/vm/TKG.
VSPHERE_WORKER_DISK_GIB: "50" The size in gigabytes of the disk for the worker node VMs. Include the quotes (""). You can also specify or override this value by using the tkg init --size or --worker-size options.
VSPHERE_WORKER_NUM_CPUS: "2" The number of CPUs for the worker node VMs. Include the quotes (""). Must be at least 2. You can also specify or override this value by using the tkg init --size or --worker-size options.
VSPHERE_WORKER_MEM_MIB: "4096" The amount of memory in megabytes for the worker node VMs. Include the quotes (""). You can also specify or override this value by using the tkg init --size or --worker-size options.
VSPHERE_CONTROL_PLANE_DISK_GIB: "30" The size in gigabytes of the disk for the control plane node VMs. Include the quotes (""). You can also specify or override this value by using the tkg init --size or --controlplane-size options.
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2" The number of CPUs for the control plane node VMs. Include the quotes (""). Must be at least 2. You can also specify or override this value by using the tkg init --size or --controlplane-size options.
VSPHERE_CONTROL_PLANE_MEM_MIB: "2048" The amount of memory in megabytes for the control plane node VMs. Include the quotes (""). You can also specify or override this value by using the tkg init --size or --controlplane-size options.
VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa NzaC1yc2EA [...] hnng2OYYSl+8ZyNz3fmRGX8uPYqw== email@example.com" Paste in the contents of the SSH public key that you created in Deploy a Management Cluster to vSphere.
SERVICE_CIDR: 100.64.0.0/13 The CIDR range to use for the Kubernetes services. The recommended range is 100.64.0.0/13. Change this value only if the recommended range is unavailable.
CLUSTER_CIDR: 100.96.0.0/11 The CIDR range to use for pods. The recommended range is 100.96.0.0/11. Change this value only if the recommended range is unavailable.
ENABLE_MHC: "true" or "false" Enables or disables the MachineHealthCheck controller, which provides node health monitoring and node auto-repair for Tanzu Kubernetes clusters. This option is enabled in the global Tanzu Kubernetes Grid configuration by default, for all Tanzu Kubernetes clusters. To disable MachineHealthCheck on the clusters that you deploy with this management cluster, set ENABLE_MHC to false. Set this variable only if you want to override your global configuration. You can enable or disable MachineHealthCheck on individual clusters after deployment by using the CLI. For instructions, see Configure Machine Health Checks for Tanzu Kubernetes Clusters.
MHC_UNKNOWN_STATUS_TIMEOUT: For example, 10m Property of MachineHealthCheck. By default, if the Ready condition of a node remains Unknown for longer than 5m, MachineHealthCheck considers the machine unhealthy and recreates it. Set this variable if you want to change the default timeout.
MHC_FALSE_STATUS_TIMEOUT: For example, 10m Property of MachineHealthCheck. By default, if the Ready condition of a node remains False for longer than 5m, MachineHealthCheck considers the machine unhealthy and recreates it. Set this variable if you want to change the default timeout.
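For example, to keep MachineHealthCheck enabled but lengthen both timeouts from the default 5m, you might add lines like the following to config.yaml. The 10m values are illustrative:

```yaml
ENABLE_MHC: "true"
MHC_UNKNOWN_STATUS_TIMEOUT: 10m
MHC_FALSE_STATUS_TIMEOUT: 10m
```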

Run the tkg init Command

After you have updated .tkg/config.yaml, you deploy a management cluster by running the tkg init command. The config.yaml file provides the base configuration for management clusters and Tanzu Kubernetes clusters. You provide more precise configuration information for individual clusters by running tkg init with different options.

IMPORTANT: Do not run multiple management cluster deployments on the same bootstrap machine at the same time. Do not change context or edit the .kube-tkg/config file while Tanzu Kubernetes Grid operations are running.

For information about how tkg init interacts with the Tanzu Kubernetes Grid Service on vSphere 7, see the Tanzu Kubernetes Grid Service on vSphere 7 section below.

To deploy a management cluster to vSphere, you must at least specify the --infrastructure vsphere and --vsphere-controlplane-endpoint-ip options to tkg init:

tkg init --infrastructure vsphere --vsphere-controlplane-endpoint-ip <ip_address>

The table in CLI Options Reference describes the required and additional command-line options for deploying a management cluster to vSphere.

Monitoring Progress

When you run tkg init, you can follow the progress of the deployment of the management cluster in the terminal. For more detail, open the log file whose path appears in the terminal output after Logs of the command execution can also be found at....

Deployment of the management cluster can take several minutes. The first run of tkg init takes longer than subsequent runs because it must pull the required Docker images into the image store on your bootstrap machine. Subsequent runs do not require this step, so they are faster.

The first run of tkg init also adds settings to your configuration file.

If tkg init fails before the management cluster deploys to vSphere, you should clean up artifacts on your bootstrap machine before you re-run tkg init. See the Troubleshooting Tips topic for details.

CLI Options Reference

The table below describes command-line options that you can set for deployment to vSphere.

For example, the following command creates a highly available management cluster named vsphere-management-cluster on vSphere, in which all of the control plane and worker node VMs use the large size:

tkg init --infrastructure vsphere --vsphere-controlplane-endpoint-ip <ip_address> --name vsphere-management-cluster --plan prod --size large

Option Value Description
--infrastructure vsphere Required
--vsphere-controlplane-endpoint-ip <ip_address> Required. Static virtual IP address for API requests to the management cluster. For more information, see Load Balancers for vSphere.
--deploy-tkg-on-vSphere7 and --enable-tkgs-on-vSphere7 See Tanzu Kubernetes Grid Service on vSphere 7 below.
--name Name for the management cluster. The name must comply with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123. If you do not specify a name, a unique name is generated.
Scaling and Availability
--plan dev or prod dev ("development"), the default, deploys a management cluster with a single control plane node.
prod ("production") deploys a highly available management cluster with three control plane nodes.
Configuration Files
--config Local file system path to .yaml file, e.g. /path/to/file/my-config.yaml Configuration file to use or create, other than the default $HOME/.tkg/config.yaml. If the config file was not already created by hand or prior tkg init calls, it is created. All other files are created in the default folder.
This option, for example, lets you deploy multiple management clusters that share a VNET.
--kubeconfig Local file system path to .yaml file, e.g. /path/to/file/my-kubeconfig.yaml Kube configuration file to use or create, other than the default $HOME/.kube-tkg/config.yaml.
This option lets you customize or share multiple kubeconfig files for multiple management clusters.
Nodes
--size Size for both control plane and worker node VMs. Machine sizes are designated as follows:
  • small: CPUs: 2, Memory: 2048 MB, Disk: 20 GB
  • medium: CPUs: 2, Memory: 4096 MB, Disk: 40 GB
  • large: CPUs: 2, Memory: 8192 MB, Disk: 40 GB
  • extra-large: CPUs: 4, Memory: 16384 MB, Disk: 80 GB
--controlplane-size Size for control plane node VMs. Overrides the --size option and VSPHERE_CONTROL_PLANE_ parameters.
--worker-size Size for worker node VMs. Overrides the --size option and VSPHERE_WORKER_ parameters.
Customer Experience Improvement Program
--ceip-participation true or false false opts out of the VMware Customer Experience Improvement Program. Default is true.
You can also opt in or out of the program after deploying the management cluster. For information, see Opt in or Out of the VMware CEIP and Customer Experience Improvement Program ("CEIP").
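For example, to keep the configuration and kubeconfig for a second management cluster separate from the default files, you could combine the --config and --kubeconfig options. The cluster name and file paths below are illustrative:

```
tkg init --infrastructure vsphere --vsphere-controlplane-endpoint-ip <ip_address> \
  --name second-mgmt-cluster --plan dev \
  --config /path/to/file/my-config.yaml --kubeconfig /path/to/file/my-kubeconfig.yaml
```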

Tanzu Kubernetes Grid Service on vSphere 7

On vSphere 7, the Supervisor Cluster built into the vSphere with Tanzu option provides a better experience than a management cluster deployed by Tanzu Kubernetes Grid, and you can use the TKG CLI to connect to the Supervisor Cluster. For information, see Use the Tanzu Kubernetes Grid CLI with a vSphere with Tanzu Supervisor Cluster.

To reflect the recommendation for Tanzu Kubernetes Grid Service, the Tanzu Kubernetes Grid CLI behaves as follows, controlled by the --deploy-tkg-on-vSphere7 and --enable-tkgs-on-vSphere7 flags to tkg init.

  • vSphere with Tanzu enabled:
    • No --enable-tkgs-on-vSphere7: Informs you that deploying a management cluster is not possible, and exits.
    • With --enable-tkgs-on-vSphere7: Opens the vSphere Client at the address set by VSPHERE_SERVER in your config.yaml or local environment, so that you can configure your Supervisor Cluster as described in Enable the Workload Management Platform with the vSphere Networking Stack in the vSphere documentation.
  • vSphere with Tanzu not enabled:
    • No --deploy-tkg-on-vSphere7: Informs you that deploying a Tanzu Kubernetes Grid management cluster is possible but not recommended, and prompts you to either quit the installation or continue to deploy the management cluster.
    • With --deploy-tkg-on-vSphere7: Deploys a Tanzu Kubernetes Grid management cluster on vSphere 7, against the recommendation.
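For example, to proceed non-interactively on a vSphere 7 instance where vSphere with Tanzu is not enabled, you would include the flag in your tkg init command. The IP address is a placeholder:

```
tkg init --infrastructure vsphere --vsphere-controlplane-endpoint-ip <ip_address> --deploy-tkg-on-vSphere7
```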

What to Do Next

  • For information about what happened during the deployment of the management cluster and how to connect kubectl to the management cluster, see Examine the Management Cluster Deployment.
  • For information about how to create namespaces in the management cluster, see Create Namespaces in the Management Cluster.
  • If you need to deploy more than one management cluster, on any or all of vSphere, Azure, and Amazon EC2, see Manage Your Management Clusters. This topic also provides information about how to add existing management clusters to your CLI instance, obtain credentials, scale and delete management clusters, and how to opt in or out of the CEIP.