This topic describes how to use the Tanzu Kubernetes Grid CLI to deploy a management cluster to vSphere from a YAML file.

Prerequisites

  • Make sure that you have met all of the requirements listed in Download and Install the Tanzu Kubernetes Grid CLI and Prepare to Deploy Management Clusters to vSphere. If you are deploying clusters in an internet-restricted environment, you must also perform the steps in Deploying Management Clusters to vSphere in an Internet-Restricted Environment.
  • It is strongly recommended that you use the Tanzu Kubernetes Grid installer interface, rather than the CLI, to deploy your first management cluster to vSphere. When you deploy a management cluster by using the installer interface, the installer populates the config.yaml file for the management cluster with the required parameters. You can use the resulting config.yaml as a model for future deployments from the CLI.

    If this is the first time that you are running Tanzu Kubernetes Grid commands on this machine, and you have not already deployed a management cluster to vSphere by using the Tanzu Kubernetes Grid installer interface, open a terminal and run the tkg get management-cluster command.

    tkg get management-cluster
    

    Running a tkg command for the first time creates the $HOME/.tkg folder, which contains the management cluster configuration file config.yaml and other configuration files.

Procedure

IMPORTANT:

Do not run multiple management cluster deployments on the same bootstrap environment machine at the same time. Do not change context or edit the .kube-tkg/config file while Tanzu Kubernetes Grid operations are running.

Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.

  1. Open the $HOME/.tkg/config.yaml file in a text editor.

    If you have already deployed a management cluster to vSphere from the installer interface, you will see variables that describe your previous deployment.

    If you have not already deployed a management cluster to vSphere from the installer interface, copy and paste the following lines into the configuration file, after the end of the images section.

    VSPHERE_SERVER: 
    VSPHERE_USERNAME:
    VSPHERE_PASSWORD: 
    VSPHERE_DATACENTER:
    VSPHERE_DATASTORE:
    VSPHERE_NETWORK:
    VSPHERE_RESOURCE_POOL: 
    VSPHERE_FOLDER:
    VSPHERE_TEMPLATE:
    VSPHERE_HAPROXY_TEMPLATE:
    VSPHERE_SSH_AUTHORIZED_KEY:
    SERVICE_CIDR:  
    CLUSTER_CIDR: 
    VSPHERE_WORKER_DISK_GIB:
    VSPHERE_WORKER_NUM_CPUS:
    VSPHERE_WORKER_MEM_MIB:
    VSPHERE_CONTROL_PLANE_DISK_GIB:
    VSPHERE_CONTROL_PLANE_NUM_CPUS:
    VSPHERE_CONTROL_PLANE_MEM_MIB:
    VSPHERE_HA_PROXY_DISK_GIB:
    VSPHERE_HA_PROXY_NUM_CPUS:
    VSPHERE_HA_PROXY_MEM_MIB:
    
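    For reference, a filled-in version of this section might look like the following sketch. Every value is illustrative (the server address and credentials in particular are placeholders), and each option is explained in the table in the next step; the values must match your own vSphere environment.

```yaml
# Illustrative values only; replace with the details of your own
# vSphere environment. Inventory paths must match the vSphere inventory.
VSPHERE_SERVER: 10.20.30.40
VSPHERE_USERNAME: tkg-user@vsphere.local
VSPHERE_PASSWORD: My_P@ssword!
VSPHERE_DATACENTER: /MY-DC
VSPHERE_DATASTORE: /MY-DC/datastore/MyDatastore
VSPHERE_NETWORK: VM Network
VSPHERE_RESOURCE_POOL: /MY-DC/host/cluster0/Resources
VSPHERE_FOLDER: /MY-DC/vm/TKG
VSPHERE_TEMPLATE: /MY-DC/vm/TKG/photon-3-kube-v1.18.2-vmware.1
VSPHERE_HAPROXY_TEMPLATE: /MY-DC/vm/TKG/photon-3-haproxy-v1.2.4-vmware.1
VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAAB3NzaC1yc2EAA [...] lYImkx21vUu58cj"
SERVICE_CIDR: 100.64.0.0/13
CLUSTER_CIDR: 100.96.0.0/11
VSPHERE_WORKER_DISK_GIB: "50"
VSPHERE_WORKER_NUM_CPUS: "2"
VSPHERE_WORKER_MEM_MIB: "4096"
VSPHERE_CONTROL_PLANE_DISK_GIB: "30"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "1"
VSPHERE_CONTROL_PLANE_MEM_MIB: "2048"
VSPHERE_HA_PROXY_DISK_GIB: "30"
VSPHERE_HA_PROXY_NUM_CPUS: "1"
VSPHERE_HA_PROXY_MEM_MIB: "2048"
```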
  2. Edit the configuration file to update the information about the target vSphere environment and the configuration of the management cluster to deploy.

    You must provide values for all of the following configuration options to deploy the management cluster to vSphere. Leave a space between the colon (:) and the variable value. For example:

    VSPHERE_USERNAME: tkg-user@vsphere.local
    

    IMPORTANT: Any environment variables that you have set that have the same key as the variables that you set in config.yaml will override the values that you set in config.yaml. You must unset those environment variables before you deploy the management cluster from the CLI.
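    For example, you can find and clear any such variables in your current shell before you run tkg init. The variable names below are the config.yaml keys from this procedure; extend the unset list to cover every key that you set in the file.

```shell
# List any environment variables that would shadow config.yaml keys.
# "|| true" keeps the command from failing when none are set.
env | grep -E '^(VSPHERE_|SERVICE_CIDR=|CLUSTER_CIDR=)' || true

# Unset the ones that are set; add any other matching keys as needed.
unset VSPHERE_SERVER VSPHERE_USERNAME VSPHERE_PASSWORD
```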

    Each entry below shows an option with an example value, followed by its description.

    VSPHERE_SERVER: vCenter_Server_address
      The IP address or FQDN of the vCenter Server instance on which to deploy the management cluster.
    VSPHERE_USERNAME: tkg-user@vsphere.local
      A vSphere user account with the required privileges for Tanzu Kubernetes Grid operation.
    VSPHERE_PASSWORD: My_P@ssword!
      The password for the vSphere user account. This value is base64-encoded when you run tkg init.
    VSPHERE_DATACENTER: datacenter_name
      The name of the datacenter in which to deploy the management cluster, as it appears in the vSphere inventory. For example, /MY-DC.
    VSPHERE_DATASTORE: datastore_name
      The name of the vSphere datastore for the management cluster to use, as it appears in the vSphere inventory. For example, /MY-DC/datastore/MyDatastore.
    VSPHERE_NETWORK: VM Network
      The name of an existing vSphere network to use as the Kubernetes service network, as it appears in the vSphere inventory. For example, VM Network.
    VSPHERE_RESOURCE_POOL: resource_pool_name
      The name of an existing resource pool in which to place this Tanzu Kubernetes Grid instance, as it appears in the vSphere inventory. To use the root resource pool of a cluster, enter the full path. For example, for a cluster named cluster0 in datacenter MY-DC, the full path is /MY-DC/host/cluster0/Resources.
    VSPHERE_FOLDER: VM_folder_name
      The name of an existing VM folder in which to place Tanzu Kubernetes Grid VMs, as it appears in the vSphere inventory. For example, if you created a folder named TKG, the path is /MY-DC/vm/TKG.
    VSPHERE_TEMPLATE: photon-3-kube-v1.18.2-vmware.1
      The VM template in the vSphere inventory from which to bootstrap management cluster VMs. In the 1.1.0 release, it is photon-3-kube-v1.18.2-vmware.1. For example, if you stored the template in a folder named TKG in datacenter MY-DC, the path is /MY-DC/vm/TKG/photon-3-kube-v1.18.2-vmware.1. In Tanzu Kubernetes Grid 1.1.2 and later, tkg init automatically detects the appropriate template, so this parameter is not required.
    VSPHERE_HAPROXY_TEMPLATE: photon-3-haproxy-v1.2.4-vmware.1
      The VM template in the vSphere inventory from which to bootstrap API server load balancer VMs. In Tanzu Kubernetes Grid 1.1.x, it is photon-3-haproxy-v1.2.4-vmware.1. For example, if you stored the template in a folder named TKG in datacenter MY-DC, the path is /MY-DC/vm/TKG/photon-3-haproxy-v1.2.4-vmware.1.
    VSPHERE_WORKER_DISK_GIB: "50"
      The size in gigabytes of the disk for the worker node VMs. Include the quotes (""). In Tanzu Kubernetes Grid 1.1.2 and later, you can specify or override this value by using the tkg init --size or --worker-size options.
    VSPHERE_WORKER_NUM_CPUS: "2"
      The number of CPUs for the worker node VMs. Include the quotes (""). In Tanzu Kubernetes Grid 1.1.2 and later, you can specify or override this value by using the tkg init --size or --worker-size options.
    VSPHERE_WORKER_MEM_MIB: "4096"
      The amount of memory in megabytes for the worker node VMs. Include the quotes (""). In Tanzu Kubernetes Grid 1.1.2 and later, you can specify or override this value by using the tkg init --size or --worker-size options.
    VSPHERE_CONTROL_PLANE_DISK_GIB: "30"
      The size in gigabytes of the disk for the control plane node VMs. Include the quotes (""). In Tanzu Kubernetes Grid 1.1.2 and later, you can specify or override this value by using the tkg init --size or --controlplane-size options.
    VSPHERE_CONTROL_PLANE_NUM_CPUS: "1"
      The number of CPUs for the control plane node VMs. Include the quotes (""). In Tanzu Kubernetes Grid 1.1.2 and later, you can specify or override this value by using the tkg init --size or --controlplane-size options.
    VSPHERE_CONTROL_PLANE_MEM_MIB: "2048"
      The amount of memory in megabytes for the control plane node VMs. Include the quotes (""). In Tanzu Kubernetes Grid 1.1.2 and later, you can specify or override this value by using the tkg init --size or --controlplane-size options.
    VSPHERE_HA_PROXY_DISK_GIB: "30"
      The size in gigabytes of the disk for the HA proxy VM. Include the quotes (""). In Tanzu Kubernetes Grid 1.1.2 and later, you can specify or override this value by using the tkg init --size or --haproxy-size options.
    VSPHERE_HA_PROXY_NUM_CPUS: "1"
      The number of CPUs for the HA proxy VM. Include the quotes (""). In Tanzu Kubernetes Grid 1.1.2 and later, you can specify or override this value by using the tkg init --size or --haproxy-size options.
    VSPHERE_HA_PROXY_MEM_MIB: "2048"
      The amount of memory in megabytes for the HA proxy VM. Include the quotes (""). In Tanzu Kubernetes Grid 1.1.2 and later, you can specify or override this value by using the tkg init --size or --haproxy-size options.
    VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAAB3NzaC1yc2EAA [...] lYImkx21vUu58cj"
      Paste in the contents of the SSH public key that you created in Deploy a Management Cluster to vSphere.
    SERVICE_CIDR: 100.64.0.0/13
      The CIDR range to use for the Kubernetes services. The recommended range is 100.64.0.0/13. Change this value only if the recommended range is unavailable.
    CLUSTER_CIDR: 100.96.0.0/11
      The CIDR range to use for pods. The recommended range is 100.96.0.0/11. Change this value only if the recommended range is unavailable.
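    As the VSPHERE_PASSWORD entry notes, tkg init base64-encodes the password when it rewrites config.yaml. The sketch below only demonstrates base64 round-tripping so that you can recognize and decode an encoded value; the exact stored format in config.yaml may differ, and the password is the illustrative value from the table above.

```shell
# Encode a plain-text password with base64, then decode it again.
# Illustrative value only.
encoded=$(printf '%s' 'My_P@ssword!' | base64)
echo "$encoded"
printf '%s' "$encoded" | base64 --decode
```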

  3. Save the configuration file.

  4. Run the tkg init command.

    • You must specify at least the --infrastructure vsphere option.

      tkg init --infrastructure vsphere
      
    • You can optionally specify a name for the management cluster in the --name option. If you do not specify a name, Tanzu Kubernetes Grid automatically generates a unique name for the cluster. If you do specify a name, that name must be compliant with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123.

      tkg init --infrastructure vsphere --name vsphere-management-cluster
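      RFC 1123 hostname labels use lowercase letters, digits, and hyphens, must start and end with an alphanumeric character, and are at most 63 characters long. As a local sanity check before running tkg init, you can test a candidate name against an approximation of those rules; the regular expression below is illustrative and is not part of the tkg CLI.

```shell
# Check a candidate cluster name against RFC 1123 hostname label rules:
# lowercase letters, digits, and hyphens; alphanumeric at both ends;
# 63 characters or fewer.
name="vsphere-management-cluster"
if printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'; then
  echo "valid: $name"
else
  echo "invalid: $name"
fi
```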
      
    • To deploy a management cluster with a single control plane node, add the --plan dev option. If you do not specify --plan, the dev plan is used by default.

      tkg init --infrastructure vsphere --name vsphere-management-cluster --plan dev
      
    • To deploy a highly available management cluster with three control plane nodes, specify the --plan prod option.

      tkg init --infrastructure vsphere --name vsphere-management-cluster --plan prod
      
    • By default, Tanzu Kubernetes Grid creates the $HOME/.tkg folder and creates the cluster configuration file, config.yaml, in that folder. To create config.yaml in a different location or with a different name, specify the --config option. This can be useful if you want to use different management clusters to deploy Tanzu Kubernetes clusters with different configurations. If you specify the --config option, Tanzu Kubernetes Grid creates only the YAML file in the specified location. Other files are still created in the $HOME/.tkg folder.

      tkg init --infrastructure vsphere --name vsphere-management-cluster --config /path/to/file/my-config.yaml
      
    • By default, Tanzu Kubernetes Grid saves the kubeconfig for all management clusters in the $HOME/.kube-tkg/config.yaml file. If you want to keep the kubeconfig file for a management cluster separate from the kubeconfig files for other management clusters, for example so that you can share it, specify the --kubeconfig option.

      tkg init --infrastructure vsphere --name vsphere-management-cluster --kubeconfig /path/to/file/my-kubeconfig.yaml
      
    • To create a management cluster in which all of the control plane, worker, and load balancer node VMs are the same size, specify the --size option with a value of extra-large, large, medium, or small.

      tkg init --infrastructure vsphere --name vsphere-management-cluster --size large
      

      This option is available in Tanzu Kubernetes Grid 1.1.2 and later. If you specify the --size option, any values that you specified for the VSPHERE_WORKER_*, VSPHERE_CONTROL_PLANE_*, and VSPHERE_HA_PROXY_* settings in config.yaml are overridden.

    • To create a management cluster in which the control plane, worker, and load balancer node VMs are different sizes, specify the --controlplane-size, --worker-size, and --haproxy-size options with values of extra-large, large, medium, or small.

      tkg init --infrastructure vsphere --name vsphere-management-cluster --controlplane-size medium --worker-size large --haproxy-size small
      

      These options are available in Tanzu Kubernetes Grid 1.1.2 and later. If you specify the --controlplane-size, --worker-size, and --haproxy-size options, any values that you specified for the VSPHERE_WORKER_*, VSPHERE_CONTROL_PLANE_*, and VSPHERE_HA_PROXY_* settings in config.yaml are overridden. You can combine these options with the --size option. For example, if you specify --size large with --worker-size extra-large, the control plane and HA proxy nodes will both be set to large and worker nodes will be set to extra-large.

  5. Follow the progress of the deployment of the management cluster in the terminal.

Deployment of the management cluster can take several minutes. The first run of tkg init takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap environment. Subsequent runs do not require this step, so they are faster.

NOTES:

  • If you connect to a vSphere 7.0 instance and the vSphere with Kubernetes feature is enabled, the CLI informs you that deploying a Tanzu Kubernetes Grid management cluster is not possible, and exits. In this case, you can connect the TKG CLI to the vSphere with Kubernetes Supervisor Cluster. For information, see Use the Tanzu Kubernetes Grid CLI with a vSphere with Kubernetes Supervisor Cluster.
  • If you connect to a vSphere 7.0 instance and the vSphere with Kubernetes feature is not enabled, the CLI informs you that deploying a Tanzu Kubernetes Grid management cluster is possible but not recommended. You can either quit the installation and enable the vSphere with Kubernetes feature, or you can choose to continue with the deployment of the management cluster. For the best experience of Kubernetes on vSphere 7.0, you should enable the vSphere with Kubernetes feature and use the built-in Supervisor Cluster, rather than a Tanzu Kubernetes Grid management cluster.

What to Do Next