
This topic describes how to create a Kubernetes cluster with VMware Tanzu Kubernetes Grid Integrated Edition using the TKGI Command Line Interface (TKGI CLI).

Overview

Use the TKGI CLI to create Kubernetes clusters in your Tanzu Kubernetes Grid Integrated Edition environment.

To create a Tanzu Kubernetes Grid Integrated Edition Kubernetes cluster, follow the procedures in this topic.

The tkgi create-cluster command creates a Kubernetes cluster with TKGI compatibility matching the TKGI version of the current TKGI control plane.

Configure Cluster Access

Cluster access configuration differs by the type of Tanzu Kubernetes Grid Integrated Edition deployment.

vSphere with NSX-T

Tanzu Kubernetes Grid Integrated Edition deploys a load balancer automatically when clusters are created. The load balancer is configured automatically when workloads are deployed on these Kubernetes clusters. For more information, see Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments with NSX-T.

Note: For a complete list of the objects that Tanzu Kubernetes Grid Integrated Edition creates by default when you create a Kubernetes cluster on vSphere with NSX-T, see vSphere with NSX-T Cluster Objects.

GCP, AWS, Azure, or vSphere without NSX-T

When you create a Kubernetes cluster, you must configure external access to the cluster by creating an external TCP or HTTPS load balancer. This load balancer allows you to run TKGI CLI commands on the cluster from your local workstation. For more information, see Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments without NSX-T.

You can configure any load balancer of your choice. If you use GCP, AWS, Azure, or vSphere without NSX-T, you can create a load balancer using your cloud provider console.

For more information about configuring a Tanzu Kubernetes Grid Integrated Edition cluster load balancer, see the following:

Create the Tanzu Kubernetes Grid Integrated Edition cluster load balancer before you create the cluster. Use the load balancer IP address as the external hostname, and then point the load balancer to the IP address of the control plane virtual machine (VM) after cluster creation. If the cluster has multiple control plane nodes, you must configure the load balancer to point to all control plane VMs for the cluster.

If you are creating a cluster in a non-production environment, you can choose to create a cluster without a load balancer. Create a DNS entry that points to the IP address of the cluster’s control plane VM after cluster creation.

To locate the IP addresses and VM IDs of the control plane VMs, see Identify Kubernetes Cluster Control Plane VMs below.

Create a Kubernetes Cluster

Perform the following steps:

  1. Grant cluster access to a new or existing user in UAA. For more information, see the Grant Tanzu Kubernetes Grid Integrated Edition Access to an Individual User section of Managing Tanzu Kubernetes Grid Integrated Edition Users with UAA.

  2. On the command line, run the following command to log in:

     tkgi login -a TKGI-API -u USERNAME -k 
    Where:

    • TKGI-API is the domain name for the TKGI API that you entered in Ops Manager > Tanzu Kubernetes Grid Integrated Edition > TKGI API > API Hostname (FQDN). For example, api.tkgi.example.com.
    • USERNAME is your user name.

      See Logging in to Tanzu Kubernetes Grid Integrated Edition for more information about the tkgi login command.

      Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider.

  3. (Optional) To configure any of the following for a cluster, create a config file:

    • Custom CAs: For more information, see Using a Custom CA for Kubernetes Clusters.
    • VM Extensions: For more information, see Using BOSH VM Extensions.
    • Proxy settings: For more information, see Configure Cluster Proxies.
    • group Managed Service Account (gMSA) settings: For more information, see Authenticate Windows Clusters with Active Directory.
    • Container runtime: By default, new clusters are created using the containerd container runtime. To explicitly configure your cluster to use a specific container runtime, create either a JSON or YAML config file with the following content:

      • JSON formatted configuration file:

        {
            "runtime": "RUNTIME-NAME"
        }
        
      • YAML formatted configuration file:

        ---
        runtime: RUNTIME-NAME
        

        Where RUNTIME-NAME specifies either docker or containerd container runtimes.
        For more information, see Containerd Container Runtime.

      Note: You must manage and monitor Docker and containerd runtime clusters differently. For more information, see Breaking Changes in the TKGI v1.12 Release Notes.

    • Lock Container runtime: To explicitly lock your cluster to your specified container runtime, include the following content in your cluster creation config file:

      • JSON formatted configuration file:

        {
            "lock_container_runtime": true
        }
        
      • YAML formatted configuration file:

        ---
        lock_container_runtime: true
        

      By default, clusters using a Docker container runtime will be switched to the containerd container runtime when upgraded to TKGI v1.14. Include the lock_container_runtime parameter in your configuration file to lock a cluster to the Docker-runtime so that you can manually switch the cluster to the containerd-runtime later.

      All Docker-runtime clusters must be switched to the containerd-runtime prior to upgrading to TKGI v1.15.

      Warning: Clusters that are not locked to the Docker-runtime will automatically switch to the containerd container runtime during the TKGI v1.14 cluster upgrade.
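
      The runtime and lock settings above can be combined in a single cluster creation config file. The following is a minimal sketch in YAML, assuming you want a Docker-runtime cluster that stays on Docker until you switch it manually:

      ```yaml
      ---
      # Pin the cluster to the Docker container runtime...
      runtime: docker
      # ...and prevent the automatic switch to containerd during
      # the TKGI v1.14 cluster upgrade.
      lock_container_runtime: true
      ```

      Pass this file to tkgi create-cluster using the --config-file flag described in the next step.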

  4. To create a cluster, run the following command:

    tkgi create-cluster CLUSTER-NAME \
    --external-hostname HOSTNAME \
    --plan PLAN-NAME \
    [--num-nodes WORKER-NODES] \
    [--network-profile NETWORK-PROFILE-NAME] \
    [--kubernetes-profile KUBERNETES-PROFILE-NAME] \
    [--config-file CONFIG-FILE-NAME] \
    [--tags TAGS]
    

    Where:

    • CLUSTER-NAME is your unique name for your cluster.

      Note: The CLUSTER-NAME must not contain special characters such as &. The TKGI CLI does not validate the presence of special characters in the CLUSTER-NAME string, but cluster creation fails if one or more special characters are present.

      Use only lowercase characters when naming your cluster if you manage your clusters with Tanzu Mission Control (TMC). Clusters with names that include an uppercase character cannot be attached to TMC.

    • HOSTNAME is your external hostname for your cluster. You can use any fully qualified domain name (FQDN) or IP address you own. For example, my-cluster.example.com or 10.0.0.1. If you created an external load balancer, use its DNS hostname. If you are using NSX-T, you can pre-provision the IP address to use for the Kubernetes API server load balancer using an available IP address from the floating IP pool and define a network profile to perform DNS lookup, or specify the IP address to use for load balancer on the command line. See Defining Network Profile for DNS Lookup of Pre-Provisioned IP Addresses for details.
    • PLAN-NAME is the plan for your cluster. Run tkgi plans to list your available plans.
    • (Optional) WORKER-NODES is the number of worker nodes for the cluster.
    • (Optional) (NSX-T only) NETWORK-PROFILE-NAME is the network profile to use for the cluster. See Using Network Profiles (NSX-T Only) for more information.
    • (Optional) KUBERNETES-PROFILE-NAME is the Kubernetes profile to use for the cluster. See Using Kubernetes Profiles for more information.
    • (Optional) CONFIG-FILE-NAME is the configuration file to use for the cluster.
    • (Optional) (Azure and vSphere only) TAGS are the labels and metadata values to apply to the VMs created in the cluster. Specify the tags as key:value pairs. For more information about tagging see Tagging Clusters.

    For example:

    $ tkgi create-cluster my-cluster \
    --external-hostname my-cluster.example.com \
    --plan large --num-nodes 3

    Note: It can take up to 30 minutes to create a cluster.


    For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use PersistentVolumes (PVs). For example, if you deploy across three AZs, you should have six worker nodes. For more information about PVs, see PersistentVolumes in Maintaining Workload Uptime. Provisioning a minimum of three worker nodes, or two nodes per AZ, is also recommended for stateless workloads.

    The maximum value you can specify is configured in the Plans pane of the Tanzu Kubernetes Grid Integrated Edition tile. If you do not specify a number of worker nodes, the cluster is deployed with the default number, which is also configured in the Plans pane. For more information, see the Installing Tanzu Kubernetes Grid Integrated Edition topic for your IaaS, such as Installing Tanzu Kubernetes Grid Integrated Edition on vSphere.

  5. To track cluster creation, run the following command:

    tkgi cluster CLUSTER-NAME
    

    Where CLUSTER-NAME is the unique name for your cluster.

    For example:

    $ tkgi cluster my-cluster

    Name:                     my-cluster
    Plan Name:                large
    UUID:                     01a234bc-d56e-7f89-01a2-3b4cde5f6789
    Last Action:              CREATE
    Last Action State:        succeeded
    Last Action Description:  Instance provisioning completed
    Kubernetes Master Host:   my-cluster.example.com
    Kubernetes Master Port:   8443
    Worker Instances:         3
    Kubernetes Master IP(s):  192.168.20.7
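
    If you script around cluster creation, you can pull the Last Action State field out of this output with standard text tools. The following is a minimal sketch, run here against a hypothetical sample of the output so that it works without a live TKGI environment:

    ```shell
    # Hypothetical sample of `tkgi cluster` output; with a live
    # environment you would pipe the real command instead:
    #   tkgi cluster my-cluster | awk -F': +' '/^Last Action State:/ {print $2}'
    sample='Last Action:              CREATE
    Last Action State:        succeeded
    Last Action Description:  Instance provisioning completed'

    # Split each line on "colon plus spaces" and print the value field.
    state=$(printf '%s\n' "$sample" | awk -F': +' '/Last Action State:/ {print $2}')
    echo "$state"
    ```

    A wrapper script could poll this value until it reports succeeded or error.
    
    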

  6. If the Last Action State value is error, troubleshoot by performing the following procedure:

    1. Log in to the BOSH Director.
    2. Run the following command:

      bosh tasks
      

    For more information, see Advanced Troubleshooting with the BOSH CLI.

  7. Depending on your deployment:

    • For vSphere with NSX-T, choose one of the following:
      • Specify the hostname or FQDN and register the FQDN with the IP provided by Tanzu Kubernetes Grid Integrated Edition after cluster deployment. You can do this using resolv.conf or via DNS registration.
      • Specify a temporary placeholder value for FQDN, then replace the FQDN in the kubeconfig with the IP address assigned to the load balancer dedicated to the cluster.

        To retrieve the IP address to access the Kubernetes API and UI services, use the tkgi cluster CLUSTER-NAME command.
    • For vSphere without NSX-T, AWS, and Azure, configure external access to the cluster’s control plane nodes using either DNS records or an external load balancer. Use the output from the tkgi cluster command to locate the control plane node IP addresses and ports.
    • For GCP, use the output from the tkgi cluster command to locate the control plane node IP addresses and ports, and then continue to Creating and Configuring a GCP Load Balancer for Tanzu Kubernetes Grid Integrated Edition Clusters in Configuring a GCP Load Balancer for Tanzu Kubernetes Grid Integrated Edition Clusters.

      Note: For clusters with multiple control plane node VMs, health checks on port 8443 are recommended.

  8. To access your cluster, run the following command:

    tkgi get-credentials CLUSTER-NAME
    

    Where CLUSTER-NAME is the unique name for your cluster.

    For example:

    $ tkgi get-credentials tkgi-example-cluster

    Fetching credentials for cluster tkgi-example-cluster.
    Context set for cluster tkgi-example-cluster.

    You can now switch between clusters by using:
    $ kubectl config use-context <cluster-name>

    The tkgi get-credentials command creates a local kubeconfig that allows you to manage the cluster. For more information about the tkgi get-credentials command, see Retrieving Cluster Credentials and Configuration.

    Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider.

  9. To confirm you can access your cluster using the Kubernetes CLI, run the following command:

    kubectl cluster-info
    

    See Managing Tanzu Kubernetes Grid Integrated Edition for information about checking cluster health and viewing cluster logs.

  10. To review the status, container runtime, or other information about the nodes in your cluster, run the following command:

    kubectl get nodes -o wide
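
    The CONTAINER-RUNTIME column in this output shows which runtime each node is using, which is useful when verifying a runtime switch. The following is a minimal sketch of extracting just the node name and runtime, run against a hypothetical sample of the output so that it works without a live cluster; the node name and runtime version shown are placeholders:

    ```shell
    # Hypothetical two-line sample of `kubectl get nodes -o wide` output;
    # with a live cluster you would pipe the real command instead:
    #   kubectl get nodes -o wide | awk 'NR > 1 {print $1, $NF}'
    sample='NAME                                   STATUS   ROLES    CONTAINER-RUNTIME
    0aa123b4-c5d6-789e-0fa1-23b45c6d7e8f   Ready    <none>   containerd://1.4.6'

    # Skip the header row, then print the first and last columns.
    printf '%s\n' "$sample" | awk 'NR > 1 {print $1, $NF}'
    ```
    
    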
    

Identify Kubernetes Cluster Control Plane VMs

Note: This section applies only to Tanzu Kubernetes Grid Integrated Edition deployments on GCP or on vSphere without NSX-T. Skip this section if your Tanzu Kubernetes Grid Integrated Edition deployment is on vSphere with NSX-T, AWS, or Azure. For more information, see Load Balancers in Tanzu Kubernetes Grid Integrated Edition.

To reconfigure the load balancer or DNS record for an existing cluster, you may need to locate VM ID and IP address information for the cluster’s control plane VMs. Use the information you locate in this procedure when configuring your load balancer backend.

To locate the IP addresses and VM IDs for the control plane VMs of an existing cluster, do the following:

  1. On the command line, run the following command to log in:

     tkgi login -a TKGI-API -u USERNAME -k 
    Where:

    • TKGI-API is the domain name for the TKGI API that you entered in Ops Manager > Tanzu Kubernetes Grid Integrated Edition > TKGI API > API Hostname (FQDN). For example, api.tkgi.example.com.
    • USERNAME is your user name.

      See Logging in to Tanzu Kubernetes Grid Integrated Edition for more information about the tkgi login command.

      Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider.

  2. To locate the cluster ID and control plane node IP addresses, run the following command:

    tkgi cluster CLUSTER-NAME
    

    Where CLUSTER-NAME is the unique name for your cluster.

    From the output of this command, record the following items:
    • UUID: This value is your cluster ID.
    • Kubernetes Master IP(s): This value lists the IP addresses of all control plane nodes in the cluster.
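
    Because the cluster's BOSH deployment name begins with service-instance and includes this UUID, you can assemble the deployment name directly from the output. The following is a minimal sketch against a hypothetical sample of the output, so that it works without a live TKGI environment:

    ```shell
    # Hypothetical sample of the relevant `tkgi cluster` output fields;
    # with a live environment you would pipe the real command instead:
    #   tkgi cluster my-cluster | awk '/UUID:/ {print $2}'
    sample='UUID:                     01a234bc-d56e-7f89-01a2-3b4cde5f6789
    Kubernetes Master IP(s):  192.168.20.7'

    uuid=$(printf '%s\n' "$sample" | awk '/UUID:/ {print $2}')
    # The cluster deployment name begins with service-instance and
    # includes the UUID:
    echo "service-instance-$uuid"
    ```
    
    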

  3. Gather credential and IP address information for your BOSH Director.

  4. To log in to the BOSH Director, perform the following:

    1. SSH into the Ops Manager VM.
    2. Log in to the BOSH Director by using the BOSH CLI from the Ops Manager VM.

    For information on how to complete these steps, see Advanced Troubleshooting with the BOSH CLI.

  5. To identify the name of your cluster deployment, run the following command:

    bosh -e tkgi deployments
    

    Your cluster deployment name begins with service-instance and includes the UUID you located in a previous step.

  6. To identify the control plane VM IDs by listing the VMs in your cluster, run the following command:

    bosh -e tkgi -d CLUSTER-SI-ID vms
    

    Where CLUSTER-SI-ID is your cluster service instance ID which begins with service-instance and includes the UUID you previously located.

    For example:

    $ bosh -e tkgi -d service-instance-aa1234567bc8de9f0a1c vms
    Your control plane VM IDs are displayed in the VM CID column.
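
    The following is a minimal sketch of pulling the VM CID column out of this output, run against a hypothetical sample of the `bosh vms` table so that it works without a live director; the instance names, IPs, and CIDs are placeholders:

    ```shell
    # Hypothetical sample rows of `bosh ... vms` output: instance, state,
    # AZ, IPs, VM CID. With a live director you could also use the BOSH
    # CLI's global --json flag and a JSON tool instead of column parsing.
    sample='master/1a23bc45-d678-90ef-1a23-45bc67d890ef  running  az-1  10.0.11.10  vm-1a2b3c4d-5e6f-7890-ab12-cd34ef56ab78
    worker/2b34cd56-e789-01fa-2b34-56cd78e901fa  running  az-1  10.0.11.11  vm-2b3c4d5e-6f78-9012-bc23-de45fa67bc89'

    # Keep only the control plane (master) rows and print the VM CID column.
    printf '%s\n' "$sample" | awk '/master\// {print $NF}'
    ```
    
    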

  7. Use the control plane VM IDs and other information you gathered in this procedure to configure your load balancer backend. For example, if you use GCP, use the control plane VM IDs retrieved during the previous step in Creating and Configuring a GCP Load Balancer for Tanzu Kubernetes Grid Integrated Edition Clusters.

Next Steps

If you did not tag your new cluster during creation, tag your cluster’s VMs now. If your Tanzu Kubernetes Grid Integrated Edition deployment is on:

  • AWS: Tag your subnets with your new cluster’s unique identifier before adding the subnets to the Tanzu Kubernetes Grid Integrated Edition workload load balancer. After you complete the Create a Kubernetes Cluster procedure above, follow the instructions in AWS Prerequisites in Deploying and Exposing Basic Linux Workloads.
  • Azure, vSphere, or vSphere with NSX-T: You can use the TKGI CLI to tag clusters by following the steps in Tagging Clusters.
  • GCP: You can tag your clusters using your IaaS-provided management console.