This topic describes how to create a Kubernetes cluster with VMware Tanzu Kubernetes Grid Integrated Edition using the TKGI Command Line Interface (TKGI CLI).
Use the TKGI CLI to create Kubernetes clusters in your Tanzu Kubernetes Grid Integrated Edition environment.
To create a Tanzu Kubernetes Grid Integrated Edition Kubernetes cluster, do the following:
The tkgi create-cluster command creates a Kubernetes cluster whose TKGI compatibility matches the TKGI version of the current TKGI control plane.
Cluster access configuration differs by the type of Tanzu Kubernetes Grid Integrated Edition deployment.
Tanzu Kubernetes Grid Integrated Edition deploys a load balancer automatically when clusters are created and configures it automatically when workloads are deployed on those clusters. For more information, see Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments with NSX-T.
Note: For a complete list of the objects that Tanzu Kubernetes Grid Integrated Edition creates by default when you create a Kubernetes cluster on vSphere with NSX-T, see vSphere with NSX-T Cluster Objects.
When you create a Kubernetes cluster, you must configure external access to the cluster by creating an external TCP or HTTPS load balancer. This load balancer allows you to run TKGI CLI commands on the cluster from your local workstation. For more information, see Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments without NSX-T.
You can configure any load balancer of your choice. If you use GCP, AWS, Azure, or vSphere without NSX-T, you can create a load balancer using your cloud provider console.
For more information about configuring a Tanzu Kubernetes Grid Integrated Edition cluster load balancer, see the following:
Create the Tanzu Kubernetes Grid Integrated Edition cluster load balancer before you create the cluster. Use the load balancer IP address as the external hostname, and then point the load balancer to the IP address of the control plane virtual machine (VM) after cluster creation. If the cluster has multiple control plane nodes, you must configure the load balancer to point to all control plane VMs for the cluster.
If you are creating a cluster in a non-production environment, you can choose to create a cluster without a load balancer. Create a DNS entry that points to the IP address of the cluster’s control plane VM after cluster creation.
To locate the IP addresses and VM IDs of the control plane VMs, see Identify the Kubernetes Cluster Control Plane VM below.
Perform the following steps:
Grant cluster access to a new or existing user in UAA. For more information, see the Grant Tanzu Kubernetes Grid Integrated Edition Access to an Individual User section of Managing Tanzu Kubernetes Grid Integrated Edition Users with UAA.
On the command line, run the following command to log in:
tkgi login -a TKGI-API -u USERNAME -k

Where:

* TKGI-API is the domain name for the TKGI API that you entered in Ops Manager > Tanzu Kubernetes Grid Integrated Edition > TKGI API > API Hostname (FQDN). For example, api.tkgi.example.com.
* USERNAME is your user name.

Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider.
(Optional) To configure any of the following for a cluster, create a config file:
Container runtime: By default, new clusters are created using the containerd container runtime. To explicitly configure your cluster to use a specific container runtime, create either a JSON or YAML config file with the following content:
JSON formatted configuration file:
{
"runtime": "RUNTIME-NAME"
}
YAML formatted configuration file:
---
runtime: RUNTIME-NAME
Where RUNTIME-NAME specifies either the docker or containerd container runtime.
For more information, see Containerd Container Runtime.
Note: You must manage and monitor Docker and containerd runtime clusters differently. For more information, see Breaking Changes in the TKGI v1.12 Release Notes.
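As a concrete sketch, the YAML form above can be written to a file and later passed to tkgi create-cluster with the --config-file flag. The filename runtime-config.yml is an arbitrary choice for illustration:

```shell
# Write a cluster config file that pins the containerd runtime
# (runtime-config.yml is an arbitrary filename).
cat > runtime-config.yml <<'EOF'
---
runtime: containerd
EOF

# Sanity-check the file before passing it to the CLI, for example:
#   tkgi create-cluster my-cluster ... --config-file runtime-config.yml
grep -q '^runtime: containerd$' runtime-config.yml && echo "runtime config ok"
```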
Lock Container runtime: To explicitly lock your cluster to your specified container runtime, include the following content in your cluster creation config file:
JSON formatted configuration file:
{
"lock_container_runtime": true
}
YAML formatted configuration file:
---
lock_container_runtime: true
By default, clusters using the Docker container runtime are switched to the containerd container runtime when upgraded to TKGI v1.14. Include the lock_container_runtime parameter in your configuration file to lock a cluster to the Docker runtime so that you can switch it to the containerd runtime manually at a time of your choosing. All Docker-runtime clusters must be switched to the containerd runtime before upgrading to TKGI v1.15.
Warning: Clusters that are not locked to the Docker-runtime will automatically switch to the containerd container runtime during the TKGI v1.14 cluster upgrade.
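The runtime and lock settings can be combined in a single JSON config file. A minimal sketch (lock-config.json is an arbitrary filename), with a quick validity check before the file is used with --config-file:

```shell
# Combine the runtime and lock settings in one JSON config file
# (lock-config.json is an arbitrary filename).
cat > lock-config.json <<'EOF'
{
  "runtime": "docker",
  "lock_container_runtime": true
}
EOF

# Verify the file is well-formed JSON before using it with --config-file.
python3 -m json.tool lock-config.json > /dev/null && echo "valid JSON"
```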
To create a cluster, run the following command:
tkgi create-cluster CLUSTER-NAME \
--external-hostname HOSTNAME \
--plan PLAN-NAME \
[--num-nodes WORKER-NODES] \
[--network-profile NETWORK-PROFILE-NAME] \
[--kubernetes-profile KUBERNETES-PROFILE-NAME] \
[--config-file CONFIG-FILE-NAME] \
[--tags TAGS]
Where:

* CLUSTER-NAME is your unique name for your cluster.
  Note: The CLUSTER-NAME must not contain special characters such as &. The TKGI CLI does not validate the CLUSTER-NAME string for special characters, but cluster creation fails if one or more special characters are present. Use only lowercase characters when naming your cluster if you manage your clusters with Tanzu Mission Control (TMC); clusters with names that include an uppercase character cannot be attached to TMC.
* HOSTNAME is the external hostname for your cluster. You can use any fully qualified domain name (FQDN) or IP address you own, for example, my-cluster.example.com or 10.0.0.1. If you created an external load balancer, use its DNS hostname. If you are using NSX-T, you can pre-provision the IP address for the Kubernetes API server load balancer from the floating IP pool and define a network profile to perform DNS lookup, or specify the load balancer IP address on the command line. For details, see Defining Network Profile for DNS Lookup of Pre-Provisioned IP Addresses.
* PLAN-NAME is the plan for your cluster. Run tkgi plans to list your available plans.
* WORKER-NODES is the number of worker nodes for the cluster.
* NETWORK-PROFILE-NAME is the network profile to use for the cluster. For more information, see Using Network Profiles (NSX-T Only).
* KUBERNETES-PROFILE-NAME is the Kubernetes profile to use for the cluster. For more information, see Using Kubernetes Profiles.
* CONFIG-FILE-NAME is the configuration file to use for the cluster.
* TAGS are the labels and metadata values to apply to the VMs created in the cluster. Specify the tags as key:value pairs. For more information about tagging, see Tagging Clusters.

For example:

$ tkgi create-cluster my-cluster \
    --external-hostname my-cluster.example.com \
    --plan large --num-nodes 3
Note: It can take up to 30 minutes to create a cluster.
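Because the TKGI CLI does not validate the CLUSTER-NAME string up front, a quick pre-flight check can catch names that would fail cluster creation or TMC attachment. This sketch assumes a name should contain only lowercase letters, digits, and hyphens, based on the naming notes above:

```shell
# Pre-flight check for cluster names: lowercase letters, digits, and
# hyphens only (an assumption based on the naming notes above; the TKGI
# CLI itself does not reject bad names before attempting creation).
validate_cluster_name() {
  case "$1" in
    "")           echo "invalid" ;;
    *[!a-z0-9-]*) echo "invalid" ;;
    *)            echo "valid" ;;
  esac
}

validate_cluster_name "my-cluster"    # valid
validate_cluster_name "My&Cluster"    # invalid: uppercase and '&'
```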
To track cluster creation, run the following command:
tkgi cluster CLUSTER-NAME
Where CLUSTER-NAME is the unique name for your cluster.
For example:
$ tkgi cluster my-cluster

Name:                     my-cluster
Plan Name:                large
UUID:                     01a234bc-d56e-7f89-01a2-3b4cde5f6789
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   my-cluster.example.com
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  192.168.20.7
If the Last Action State value is error, troubleshoot by performing the following procedure:
Run the following command:
bosh tasks
For more information, see Advanced Troubleshooting with the BOSH CLI.
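When scripting around cluster creation, the Last Action State field can be pulled out of the tkgi cluster output with standard text tools. A minimal sketch against a captured sample (the field layout follows the example output above):

```shell
# Sample output captured from 'tkgi cluster my-cluster' (abridged).
output='Name:                     my-cluster
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed'

# Extract the value of the 'Last Action State' field.
state=$(printf '%s\n' "$output" | sed -n 's/^Last Action State:[[:space:]]*//p')
echo "$state"   # succeeded
```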
Depending on your deployment:

* vSphere with NSX-T: Resolve the cluster hostname to the IP address of the cluster load balancer, for example through resolv.conf or via DNS registration, and update kubeconfig with the IP address assigned to the load balancer dedicated to the cluster. To locate this IP address, run the tkgi cluster CLUSTER-NAME command.
* vSphere without NSX-T: Use the tkgi cluster command to locate the control plane node IP addresses and ports.
* GCP: Use the tkgi cluster command to locate the control plane node IP addresses and ports, and then continue to Creating and Configuring a GCP Load Balancer for Tanzu Kubernetes Grid Integrated Edition Clusters in Configuring a GCP Load Balancer for Tanzu Kubernetes Grid Integrated Edition Clusters.

Note: For clusters with multiple control plane node VMs, health checks on port 8443 are recommended.
To access your cluster, run the following command:
tkgi get-credentials CLUSTER-NAME
Where CLUSTER-NAME
is the unique name for your cluster.
For example:
$ tkgi get-credentials tkgi-example-cluster

Fetching credentials for cluster tkgi-example-cluster.
Context set for cluster tkgi-example-cluster.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>

The tkgi get-credentials command creates a local kubeconfig that allows you to manage the cluster. For more information about the tkgi get-credentials command, see Retrieving Cluster Credentials and Configuration.

Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider.
To confirm you can access your cluster using the Kubernetes CLI, run the following command:
kubectl cluster-info
See Managing Tanzu Kubernetes Grid Integrated Edition for information about checking cluster health and viewing cluster logs.
To review the status, container runtime, or other information about the nodes in your cluster, run the following command:
kubectl get nodes -o wide
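If you want to report just the runtime per node, the wide output can be reduced with awk. This sketch runs against a captured sample; the column layout is an assumption based on current kubectl behavior, where the container runtime is the last column:

```shell
# Sample 'kubectl get nodes -o wide' output (abridged; the container
# runtime appearing as the last column is an assumption about the
# current kubectl column layout).
nodes='NAME      STATUS   VERSION    CONTAINER-RUNTIME
node-1    Ready    v1.22.6    containerd://1.5.9
node-2    Ready    v1.22.6    containerd://1.5.9'

# Print each node name with its container runtime, skipping the header.
printf '%s\n' "$nodes" | awk 'NR > 1 { print $1, $NF }'
```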
Note: This section applies only to Tanzu Kubernetes Grid Integrated Edition deployments on GCP or on vSphere without NSX-T. Skip this section if your Tanzu Kubernetes Grid Integrated Edition deployment is on vSphere with NSX-T, AWS, or Azure. For more information, see Load Balancers in Tanzu Kubernetes Grid Integrated Edition.
To reconfigure the load balancer or DNS record for an existing cluster, you may need to locate VM ID and IP address information for the cluster's control plane VMs. Use the information you locate in this procedure when configuring your load balancer backend.

To locate the IP addresses and VM IDs for the control plane VMs of an existing cluster, do the following:
On the command line, run the following command to log in:
tkgi login -a TKGI-API -u USERNAME -k

Where:

* TKGI-API is the domain name for the TKGI API that you entered in Ops Manager > Tanzu Kubernetes Grid Integrated Edition > TKGI API > API Hostname (FQDN). For example, api.tkgi.example.com.
* USERNAME is your user name.

Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider.
To locate the cluster ID and control plane node IP addresses, run the following command:
tkgi cluster CLUSTER-NAME
Where CLUSTER-NAME
is the unique name for your cluster.
From the output of this command, record the following items:
* UUID: This value is your cluster ID.
* Kubernetes Master IP(s): This value lists the IP addresses of all control plane nodes in the cluster.
Gather credential and IP address information for your BOSH Director.
To log in to the BOSH Director, perform the following:
For information about how to complete these steps, see Advanced Troubleshooting with the BOSH CLI.
To identify the name of your cluster deployment, run the following command:
bosh -e tkgi deployments
Your cluster deployment name begins with service-instance and includes the UUID you located in a previous step.
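Given the UUID recorded earlier, the matching deployment can be picked out of the bosh deployments listing by substring match. A minimal sketch against sample data (the deployment names below are illustrative, not actual output):

```shell
# UUID recorded from the 'tkgi cluster' output (example value from above).
uuid="01a234bc-d56e-7f89-01a2-3b4cde5f6789"

# Sample deployment names as listed by 'bosh -e tkgi deployments'
# (illustrative only; the exact naming may differ in your environment).
deployments='pivotal-container-service-abc123
service-instance_01a234bc-d56e-7f89-01a2-3b4cde5f6789'

# Select the deployment whose name contains the cluster UUID.
cluster_si_id=$(printf '%s\n' "$deployments" | grep "$uuid")
echo "$cluster_si_id"
```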
To identify the control plane VM IDs by listing the VMs in your cluster, run the following command:
bosh -e tkgi -d CLUSTER-SI-ID vms
Where CLUSTER-SI-ID is your cluster service instance ID, which begins with service-instance and includes the UUID you previously located.
For example:
$ bosh -e tkgi -d service-instance-aa1234567bc8de9f0a1c vms

Your control plane VM IDs are displayed in the VM CID column.
If you did not tag your new cluster during creation, tag your cluster’s VMs now. If your Tanzu Kubernetes Grid Integrated Edition deployment is on: