After you have deployed Tanzu Kubernetes clusters, you use the tanzu cluster list and tanzu cluster kubeconfig get commands to obtain the list of running clusters and their credentials. Then, you can connect to the clusters by using kubectl and start working with them.
To see lists of Tanzu Kubernetes clusters and the management clusters that manage them, use the tanzu cluster list command.

To list all of the Tanzu Kubernetes clusters that are running in the default namespace of this management cluster, run the tanzu cluster list command.
tanzu cluster list
The output lists all of the Tanzu Kubernetes clusters that are managed by the management cluster: the cluster names, the namespace in which they are running, their current status, the numbers of actual and requested control plane and worker nodes, and the Kubernetes version that each cluster is running.
NAME              NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
vsphere-cluster   default    running  1/1           1/1      v1.22.9+vmware.1  <none>
vsphere-cluster2  default    running  1/1           1/1      v1.22.9+vmware.1  <none>
my-vsphere-tkc    default    running  1/1           1/1      v1.22.9+vmware.1  <none>
Clusters can be in the following states:

creating: The control plane is being created.
createStalled: The process of creating the control plane has stalled.
deleting: The cluster is in the process of being deleted.
failed: The creation of the control plane has failed.
running: The control plane has initialized fully.
updating: The cluster is in the process of rolling out an update or is scaling nodes.
updateFailed: The cluster update process failed.
updateStalled: The cluster update process has stalled.

If a cluster is in a stalled state, check that there is network connectivity to the external registry, make sure that there are sufficient resources on the target platform for the operation to complete, and ensure that DHCP is issuing IPv4 addresses correctly.
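As a further troubleshooting sketch, assuming your kubectl context points at the management cluster and the stalled cluster is named vsphere-cluster in the default namespace, you can inspect the Cluster API objects that back the cluster; their status conditions usually indicate what the operation is waiting on.

# Assumes the current kubectl context is the management cluster.
kubectl get cluster vsphere-cluster -n default -o yaml
# Look for machines that are stuck in the Provisioning phase.
kubectl get machines -n default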
To list only those clusters that are running in a given namespace, specify the --namespace option.
tanzu cluster list --namespace=my-namespace
To include the current management cluster in the output of tanzu cluster list, specify the --include-management-cluster option.
tanzu cluster list --include-management-cluster
You can see that the management cluster is running in the tkg-system namespace and has the management role.
NAME                  NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
vsphere-cluster       default     running  1/1           1/1      v1.22.9+vmware.1  <none>
vsphere-cluster2      default     running  3/3           3/3      v1.22.9+vmware.1  <none>
vsphere-mgmt-cluster  tkg-system  running  1/1           1/1      v1.22.9+vmware.1  management
To see all of the management clusters and change the context of the Tanzu CLI to a different management cluster, run the tanzu login command. See List Management Clusters and Change Context for more information.
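For example, running the command with no arguments prompts you to choose from the management clusters that the CLI already knows about:

tanzu login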
You can export the details of the clusters that are managed by a management cluster in either JSON or YAML format. You can save the JSON or YAML to a file so that you can use it in scripts to run bulk operations on clusters.
To export cluster details as JSON, run tanzu cluster list with the --output option, specifying json.
tanzu cluster list --output json
The output shows the cluster information as JSON:
[
  {
    "name": "vsphere-cluster",
    "namespace": "default",
    "status": "running",
    "plan": "",
    "controlplane": "1/1",
    "workers": "1/1",
    "kubernetes": "v1.22.9+vmware.1",
    "roles": []
  },
  {
    "name": "vsphere-cluster2",
    "namespace": "default",
    "status": "running",
    "plan": "",
    "controlplane": "3/3",
    "workers": "3/3",
    "kubernetes": "v1.22.9+vmware.1",
    "roles": []
  }
]
To export cluster details as YAML, run tanzu cluster list with the --output option, specifying yaml.
tanzu cluster list --output yaml
The output shows the cluster information as YAML:
- name: vsphere-cluster
  namespace: default
  status: running
  plan: ""
  controlplane: 1/1
  workers: 1/1
  kubernetes: v1.22.9+vmware.1
  roles: []
- name: vsphere-cluster2
  namespace: default
  status: running
  plan: ""
  controlplane: 3/3
  workers: 3/3
  kubernetes: v1.22.9+vmware.1
  roles: []
Save the output to a file.
tanzu cluster list --output json > clusters.json
tanzu cluster list --output yaml > clusters.yaml
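The exported JSON is convenient for scripting bulk operations. For example, the following sketch, which assumes the jq utility is installed, loops over every cluster name in the output and exports a standalone admin kubeconfig file for each cluster:

# Illustrative bulk operation: export one admin kubeconfig file per cluster.
tanzu cluster list --output json | jq -r '.[].name' | while read -r cluster; do
  tanzu cluster kubeconfig get "$cluster" --admin --export-file "kubeconfig-$cluster"
done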
For how to save the details of multiple management clusters, including their context and kubeconfig files, see Save Management Cluster Details to a File.
Retrieve Tanzu Kubernetes Cluster kubeconfig
After you create a Tanzu Kubernetes cluster, you can obtain its cluster, context, and user kubeconfig settings by running the tanzu cluster kubeconfig get command, specifying the name of the cluster.

By default, the command adds the cluster’s kubeconfig settings to your current kubeconfig file.
To generate a standalone admin kubeconfig file with embedded credentials, add the --admin option. This kubeconfig file grants its user full access to the cluster’s resources and lets them access the cluster without logging in to an identity provider.

Important: If identity management is not configured on the cluster, you must specify the --admin option.
tanzu cluster kubeconfig get my-cluster --admin
You should see the following output:
You can now access the cluster by running 'kubectl config use-context my-cluster-admin@my-cluster'
If identity management and role-based access control (RBAC) are configured on a cluster, you can generate a standard, non-admin kubeconfig that requires the user to authenticate with your external identity provider and grants them access to cluster resources based on their assigned roles. In this case, run tanzu cluster kubeconfig get without the --admin option.
tanzu cluster kubeconfig get my-cluster
You should see the following output:
You can now access the cluster by running 'kubectl config use-context tanzu-cli-my-cluster@my-cluster'
If the cluster is running in a namespace other than the default namespace, you must specify the --namespace option to get the credentials of that cluster.
tanzu cluster kubeconfig get my-cluster --namespace=my-namespace
To save the configuration information in a standalone kubeconfig file, for example to distribute it to developers, specify the --export-file option. This kubeconfig file requires the user to authenticate with an external identity provider and grants access to cluster resources based on their assigned roles.
tanzu cluster kubeconfig get my-cluster --export-file my-cluster-credentials
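A developer who receives the exported file can use it directly, without merging it into their own configuration, by passing it to the standard --kubeconfig option of kubectl. For example:

# Uses the standalone file from the previous step; with a non-admin
# kubeconfig, kubectl first authenticates you with the identity provider.
kubectl get nodes --kubeconfig my-cluster-credentials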
Important: By default, unless you specify the --export-file option to save the kubeconfig for a cluster to a specific file, the credentials for all clusters that you deploy from the Tanzu CLI are added to a shared kubeconfig file. If you delete the shared kubeconfig file, all clusters become unusable.
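To review the contexts that have accumulated in your shared kubeconfig file, list them with kubectl:

kubectl config get-contexts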
To retrieve a kubeconfig for a management cluster, run tanzu mc kubeconfig get as described in Retrieve Management Cluster kubeconfig.
After you have added the credentials to your kubeconfig, you can connect to the cluster by using kubectl.
kubectl config use-context my-cluster-admin@my-cluster
Use kubectl to see the status of the nodes in the cluster.
kubectl get nodes
For example, if you deployed my-prod-cluster with the prod plan and the default three control plane nodes and three worker nodes, you see the following output.
NAME                                   STATUS  ROLES   AGE    VERSION
my-prod-cluster-control-plane-gp4rl    Ready   master  8m51s  v1.22.9+vmware.1
my-prod-cluster-control-plane-n8bh7    Ready   master  5m58s  v1.22.9+vmware.1
my-prod-cluster-control-plane-xflrg    Ready   master  3m39s  v1.22.9+vmware.1
my-prod-cluster-md-0-6946bcb48b-dk7m6  Ready   <none>  6m45s  v1.22.9+vmware.1
my-prod-cluster-md-0-6946bcb48b-dq8s9  Ready   <none>  7m23s  v1.22.9+vmware.1
my-prod-cluster-md-0-6946bcb48b-nrdlp  Ready   <none>  7m8s   v1.22.9+vmware.1
Because networking with Antrea is enabled by default in Tanzu Kubernetes clusters, all clusters are in the Ready state without requiring any additional configuration.
Use kubectl to see the status of the pods running in the cluster.
kubectl get pods -A
The example below shows the pods running in the kube-system namespace in the my-prod-cluster cluster on vSphere.
NAMESPACE    NAME                                                          READY  STATUS   RESTARTS  AGE
kube-system  antrea-agent-2mw42                                            2/2    Running  0         4h41m
kube-system  antrea-agent-4874z                                            2/2    Running  1         4h45m
kube-system  antrea-agent-9qfr6                                            2/2    Running  0         4h48m
kube-system  antrea-agent-cf7cf                                            2/2    Running  0         4h46m
kube-system  antrea-agent-j84mz                                            2/2    Running  0         4h46m
kube-system  antrea-agent-rklbg                                            2/2    Running  0         4h46m
kube-system  antrea-controller-5d594c5cc7-5pttm                            1/1    Running  0         4h48m
kube-system  coredns-5bcf65484d-7dp8d                                      1/1    Running  0         4h48m
kube-system  coredns-5bcf65484d-pzw8p                                      1/1    Running  0         4h48m
kube-system  etcd-my-prod-cluster-control-plane-frsgd                      1/1    Running  0         4h48m
kube-system  etcd-my-prod-cluster-control-plane-khld4                      1/1    Running  0         4h44m
kube-system  etcd-my-prod-cluster-control-plane-sjvx7                      1/1    Running  0         4h41m
kube-system  kube-apiserver-my-prod-cluster-control-plane-frsgd            1/1    Running  0         4h48m
kube-system  kube-apiserver-my-prod-cluster-control-plane-khld4            1/1    Running  1         4h45m
kube-system  kube-apiserver-my-prod-cluster-control-plane-sjvx7            1/1    Running  0         4h41m
kube-system  kube-controller-manager-my-prod-cluster-control-plane-frsgd  1/1    Running  1         4h48m
kube-system  kube-controller-manager-my-prod-cluster-control-plane-khld4  1/1    Running  0         4h45m
kube-system  kube-controller-manager-my-prod-cluster-control-plane-sjvx7  1/1    Running  0         4h41m
kube-system  kube-proxy-hzqlt                                              1/1    Running  0         4h48m
kube-system  kube-proxy-jr4w6                                              1/1    Running  0         4h45m
kube-system  kube-proxy-lx8bp                                              1/1    Running  0         4h46m
kube-system  kube-proxy-rzbgh                                              1/1    Running  0         4h46m
kube-system  kube-proxy-s684n                                              1/1    Running  0         4h41m
kube-system  kube-proxy-z9v9t                                              1/1    Running  0         4h46m
kube-system  kube-scheduler-my-prod-cluster-control-plane-frsgd            1/1    Running  1         4h48m
kube-system  kube-scheduler-my-prod-cluster-control-plane-khld4            1/1    Running  0         4h45m
kube-system  kube-scheduler-my-prod-cluster-control-plane-sjvx7            1/1    Running  0         4h41m
kube-system  kube-vip-my-prod-cluster-control-plane-frsgd                  1/1    Running  1         4h48m
kube-system  kube-vip-my-prod-cluster-control-plane-khld4                  1/1    Running  0         4h45m
kube-system  kube-vip-my-prod-cluster-control-plane-sjvx7                  1/1    Running  0         4h41m
kube-system  vsphere-cloud-controller-manager-4nlsw                        1/1    Running  0         4h41m
kube-system  vsphere-cloud-controller-manager-gw7ww                        1/1    Running  2         4h48m
kube-system  vsphere-cloud-controller-manager-vp968                        1/1    Running  0         4h44m
kube-system  vsphere-csi-controller-555595b64c-l82kb                       5/5    Running  3         4h48m
kube-system  vsphere-csi-node-5zq47                                        3/3    Running  0         4h41m
kube-system  vsphere-csi-node-8fzrg                                        3/3    Running  0         4h46m
kube-system  vsphere-csi-node-8zs5l                                        3/3    Running  0         4h45m
kube-system  vsphere-csi-node-f2v55                                        3/3    Running  0         4h46m
kube-system  vsphere-csi-node-khtwv                                        3/3    Running  0         4h48m
kube-system  vsphere-csi-node-shtqj                                        3/3    Running  0         4h46m
You can see from the example above that the following services are running in the my-prod-cluster cluster:

coredns, for DNS
etcd, for key-value storage
kube-apiserver, the Kubernetes API server
kube-proxy, the Kubernetes network proxy
kube-scheduler, for scheduling and availability
vsphere-cloud-controller-manager, the Kubernetes cloud provider for vSphere
kube-vip, which provides load balancing services for the cluster API server
vsphere-csi-controller and vsphere-csi-node, the container storage interface for vSphere

A standard, non-admin user can retrieve a workload cluster’s kubeconfig by using the Tanzu CLI, as described in this section. This workflow is different from how an admin user, who created the cluster’s management cluster, retrieves this information by using the system that was used to create the management cluster. Admin users can also use this procedure if they are retrieving the kubeconfig from a system different from the one that was used to create the management cluster.
Before you perform this task, ensure that you have the Tanzu CLI installed and that your platform administrator has provided you with the control plane endpoint and name of the management cluster.
On the Tanzu CLI, run the following command:
tanzu login --endpoint https://MANAGEMENT-CLUSTER-CONTROL-PLANE-ENDPOINT:PORT --name SERVER-NAME
Where:

PORT is 6443. If the platform administrator set CLUSTER_API_SERVER_PORT or VSPHERE_CONTROL_PLANE_ENDPOINT_PORT when deploying the cluster, use the port number defined in the variable.
SERVER-NAME is the name of your management cluster server.

If identity management is configured on the management cluster, the login screen for the identity management provider (LDAP or OIDC) opens in your default browser.
Log in to the identity management provider.
On the Tanzu CLI, run the following command to obtain the workload cluster context:
tanzu cluster kubeconfig get MY-WORKLOAD-CLUSTER
For more information on obtaining the workload cluster context, see Retrieve Tanzu Kubernetes Cluster kubeconfig.
Run the following command to switch to the workload cluster:
kubectl config use-context tanzu-cli-MY-WORKLOAD-CLUSTER@MY-WORKLOAD-CLUSTER
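To check what your assigned roles allow you to do on the cluster, you can run a quick, non-destructive query. For example:

# With a non-admin kubeconfig, kubectl authenticates you with the
# identity provider before evaluating the RBAC rules.
kubectl auth can-i list pods --namespace default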
In your subsequent logins to the Tanzu CLI, you will see an option to choose your Tanzu Kubernetes Grid environment from a list that pops up after you enter tanzu login.
To understand how to deploy an application on your workload cluster, expose it publicly, and access it online, see Tutorial: Example Application Deployment.