After you have deployed Tanzu Kubernetes clusters, you use the tkg get cluster and tkg get credentials commands to obtain the list of running clusters and their credentials. Then, you can connect to the clusters by using kubectl and start working with them.

Obtain the List of Deployed Tanzu Kubernetes Clusters

To see the list of Tanzu Kubernetes Grid management clusters and the Tanzu Kubernetes clusters that they are managing, use the tkg get management-cluster and tkg get cluster commands.

  1. If you have deployed more than one management cluster, run tkg get management-cluster to see the list of management clusters.

    tkg get management-cluster
    

    If you deployed two management clusters, named vsphere-mgmt-cluster and aws-mgmt-cluster, you will see the following output:

    MANAGEMENT-CLUSTER-NAME  CONTEXT-NAME                                    
    vsphere-mgmt-cluster *   vsphere-mgmt-cluster-admin@vsphere-mgmt-cluster  
    aws-mgmt-cluster         aws-mgmt-cluster-admin@aws-mgmt-cluster   
    

    The management cluster that is the current context of the Tanzu Kubernetes Grid CLI is marked with an asterisk (*).

  2. To change the context of the Tanzu Kubernetes Grid CLI to a different management cluster, run the tkg set management-cluster command.

    tkg set management-cluster aws-mgmt-cluster   
    
  3. To list all of the Tanzu Kubernetes clusters that are running in the default namespace of this management cluster, run the tkg get cluster command.

    tkg get cluster
    

    The output lists all of the Tanzu Kubernetes clusters that are managed by the management cluster: each cluster's name, the namespace in which it is running, its current status, the numbers of actual and requested control plane and worker nodes, and the Kubernetes version that it is running.

    NAME              NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES       
    vsphere-cluster   default    running  1/1           1/1      v1.18.2+vmware.1 
    vsphere-cluster2  default    running  3/3           3/3      v1.18.2+vmware.1 
    

    Clusters can be in the following states:

    • creating: The control plane is being created
    • createStalled: The process of creating the control plane has stalled
    • deleting: The cluster is in the process of being deleted
    • failed: The creation of the control plane has failed
    • running: The control plane has initialized fully
    • updating: The cluster is in the process of rolling out an update or is scaling nodes
    • updateFailed: The cluster update process failed
    • updateStalled: The cluster update process has stalled
    • No status: The creation of the cluster has not started yet

    If a cluster is in a stalled state, check that there is network connectivity to the external registry, make sure that there are sufficient resources on the target platform for the operation to complete, and ensure that DHCP is issuing IPv4 addresses correctly.

  4. To list only those clusters that are running in a given namespace, specify the --namespace option.

    tkg get cluster --namespace=my-namespace
    
  5. To include the management cluster in the output of tkg get cluster, specify the --include-management-cluster option.

    tkg get cluster --include-management-cluster
    

    You can see that the management cluster is running in the tkg-system namespace.

    NAME                  NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES       
    vsphere-cluster       default     running  1/1           1/1      v1.18.2+vmware.1 
    vsphere-cluster2      default     running  3/3           3/3      v1.18.2+vmware.1 
    vsphere-mgmt-cluster  tkg-system  running  1/1           1/1      v1.18.2+vmware.1 
    

Export Tanzu Kubernetes Cluster Details to a File

You can export the details of the clusters that are managed by a management cluster in either JSON or YAML format. You can save the JSON or YAML to a file so that you can use it in scripts to run bulk operations on clusters.

  1. To export cluster details as JSON, run tkg get cluster with the --output option, specifying json.

    tkg get cluster --output json
    

    The output shows the cluster information as JSON:

    [
     {
       "name": "vsphere-cluster",
       "namespace": "default",
       "status": "running",
       "plan": "",
       "controlplane": "1/1",
       "workers": "1/1",
       "kubernetes": "v1.18.2+vmware.1"
     },
     {
       "name": "vsphere-cluster2",
       "namespace": "default",
       "status": "running",
       "plan": "",
       "controlplane": "3/3",
       "workers": "3/3",
       "kubernetes": "v1.18.2+vmware.1"
     }
    ]
    
  2. To export cluster details as YAML, run tkg get cluster with the --output option, specifying yaml.

    tkg get cluster --output yaml
    

    The output shows the cluster information as YAML:

    - name: vsphere-cluster
      namespace: default
      status: running
      plan: ""
      controlplane: 1/1
      workers: 1/1
      kubernetes: v1.18.2+vmware.1
    - name: vsphere-cluster2
      namespace: default
      status: running
      plan: ""
      controlplane: 3/3
      workers: 3/3
      kubernetes: v1.18.2+vmware.1
    
  3. Save the output to a file.

    tkg get cluster --output json > clusters.json
    
    tkg get cluster --output yaml > clusters.yaml
    

Obtain Tanzu Kubernetes Cluster Credentials

After you create a Tanzu Kubernetes cluster, you obtain the kubeconfig of the deployed cluster by running the tkg get credentials command.

  1. To automatically add the credentials of a cluster to your kubeconfig file, specify the name of the cluster when you run tkg get credentials.

    tkg get credentials my-cluster
    

    You should see the following output:

    Credentials of workload cluster my-cluster have been saved
    You can now access the cluster by switching the context to my-cluster-admin@my-cluster under /root/.kube/config
    

    If the cluster is running in a namespace other than the default namespace, you must specify the --namespace option to get the credentials of that cluster.

    tkg get credentials my-cluster --namespace=my-namespace
    

    To save the credentials in a separate kubeconfig file, for example to distribute them to developers, specify the --export-file option.

    tkg get credentials my-cluster --export-file my-cluster-credentials
    

IMPORTANT: By default, unless you specify the --export-file option to save the kubeconfig for a cluster to a specific file, the credentials for all clusters that you deploy from the Tanzu Kubernetes Grid CLI are added to a shared kubeconfig file. If you delete the shared kubeconfig file, you lose access to all of those clusters.
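
To reduce that risk, you can keep one kubeconfig file per cluster and point kubectl at it through the KUBECONFIG environment variable, leaving the shared file untouched. The following is a minimal sketch: the paths are hypothetical, and a stand-in file simulates what tkg get credentials --export-file would write for a real cluster.

```shell
# Hypothetical layout: one kubeconfig file per cluster.
mkdir -p ./kubeconfigs

# For a real cluster you would run:
#   tkg get credentials my-cluster --export-file ./kubeconfigs/my-cluster
# Here, a minimal stand-in file simulates the exported kubeconfig.
printf 'apiVersion: v1\nkind: Config\n' > ./kubeconfigs/my-cluster

# Point kubectl at the per-cluster file, leaving the shared
# ~/.kube/config untouched.
KUBECONFIG=./kubeconfigs/my-cluster
export KUBECONFIG
echo "kubectl now reads $KUBECONFIG"
```

With this layout, deleting or regenerating one cluster's file does not affect access to any other cluster.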

Examine the Deployed Cluster

  1. After you have added the credentials to your kubeconfig, you can connect to the cluster by using kubectl.

    kubectl config use-context my-cluster-admin@my-cluster
    
  2. Use kubectl to see the status of the nodes in the cluster.

    kubectl get nodes
    

    For example, if you deployed my-prod-cluster with the prod plan and the default three control plane nodes and three worker nodes, as described in Deploy a Cluster with a Highly Available Control Plane, you see the following output.

    NAME                                    STATUS   ROLES    AGE     VERSION
    my-prod-cluster-gp4rl                   Ready    master   8m51s   v1.18.2+vmware.1
    my-prod-cluster-md-0-6946bcb48b-dk7m6   Ready    <none>   6m45s   v1.18.2+vmware.1
    my-prod-cluster-md-0-6946bcb48b-dq8s9   Ready    <none>   7m23s   v1.18.2+vmware.1
    my-prod-cluster-md-0-6946bcb48b-nrdlp   Ready    <none>   7m8s    v1.18.2+vmware.1
    my-prod-cluster-n8bh7                   Ready    master   5m58s   v1.18.2+vmware.1
    my-prod-cluster-xflrg                   Ready    master   3m39s   v1.18.2+vmware.1
    

    Because networking with Calico is enabled by default in Tanzu Kubernetes clusters, all nodes reach the Ready state without requiring any additional configuration.

  3. Use kubectl to see the status of the pods running in the cluster.

    kubectl get pods -A
    

    If you deployed the my-prod-cluster to vSphere, you see the following pods running in the kube-system namespace in the cluster.

    NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
    kube-system   calico-kube-controllers-7986b8994b-kph5f        1/1     Running   0          18m
    kube-system   calico-node-96xkq                               1/1     Running   0          17m
    kube-system   calico-node-dp887                               1/1     Running   0          18m
    kube-system   calico-node-gvh5b                               1/1     Running   0          16m
    kube-system   calico-node-m6xgw                               1/1     Running   0          16m
    kube-system   calico-node-pbz5h                               1/1     Running   0          17m
    kube-system   calico-node-q6zh8                               1/1     Running   0          13m
    kube-system   coredns-5c4f46bfcb-dhm7s                        1/1     Running   0          18m
    kube-system   coredns-5c4f46bfcb-hlkks                        1/1     Running   0          18m
    kube-system   etcd-my-prod-cluster-gp4rl                      1/1     Running   0          18m
    kube-system   etcd-my-prod-cluster-n8bh7                      1/1     Running   0          15m
    kube-system   etcd-my-prod-cluster-xflrg                      1/1     Running   0          13m
    kube-system   kube-apiserver-my-prod-cluster-gp4rl            1/1     Running   0          18m
    kube-system   kube-apiserver-my-prod-cluster-n8bh7            1/1     Running   0          16m
    kube-system   kube-apiserver-my-prod-cluster-xflrg            1/1     Running   0          13m
    kube-system   kube-controller-manager-my-prod-cluster-gp4rl   1/1     Running   1          18m
    kube-system   kube-controller-manager-my-prod-cluster-n8bh7   1/1     Running   2          16m
    kube-system   kube-controller-manager-my-prod-cluster-xflrg   1/1     Running   0          13m
    kube-system   kube-proxy-68fkt                                1/1     Running   0          16m
    kube-system   kube-proxy-dc4kf                                1/1     Running   0          17m
    kube-system   kube-proxy-fnjkg                                1/1     Running   0          13m
    kube-system   kube-proxy-g2kq6                                1/1     Running   0          16m
    kube-system   kube-proxy-r48c8                                1/1     Running   0          17m
    kube-system   kube-proxy-x55vb                                1/1     Running   0          18m
    kube-system   kube-scheduler-my-prod-cluster-gp4rl            1/1     Running   2          18m
    kube-system   kube-scheduler-my-prod-cluster-n8bh7            1/1     Running   1          15m
    kube-system   kube-scheduler-my-prod-cluster-xflrg            1/1     Running   0          13m
    kube-system   vsphere-cloud-controller-manager-6x98w          1/1     Running   3          18m
    kube-system   vsphere-cloud-controller-manager-gzmmd          1/1     Running   0          15m
    kube-system   vsphere-cloud-controller-manager-rmtmq          1/1     Running   0          13m
    kube-system   vsphere-csi-controller-0                        5/5     Running   2          18m
    kube-system   vsphere-csi-node-6r64z                          3/3     Running   1          17m
    kube-system   vsphere-csi-node-bt78l                          3/3     Running   0          17m
    kube-system   vsphere-csi-node-l8t5n                          3/3     Running   0          16m
    kube-system   vsphere-csi-node-qwr4w                          3/3     Running   0          15m
    kube-system   vsphere-csi-node-rp9qd                          3/3     Running   0          16m
    kube-system   vsphere-csi-node-vjqsh                          3/3     Running   0          12m
    

    You can see from the list above that the following services are running in the cluster:

    • Calico container networking
    • CoreDNS
    • etcd
    • The Kubernetes API server, controller manager, kube-proxy, and scheduler
    • The vSphere cloud controller manager
    • The vSphere CSI controller and node services
