During the deployment of the management cluster, either from the installer interface or from the CLI, Tanzu Kubernetes Grid creates a temporary management cluster using a Kubernetes in Docker (kind) cluster in the bootstrap environment. Tanzu Kubernetes Grid then uses this temporary cluster to provision the final management cluster on the platform of your choice, either vSphere or Amazon EC2. After the deployment of the management cluster finishes successfully, Tanzu Kubernetes Grid deletes the temporary kind cluster.
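
While the deployment is in progress, you can observe the temporary kind cluster on the bootstrap machine. A minimal check, assuming the Docker CLI is available there:

    # While tkg init is running, the temporary kind cluster appears as a
    # running container; it disappears again after the deployment completes.
    docker ps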

Tanzu Kubernetes Grid saves the configuration of your management cluster in the ~/.tkg/config.yaml file. Tanzu Kubernetes Grid also creates a folder named ~/.tkg/providers, which contains all of the files that Cluster API requires to create the management cluster.
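
For example, to review what Tanzu Kubernetes Grid saved on the bootstrap machine:

    # Review the saved management cluster configuration.
    cat ~/.tkg/config.yaml

    # List the provider files that Cluster API uses to create the management cluster.
    ls ~/.tkg/providers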

IMPORTANT: By default, unless you specify the --kubeconfig option to save the kubeconfig for a cluster to a specific file, all clusters that you deploy from the Tanzu Kubernetes Grid CLI are added to a shared ~/.kube-tkg/config file. If you delete the shared ~/.kube-tkg/config file, all management clusters become orphaned and thus unusable.
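
Because the shared kubeconfig file is critical, consider backing it up before you experiment. A minimal sketch, assuming the default file location; the backup file name is illustrative:

    # Keep a copy of the shared kubeconfig so that management cluster contexts are not lost.
    cp ~/.kube-tkg/config ~/.kube-tkg/config.backup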

Management Cluster Networking

When you deploy a Tanzu Kubernetes Grid management cluster, pod-to-pod networking with Calico is automatically enabled in the management cluster.
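
After you connect kubectl to the management cluster, as described in Connect kubectl to List Management Clusters below, you can confirm that Calico is running. This sketch assumes the standard Calico pod label and the kube-system namespace:

    # The Calico node agents typically run as a DaemonSet in kube-system.
    kubectl get pods -n kube-system -l k8s-app=calico-node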

If you deployed the management cluster to vSphere, you can give the management cluster node VMs static IP addresses by configuring static DHCP reservations for the HA Proxy load balancer VM and the control plane VM or VMs.

Verify the Deployment of the Management Cluster

After the deployment of the management cluster completes successfully, a control plane VM and one or more worker node VMs are present in your vSphere inventory, or the equivalent instances are present in Amazon EC2. You can obtain information about your management cluster by running the tkg get management-cluster command and by locating the created artifacts in either vSphere or your Amazon EC2 dashboard.

By default, the Tanzu Kubernetes Grid CLI uses the context of the most recently deployed management cluster, unless you manually change the context to another management cluster by using the tkg set management-cluster command.
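
For example, to point the CLI at a different management cluster (the cluster name here is illustrative, and the command is assumed to take the management cluster name as its argument):

    tkg set management-cluster other-management-cluster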

  1. To facilitate the deployment of similar management clusters in the future, make a copy of the ~/.tkg/config.yaml file.
  2. View the management cluster objects in either vSphere or Amazon EC2.

    • If you deployed the management cluster to vSphere, go to the resource pool that you designated when you deployed the management cluster.
    • If you deployed the management cluster to Amazon EC2, go to the Instances view of your EC2 dashboard. You can also list the instances with the AWS CLI, as shown in the sketch after this list.

    You should see the following VMs or instances. If you did not specify a name for the management cluster, cluster_name is something similar to tkg-mgmt-vsphere-20200323121503 or tkg-mgmt-aws-20200323140554.

    • vSphere with a development control plane:
      • A control plane VM with a name similar to cluster_name-control-plane-sx5rp
      • A worker node VM with a name similar to cluster_name-md-0-6b8db6b59d-kbnk4
      • A load balancer VM with the name cluster_name-tkg-system-lb
    • vSphere with a production control plane:
      • Three control plane VMs with names similar to cluster_name-control-plane-9tzxl
      • A worker node VM with a name similar to cluster_name-md-0-787f688d8b-djhsz
      • A load balancer VM with the name cluster_name-tkg-system-lb
    • Amazon EC2 with a development control plane:
      • A control plane instance with a name similar to cluster_name-control-plane-bcpfp
      • A worker node instance with a name similar to cluster_name-md-0-dwfnm
      • An EC2 bastion host instance with the name cluster_name-bastion
    • Amazon EC2 with a production control plane:
      • Three control plane instances with names similar to cluster_name-control-plane-xfbt9
      • A worker node instance with a name similar to cluster_name-md-0-b8dch
      • An EC2 bastion host instance with the name cluster_name-bastion
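
    If you deployed to Amazon EC2 and prefer the command line to the dashboard, the following sketch lists the instances with the AWS CLI. It assumes that the instances carry a Name tag that begins with cluster_name, as in the names above:

    # List the management cluster instances by their Name tag.
    aws ec2 describe-instances \
      --filters "Name=tag:Name,Values=cluster_name*" \
      --query "Reservations[].Instances[].[InstanceId,State.Name]" \
      --output table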

Connect kubectl to List Management Clusters

The Tanzu Kubernetes Grid CLI provides commands that facilitate many of the operations that you can perform with your management cluster. However, for certain operations, you still need to use kubectl. By default, Tanzu Kubernetes Grid sets the kubectl context to a new management cluster when you deploy it.

  1. On the bootstrap environment machine on which you ran tkg init, run the tkg get management-cluster command to see the context of the management cluster that you have deployed.

    tkg get management-cluster
    

    If you deployed a management cluster named my-management-cluster, you will see the following output:

    MANAGEMENT-CLUSTER-NAME  CONTEXT-NAME                                    
    my-management-cluster *  my-management-cluster-admin@my-management-cluster
    

    The asterisk (*) identifies this management cluster as being the current context of the Tanzu Kubernetes Grid CLI.

  2. To instruct kubectl to use the context of the management cluster, so that you can examine its resources, run kubectl config use-context.

    kubectl config use-context my-management-cluster-admin@my-management-cluster 

  3. Use kubectl commands to examine the resources of the management cluster.

    For example, run kubectl get nodes, kubectl get pods, or kubectl get namespaces to see the nodes, pods, and namespaces running in the management cluster.
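
    A quick sequence of checks (the exact pods and namespaces that you see vary with your Tanzu Kubernetes Grid version):

    # Nodes of the management cluster.
    kubectl get nodes

    # All pods, including the Cluster API provider controllers.
    kubectl get pods --all-namespaces

    # Namespaces in the management cluster.
    kubectl get namespaces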

What to Do Next

You can now use Tanzu Kubernetes Grid to start deploying Tanzu Kubernetes clusters. For information, see Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle.