You can use Tanzu Kubernetes Grid to deploy multiple management clusters to vSphere, Azure, and Amazon EC2 from the same bootstrap machine.

You can manage all of the management clusters from the same Tanzu Kubernetes Grid CLI instance, and switch between the contexts of the different management clusters by using the tkg set management-cluster command.

You can use the tkg add management-cluster command to add management clusters that someone else created from a different machine, so that you can manage those clusters from your Tanzu Kubernetes Grid CLI instance.

Prerequisites

You have deployed at least two management clusters to any or all of vSphere, Amazon EC2, and Azure.

List Management Clusters and Change Context

  1. On the bootstrap machine on which you ran tkg init, run the tkg get management-cluster command to see the list of management clusters that you have deployed.

    tkg get management-cluster
    

    If you deployed two management clusters, named my-vsphere-mgmt-cluster and my-aws-mgmt-cluster, you will see the following output:

    MANAGEMENT-CLUSTER-NAME     CONTEXT-NAME                                              STATUS                                    
    my-vsphere-mgmt-cluster *   my-vsphere-mgmt-cluster-admin@my-vsphere-mgmt-cluster     Success
    my-aws-mgmt-cluster         my-aws-mgmt-cluster-admin@my-aws-mgmt-cluster             Success
    

    The management cluster that is the current context of the Tanzu Kubernetes Grid CLI is marked with an asterisk (*).

  2. To show the details of a specific management cluster, run tkg get management-cluster with the --name option.

    tkg get management-cluster --name my-aws-mgmt-cluster
    
    MANAGEMENT-CLUSTER-NAME     CONTEXT-NAME                                              STATUS                                    
    my-aws-mgmt-cluster         my-aws-mgmt-cluster-admin@my-aws-mgmt-cluster             Success
    
  3. To change the context of the Tanzu Kubernetes Grid CLI to a different management cluster, run the tkg set management-cluster command.

    tkg set management-cluster my-aws-mgmt-cluster
    

Management Clusters, kubectl, and kubeconfig

The Tanzu Kubernetes Grid CLI and kubectl contexts are automatically set to a management cluster when you deploy it. However, Tanzu Kubernetes Grid does not automatically set the kubectl context to a management cluster when you run tkg set management-cluster to change the tkg CLI context. Also, Tanzu Kubernetes Grid does not set the kubectl context to a Tanzu Kubernetes cluster when you create it. To set the kubectl context after using tkg set management-cluster or creating Tanzu Kubernetes clusters, you must use the kubectl config use-context command.

By default, all context information about your management clusters is stored in a file named .kube-tkg/config on the machine on which you run the Tanzu Kubernetes Grid CLI. Only information about management clusters is stored in .kube-tkg/config. By default, information about Tanzu Kubernetes clusters is stored in .kube/config.
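To illustrate what these kubeconfig files contain, the following Python sketch (illustrative only, not part of Tanzu Kubernetes Grid) extracts the context names and current context from a kubeconfig document. It assumes a JSON-formatted kubeconfig so that it needs only the standard library; real kubeconfig files are usually YAML, so in practice you would use a YAML parser such as PyYAML.

```python
import json

def list_contexts(kubeconfig_text):
    # Parse a kubeconfig document and return its context names plus the
    # current context. Kubeconfig files are usually YAML; because YAML is
    # a superset of JSON, this sketch uses a JSON-formatted sample.
    config = json.loads(kubeconfig_text)
    names = [c["name"] for c in config.get("contexts", [])]
    return names, config.get("current-context")

# A minimal kubeconfig mirroring one management cluster entry in
# .kube-tkg/config (names taken from the examples in this topic).
sample = json.dumps({
    "contexts": [
        {
            "name": "my-vsphere-mgmt-cluster-admin@my-vsphere-mgmt-cluster",
            "context": {
                "cluster": "my-vsphere-mgmt-cluster",
                "user": "my-vsphere-mgmt-cluster-admin",
            },
        }
    ],
    "current-context": "my-vsphere-mgmt-cluster-admin@my-vsphere-mgmt-cluster",
})

names, current = list_contexts(sample)
print(names)    # ['my-vsphere-mgmt-cluster-admin@my-vsphere-mgmt-cluster']
print(current)  # my-vsphere-mgmt-cluster-admin@my-vsphere-mgmt-cluster
```

The context names in .kube-tkg/config are what you pass to kubectl config use-context when you switch between management clusters.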

Management Clusters and config.yaml

The first time that you run a tkg CLI command, for example tkg get management-cluster, the CLI creates a folder $HOME/.tkg and a default cluster configuration file config.yaml in that folder. The tkg init command then uses and modifies settings in $HOME/.tkg/config.yaml unless you use the --config option to specify a configuration file in a different location or with a different name.

Use the --config option with tkg init if:

  • You want to use multiple management clusters, for example to deploy Tanzu Kubernetes clusters in different cloud infrastructures or with different configurations.
  • You want to avoid overwriting the configuration values currently in $HOME/.tkg/config.yaml for any other reason.

The --config option overrides only the $HOME/.tkg/config.yaml default. It does not change the location of the other files that the tkg CLI references under $HOME/.tkg.

Add Existing Management Clusters to Your Tanzu Kubernetes Grid CLI Instance

To add a management cluster that someone else created to the list of management clusters that your tkg CLI instance manages, use the tkg add management-cluster command.

  1. Obtain the .kube-tkg/config of the management cluster that you want to add, and save it on your bootstrap machine.

    IMPORTANT: If you are likely to share a management cluster with other users, it is strongly recommended to run tkg init with the --kubeconfig option when you deploy that management cluster. This saves the kubeconfig for the management cluster in a separate file, rather than in the default .kube-tkg/config file. This allows you to share individual management clusters more easily.

  2. Set an environment variable to point to the kubeconfig of the new management cluster.

    export KUBECONFIG=~/<path>/mgmt-cluster.kubeconfig
    
  3. Set the context of kubectl to the new management cluster.

    kubectl config use-context mgmt-cluster-admin@mgmt-cluster
    
  4. Add the new cluster to your Tanzu Kubernetes Grid instance.

    tkg add management-cluster
    

    The tkg add management-cluster command automatically adds the current kubeconfig to your .kube-tkg/config file. Alternatively, you can run tkg add management-cluster with the --kubeconfig option to add a kubeconfig that is not the current one to your .kube-tkg/config file.

    tkg add management-cluster --kubeconfig <path>/mgmt-cluster.kubeconfig
    
  5. Run tkg get management-cluster to see the newly added cluster in your list of management clusters.
  6. Set the context of the Tanzu Kubernetes Grid CLI to the new cluster.

    tkg set management-cluster new-cluster
    

Export Management Cluster Details to a File

You can export the details of your management clusters in either JSON or YAML format. You can save the JSON or YAML to a file so that you can use it in scripts to run bulk operations on management clusters.

  1. To export cluster details as JSON, run tkg get management-cluster with the --output option, specifying json.

    tkg get management-cluster --output json
    

    The output shows the management cluster information as JSON:

    [
        {
          "name": "my-aws-mgmt-cluster",
          "context": "my-aws-mgmt-cluster-admin@my-aws-mgmt-cluster",
          "file": "/root/.kube/config",
          "isCurrentContext": false
        },
        {
          "name": "my-vsphere-mgmt-cluster",
          "context": "my-vsphere-mgmt-cluster-admin@my-vsphere-mgmt-cluster",
          "file": "/root/.kube-tkg/config",
          "isCurrentContext": true
        }
    ]
    
  2. To export management cluster details as YAML, run tkg get management-cluster with the --output option, specifying yaml.

    tkg get management-cluster --output yaml
    

    The output shows the management cluster information as YAML:

    - name: my-aws-mgmt-cluster
      context: my-aws-mgmt-cluster-admin@my-aws-mgmt-cluster
      file: /root/.kube/config
      isCurrentContext: false
    - name: my-vsphere-mgmt-cluster
      context: my-vsphere-mgmt-cluster-admin@my-vsphere-mgmt-cluster
      file: /root/.kube-tkg/config
      isCurrentContext: true
    
  3. Save the output as a file.

    tkg get management-cluster --output json > clusters.json
    
    tkg get management-cluster --output yaml > clusters.yaml
    
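As an example of the kind of bulk operation that saved output enables, the following Python sketch (illustrative only, not part of Tanzu Kubernetes Grid) reads the JSON shown above and prints a tkg set management-cluster command for every management cluster that is not the current context.

```python
import json

# Sample of the JSON that `tkg get management-cluster --output json`
# produces, copied from the example output above; in a script you would
# read it from clusters.json instead.
clusters_json = """
[
    {
        "name": "my-aws-mgmt-cluster",
        "context": "my-aws-mgmt-cluster-admin@my-aws-mgmt-cluster",
        "file": "/root/.kube/config",
        "isCurrentContext": false
    },
    {
        "name": "my-vsphere-mgmt-cluster",
        "context": "my-vsphere-mgmt-cluster-admin@my-vsphere-mgmt-cluster",
        "file": "/root/.kube-tkg/config",
        "isCurrentContext": true
    }
]
"""

clusters = json.loads(clusters_json)

# Example bulk operation: print the command that would switch the tkg
# CLI context to each cluster that is not already the current context.
for cluster in clusters:
    if not cluster["isCurrentContext"]:
        print(f"tkg set management-cluster {cluster['name']}")
```

Running the sketch prints tkg set management-cluster my-aws-mgmt-cluster, because my-vsphere-mgmt-cluster is already the current context in the sample data.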

Scale Management Clusters

After you deploy a management cluster, you can scale it up or down by increasing or reducing the number of node VMs that it contains. To scale a management cluster, use the tkg scale cluster command. You change the number of management cluster control plane nodes by specifying the --controlplane-machine-count option. You change the number of management cluster worker nodes by specifying the --worker-machine-count option. Because management clusters run in the tkg-system namespace rather than the default namespace, you must also specify the --namespace option when you scale a management cluster.

  1. Run tkg set management-cluster before you run tkg scale cluster, to make sure that the management cluster to scale is the current context of the Tanzu Kubernetes Grid CLI.
  2. To scale a production management cluster that you originally deployed with 3 control plane nodes and 5 worker nodes to 5 and 10 nodes respectively, run the following command:

    tkg scale cluster management_cluster_name --controlplane-machine-count 5 --worker-machine-count 10 --namespace tkg-system

If you initially deployed a development management cluster with one control plane node and you scale it up to 3 control plane nodes, Tanzu Kubernetes Grid automatically enables stacked HA on the control plane.

IMPORTANT: Do not change context or edit the .kube-tkg/config file while Tanzu Kubernetes Grid operations are running.

Opt in or Out of the VMware CEIP

When you deploy a management cluster by using either the installer interface or the CLI, participation in the VMware Customer Experience Improvement Program (CEIP) is enabled by default, unless you specify the option to opt out. If you remain opted in to the program, the management cluster sends information about how you use Tanzu Kubernetes Grid back to VMware at regular intervals, so that we can make improvements in future versions. Management clusters send the following information to VMware:

  • The number of Tanzu Kubernetes clusters that you deploy.
  • The infrastructure, network, and storage providers that you use.
  • The time that it takes for Tanzu Kubernetes Grid to perform basic operations, such as create cluster, delete cluster, scale cluster, and upgrade cluster.
  • The Tanzu Kubernetes Grid extensions that you implement.
  • The plans that you use to deploy clusters, as well as the number and configuration of the control plane and worker nodes.
  • The versions of Tanzu Kubernetes Grid and Kubernetes that you use.
  • The type and size of the workloads that your clusters run, as well as their lifespan.
  • Whether or not you integrate Tanzu Kubernetes Grid with Tanzu Kubernetes Grid Service for vSphere, Tanzu Mission Control, or Tanzu Observability by Wavefront.
  • The nature of any problems, errors, and failures that you encounter when using Tanzu Kubernetes Grid, so that we can identify which areas of Tanzu Kubernetes Grid need to be made more robust.

If you opted out of the CEIP when you deployed a management cluster and want to opt in, or if you opted in and want to opt out, you can change your CEIP participation setting after deployment.

CEIP runs as a cronjob on the management cluster. It does not run on workload clusters.

  1. Run the tkg get management-cluster command to see the list of management clusters that you have deployed.

    tkg get management-cluster
    

    If you deployed two management clusters, named my-vsphere-mgmt-cluster and my-aws-mgmt-cluster, you will see the following output:

    MANAGEMENT-CLUSTER-NAME     CONTEXT-NAME                                              STATUS
    my-vsphere-mgmt-cluster *   my-vsphere-mgmt-cluster-admin@my-vsphere-mgmt-cluster     Success
    my-aws-mgmt-cluster         my-aws-mgmt-cluster-admin@my-aws-mgmt-cluster             Success
    
  2. Run the tkg set management-cluster command to set the context of the tkg CLI to the management cluster for which you want to check the CEIP status.

    tkg set management-cluster my-aws-mgmt-cluster
    
  3. Run the tkg get ceip-participation command to see the CEIP status of the current management cluster.

    tkg get ceip-participation
    

    The status Opt-in means that CEIP participation is enabled on a management cluster. Opt-out means that CEIP participation is disabled.

    MANAGEMENT-CLUSTER-NAME        CEIP-STATUS
    my-aws-mgmt-cluster            Opt-out
    
  4. To enable CEIP participation on a management cluster on which it is currently disabled, run the tkg set ceip-participation command with the value true.

    tkg set ceip-participation true    
    
  5. To verify that the CEIP participation is now active, run tkg get ceip-participation again.

    The status should now be Opt-in.

    MANAGEMENT-CLUSTER-NAME        CEIP-STATUS
    my-aws-mgmt-cluster            Opt-in
    

    You can also check that the CEIP cronjob is running by setting the kubectl context to the management cluster and running kubectl get cronjobs -A.

    kubectl config use-context my-aws-mgmt-cluster-admin@my-aws-mgmt-cluster
    
    kubectl get cronjobs -A
    

    The output shows that the tkg-telemetry job is running:

    NAMESPACE              NAME            SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
    tkg-system-telemetry   tkg-telemetry   0 */6 * * *   False     0        <none>          18s
    
  6. To disable CEIP participation on a management cluster on which it is currently enabled, run the tkg set ceip-participation command with the value false.

    tkg set ceip-participation false    
    
  7. To verify that the CEIP participation is disabled, run tkg get ceip-participation again.

    The status should now be Opt-out.

    MANAGEMENT-CLUSTER-NAME        CEIP-STATUS
    my-aws-mgmt-cluster            Opt-out
    

    If you run kubectl get cronjobs -A again, the output shows that no job is running:

    No resources found
    

Delete Management Clusters

To delete a management cluster, run the tkg delete management-cluster command.

When you run tkg delete management-cluster, Tanzu Kubernetes Grid creates a temporary kind cleanup cluster on your bootstrap machine to manage the deletion process. The kind cluster is removed when the deletion process completes.

  1. To list all of the management clusters that are running, run the tkg get management-cluster command.

    tkg get management-cluster
    

    The management cluster context that is the current context of the Tanzu Kubernetes Grid CLI and kubectl is marked with an asterisk (*).

    MANAGEMENT-CLUSTER-NAME     CONTEXT-NAME                                              STATUS                                   
    my-vsphere-mgmt-cluster *   my-vsphere-mgmt-cluster-admin@my-vsphere-mgmt-cluster     Success
    my-aws-mgmt-cluster         my-aws-mgmt-cluster-admin@my-aws-mgmt-cluster             Success
    
  2. If there are management clusters that you no longer require, run tkg delete management-cluster.

    You must specify the name of the management cluster to delete.

    tkg delete management-cluster my-aws-mgmt-cluster
    

    To skip the yes/no verification step when you run tkg delete management-cluster, specify the --yes option.

    tkg delete management-cluster my-aws-mgmt-cluster --yes
    
  3. If there are Tanzu Kubernetes clusters running in the management cluster, the delete operation is not performed.

    In this case, you can delete the management cluster in two ways:

    • Run tkg delete cluster to delete all of the running clusters, and then run tkg delete management-cluster again.
    • Run tkg delete management-cluster with the --force option.

      tkg delete management-cluster my-aws-mgmt-cluster --force
    

IMPORTANT: Do not change context or edit the .kube-tkg/config file while Tanzu Kubernetes Grid operations are running.

What to Do Next

You can use Tanzu Kubernetes Grid to start deploying Tanzu Kubernetes clusters to different Tanzu Kubernetes Grid instances. For information, see Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle.

If you have vSphere 7, you can also deploy and manage Tanzu Kubernetes clusters in vSphere with Tanzu. For information, see Use the Tanzu Kubernetes Grid CLI with a vSphere with Tanzu Supervisor Cluster.
