This topic explains how to manage multiple management clusters from the same bootstrap machine.
Important
- If you are already using TKG with a standalone management cluster and you do not require any of the functionality listed in When to Use a Standalone Management Cluster in About TKG, see Reference Design for Migration from TKGm to TKGs (vSphere with Tanzu) for information about how to migrate from a standalone management cluster to the vSphere IaaS control plane (formerly vSphere with Tanzu) Supervisor.
- From v2.5.x onwards, Tanzu Kubernetes Grid does not support the management of standalone TKG management clusters on AWS and Azure. For more information, see End of Support for TKG Management and Workload Clusters on AWS and Azure in the VMware Tanzu Kubernetes Grid v2.5.x Release Notes.
To list available management clusters and see which one you are currently logged in to, run tanzu context use on your bootstrap machine:
tanzu context use
Note: To log in to a context, you must create the context by using the tanzu context create command.
For example, if you have two management clusters, my-mgmt-cluster-1 and my-mgmt-cluster-2, and you are currently logged in to my-mgmt-cluster-1:
$ tanzu context use
? Select a server [Use arrows to move, type to filter]
> my-mgmt-cluster-1 ()
my-mgmt-cluster-2 ()
+ new server
tanzu, kubectl, and kubeconfig
Tanzu Kubernetes Grid does not automatically change the kubectl context when you run tanzu context use to change the Tanzu CLI context. Also, Tanzu Kubernetes Grid does not set the kubectl context to a workload cluster when you create it. To change the kubectl context, use the kubectl config use-context command.
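For example, to switch kubectl to the admin context of a workload cluster named my-cluster (a hypothetical name used here for illustration, following the CLUSTER-NAME-admin@CLUSTER-NAME convention shown elsewhere in this topic):
kubectl config use-context my-cluster-admin@my-cluster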
By default, Tanzu Kubernetes Grid saves cluster context information in the following files on your bootstrap machine:
~/.kube-tkg/config: Management cluster contexts used by the tanzu management-cluster plugin
~/.kube/config: Management cluster and workload cluster contexts used by kubectl
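To inspect the contexts stored in each file, you can point kubectl at it with the --kubeconfig flag, for example:
kubectl config get-contexts --kubeconfig ~/.kube-tkg/config
kubectl config get-contexts --kubeconfig ~/.kube/config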
To see the details of a management cluster:
Run tanzu context use to log in to the management cluster, as described in List Management Clusters and Change Context.
tanzu context use
Run tanzu mc get.
tanzu mc get
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN TKR
mc-test-cli tkg-system running 3/3 3/3 v1.28.11+vmware.1 management prod v1.28.11---vmware.1-tkg.1
Details:
NAME READY SEVERITY REASON SINCE MESSAGE
/mc-test-cli True 2d1h
├─ClusterInfrastructure - VSphereCluster/mc-test-cli-jjtpf True 2d1h
├─ControlPlane - KubeadmControlPlane/mc-test-cli-mffw9 True 2d1h
│ ├─Machine/mc-test-cli-mffw9-5zcbj True 2d1h
│ ├─Machine/mc-test-cli-mffw9-fs6zh True 2d1h
│ └─Machine/mc-test-cli-mffw9-jlwnm True 2d1h
└─Workers
├─MachineDeployment/mc-test-cli-md-0-tnz59 True 15h
│ └─Machine/mc-test-cli-md-0-tnz59-64bdc75d94-gtg54 True 2d1h
├─MachineDeployment/mc-test-cli-md-1-2d26b True 15h
│ └─Machine/mc-test-cli-md-1-2d26b-776885b84-6hzkj True 2d1h
└─MachineDeployment/mc-test-cli-md-2-fs824 True 15h
└─Machine/mc-test-cli-md-2-fs824-7bfd7b9c7b-c7n95 True 2d1h
Providers:
NAMESPACE NAME TYPE PROVIDERNAME VERSION WATCHNAMESPACE
caip-in-cluster-system infrastructure-ipam-in-cluster InfrastructureProvider ipam-in-cluster v0.1.0
capi-kubeadm-bootstrap-system bootstrap-kubeadm BootstrapProvider kubeadm v1.2.8
capi-kubeadm-control-plane-system control-plane-kubeadm ControlPlaneProvider kubeadm v1.2.8
capi-system cluster-api CoreProvider cluster-api v1.2.8
capv-system infrastructure-vsphere InfrastructureProvider vsphere v1.5.2
To see more options, run tanzu mc get --help. The Tanzu CLI alias mc is short for management-cluster.
The Tanzu CLI allows you to log in to a management cluster that someone else created. To log in, you can use the local kubeconfig details or the server endpoint option.
To log into an existing management cluster by using a local kubeconfig:
Run tanzu context use, use your down-arrow key to highlight + new server, and press Enter.
tanzu context use
? Select a server + new server
When prompted, select Local kubeconfig as your login type and enter the path to your local kubeconfig file, context, and the name of your server. For example:
tanzu context use
? Select a server + new server
? Select login type Local kubeconfig
? Enter path to kubeconfig (if any) /Users/exampleuser/examples/kubeconfig
? Enter kube context to use new-mgmt-cluster-admin@new-mgmt-cluster
? Give the server a name new-mgmt-cluster
✔ successfully logged in to management cluster using the kubeconfig new-mgmt-cluster
To log into an existing management cluster using the Server endpoint option:
Run tanzu context use, use your down-arrow key to highlight + new server, and press Enter.
tanzu context use
? Select a server + new server
When prompted, select Server endpoint as your login type and follow the prompts to provide the management cluster endpoint. When the login completes, the CLI confirms the new context:
successfully logged in to management cluster by using the kubeconfig <server name>
You might add a management cluster that someone else created to your instance of the Tanzu CLI and later no longer require it. Similarly, if you deployed a management cluster and that management cluster has been deleted from your target platform by means other than by running tanzu mc delete, it continues to appear in the list of management clusters that the CLI tracks when you run tanzu context use. In these cases, you can remove the management cluster from the list of management clusters that the Tanzu CLI tracks.
Run tanzu context list to see the list of management clusters that the Tanzu CLI tracks.
tanzu context list
You should see all of the management clusters that you have either deployed yourself or added to the Tanzu CLI, the location of their kubeconfig files, and their contexts.
Run the tanzu context delete command to remove a management cluster.
tanzu context delete my-vsphere-mc
Running the tanzu context delete command removes the cluster details from the ~/.config/tanzu/config.yaml and ~/.kube-tkg/config.yaml files. It does not delete the management cluster itself, if it still exists. To delete a management cluster rather than just remove it from the Tanzu CLI configuration, see Delete Management Clusters.
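To confirm that the entry has been removed, run tanzu context list again:
tanzu context list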
On the bootstrap machine, the Tanzu CLI uses a certificate that is stored locally to authenticate with the management cluster. If the certificate expires, tanzu CLI commands fail with errors. Therefore, when the certificate nears expiration, follow these steps to update the certificate:
Get the name of the management cluster with tanzu mc get.
tanzu mc get
Get the cluster configuration data:
kubectl -n tkg-system get secrets CLUSTER-NAME-kubeconfig -o 'go-template={{ index .data "value"}}' | base64 -d > mc_kubeconfig.yaml
Where CLUSTER-NAME is the name of the management cluster. For example:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBD<redacted>
server: https://192.168.100.90:6443
name: tkg-mgmt
contexts:
- context:
cluster: tkg-mgmt
user: tkg-mgmt-admin
name: tkg-mgmt-admin@tkg-mgmt
current-context: tkg-mgmt-admin@tkg-mgmt
kind: Config
preferences: {}
users:
- name: tkg-mgmt-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZ<redacted>
client-key-data: LS0tLS1CRUdJTiBSU<redacted>
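Optionally, before registering the extracted kubeconfig with the Tanzu CLI, you can verify that it works, for example by listing the management cluster nodes:
kubectl --kubeconfig mc_kubeconfig.yaml get nodes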
Delete the existing management cluster entry from the list of management clusters that the Tanzu CLI is currently tracking:
tanzu context delete CLUSTER-NAME
Use the tanzu context create command to add a new management cluster entry with the updated kubeconfig:
tanzu context create --kubeconfig mc_kubeconfig.yaml --name CLUSTER-NAME --context CLUSTER-NAME-admin@CLUSTER-NAME
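For example, for a management cluster named tkg-mgmt, as in the sample kubeconfig above:
tanzu context create --kubeconfig mc_kubeconfig.yaml --name tkg-mgmt --context tkg-mgmt-admin@tkg-mgmt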
After you deploy a management cluster, you can scale it up or down by increasing or reducing the number of node VMs that it contains. To scale a management cluster, use the tanzu cluster scale command with one or both of the following options:
--controlplane-machine-count changes the number of management cluster control plane nodes.
--worker-machine-count changes the number of management cluster worker nodes.
Because management clusters run in the tkg-system namespace rather than the default namespace, you must also specify the --namespace option when you scale a management cluster.
Note: Run tanzu context use before you run tanzu cluster scale to make sure that the management cluster to scale is the current context of the Tanzu CLI.
To scale a production management cluster that you originally deployed with 3 control plane nodes and 5 worker nodes to 5 and 10 nodes respectively, run the following command:
tanzu cluster scale MANAGEMENT-CLUSTER-NAME --controlplane-machine-count 5 --worker-machine-count 10 --namespace tkg-system
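After the scale operation completes, you can confirm the new node counts in the CONTROLPLANE and WORKERS columns of the tanzu mc get output:
tanzu mc get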
If you initially deployed a development management cluster with one control plane node and you scale it up to 3 control plane nodes, Tanzu Kubernetes Grid automatically enables stacked HA on the control plane.
Important: Do not change context or edit the .kube-tkg/config file while Tanzu Kubernetes Grid operations are running.
To update the vSphere credentials used by a management cluster, and optionally all of the workload clusters that it manages, see Update Cluster Credentials in Creating and Managing TKG 2.5 Workload Clusters on vSphere with the Tanzu CLI.
When you deploy a management cluster by using either the installer interface or the CLI, participation in the VMware Customer Experience Improvement Program (CEIP) is enabled by default, unless you specify the option to opt out. If you remain opted in to the program, the management cluster sends information about how you use Tanzu Kubernetes Grid back to VMware at regular intervals, so that we can make improvements in future versions.
For more information about the CEIP, see Manage Participation in CEIP.
If you opted out of the CEIP when you deployed a management cluster and want to opt in, or if you opted in and want to opt out, see Opt In or Out of the VMware CEIP in Manage Participation in CEIP to change your CEIP participation setting after deployment.
To help you to organize and manage your development projects, you can optionally divide the management cluster into Kubernetes namespaces. You can then use the Tanzu CLI to deploy workload clusters to specific namespaces in your management cluster. For example, you might want to create different types of clusters in dedicated namespaces. If you do not create additional namespaces, Tanzu Kubernetes Grid creates all workload clusters in the default namespace. For information about Kubernetes namespaces, see the Kubernetes documentation.
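As a minimal sketch, assuming a hypothetical workload cluster configuration file named my-prod-cluster-config.yaml and the production namespace created in the steps below, you can set the NAMESPACE variable in the configuration file so that the cluster is created in that namespace instead of default:
# Excerpt from a hypothetical workload cluster configuration file
CLUSTER_NAME: my-prod-cluster
NAMESPACE: production
Then create the cluster from that file:
tanzu cluster create --file my-prod-cluster-config.yaml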
Make sure that kubectl is connected to the correct management cluster context by displaying the current context.
kubectl config current-context
List the namespaces that are currently present in the management cluster.
kubectl get namespaces
You will see that the management cluster already includes several namespaces for the different services that it provides:
capi-kubeadm-bootstrap-system Active 4m7s
capi-kubeadm-control-plane-system Active 4m5s
capi-system Active 4m11s
capi-webhook-system Active 4m13s
capv-system Active 3m59s
cert-manager Active 6m56s
default Active 7m11s
kube-node-lease Active 7m12s
kube-public Active 7m12s
kube-system Active 7m12s
tkg-system Active 3m57s
Use kubectl create -f to create new namespaces, for example for development and production.
These examples use the production and development namespaces from the Kubernetes documentation.
kubectl create -f https://k8s.io/examples/admin/namespace-dev.json
kubectl create -f https://k8s.io/examples/admin/namespace-prod.json
Run kubectl get namespaces --show-labels to see the new namespaces.
development Active 22m name=development
production Active 22m name=production
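If you do not need the label metadata from the example manifests, you can also create a namespace directly, for example a hypothetical staging namespace:
kubectl create namespace staging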
Before deleting a namespace for workload clusters in a management cluster, you need to delete the workload clusters themselves. You cannot delete the workload clusters by deleting their management cluster namespace.
To delete a namespace for workload clusters in a management cluster:
Set the context of kubectl to your management cluster:
kubectl config use-context MY-MGMT-CLUSTER@MY-MGMT-CLUSTER
Where MY-MGMT-CLUSTER is the name of your management cluster.
List the clusters running in the namespace that you are deleting:
tanzu cluster list -n NAMESPACE
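For example, to list the clusters running in the production namespace created earlier:
tanzu cluster list -n production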
Follow the procedure Delete Workload Clusters to delete volumes and services, migrate workloads if needed, and delete the clusters in the namespace.
Use kubectl to delete the namespace:
kubectl delete namespace NAMESPACE
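For example, to delete the production namespace after its workload clusters have been removed:
kubectl delete namespace production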
To delete a management cluster, run the tanzu mc delete command.
When you run tanzu mc delete, Tanzu Kubernetes Grid creates a temporary kind cleanup cluster on your bootstrap machine to manage the deletion process. The kind cluster is removed when the deletion process completes.
To see all your management clusters, run tanzu context use as described in List Management Clusters and Change Context.
If there are management clusters that you no longer require, run tanzu mc delete.
You must be logged in to the management cluster that you want to delete.
tanzu mc delete my-mgmt-cluster
To skip the yes/no verification step when you run tanzu mc delete, specify the --yes option.
tanzu mc delete my-mgmt-cluster --yes
If there are workload clusters running in the management cluster, the delete operation is not performed.
In this case, you can delete the management cluster in two ways:
Run tanzu cluster delete to delete all of the running clusters and then run tanzu mc delete again.
Run tanzu mc delete with the --force option. For example:
tanzu mc delete my-mgmt-cluster --force
Important: Do not change context or edit the .kube-tkg/config file while Tanzu Kubernetes Grid operations are running.
You can use Tanzu Kubernetes Grid to start deploying workload clusters to different Tanzu Kubernetes Grid instances. For information, see Creating Workload Clusters in Creating and Managing TKG 2.5 Workload Clusters on vSphere with the Tanzu CLI.