By default, when you deploy a management cluster by running tanzu mc create, Tanzu Kubernetes Grid creates a temporary kind cluster on your local bootstrap machine. It then uses the local cluster to provision the final management cluster on its target cloud infrastructure—vSphere, Amazon Web Services (AWS), or Azure—and deletes the temporary cluster after the management cluster successfully deploys. Running tanzu mc delete to delete a management cluster invokes a similar process of creating, using, and then removing a temporary, local kind cluster.
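For reference, this default flow is what a plain tanzu mc create invocation gives you, with no bootstrap-related options; mc.yaml here is a placeholder for your own management cluster configuration file:
tanzu mc create --file mc.yaml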
In some circumstances, you might want to keep the local cluster after deploying or deleting a management cluster, for example to examine the objects in the cluster or review its logs. To do this, you can deploy or delete the management cluster with the CLI options --use-existing-bootstrap-cluster or --use-existing-cleanup-cluster. With these options, Tanzu Kubernetes Grid skips creating and deleting the local kind cluster, and instead uses a pre-existing local cluster that you already have or that you create for this purpose.
Caution: Using an existing bootstrap cluster is an advanced use case intended for experienced Kubernetes users. If possible, it is strongly recommended to use the default kind cluster that Tanzu Kubernetes Grid provides to bootstrap your management clusters.
To retain your local kind cluster during management cluster creation or deletion, you must first have a compatible cluster running on your bootstrap machine. Ensure this by either identifying or creating the cluster, as described in the subsections below.
To use an existing local cluster, you must make sure that both of the following are true:
The cluster has never previously been used to bootstrap or delete a management cluster.
The cluster was created with kind v0.11 or later.
Check this by running docker ps and associating the kindest/node image version listed with the versions listed in the kind release notes.
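For example, the following is one way to surface the node image tags of running kind containers, assuming Docker is your container runtime; the kindest/node tag identifies the Kubernetes node image, which you can cross-reference against the kind release notes:
# Show only the image column for running containers and keep the kind node images
docker ps --format '{{.Image}}' | grep kindest/node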
Background: If your bootstrap machine uses a Linux kernel built after the May 2021 Linux security patch, for example Linux 5.11 and 5.12 with Fedora, your bootstrap cluster must be created with kind v0.11 or later. Earlier versions of kind attempt to change a file that recent Linux versions make read-only, causing failure. The security patch is being backported to all LTS kernels from 4.9 onwards, so management cluster deployments can start failing as operating system updates ship, including for Docker Machine on macOS and Windows Subsystem for Linux.
If both of these conditions are true for the current local cluster, you may use it to create or delete a management cluster. Otherwise, you must replace the cluster as follows:
Delete the cluster (see the sketch after this list).
Download and install a new version of kind as described in the kind documentation.
Create a kind cluster as described in the Create a New Cluster section below.
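A minimal sketch of the first two steps, assuming the old cluster is named my-old-bootstrap and that you install the new kind release by following the kind documentation for your platform:
# Remove the incompatible cluster (omit --name if it uses the default name "kind")
kind delete cluster --name my-old-bootstrap
# After installing the new release, confirm that it is v0.11 or later
kind version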
To create a new local bootstrap cluster, do one of the following, depending on your bootstrap machine’s connectivity:
Fully-Online Environment:
Create the cluster:
kind create cluster
Internet-Restricted Environment:
Create a kind cluster configuration file kind.yml as follows:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: tkg-kind
nodes:
- role: control-plane
  # This option mounts the host docker registry folder into
  # the control-plane node, allowing containerd to access them.
  extraMounts:
  - containerPath: CONTAINER-CA-PATH
    hostPath: HOST-CA-PATH
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.configs."REGISTRY-FQDN".tls]
    ca_file = "/etc/containerd/REGISTRY-FQDN/CA.crt"
Where:
CONTAINER-CA-PATH is the path of the Harbor CA certificate on the kind container, like /etc/containerd/REGISTRY-FQDN
HOST-CA-PATH is the path to the Harbor CA certificate on the bootstrap VM where the kind cluster is created, like /etc/docker/certs.d/REGISTRY-FQDN
REGISTRY-FQDN is the name of the Harbor registry
CA.crt is the CA certificate of the Harbor registry
By default, crashd looks for a cluster named tkg-kind, so naming the kind cluster tkg-kind makes it easy to collect logs if the cluster fails to bootstrap.
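For illustration only, with a hypothetical registry named registry.example.com and the certificate locations shown above, the placeholders might resolve as follows; substitute your own registry FQDN and paths:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: tkg-kind
nodes:
- role: control-plane
  extraMounts:
  - containerPath: /etc/containerd/registry.example.com
    hostPath: /etc/docker/certs.d/registry.example.com
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.example.com".tls]
    ca_file = "/etc/containerd/registry.example.com/CA.crt"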
Use the kind.yml config file to create the kind cluster:
kind create cluster --config kind.yml
To deploy a Tanzu Kubernetes Grid management cluster with an existing bootstrap cluster:
Follow the above procedure to Identify or Create a Local Bootstrap Cluster.
Set the context of kubectl to the local bootstrap cluster:
kubectl config use-context my-bootstrap-cluster-admin@my-bootstrap-cluster
Deploy the management cluster by running the tanzu mc create command with the --use-existing-bootstrap-cluster option:
tanzu mc create --file mc.yaml --use-existing-bootstrap-cluster my-bootstrap-cluster
See Deploy Management Clusters for more information about running tanzu mc create.
Important: If you create a new kind cluster to use as a bootstrap cluster and AVI_CONTROL_PLANE_HA_PROVIDER is set to true in the cluster configuration, you must delete the kind cluster after you have deployed the management cluster. Because the management cluster and the kind bootstrap cluster share some network configuration, retaining the bootstrap cluster after deployment might cause issues when creating and managing workload clusters with that management cluster. For information about how to delete a kind cluster, see Clean Up After an Unsuccessful Management Cluster Deployment in Troubleshooting Management Cluster Issues.
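As a quick sketch, assuming the bootstrap cluster uses the tkg-kind name from the example earlier in this topic, the cleanup might look like this:
# Confirm the name of the bootstrap cluster, then delete it
kind get clusters
kind delete cluster --name tkg-kind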
To delete a Tanzu Kubernetes Grid management cluster with an existing bootstrap cluster:
Follow the above procedure to Identify or Create a Local Bootstrap Cluster.
Set the context of kubectl to the local bootstrap cluster:
kubectl config use-context my-bootstrap-cluster-admin@my-bootstrap-cluster
Delete the management cluster by running the tanzu mc delete command with the --use-existing-cleanup-cluster option:
tanzu mc delete --use-existing-cleanup-cluster my-bootstrap-cluster
See Delete Management Clusters for more information about running tanzu mc delete.