VMware Tanzu Mission Control gives you complete control over the entire lifecycle of provisioned Tanzu Kubernetes clusters, from creation to deletion and everything in between.
When you create a cluster in Tanzu Mission Control, you have control over its entire lifecycle. In addition to scaling node pools up and down, creating and deleting namespaces, and other capabilities that are available in attached clusters, you can also create and delete clusters as necessary.
What is a Tanzu Kubernetes Cluster?
A Tanzu Kubernetes cluster is an opinionated installation of Kubernetes open-source software that is built and supported by VMware. It is part of a Tanzu Kubernetes Grid instance that includes the following components:
- management cluster - a Kubernetes cluster that performs the role of the primary management and operational center for the Tanzu Kubernetes Grid instance
- provisioner - a namespace on the management cluster that contains one or more workload clusters
- workload cluster - a Tanzu Kubernetes cluster that runs your application workloads
Tanzu Mission Control can manage the lifecycle of Tanzu Kubernetes clusters provisioned through the following offerings:
- Tanzu Kubernetes Grid
- Tanzu Kubernetes Grid Service
- Tanzu Community Edition
To manage the lifecycle of your Tanzu Kubernetes clusters in Tanzu Mission Control, you must register the management cluster. After you register the management cluster, you can identify the existing workload clusters in that Tanzu Kubernetes Grid instance that you want to manage through Tanzu Mission Control. You can also create new workload clusters in a registered management cluster.
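Registration involves applying a registration manifest, which Tanzu Mission Control generates when you start the registration in the console, to the management cluster itself. The following is a minimal sketch of that step; the registration URL is a placeholder for the link your organization generates.

# On the management cluster: apply the registration manifest generated by
# Tanzu Mission Control (the URL below is a placeholder).
kubectl apply -f 'https://<org-name>.tmc.cloud.vmware.com/installer?id=<registration-id>&source=registration'

# Verify that the Tanzu Mission Control agent pods come up.
kubectl get pods -n vmware-system-tmc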
For information about the minimum requirements for registering a management cluster, see Requirements for Registering a Tanzu Kubernetes Cluster with Tanzu Mission Control.
In vSphere with Tanzu, the functionality of the management cluster is provided through the vSphere Supervisor Cluster, and a provisioner is called a vSphere namespace. For more information about Tanzu Kubernetes Grid Service in vSphere with Tanzu, see vSphere with Tanzu Configuration and Management.
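To illustrate how a provisioner maps to a vSphere namespace, the following is a minimal sketch of a TanzuKubernetesCluster manifest applied to the Supervisor Cluster; the namespace, VM class, storage class, and version values are placeholders for your environment.

# Applied against the Supervisor Cluster; the vSphere namespace acts as
# the provisioner. All names and versions below are placeholders.
kubectl apply -f - <<EOF
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: example-workload-cluster
  namespace: example-vsphere-namespace
spec:
  distribution:
    version: v1.21
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: example-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: example-storage-policy
EOF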
For more information about Tanzu Kubernetes Grid and Tanzu Kubernetes clusters, see VMware Tanzu Kubernetes Grid.
For more information about Tanzu Community Edition managed clusters, see Getting Started with Managed Clusters.
What Happens When You Create a Cluster using Tanzu Mission Control
When you create a new cluster through Tanzu Mission Control, it does the following:
- provisions the necessary resources in your specified cloud account
- creates a Tanzu Kubernetes cluster according to your specifications
- attaches the cluster to your organization
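Because Tanzu Kubernetes Grid management clusters are built on Cluster API, you can watch the provisioning progress from the management cluster context. A minimal sketch, assuming kubectl access to the management cluster:

# From the management cluster context, list workload clusters grouped by
# provisioner namespace, then watch the new cluster's nodes being created.
kubectl get clusters --all-namespaces
kubectl get machines --all-namespaces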
Resource Usage in Your Cloud Provider Account
For each cluster you create, Tanzu Mission Control provisions a set of resources in your connected cloud provider account. The resources listed below reflect an AWS deployment.
- 3 VMs
The VMs include a control plane node, a worker node (to run the cluster agent extensions), and a bastion host. If you specify additional VMs in your node pool, those are provisioned as well.
- 4 security groups (one for the load balancer and one for each of the initial VMs)
- 1 private subnet and 1 public subnet in the specified availability zone
- 1 public and 1 private route table in the specified availability zone
- 1 classic load balancer
- 1 internet gateway
- 1 NAT gateway in the specified availability zone
- 1 VPC elastic IP
If you create a cluster with a highly available control plane, Tanzu Mission Control additionally provisions the following resources across the other availability zones:
- 2 additional control plane VMs
- 2 additional private and public subnets
- 2 additional private and public route tables
- 2 additional NAT gateways
- 2 additional VPC elastic IPs
Your cloud provider imposes a set of default limits (quotas) on these types of resources, and allows you to modify those limits. Typically, the default limits are sufficient to get started creating clusters from Tanzu Mission Control. However, as you increase the number of clusters you are running or the workloads on your clusters, you approach these limits. When you reach a limit imposed by your cloud provider, any attempt to provision that type of resource fails. As a result, Tanzu Mission Control cannot create a new cluster, or you might be unable to create additional deployments on your existing clusters.
For example, if your quota on internet gateways is set to 5 and you already have five in use, then Tanzu Mission Control is unable to provision the necessary resources when you attempt to create a new cluster.
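If your clusters run in AWS, you can compare current usage against a quota before creating a cluster. A minimal sketch using the AWS CLI; the Service Quotas quota code shown is an assumption, so confirm it for your account:

# Count the internet gateways currently in use in the region.
aws ec2 describe-internet-gateways --query 'length(InternetGateways)'

# Look up the quota itself; the quota code for internet gateways per
# region is an assumption here, confirm it with:
#   aws service-quotas list-service-quotas --service-code vpc
aws service-quotas get-service-quota --service-code vpc --quota-code L-A4707A72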
Therefore, regularly assess the limits you have specified in your cloud provider account, and adjust them as necessary to fit your business needs.
ConfigMap Usage for Custom Configurations
Tanzu Mission Control includes a ConfigMap that you use to customize certain elements of a cluster.
The default pod resources might not be sufficient to create more than five clusters concurrently through Tanzu Mission Control. Use the ConfigMap to specify the parameter values for creating the lcm-tkg-extension pods. You can specify the CPU, memory, and concurrency values for the lcm-tkg-extension pods.
The default lcm-tkg-extension-concurrency is 5, the default lcm-tkg-extension-memory-request is 64Mi, and the default lcm-tkg-extension-memory-limit is 512Mi.
Edit the ConfigMap to specify the parameter values as shown in the example below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tmc-lcm-config
  namespace: vmware-system-tmc
data:
  lcm-tkg-extension-concurrency: "6"
  lcm-tkg-extension-memory-request: 128Mi
  lcm-tkg-extension-memory-limit: 1Gi
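After editing, you can apply and verify the change with kubectl; a minimal sketch, assuming you saved the manifest above as tmc-lcm-config.yaml:

# Apply the updated ConfigMap and confirm the values took effect.
kubectl apply -f tmc-lcm-config.yaml
kubectl get configmap tmc-lcm-config -n vmware-system-tmc -o yaml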
You can also use the ConfigMap to enable FIPS and to configure private registries. For more information, see the Tanzu Kubernetes Grid documentation.
Audit Logging in Your Provisioned Cluster
When you provision a new Tanzu Kubernetes cluster through Tanzu Mission Control, preconfigured audit logging is enabled for the cluster. For more information, see Events and Audit Logs.