Before you can deploy the Tanzu Mission Control Self-Managed service, you must set up a Kubernetes cluster and configure it for sufficient capacity and functionality to host the service.
You can install Tanzu Mission Control (TMC) Self-Managed on the following Kubernetes cluster types:
Tanzu Kubernetes Grid (TKG) cluster versions 1.6.x, 2.1.x, and 2.2.x (with the latest supported Kubernetes versions) running in vSphere.
For information about deploying a Tanzu Kubernetes Grid cluster, see VMware Tanzu Kubernetes Grid Documentation.
Tanzu Kubernetes workload cluster running in vSphere with Tanzu on vSphere version 7.x.
For information about deploying a Tanzu Kubernetes cluster running in vSphere with Tanzu on vSphere version 7.x, see vSphere with Tanzu Configuration and Management.
Tanzu Kubernetes workload cluster running in vSphere with Tanzu on vSphere version 8.x (for TMC Self-Managed 1.0.1).
For information about deploying a Tanzu Kubernetes cluster running in vSphere with Tanzu on vSphere version 8.x, see Installing and Configuring vSphere with Tanzu.
Note: Deploying TMC Self-Managed 1.0 (prior to the 1.0.1 patch) on a Tanzu Kubernetes Grid (TKG) 2.0 workload cluster running in vSphere with Tanzu on vSphere version 8.x is a tech preview only. Initiate deployments only in pre-production environments, or in production environments where support for the integration is not required. vSphere 8u1 or later is required to test the tech preview integration.
The following table provides the component versions and configurations that have been tested for a TMC Self-Managed deployment.
| Component | Version | Notes |
| --- | --- | --- |
| Kubernetes Cluster | 1.25.x, 1.24.x, 1.23.x | Control plane nodes: 1 (vCPUs: 4, memory: 8 GB, storage: 40 GB). Worker nodes: 3 (vCPUs: 4, memory: 8 GB, storage: 40 GB). Although we tested TMC Self-Managed on a workload cluster with a single control plane node, we recommend using three control plane nodes in production environments to provide a highly available Kubernetes control plane. Use single control plane clusters for testing only. |
| Kubernetes CSI Implementation | See Compatibility Matrices for vSphere Container Storage Plug-in for the plug-in versions supported with your vSphere and Kubernetes versions. | Configured to support persistent volume creation using a default storage class for stateful services, Prometheus, and other persistence needs. |
| Kubernetes CNI Implementation | N/A | Configured to support the Kubernetes overlay network. |
| Load Balancer | N/A | A load balancer that accepts traffic on port 443 and connects to the nodes in the cluster (for example, Avi on clusters running in vSphere with Tanzu). |
| Load Balancer Service Controller | N/A | A controller that provisions a load balancer endpoint for Contour ingress. Examples include AWS Load Balancer Controller, AVI Kubernetes Operator, and Juniper Cloud-Native Contrail. |
| Harbor Image Registry | 2.x | Storage quota: at least 10 GB. Authenticated registries are not supported, so a public project is required in Harbor and the installer must have admin access to push images. The Tanzu Kubernetes Grid management cluster must be created to support private container registries. For more information, see Prepare an Internet-Restricted Environment. |
| OIDC-compliant identity provider | N/A | Required to provide IDP federation to TMC Self-Managed. Examples of OIDC-compliant identity providers include Okta, Dex, and UAA with Identity Federation to Active Directory. |
| cert-manager with a valid ClusterIssuer | 1.10.2 | The certificates issued by the issuer must be trusted by the browsers that access TMC Self-Managed. Note: cert-manager v1.10 is deprecated; VMware supports cert-manager v1.10 as a component of TMC Self-Managed. |
| kapp-controller | Default version in TKG, or the latest version (0.45.1 or later) | kapp-controller is used to detect and install the Tanzu Mission Control Self-Managed packages. |
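After the cluster is available, you can spot-check several of these requirements from the command line. The following is a minimal sketch; the namespaces and expected values are typical for TKG clusters and might differ in your environment.

kubectl version                       # confirm the server is running Kubernetes 1.23.x-1.25.x
kubectl get nodes -o wide             # confirm the control plane and worker node counts
kubectl get storageclass              # confirm a default storage class exists for persistent volumes
kubectl get pods -n cert-manager      # confirm cert-manager is running
kubectl get pods -n tkg-system        # confirm kapp-controller is running (namespace may vary)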
Before you launch the Kubernetes cluster that will host Tanzu Mission Control Self-Managed, configure certificates so that the cluster and kapp-controller can trust Harbor.
For the cluster to trust Harbor, copy the contents of the certificate into the cluster specification as shown in the following example.
spec:
  settings:
    network:
      trust:
        additionalTrustedCAs:
          # data is the base64-encoded PEM contents of the Harbor CA certificate
          - data: Base64PEM==
            name: sm-harbor-name
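For example, assuming the Harbor CA certificate is saved locally as harbor-ca.crt (an illustrative file name), you can produce the base64-encoded PEM value for the data field as follows:

# Single base64 encoding of the PEM file (-w 0 disables line wrapping with GNU coreutils)
base64 -w 0 harbor-ca.crt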
For kapp-controller to trust Harbor, add the contents of the certificate to the ConfigMap for kapp-controller, as shown in the following example.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kapp-controller-config
  namespace: tkg-system
data:
  caCerts: |-
    -----BEGIN CERTIFICATE-----
    xxx
    -----END CERTIFICATE-----
  httpProxy: ""
  httpsProxy: ""
  noProxy: ""
  dangerousSkipTLSVerify: ""
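After you create or update the ConfigMap, kapp-controller must pick up the new CA bundle. The following is a sketch, assuming the manifest is saved as kapp-controller-config.yaml (an illustrative file name) and that the kapp-controller deployment runs in the tkg-system namespace; depending on the kapp-controller version, a restart may be required for the change to take effect.

kubectl apply -f kapp-controller-config.yaml
# Restart so the controller reloads its configuration
kubectl -n tkg-system rollout restart deployment kapp-controller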
Get the contents of the Harbor self-signed CA certificate.
Create a secret with the cluster name as the prefix, for example, <cluster-name>-user-trusted-ca-secret. The following is an example of the secret.
apiVersion: v1
data:
  additional-ca-1: xx.double.base64.encoded==
kind: Secret
metadata:
  name: <cluster-name>-user-trusted-ca-secret
  namespace: tkg-cluster-ns
type: Opaque
In the example, additional-ca-1 is the name of the data map, which is the same as the certificate name. The value is the double base64-encoded value of the CA certificate.
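For example, a minimal sketch of producing the double base64-encoded value from a local copy of the Harbor CA certificate (harbor-ca.crt is an illustrative file name):

# Encode the PEM file twice (-w 0 disables line wrapping with GNU coreutils)
base64 -w 0 harbor-ca.crt | base64 -w 0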
Add a trust variable to the cluster specification. The trust variable name must be the same as the name of the data map for the secret. In this example, the name value is additional-ca-1.
spec:
  topology:
    class: tanzukubernetescluster
    ...
    variables:
      - name: storageClass
        value: tkg-storage-profile
      - name: trust
        value:
          additionalTrustedCAs:
            - name: additional-ca-1
Create a KappControllerConfig with the cluster name as the prefix, for example, <cluster-name>-kapp-controller-package. Add the contents of the CA certificate to <cluster-name>-kapp-controller-package. The following is an example YAML.
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: KappControllerConfig
metadata:
  ...
  name: <cluster-name>-kapp-controller-package
  namespace: testns
  ...
spec:
  kappController:
    config:
      caCerts: |-
        -----BEGIN CERTIFICATE-----
        xxx
        -----END CERTIFICATE-----
    createNamespace: false
    deployment:
      ...
    globalNamespace: tkg-system
  namespace: tkg-system
status:
  secretRef: <cluster-name>-kapp-controller-data-values
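You can inspect the object to confirm that the CA certificate is present, for example (the namespace and name follow the example above):

kubectl -n testns get kappcontrollerconfig <cluster-name>-kapp-controller-package -o yaml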
If your Tanzu Kubernetes Grid environment pulls images from a private Harbor registry (see Prepare an Internet-Restricted Environment), you can also set the custom image repository variables in the environment where you run the Tanzu CLI, for example:
export TKG_CUSTOM_IMAGE_REPOSITORY=custom-image-repository.harbor.io/yourproject
export TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY=false
export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE=LS0t[...]tLS0tLQ==
For an example of a Tanzu Kubernetes Grid workload cluster YAML, see Deploy Tanzu Kubernetes Grid Workload Cluster.
After you launch the Kubernetes cluster that will host Tanzu Mission Control Self-Managed, do the following to configure the cluster:
Note: TMC Self-Managed interacts with cluster metadata only; it does not interact with sensitive cluster data. If you require all data at rest to be encrypted, use a storage-level solution to encrypt the data at rest.
To ensure proper traffic flow for Tanzu Mission Control Self-Managed, you must define a DNS zone to store DNS records that direct traffic to the external load balancer, so that the traffic is handled by the ingress resources that the installer creates.
The Tanzu Mission Control Self-Managed service requires the following type A records that direct traffic to your load balancer IP address. For example, if you define the DNS zone as dnsZone: tanzu.io, the corresponding FQDN for alertmanager is alertmanager.tanzu.io, and traffic for alertmanager is directed to the IP address of the cluster's load balancer. Create the following type A records in your DNS server and point them to a preferred IP address in your load balancer's IP pool. (A resolution spot-check follows the list.)
<my-tmc-dns-zone>
alertmanager.<my-tmc-dns-zone>
auth.<my-tmc-dns-zone>
blob.<my-tmc-dns-zone>
console.s3.<my-tmc-dns-zone>
gts-rest.<my-tmc-dns-zone>
gts.<my-tmc-dns-zone>
landing.<my-tmc-dns-zone>
pinniped-supervisor.<my-tmc-dns-zone>
prometheus.<my-tmc-dns-zone>
s3.<my-tmc-dns-zone>
tmc-local.s3.<my-tmc-dns-zone>
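After you create the records, you can spot-check that they resolve to the load balancer address. The following sketch assumes the example zone tanzu.io from above and that the dig utility is installed; any DNS lookup tool works.

# Each lookup should return the load balancer IP address
for host in alertmanager auth pinniped-supervisor tmc-local.s3; do
  dig +short "${host}.tanzu.io"
done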
Alternatively, you can let the load balancer controller provision the IP address and then configure the DNS records after installation. See Configure DNS records.
Note: Make sure that you provide a value for the dnsZone key in the values.yaml file for the installer. You create the values.yaml file as a step in the installation process. See Create a values.yaml file.
When you install Tanzu Mission Control Self-Managed, cert-manager requests TLS certificates for the external endpoints listed in the previous section, so you must set up a cluster issuer in your cluster.
For Tanzu Mission Control Self-Managed, you can use CA as your cluster issuer type, which allows you to bring your own self-signed certificates to the deployment. Configuring a CA issuer for cert-manager is described at https://cert-manager.io/docs/configuration/ca/. You can also use any of the other cluster issuers supported by cert-manager. For more information, see the documentation for the cert-manager project at https://cert-manager.io/docs/.
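For example, the following is a minimal sketch of a CA cluster issuer backed by your own CA certificate and key; the secret and issuer names are illustrative, and you must supply your own base64-encoded certificate and key material.

apiVersion: v1
kind: Secret
metadata:
  name: tmc-local-ca              # illustrative name
  namespace: cert-manager         # CA secrets for a ClusterIssuer live in the cert-manager namespace by default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded CA certificate>
  tls.key: <base64-encoded CA private key>
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: tmc-local-issuer          # illustrative name
spec:
  ca:
    secretName: tmc-local-ca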
Regardless of which cluster issuer you choose, save the root CA certificate, because it is required later in the installation process to set up trust in the Tanzu Mission Control Self-Managed components.
Tanzu Mission Control Self-Managed manages user authentication using Pinniped Supervisor as the identity broker, and requires an existing OIDC-compliant identity provider (IDP). Examples of OIDC-compliant IDPs are Okta, Dex, and UAA with Identity Federation to an Active Directory. Tanzu Mission Control Self-Managed does not store user identities directly.
Pinniped Supervisor requires the issuer URL, client ID, and client secret to integrate with your IDP. To enable the integration with Pinniped Supervisor, do the following in your IDP. If you cannot configure the IDP yourself, ask your IDP administrator to do the following:
Create the groups tmc:admin and tmc:member in your IDP. The groups tmc:admin and tmc:member have a special meaning in Tanzu Mission Control. For more information, see Access Control in VMware Tanzu Mission Control Concepts.
Add the initial administrator to the tmc:admin group. The user that first logs in to Tanzu Mission Control Self-Managed must be a member of the tmc:admin group to complete the onboarding.
Create an OIDC client application in the IDP with the following properties:
The sign-in method is OIDC - OpenID Connect.
The application type is Native Application.
Note: The resource owner password grant type is necessary only if you want to log in with the CLI using a username and password. If the grant is not included, the TMC CLI requires a browser to complete the login.
Record the client ID and client secret of the application; you provide them in the values.yaml file used during installation.
The sign-in redirect URI is https://pinniped-supervisor.<my-tmc-dns-zone>/provider/pinniped/callback
Tanzu Mission Control Self-Managed comes with a MinIO installation. However, MinIO is used exclusively for storing audit reports and inspection scan results.
If you plan to use Data Protection, provide an S3-compatible storage solution. The S3-compatible storage solution can be either self-provisioned or cloud provided. The storage solution is used to store cluster backups. For more information, see Create a Target Location for Data Protection in Using VMware Tanzu Mission Control.
The bootstrap machine can be a local physical machine or a VM that you access via a console window or client shell. You use the bootstrap machine to install Tanzu Mission Control Self-Managed onto your prepared Kubernetes cluster. The bootstrap machine must have the following:
The tar command installed
kubectl installed and configured
For information about required ports, see https://ports.esp.vmware.com/.
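A quick way to confirm that the bootstrap machine has the required tools and can reach the target cluster, for example:

tar --version
kubectl version                   # the client version prints even if the cluster is unreachable
kubectl config current-context    # confirms which cluster kubectl is configured to use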