Preparing your cluster to host Tanzu Mission Control Self-Managed

Before you can deploy the Tanzu Mission Control Self-Managed service, you must set up a Kubernetes cluster and configure it with sufficient capacity and the functionality required to host the service.

You can install Tanzu Mission Control (TMC) Self-Managed on the following Kubernetes cluster types:

  • Tanzu Kubernetes Grid (TKG) cluster versions 2.5.x, 2.4.x, and 2.3.x (with the latest supported Kubernetes versions) running in vSphere.

    For information about deploying a Tanzu Kubernetes Grid cluster, see VMware Tanzu Kubernetes Grid Documentation.

  • Tanzu Kubernetes workload cluster running in vSphere with Tanzu on vSphere version 7.x.

    • vSphere 7.0U3l or later is required to support registration of the Supervisor Cluster to a TMC Self-Managed deployment to provide cluster lifecycle management capabilities.

    For information about deploying a Tanzu Kubernetes cluster running in vSphere with Tanzu on vSphere version 7.x, see vSphere with Tanzu Configuration and Management.

For the latest information about supported TKG versions, see the TMC Self-Managed Release Notes.

The following component versions and configurations have been tested for a TMC Self-Managed deployment.

Component: Kubernetes Cluster
Version: 1.28.x, 1.27.x, or 1.26.x
Notes:
  Control plane
    Nodes: 3 for high availability; 1 for testing only
    vCPUs: 4
    Memory: 8 GB
    Storage: 40 GB
  Workers
    Nodes: 6 for a medium-size stack; 3 for a small-size stack
    vCPUs: 4
    Memory: 8 GB
    Storage: 40 GB
  Although TMC Self-Managed is tested on a workload cluster with a single control plane node, use three control plane nodes in production environments. This configuration provides a highly available Kubernetes control plane. Use single control plane clusters for testing only.

Component: Kubernetes CSI Implementation
Version: See Compatibility Matrices for vSphere Container Storage Plug-in for the plug-in versions supported with your vSphere and Kubernetes versions.
Notes: Configured to support persistent volume creation using a default storage class for stateful services, Prometheus, and other persistence needs.

Component: Kubernetes CNI Implementation
Version: N/A
Notes: Configured to support the Kubernetes overlay network.

Component: Load Balancer
Version: N/A
Notes: A load balancer accepting traffic on port 443 that connects to the nodes in the cluster (for example, Avi on clusters running in vSphere with Tanzu).

Component: LoadBalancer Service Controller
Version: N/A
Notes: A controller that provisions a load balancer endpoint for Contour ingress. Examples include the AWS Load Balancer Controller, AVI Kubernetes Operator, and Juniper Cloud-Native Contrail.

Component: Harbor Image Registry
Version: 2.x
Notes: Storage quota of at least 10 GB. Authenticated registries are not supported. A public project is required in Harbor, with the installer having admin access to push images. The Tanzu Kubernetes Grid management cluster must be created to support private container registries. For more information, see Prepare an Internet-Restricted Environment.

Component: OIDC-compliant identity provider
Version: N/A
Notes: Required to provide IDP federation to TMC Self-Managed. Examples of OIDC-compliant identity providers include Okta, Dex, and UAA with Identity Federation to Active Directory.

Component: Cert-manager with a valid ClusterIssuer
Version: 1.10.2
Notes: The certificates issued by the issuer must be trusted by the browsers accessing TMC Self-Managed.
  Note: cert-manager v1.10 is deprecated. VMware supports cert-manager v1.10 as a component of TMC Self-Managed.

Component: kapp-controller
Version: Default version in TKG, or the latest version (0.45.1 or later)
Notes: kapp-controller is used to detect and install the Tanzu Mission Control Self-Managed packages.

Configure certificates

Before you launch the Kubernetes cluster that will host Tanzu Mission Control Self-Managed, configure certificates so that the cluster and kapp-controller can trust Harbor.

vSphere with Tanzu on vSphere 7.x
If you are deploying Tanzu Mission Control Self-Managed on vSphere with Tanzu on vSphere 7.x, do the following to configure certificates.
  1. Get the contents of the Harbor self-signed CA certificate.
  2. For the cluster to trust Harbor, copy the contents of the certificate into the cluster specification as shown in the following example.

    spec:
      settings:
        network:
          trust:
            additionalTrustedCAs:
            - data: Base64PEM==
              name: sm-harbor-name
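
    The data value is the base64-encoded PEM contents of the certificate. As a minimal sketch, assuming the Harbor CA certificate is saved locally as harbor-ca.crt (a placeholder file name):

    # Base64-encode the CA certificate to produce the additionalTrustedCAs data value.
    base64 -w0 harbor-ca.crt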
    
  3. For kapp-controller to trust Harbor, add the contents of the certificate to the ConfigMap for kapp-controller.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kapp-controller-config
      namespace: tkg-system
    data:
      caCerts: |-
        -----BEGIN CERTIFICATE-----
        xxx
        -----END CERTIFICATE-----
      httpProxy: ""
      httpsProxy: ""
      noProxy: ""
      dangerousSkipTLSVerify: ""
    
vSphere with Tanzu on vSphere 8.x
If you are deploying Tanzu Mission Control Self-Managed on vSphere with Tanzu on vSphere 8.x, do the following to configure certificates.
  1. Get the contents of the Harbor self-signed CA certificate.

  2. Create a secret with the cluster name as the prefix, for example, <cluster-name>-user-trusted-ca-secret. The following is an example of the secret.

    apiVersion: v1
    data:
      additional-ca-1: xx.double.base64.encoded==
    kind: Secret
    metadata:
      name: <cluster-name>-user-trusted-ca-secret
      namespace: tkg-cluster-ns
    type: Opaque
    

    In the example, additional-ca-1 is the name of the data map entry, which matches the certificate name. The value is the double base64-encoded CA certificate.
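
    As a minimal sketch, assuming the Harbor CA certificate is saved locally as harbor-ca.crt (a placeholder file name), you can produce the double base64-encoded value like this:

    # Encode the CA certificate twice to produce the double base64-encoded value.
    base64 -w0 harbor-ca.crt | base64 -w0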

  3. Add a trust variable into the cluster specification.

    The trust variable name must match the name of the data map entry in the secret. In this example, the name value is additional-ca-1.

    spec:
      topology:
        class: tanzukubernetescluster
        ...
        variables:
        - name: storageClass
          value: tkg-storage-profile
        - name: trust
          value:
            additionalTrustedCAs:
            - name: additional-ca-1
    
  4. Create a KappControllerConfig with the cluster name as the prefix. For example, <cluster-name>-kapp-controller-package.

  5. Add the contents of the CA certificate to <cluster-name>-kapp-controller-package. The following is an example YAML.

    apiVersion: run.tanzu.vmware.com/v1alpha3
    kind: KappControllerConfig
    metadata:
      ...
      name: <cluster-name>-kapp-controller-package
      namespace: testns
      ...
    spec:
      kappController:
        config:
          caCerts: |-
            -----BEGIN CERTIFICATE-----
            xxx
            -----END CERTIFICATE-----
        createNamespace: false
        deployment:
          ...
        globalNamespace: tkg-system
      namespace: tkg-system
    status:
      secretRef: <cluster-name>-kapp-controller-data-values
    
Tanzu Kubernetes Grid on vSphere
If you are deploying Tanzu Mission Control Self-Managed on Tanzu Kubernetes Grid on vSphere, set the following variables for Harbor in the workload cluster YAML.
export TKG_CUSTOM_IMAGE_REPOSITORY=custom-image-repository.harbor.io/yourproject
export TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY=false
export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE=LS0t[...]tLS0tLQ==
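
The value of TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE is the base64-encoded PEM certificate. As a sketch, assuming the Harbor CA certificate is saved as harbor-ca.crt (a placeholder file name):

export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE=$(base64 -w0 harbor-ca.crt)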

For an example of a Tanzu Kubernetes Grid workload cluster YAML, see Deploy Tanzu Kubernetes Grid Workload Cluster.


Configuring your cluster

After you launch the Kubernetes cluster that will host Tanzu Mission Control Self-Managed, do the following to configure the cluster:

  • Configure a DNS zone
  • Set up TLS certificates using cluster issuer or certificate import
  • Set up authentication
  • Provide S3-compatible object storage
Note

TMC Self-Managed interacts with the cluster metadata. It does not interact with sensitive cluster data. If you require all data at rest to be encrypted, use a storage-level solution to encrypt the data at rest.

Configure a DNS zone

To ensure that there is proper traffic flow for Tanzu Mission Control Self-Managed, you must define a DNS zone to store DNS records that direct traffic to the external load balancer. By directing traffic to the load balancer, you enable that traffic to be handled by ingress resources that the installer creates.

The Tanzu Mission Control Self-Managed service requires the following type A records that direct traffic to your load balancer IP address. For example, if you define the DNS zone as dnsZone: tanzu.io, the corresponding FQDN for alertmanager is alertmanager.tanzu.io, and traffic for alertmanager is directed to the IP address of the cluster's load balancer. Create the following type A records in your DNS server and point them to a preferred IP address in your load balancer's IP pool. A quick way to verify the records follows the list.

  • <my-tmc-dns-zone>
  • alertmanager.<my-tmc-dns-zone>
  • auth.<my-tmc-dns-zone>
  • blob.<my-tmc-dns-zone>
  • console.s3.<my-tmc-dns-zone>
  • gts-rest.<my-tmc-dns-zone>
  • gts.<my-tmc-dns-zone>
  • landing.<my-tmc-dns-zone>
  • pinniped-supervisor.<my-tmc-dns-zone>
  • prometheus.<my-tmc-dns-zone>
  • s3.<my-tmc-dns-zone>
  • tmc-local.s3.<my-tmc-dns-zone>
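
For example, to spot-check the records after you create them (the tanzu.io zone and the 10.0.0.100 address below are placeholders):

# Each query should return the load balancer IP, for example 10.0.0.100.
nslookup alertmanager.tanzu.io
nslookup pinniped-supervisor.tanzu.io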

Alternatively, you can let the load balancer controller provision the IP address and then configure the DNS records after installation. See Configure DNS records.

Note

Make sure that you provide a value for the dnsZone key in the values.yaml file for the installer. You create the values.yaml file as a step in the installation process. See Create a values.yaml file.
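
For example, a minimal excerpt of the values.yaml file, using the tanzu.io zone from the earlier example:

dnsZone: tanzu.io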

Set up TLS

You can set up TLS certificates by using a cert-manager cluster issuer or by importing certificates.

Using cluster issuer
When you install Tanzu Mission Control Self-Managed, cert-manager requests TLS certificates for the external endpoints listed in Configure a DNS zone. Therefore, you must set up a cluster issuer in your cluster.

For Tanzu Mission Control Self-Managed, you can use CA as your cluster issuer type. Using CA as your cluster issuer type allows you to use your own self-signed certificates in the deployment. Configuring CA in your cluster for cert-manager is described at https://cert-manager.io/docs/configuration/ca/. You can also use any of the other cluster issuers supported by cert-manager. For more information, see the documentation for the cert-manager project at https://cert-manager.io/docs/.
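
The following is a minimal sketch of a CA cluster issuer, following the cert-manager documentation linked above. The issuer name local-issuer and the secret name ca-key-pair are placeholders, and the CA certificate and key must already exist as a TLS secret in the cert-manager namespace.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: local-issuer
spec:
  ca:
    # TLS secret in the cert-manager namespace that holds your CA certificate and key.
    secretName: ca-key-pair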

Regardless of which cluster issuer you choose, save the root CA certificate, because it is required later in the installation process to set up trust in the Tanzu Mission Control Self-Managed components.

Importing certificates
If you are importing certificates, make sure that you have the certificate and key material corresponding to the following Kubernetes secrets.

  • server-tls: common name auth.<my-tmc-dns-zone>; DNS name auth.<my-tmc-dns-zone>
  • landing-service-tls: common name landing.<my-tmc-dns-zone>; DNS name landing.<my-tmc-dns-zone>
  • minio-tls: common name s3.<my-tmc-dns-zone>; DNS names s3.<my-tmc-dns-zone>, tmc-local.s3.<my-tmc-dns-zone>, console.s3.<my-tmc-dns-zone>
  • pinniped-supervisor-server-tls: common name pinniped-supervisor.<my-tmc-dns-zone>; DNS name pinniped-supervisor.<my-tmc-dns-zone>
  • stack-tls: common name <my-tmc-dns-zone>; DNS name <my-tmc-dns-zone>
  • tenancy-service-tls: common name gts.<my-tmc-dns-zone>; DNS names gts.<my-tmc-dns-zone>, gts-rest.<my-tmc-dns-zone>
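
As a sketch, you can create one of these secrets from certificate and key files with kubectl. The file names auth.crt and auth.key are placeholders, and the namespace must be the one where TMC Self-Managed is installed (shown here as tmc-local, an assumption; substitute your installation namespace):

kubectl create secret tls server-tls --cert=auth.crt --key=auth.key --namespace tmc-local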

Do not use a wildcard certificate for your secrets (for example, *.<my_tmc_dns_zone>). A wildcard certificate can cause the error Fatal error loading application configuration when you attempt to access the TMC console, along with an HTTP 404 error in your browser, because the ingress redirects the URL incorrectly.

If this happens, regenerate the secrets with a dedicated certificate for each individual DNS name, update the secrets, and then restart the corresponding service deployments and stateful sets. For more information about these secrets, see the mapping section in Certificate rotation in Tanzu Mission Control Self-Managed.

Set up authentication

Tanzu Mission Control Self-Managed manages user authentication using Pinniped Supervisor as the identity broker, and requires an existing OIDC-compliant identity provider (IDP). Examples of OIDC-compliant IDPs are Okta, Dex, and UAA with Identity Federation to Active Directory. Tanzu Mission Control Self-Managed does not store user identities directly.

Pinniped Supervisor requires the issuer URL, client ID, and client secret from your IDP to complete the integration. To enable integration with Pinniped Supervisor, do the following in your IDP. If you cannot configure your IDP yourself, ask your IDP administrator to perform these steps.

  1. Create the groups tmc:admin and tmc:member in your IDP.
    The group names tmc:admin and tmc:member have a special meaning in Tanzu Mission Control. For more information, see Access Control in VMware Tanzu Mission Control Concepts.
  2. Add yourself, or the first user of the service, to the tmc:admin group. The user that first logs in to Tanzu Mission Control Self-Managed must be a member of the tmc:admin group to complete the onboarding.
  3. Create an OIDC client application in the IDP with the following properties:

    • Sign-in method is OIDC - OpenID Connect.
    • Supports the resource-owner password flow where users can exchange their username and password for OIDC tokens. For example, in Okta, you will need to create a Native Application.
      Note

      The resource owner password grant type is necessary only if you want to log in with the CLI using username and password. If the grant is not included, the TMC CLI will require a browser to complete the login.

    • Uses client secret as the client authentication method. This method generates a client secret for the app; you supply that client secret in the values.yaml file during installation (see the sketch after this list).
    • Requires PKCE as additional verification.
    • Has the following grant types:
      • authorization code
      • refresh token
      • resource owner password (optional as noted above)
    • Uses sign-in redirect URI:
      https://pinniped-supervisor.<my-tmc-dns-zone>/provider/pinniped/callback
    • Limits access to the application by selecting appropriate entities (users or groups) in your IDP.
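
The issuer URL, client ID, and client secret from this application are supplied to the installer through the values.yaml file. The exact schema is described in Create a values.yaml file; the key names in the following excerpt are illustrative assumptions only:

oidc:
  issuerURL: https://dev-123456.okta.com  # issuer URL of your IDP (placeholder)
  clientID: xxx                           # client ID of the OIDC application
  clientSecret: xxx                       # client secret generated by the IDP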

Provide S3-compatible object storage

Tanzu Mission Control Self-Managed comes with a MinIO installation. However, MinIO is used exclusively for storing audit reports and inspection scan results.

If you plan to use data protection, provide an S3-compatible storage solution. The S3-compatible storage solution can be either self-provisioned or cloud provided. The storage solution is used to store cluster backups. For more information, see Create a Target Location for Data Protection in Using VMware Tanzu Mission Control.

Bootstrap machine

The bootstrap machine can be a local physical machine or a VM that you access via a console window or client shell. You use the bootstrap machine to install Tanzu Mission Control Self-Managed onto your prepared Kubernetes cluster. The bootstrap machine must have the following (a quick verification sketch follows the list):

  • A Linux x86_64 build
  • Access to your Kubernetes cluster
  • glibc-based Linux distribution
    • Supported distributions include Ubuntu, Fedora, CentOS, and Debian
    • Alpine is not supported
  • tar command installed
  • kubectl installed and configured
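
As a quick sketch, you can verify the prerequisites from a shell on the bootstrap machine:

# Confirm the machine runs Linux on x86_64.
uname -sm
# Confirm tar is available.
tar --version
# Confirm kubectl is installed and can reach the target cluster.
kubectl version
kubectl get nodes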

Network ports

For information about required ports, see https://ports.esp.vmware.com/.
