Installing Tanzu Mission Control Self-Managed

After you have prepared your cluster to run Tanzu Mission Control Self-Managed, as described in Preparing Your Cluster to Host Tanzu Mission Control Self-Managed, you can proceed with the installation.

Note

If you are running a Beta deployment, you cannot upgrade the Beta deployment to GA. The GA installation must be a fresh installation.

Workflow for installing Tanzu Mission Control Self-Managed

Download and stage the installation images

Download, extract, and stage the installer for Tanzu Mission Control Self-Managed.

  1. Download the installer to the bootstrap computer.

    You can download the installer from Broadcom Support.

  2. Create a directory on the bootstrap computer and extract the tarball into the directory. For example:

    mkdir tanzumc
    tar -xf tmc-self-managed-1.4.0.tar -C ./tanzumc  
    
  3. Stage the installation images in your container registry.

    1. Create a public project in your container registry with at least 10 GB of storage quota. If possible, set the quota to unlimited for ease of testing.

      For example: my-harbor-instance/harbor-project
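
      Optionally, you can create the project from the command line instead of the Harbor UI. The following is a minimal sketch that uses the Harbor v2 REST API; the instance hostname, credentials, and project name are placeholders, and a storage_limit of -1 sets the quota to unlimited:

      curl -u '{{username}}:{{password}}' -X POST "https://my-harbor-instance/api/v2.0/projects" \
        -H "Content-Type: application/json" \
        -d '{"project_name": "harbor-project", "metadata": {"public": "true"}, "storage_limit": -1}'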

    2. Add the root CA certificate for Harbor to the /etc/ssl/certs path of the bootstrap computer for system-wide use. This enables pushing images to the Harbor repository.
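
      For example, on a Debian or Ubuntu bootstrap computer, you might install the certificate like this (a sketch; the certificate file name is a placeholder, and other distributions use different tooling, such as update-ca-trust on RHEL):

      sudo cp harbor-ca.crt /usr/local/share/ca-certificates/harbor-ca.crt
      sudo update-ca-certificates
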
    3. Review the required arguments.
      tanzumc/tmc-sm push-images harbor --help
      
    4. Push the images to the registry.

      tanzumc/tmc-sm push-images harbor --project {{harbor-instance}}/{{harbor-project}} --username {{username}} --password {{password}}  
      

      After the command runs successfully, you will see the pushed-package-repository.json file created in the tanzumc directory with the following contents:

      {"repositoryImage":"{{harbor-instance}}/{{harbor-project}}/package-repository","version":"1.4.0"}
      

      You will need the repositoryImage and version from the JSON file to stage the TMC Self-Managed Tanzu packages on your cluster.
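
      If you have jq installed on the bootstrap computer, you can capture these values directly from the file (a convenience sketch):

      REPO_IMAGE=$(jq -r '.repositoryImage' tanzumc/pushed-package-repository.json)
      REPO_VERSION=$(jq -r '.version' tanzumc/pushed-package-repository.json)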

Stage the TMC Self-Managed package on your cluster

Create the tmc-local namespace and add the Tanzu package repository to the workload cluster on which you want to install Tanzu Mission Control Self-Managed.

  1. If your cluster was created using vSphere with Tanzu on v7.x, do the steps in Prepare a Workload Cluster Created by Using vSphere with Tanzu to Run Packages.

    If your cluster was deployed on a Supervisor on vSphere 8.x using TKG or by a standalone TKG management cluster, you can skip this step, because kapp-controller is installed by default in the cluster.

  2. Create the tmc-local namespace. All the artifacts for the Tanzu Mission Control Self-Managed service will be installed to this namespace.

    kubectl create namespace tmc-local
    
  3. (Optional) If you are running Kubernetes 1.26 or later, label the tmc-local namespace to enforce the privileged pod security standard so that the TMC pods can start.

    kubectl label ns tmc-local pod-security.kubernetes.io/enforce=privileged
    
  4. (Optional) If you are importing your own certificates, create the required Kubernetes secrets for each of the secrets listed in Importing certificates in Set up TLS.

    kubectl create secret tls <SECRET_NAME> --key="KEY-FILE-NAME.key" --cert="CERT-FILE-NAME.crt" -n tmc-local
    

    Where:

    • KEY-FILE-NAME is the name of the key file that your certificate issuer gave you for the corresponding secret
    • CERT-FILE-NAME is the name of the certificate (.crt) file that your certificate issuer gave you for the corresponding secret
  5. Add the Tanzu package repository to your cluster in the tmc-local namespace.

    tanzu package repository add tanzu-mission-control-packages --url "{{repositoryImage}}:{{version}}" --namespace tmc-local
    

    Use the exact repositoryImage and version from the output after you pushed the images to the registry in Download and stage the installation images.

  6. Wait for the kapp-controller to reconcile the Tanzu packages in the repository.

    You can check the reconciliation status using the following command.

    tanzu package repository list --namespace tmc-local
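
    You can also view the detailed status of the repository (a sketch, using the repository name that you added in the previous step):

    tanzu package repository get tanzu-mission-control-packages --namespace tmc-local

    The repository is ready when its status shows Reconcile succeeded.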
    

Create a values.yaml file

Create a values.yaml file that contains the key-values for your configuration. For the complete list of key-values you can use, see Configuration key values for Tanzu Mission Control Self-Managed.

Note

In the password fields, enclose strings containing special characters, such as @ and #, in quotes. Otherwise, the special characters can cause the installation to fail. For example, userPassword: "ad#min".

  1. Inspect the values schema.

    tanzu package available get "tmc.tanzu.vmware.com/{{version}}" --namespace tmc-local --values-schema
    

    Use the same version used in the previous section.

  2. Enter values for each of the keys based on your configuration and save your changes.

    Note

    This step might require you to look up some of the configuration values from Preparing Your Cluster to Host Tanzu Mission Control Self-Managed.

The following is a sample YAML that uses a preferred load balancer IP with Avi Kubernetes Operator and Okta as the OIDC IdP. If you are authenticating using Active Directory (AD) or OpenLDAP, see the examples in Authentication with AD or OpenLDAP.

harborProject: harbor.tanzu.io/tmc
dnsZone: tmc.tanzu.io
clusterIssuer: local-issuer
postgres:
  userPassword: <postgres-admin-password>
  maxConnections: 300
minio:
  username: root
  password: <minio-admin-password>
contourEnvoy:
  serviceType: LoadBalancer
  serviceAnnotations: # needed only when specifying load balancer controller specific config like preferred IP
    ako.vmware.com/load-balancer-ip: "10.20.10.100"
  # when using an auto-assigned IP instead of a preferred IP, use the following key instead of the serviceAnnotations above
  # loadBalancerClass: local
oidc:
  issuerType: pinniped
  issuerURL: https://dev.okta.com/oauth2/default
  clientID: <okta-client-id>
  clientSecret: <okta-client-secret>
trustedCAs:
  local-ca.pem: | # root CA cert of the cluster issuer in cert-manager, if not a well-known CA
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  harbor-ca.pem: | # root CA cert of Harbor, if not a well-known CA and if different from the local-ca.pem
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  idp-ca.pem: | # root CA cert of the IDP, if not a well-known CA and if different from the local-ca.pem
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
alertmanager: # needed only if you want to turn on alerting
  criticalAlertReceiver:
    slack_configs:
    - send_resolved: false
      api_url: https://hooks.slack.com/services/...
      channel: '#<slack-channel-name>'
telemetry:
  ceipOptIn: true
  eanNumber: <vmware-ean> # if EAN is available
  ceipAgreement: true
size: small

Authentication with AD or OpenLDAP

If you are using your organization's Active Directory (AD) or OpenLDAP credentials, add the key-value authenticationType: ldap and replace the OIDC IdP configuration with the configuration for your AD or OpenLDAP server.

Important

Make sure that the mail attribute is specified in your directory. The value for the mail attribute must identify unique end users. When you authenticate with AD or OpenLDAP, TMC Self-Managed uses the mail attribute as the primary identity claim.

The following example uses OpenLDAP.

authenticationType: ldap
oidc:
  issuerType: pinniped
  issuerURL: https://pinniped-supervisor.tmc.tanzu.io/provider/pinniped
ldap:
  type: "ldap"
  host: "ldap.openldap.svc.cluster.local"
  username: "cn=pinniped guest,dc=pinniped,dc=dev"
  password: "somevalue123"
  domainName: "in-cluster-openldap"
  userBaseDN: "ou=users,dc=pinniped,dc=dev"
  userSearchFilter: "(&(objectClass=person)(sAMAccountName={}))"
  userSearchAttributeUsername: sAMAccountName
  groupBaseDN: "ou=users,dc=pinniped,dc=dev"
  groupSearchFilter: "(&(objectClass=group)(member={}))"
  rootCA: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

The following example uses AD.

Note: The values for admin and member (under idpGroupRoles) are case-sensitive and must be the common name (CN attribute) of the respective AD group.

authenticationType: ldap
oidc:
  issuerType: pinniped
  issuerURL: https://pinniped-supervisor.tmc.tanzu.io/provider/pinniped
idpGroupRoles:
  admin: tmc-admin
  member: tmc-member
ldap:
  type: activedirectory
  host: "dc01.tanzu.io"
  username: "CN=Pinniped SvcAcct,OU=tmcsm,DC=tanzu,DC=io"
  password: "somevalue123!"
  domainName: "acme-active-directory"
  userBaseDN: "DC=tanzu,DC=io"
  userSearchFilter: "(&(objectClass=person)(sAMAccountName={}))"
  userSearchAttributeUsername: sAMAccountName
  groupBaseDN: "DC=tanzu,DC=io"
  groupSearchFilter: "(&(objectClass=group)(member={}))"
  rootCA: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

To troubleshoot AD and OpenLDAP authentication issues, see Troubleshooting AD and OpenLDAP authentication.

Importing a TLS certificate

If you are importing your own TLS certificates, add the following to the values.yaml file:

  • certificateImport: true
  • The CA PEM that issued the certificates, added to the trustedCAs map.

Specifying a clusterIssuer is optional. The following example shows the required values.

certificateImport: true
trustedCAs:
  my-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
# Additional configuration below this line as needed

Configure a default duration/renewBefore for certificates managed by cert-manager (optional)

Starting with version 1.4 of TMC Self-Managed, you can optionally configure a default duration and renewBefore for certificates managed by cert-manager. cert-manager automatically renews the TLS secret before it expires, and pods automatically reload the renewed certificates. If not otherwise configured, the default values are 1 year for duration and 30 days for renewBefore.

The following example shows the default values in the configuration file.

certManager:
  certificate:
    duration: 8760h   # The requested 'duration' (i.e. lifetime) of the Certificate
    renewBefore: 720h # How long before the currently issued certificate's expiry cert-manager should renew
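
After installation, you can verify the values applied to the generated certificates. The following is a verification sketch that uses kubectl custom columns; it assumes cert-manager is installed in the cluster, as required for TMC Self-Managed:

kubectl get certificates -n tmc-local \
  -o custom-columns=NAME:.metadata.name,DURATION:.spec.duration,RENEW-BEFORE:.spec.renewBefore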

Configure a default repository version and registry path for your Tanzu Standard package repository (optional)

Starting with version 1.2 of TMC Self-Managed, you can optionally configure a default repository version and registry path for the Tanzu Standard package repository. Prior to version 1.2, TMC Self-Managed configured the Tanzu Standard package repository version on managed TKG clusters, which could cause issues, such as overwrite problems on Tanzu Kubernetes clusters with a pre-configured Tanzu Standard package repository version.

About your repository and registry

  • The desired version of the Tanzu Standard package repository must be hosted in a registry that is accessible from your TMC Self-Managed environment.
  • The registry can be either unauthenticated or authenticated. If you use a Tanzu Standard package repository from your own authenticated registry, you can add registry credentials at the cluster group level from TMC and export them to all namespaces. This ensures access to packages in attached clusters from the authenticated registry.
  • Downgrade to an older version of the Tanzu Standard package repository is not supported.
  • You can upgrade only to the latest version of the Tanzu Standard package repository.
  • If you do not configure a different version of the Tanzu Standard package repository, the installer configures a default version of the repository with the latest versions of the available packages.

To configure a default Tanzu Standard package repository:

  1. Before you install or upgrade your TMC Self-Managed deployment, add the following section to your values.yaml file to specify the host, path, and name of the Tanzu Standard package repository you want to set as the default for this deployment.

    tanzuStandard:
      imageRegistry: <registry-hostname-with-ports-if-any>
      relativePath: <relative/path-to/tz-std-repo>:<standard-repo-version>
    
    • Make sure the imageRegistry value specifies the port on which the registry is listening. For example, the registry installed with TMC Self-Managed would look like this:
    registry.tanzu.io:8443
    
    • Make sure the relativePath value has a colon (:) separating the path and the version.
    • The values of imageRegistry and relativePath must form an accessible URL to the registry.
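
    For example, a filled-in section might look like this (the path and version values are hypothetical):

    tanzuStandard:
      imageRegistry: registry.tanzu.io:8443
      relativePath: tanzu-packages/standard-repo:v2023.10.16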

After you have updated the values.yaml file, you can proceed with the install or upgrade.

Deploy the Tanzu Mission Control Self-Managed stack to your cluster

After downloading and staging the installation images and creating the values.yaml file, launch the installer using the tanzu CLI.

Use the following command to initiate the installation:

tanzu package install tanzu-mission-control -p "tmc.tanzu.vmware.com" --version "{{version}}" --values-file "{{/path-to/my-values.yaml}}" --namespace tmc-local

Where:

  • version is the version of the Tanzu package repository with the Tanzu Mission Control Self-Managed images from step 3.4 under Download and stage the installation images.
  • /path-to/my-values.yaml is the path to the values.yaml file that you created in the previous section.
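
The installation takes some time to complete. You can monitor its progress with commands like the following (a sketch):

tanzu package installed get tanzu-mission-control --namespace tmc-local
kubectl get pods -n tmc-local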

Configure DNS records

If you have already configured the DNS type A records for Tanzu Mission Control Self-Managed as part of Configure a DNS Zone, you can skip this step.

If you have not configured the DNS type A records, create them on your DNS server. The list of type A records to configure is in Configure a DNS Zone. You can find the load balancer IP for Tanzu Mission Control Self-Managed by running the following command:

kubectl get svc contour-envoy -n tmc-local -oyaml | yq -e '.status.loadBalancer.ingress[0].ip'
Note

If the DNS records are not configured, some services might not start up successfully and will be in CrashLoopBackOff state.
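
After you create the records, you can verify that they resolve to the load balancer IP (a sketch; replace the host name with one of the A records for your DNS zone):

dig +short tmc.<my-dns-zone>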

Log in to the Tanzu Mission Control console

After the installation completes, you can open the Tanzu Mission Control console in a browser.

Important

The first user to log in to your Tanzu Mission Control Self-Managed deployment must belong to the tmc:admin group.

  1. Open a browser and go to the URL of your Tanzu Mission Control Self-Managed deployment. The URL contains the DNS zone that you defined when you prepared the cluster for deployment, something like this:

    https://tmc.<my-dns-zone>
    

    For example, if you named the DNS zone tanzu.io, then the URL for the Tanzu Mission Control console looks like this:

    https://tmc.tanzu.io
    

    The start page of the Tanzu Mission Control console prompts you to sign in.

  2. Click Sign In. You are redirected to your upstream IDP.

  3. Log in with your IDP credentials.

Note

When you log out of the TMC Self-Managed console, only the TMC Self-Managed cookies are cleared. The upstream IDP cookies are not cleared.

Download and install the Tanzu CLI and Tanzu Mission Control plug-ins

The Tanzu Mission Control plug-ins for the Tanzu CLI allow you to create and manage Kubernetes clusters and cluster groups, namespaces and workspaces, data protection, and policies. For more information, see About VMware Tanzu CLI - Tanzu Mission Control Plug-ins.

Note

Tanzu CLI 1.0 and later versions are supported.

Instructions for installing the Tanzu CLI and the Tanzu Mission Control plug-ins are provided in Install and Configure the Tanzu CLI and Tanzu Mission Control Plug-ins. However, there are some slight variations for a Tanzu Mission Control Self-Managed deployment, so use the following sequence of steps.

  1. Review the prerequisites.
  2. Download the CLI binary.
  3. Install the core tanzu CLI.
  4. Initialize and verify your tanzu CLI installation.
  5. Stage the Tanzu Mission Control plug-ins in your local repository.

    To stage the plug-ins, log in to the local repository and run the following commands:

    tanzu plugin upload-bundle --tar tmc/tanzu-cli-plugins/tmc.tar.gz --to-repo "local-repo-name"
    tanzu plugin source update default -u "local-repo-name"/plugin-inventory:latest
    
  6. Log in and create a context.
    Note

    You do not need an API token to create a context in a TMC Self-Managed deployment.

  7. Verify the Tanzu Mission Control plug-ins.
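
    For example, you can list the installed plug-ins and confirm that the Tanzu Mission Control plug-ins appear (a verification sketch):

    tanzu plugin list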

Dynamically update the stack size of your TMC Self-Managed deployment

You can update configuration values for your TMC Self-Managed deployment without doing a complete reinstall. This procedure shows how to change the stack size of your TMC Self-Managed deployment; however, it also applies to other configuration values.

If you want to change the size of the TMC stack, first make sure your cluster has the minimum configuration, as described in Preparing Your Cluster to Host Tanzu Mission Control Self-Managed.

To update configuration values dynamically:

  1. Open your values.yaml file and change the value of the size key. For example, to change the stack size from small to medium:

    size: medium
    
  2. Save the file.

  3. Retrieve the package version of your TMC Self-Managed deployment.

    tanzu package installed get tanzu-mission-control | grep PACKAGE-VERSION 
    
  4. Use the following command to update the deployment with the new configuration values.

    tanzu package installed update tanzu-mission-control -p tmc.tanzu.vmware.com --version <package-version>  --values-file /home/kubo/values.yaml --namespace tmc-local
    

    Make sure you replace <package-version> and /home/kubo/values.yaml with the appropriate values before running the command.

  5. After the update completes, verify that all deployments are updated.
    For example, the number of pods for api-gateway-server is 2 with the small stack size, and 3 with the medium stack size.
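
    For example, you can check the replica count directly (a verification sketch):

    kubectl get deployment api-gateway-server -n tmc-local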

Uninstall Tanzu Mission Control Self-Managed

To remove Tanzu Mission Control Self-Managed and its artifacts from your cluster, use the Tanzu CLI.

  1. Back up any data that you do not want to lose.

  2. Run the following commands:

    tanzu package installed delete tanzu-mission-control --namespace tmc-local
    tanzu package repository delete tanzu-mission-control-packages --namespace tmc-local
    
  3. If necessary, delete residual resources.

    These commands clean up most of the resources that were created by the tanzu-mission-control Tanzu package. However, there are some resources that you have to remove manually, including:

    • persistent volumes
    • internal TLS certificates
    • configmaps

    Alternatively, you can delete the tmc-local namespace. When you delete the tmc-local namespace, the persistent volume claims associated with the namespace are deleted. Make sure you have already backed up any data that you don’t want to lose.
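
    For example, you can list residual persistent volumes and then delete the namespace (a sketch; persistent volumes are cluster-scoped and, depending on their reclaim policy, might still require manual deletion):

    kubectl get pv | grep tmc-local
    kubectl delete namespace tmc-local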
