After you have prepared your cluster to run Tanzu Mission Control Self-Managed, as described in Preparing Your Cluster to Host Tanzu Mission Control Self-Managed, you can proceed with the installation.
Note: If you are running a Beta deployment, you cannot upgrade the Beta deployment to GA. The GA installation must be a fresh installation.
Download, extract, and stage the installer for Tanzu Mission Control Self-Managed.
Download the installer to the bootstrap computer.
You can download the installer from Broadcom Support.
Create a directory on the bootstrap computer and extract the tarball into the directory. For example:
mkdir tanzumc
tar -xf tmc-self-managed-1.4.0.tar -C ./tanzumc
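Optionally, list the extracted files to confirm the tmc-sm installer binary is present (the exact contents vary by version):
ls ./tanzumc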
Stage the installation images in your container registry.
Create a public project in your container registry with at least 10 GB of storage quota. If possible, set the quota to unlimited for ease of testing.
For example: my-harbor-instance/harbor-project
Copy the root CA certificate of your container registry (if it is not signed by a well-known CA) to the /etc/ssl/certs path of the jumpbox for system-wide use. This enables the image push to the Harbor repository.
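For example (harbor-ca.crt is a placeholder for your registry's CA certificate file):
sudo cp harbor-ca.crt /etc/ssl/certs/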
To see the available options for the push command, run:
tanzumc/tmc-sm push-images harbor --help
Push the images to the registry.
tanzumc/tmc-sm push-images harbor --project {{harbor-instance}}/{{harbor-project}} --username {{username}} --password {{password}}
After the command runs successfully, you will see the pushed-package-repository.json file created in the tanzumc directory with the following contents:
{"repositoryImage":"{{harbor-instance}}/{{harbor-project}}/package-repository","version":"1.4.0"}
You will need the repositoryImage and version from the JSON file to stage the TMC Self-Managed Tanzu packages on your cluster.
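If jq is installed on the bootstrap computer, you can capture these values into shell variables for later commands:
REPO_IMAGE=$(jq -r '.repositoryImage' tanzumc/pushed-package-repository.json)
REPO_VERSION=$(jq -r '.version' tanzumc/pushed-package-repository.json)
echo "${REPO_IMAGE}:${REPO_VERSION}"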
Create the tmc-local namespace and add the Tanzu package repository to the workload cluster on which you want to install Tanzu Mission Control Self-Managed.
If your cluster was created using vSphere with Tanzu on v7.x, do the steps in Prepare a Workload Cluster Created by Using vSphere with Tanzu to Run Packages.
If your cluster was deployed on a Supervisor on vSphere 8.x using TKG or by a standalone TKG management cluster, you can skip this step, because kapp-controller is installed by default in the cluster.
Create the tmc-local namespace. All the artifacts for the Tanzu Mission Control Self-Managed service will be installed to this namespace.
kubectl create namespace tmc-local
kubectl label ns tmc-local pod-security.kubernetes.io/enforce=privileged
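To verify the namespace and its pod security label:
kubectl get ns tmc-local --show-labels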
(Optional) If you are importing your own certificates, create the required Kubernetes secrets for each of the secrets listed in Importing certificates in Set up TLS.
kubectl create secret tls <SECRET_NAME> --key="KEY-FILE-NAME.key" --cert="CERT-FILE-NAME.crt" -n tmc-local
Where:
KEY-FILE-NAME is the name of the key file that your certificate issuer gave you, corresponding to the appropriate secret.
CERT-FILE-NAME is the name of the crt file that your certificate issuer gave you, corresponding to the appropriate secret.
Add the Tanzu package repository to your cluster in the tmc-local namespace.
tanzu package repository add tanzu-mission-control-packages --url "{{repositoryImage}}:{{version}}" --namespace tmc-local
Use the exact repositoryImage and version from the output after you pushed the images to the registry in Download and stage the installation images.
Wait for the kapp-controller to reconcile the Tanzu packages in the repository.
You can check the reconciliation status using the following command.
tanzu package repository list --namespace tmc-local
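As an additional check, you can inspect the PackageRepository resource directly with kubectl; the repository is ready when its description shows Reconcile succeeded:
kubectl get packagerepository tanzu-mission-control-packages -n tmc-local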
Create a values.yaml file that contains the key-values for your configuration. For the complete list of key-values you can use, see Configuration key values for Tanzu Mission Control Self-Managed.
Note: In the password fields, enclose strings containing special characters, such as @ and #, in quotes. Otherwise, the special characters can cause the installation to fail. For example, userPassword: "ad#min".
Inspect the values schema.
tanzu package available get "tmc.tanzu.vmware.com/{{version}}" --namespace tmc-local --values-schema
Use the same version used in the previous section.
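For example, if the version is 1.4.0, you can save the schema to a file to consult while editing values.yaml:
tanzu package available get "tmc.tanzu.vmware.com/1.4.0" --namespace tmc-local --values-schema > tmc-values-schema.txt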
Enter values for each of the keys based on your configuration and save your changes.
Note: This step might require you to look up some of the configuration from Preparing your cluster for Tanzu Mission Control Self-Managed.
The following is a sample YAML that uses a preferred load balancer IP with Avi Kubernetes Operator and Okta as the OIDC IdP. If you are authenticating using Active Directory (AD) or OpenLDAP, see the examples in Authentication with AD or OpenLDAP.
harborProject: harbor.tanzu.io/tmc
dnsZone: tmc.tanzu.io
clusterIssuer: local-issuer
postgres:
  userPassword: <postgres-admin-password>
  maxConnections: 300
minio:
  username: root
  password: <minio-admin-password>
contourEnvoy:
  serviceType: LoadBalancer
  serviceAnnotations: # needed only when specifying load balancer controller specific config like preferred IP
    ako.vmware.com/load-balancer-ip: "10.20.10.100"
  # when using an auto-assigned IP instead of a preferred IP, please use the following key instead of the serviceAnnotations above
  # loadBalancerClass: local
oidc:
  issuerType: pinniped
  issuerURL: https://dev.okta.com/oauth2/default
  clientID: <okta-client-id>
  clientSecret: <okta-client-secret>
trustedCAs:
  local-ca.pem: | # root CA cert of the cluster issuer in cert-manager, if not a well-known CA
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  harbor-ca.pem: | # root CA cert of Harbor, if not a well-known CA and if different from the local-ca.pem
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  idp-ca.pem: | # root CA cert of the IDP, if not a well-known CA and if different from the local-ca.pem
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
alertmanager: # needed only if you want to turn on alerting
  criticalAlertReceiver:
    slack_configs:
    - send_resolved: false
      api_url: https://hooks.slack.com/services/...
      channel: '#<slack-channel-name>'
telemetry:
  ceipOptIn: true
  eanNumber: <vmware-ean> # if EAN is available
  ceipAgreement: true
size: small
If you are using your organization’s Active Directory (AD) or OpenLDAP credentials, add the key-value authenticationType: ldap and replace the OIDC IdP configuration with configuration for your AD or OpenLDAP.
Important: Make sure that the issuerType is set to pinniped and that the issuerURL uses the Pinniped supervisor URL for your DNS zone, as shown in the following examples.
The following example uses OpenLDAP.
authenticationType: ldap
oidc:
  issuerType: pinniped
  issuerURL: https://pinniped-supervisor.tmc.tanzu.io/provider/pinniped
ldap:
  type: "ldap"
  host: "ldap.openldap.svc.cluster.local"
  username: "cn=pinniped guest,dc=pinniped,dc=dev"
  password: "somevalue123"
  domainName: "in-cluster-openldap"
  userBaseDN: "ou=users,dc=pinniped,dc=dev"
  userSearchFilter: "(&(objectClass=person)(sAMAccountName={}))"
  userSearchAttributeUsername: sAMAccountName
  groupBaseDN: "ou=users,dc=pinniped,dc=dev"
  groupSearchFilter: "(&(objectClass=group)(member={}))"
  rootCA: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
The following example uses AD.
Note: The values for admin and member (under idpGroupRoles) are case-sensitive, and must be the common name (CN attribute) of the respective AD group.
authenticationType: ldap
oidc:
  issuerType: pinniped
  issuerURL: https://pinniped-supervisor.tmc.tanzu.io/provider/pinniped
idpGroupRoles:
  admin: tmc-admin
  member: tmc-member
ldap:
  type: activedirectory
  host: "dc01.tanzu.io"
  username: "CN=Pinniped SvcAcct,OU=tmcsm,DC=tanzu,DC=io"
  password: "somevalue123!"
  domainName: "acme-active-directory"
  userBaseDN: "DC=tanzu,DC=io"
  userSearchFilter: "(&(objectClass=person)(sAMAccountName={}))"
  userSearchAttributeUsername: sAMAccountName
  groupBaseDN: "DC=tanzu,DC=io"
  groupSearchFilter: "(&(objectClass=group)(member={}))"
  rootCA: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
To troubleshoot AD and OpenLDAP authentication issues, see Troubleshooting AD and OpenLDAP authentication.
If you are importing your own TLS certificates, add the following to the values.yaml file:
certificateImport: true
Add your root CA certificate to the trustedCAs map. Specifying a clusterIssuer is optional. The following example shows the required values.
certificateImport: true
trustedCAs:
  my-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
# Additional configuration below this line as needed
Configure duration/renewBefore for certificates managed by cert-manager (optional)
Starting with version 1.4 of TMC Self-Managed, you can optionally configure a default duration and renewBefore for certificates. Certificates can be automatically reloaded by pods after renewal before expiration. The default values, if not otherwise configured, are 1 year for duration and 30 days for renewBefore. The TLS secret is automatically renewed by cert-manager before it expires.
The following example shows the default values in the configuration file.
certManager:
  certificate:
    duration: 8760h # The requested 'duration' (i.e. lifetime) of the Certificate
    renewBefore: 720h # How long before the currently issued certificate's expiry cert-manager should renew
Starting with version 1.2 of TMC Self-Managed, you can optionally configure a default repository version and registry path for the Tanzu Standard package repository. Prior to version 1.2, TMC Self-Managed would configure the Tanzu Standard package repository version on managed TKG clusters, which could cause issues, such as overwrite problems on Tanzu Kubernetes clusters with a pre-configured Tanzu Standard package repository version.
Before you install or upgrade your TMC Self-Managed deployment, add the following section to your values.yaml file to specify the host, path, and name of the Tanzu Standard package repository you want to set as the default for this deployment.
tanzuStandard:
  imageRegistry: <registry-hostname-with-ports-if-any>
  relativePath: <relative/path-to/tz-std-repo>:<standard-repo-version>
The imageRegistry value specifies the registry hostname, including the port where the registry is listening, if any. For example, the registry installed with TMC Self-Managed would look like this: registry.tanzu.io:8443
The relativePath value has a colon (:) separating the path and the version.
Together, imageRegistry and relativePath must form an accessible URL to the registry.
After you have updated the values.yaml file, you can proceed with the install or upgrade.
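For reference, a filled-in section might look like the following (the path and version shown are placeholders; substitute the values from your own registry):
tanzuStandard:
  imageRegistry: registry.tanzu.io:8443
  relativePath: tanzu-packages/standard-repo:v2023.10.16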
After downloading and staging the installation images and creating the values.yaml file, launch the installer using the tanzu CLI.
Use the following command to initiate the installation:
tanzu package install tanzu-mission-control -p "tmc.tanzu.vmware.com" --version "{{version}}" --values-file "{{/path-to/my-values.yaml}}" --namespace tmc-local
Where:
version is the version of the Tanzu package repository with the Tanzu Mission Control Self-Managed images from step 3.4 under Download and stage the installation images.
/path-to/my-values.yaml is the path to the values.yaml file that you created in the previous section.
If you have already configured the DNS type A records for Tanzu Mission Control Self-Managed as part of Configure a DNS Zone, you can skip this step.
If you have not configured the DNS type A records, you need to configure them depending on your DNS server. The list of type A records to configure can be found under Configure a DNS Zone. The load balancer IP for Tanzu Mission Control Self-Managed can be found by running the following command:
kubectl get svc contour-envoy -n tmc-local -oyaml | yq -e '.status.loadBalancer.ingress[0].ip'
Note: If the DNS records are not configured, some services might not start up successfully and will be in the CrashLoopBackOff state.
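Before opening the console, you can confirm that the package reconciled and the pods are running:
tanzu package installed get tanzu-mission-control --namespace tmc-local
kubectl get pods -n tmc-local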
After the installation completes, you can open the Tanzu Mission Control console in a browser.
Important: The first user to log in to your Tanzu Mission Control Self-Managed deployment must belong to the tmc:admin group.
Open a browser and go to the URL of your Tanzu Mission Control Self-Managed deployment. The URL contains the DNS zone that you defined when you prepared the cluster for deployment, something like this:
https://tmc.<my-dns-zone>
For example, if you named the DNS zone tanzu.io, then the URL for the Tanzu Mission Control console looks like this:
https://tmc.tanzu.io
The start page of the Tanzu Mission Control console prompts you to sign in.
Click Sign In. When you click Sign In, you are redirected to your upstream IDP to log in.
Log in with your IDP credentials.
Note: When you log out of the TMC Self-Managed console, only the TMC Self-Managed cookies are cleared. The upstream IDP cookies are not cleared.
The Tanzu Mission Control plug-ins for the Tanzu CLI allow you to create and manage Kubernetes clusters and cluster groups, namespaces and workspaces, data protection, and policies. For more information, see About VMware Tanzu CLI - Tanzu Mission Control Plug-ins.
Note: Tanzu CLI 1.0 and later versions are supported.
Instructions for installing the Tanzu CLI and the Tanzu Mission Control plug-ins are provided in Install and Configure the Tanzu CLI and Tanzu Mission Control Plug-ins. However, there are some slight variations for a Tanzu Mission Control Self-Managed deployment. Therefore, use the following sequence of steps.
Install the tanzu CLI.
Stage the Tanzu Mission Control plug-ins in your local repository.
To stage the plug-ins, log in to the local repository and run the following commands:
tanzu plugin upload-bundle --tar tmc/tanzu-cli-plugins/tmc.tar.gz --to-repo "local-repo-name"
tanzu plugin source update default -u "local-repo-name"/plugin-inventory:latest
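To confirm that the plug-ins are now discoverable from the local repository, list the available plug-ins:
tanzu plugin search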
Note: You do not need an API token to create a context in a TMC Self-Managed deployment.
You can update configuration values for your TMC Self-Managed deployment without doing a complete reinstall. This procedure shows how to change the stack size of your TMC Self-Managed deployment. However, it is also applicable to the other configuration values.
If you want to change the size of the TMC stack, first make sure your cluster has the minimum configuration, as described in Preparing your cluster to host Tanzu Mission Control Self-Managed.
To update configuration values dynamically:
Open your values.yaml file and change the value of the size key to medium. For example:
size: medium
Save the file.
Retrieve the package version of your TMC Self-Managed deployment.
tanzu package installed get tanzu-mission-control --namespace tmc-local | grep PACKAGE-VERSION
Use the following command to update the deployment with the new configuration values.
tanzu package installed update tanzu-mission-control -p tmc.tanzu.vmware.com --version <package-version> --values-file /home/kubo/values.yaml --namespace tmc-local
Make sure you replace <package-version> and /home/kubo/values.yaml with the appropriate values before running the command.
After the update completes, verify that all deployments are updated.
For example, the number of pods for api-gateway-server is 2 with the small stack size, and 3 with the medium stack size.
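One way to check the replica counts is to list the deployments in the tmc-local namespace:
kubectl get deployments -n tmc-local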
To remove Tanzu Mission Control Self-Managed and its artifacts from your cluster, use the tanzu CLI.
Back up any data that you do not want to lose.
Run the following commands:
tanzu package installed delete tanzu-mission-control --namespace tmc-local
tanzu package repository delete tanzu-mission-control-packages --namespace tmc-local
If necessary, delete residual resources.
The above commands clean up most of the resources that were created by the tanzu-mission-control Tanzu package. However, there are some resources that you have to remove manually.
Alternatively, you can delete the tmc-local namespace. When you delete the tmc-local namespace, the persistent volume claims associated with the namespace are deleted. Make sure you have already backed up any data that you don’t want to lose.
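For example:
kubectl delete namespace tmc-local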