After you have prepared your cluster to run Tanzu Mission Control Self-Managed, as described in Preparing Your Cluster to Host Tanzu Mission Control Self-Managed, you can proceed with the installation.
Note: If you are running a Beta deployment, you cannot upgrade the Beta deployment to GA. The GA installation must be a fresh installation.
Download, extract, and stage the installer for Tanzu Mission Control Self-Managed.
Download the installer to the bootstrap computer.
You can download the installer from the Customer Connect download site.
Create a directory on the bootstrap computer and extract the tarball into the directory. For example:
mkdir tanzumc
tar -xf tmc-self-managed-1.0.0.tar -C ./tanzumc
Stage the installation images in your container registry.
Create a public project in your container registry with at least 10 GB of storage quota. If possible, set the quota to unlimited for ease of testing.
For example: my-harbor-instance/harbor-project
If your Harbor registry uses a certificate that is not signed by a well-known CA, add the root CA certificate to the /etc/ssl/certs path of the jumpbox for system-wide use. This enables the image push to the Harbor repository. For help with the push command, run:
tanzumc/tmc-sm push-images harbor --help
Push the images to the registry.
tanzumc/tmc-sm push-images harbor --project {{harbor-instance}}/{{harbor-project}} --username {{username}} --password {{password}}
After the command runs successfully, you will see a pushed-package-repository.json file created in the tanzumc directory with the following contents:
{"repositoryImage":"{{harbor-instance}}/{{harbor-project}}/package-repository","version":"1.0.0"}
You will need the repositoryImage and version values from this JSON file to stage the Tanzu Mission Control Self-Managed Tanzu packages on your cluster.
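You can read both values out of the generated file with a short shell sketch, assuming jq is installed on the bootstrap computer. The sample string below stands in for tanzumc/pushed-package-repository.json so the sketch runs anywhere; point jq at the real file instead.

```shell
# Sketch: read repositoryImage and version from the generated JSON
# (assumes jq; sample string stands in for tanzumc/pushed-package-repository.json)
json='{"repositoryImage":"my-harbor-instance/my-project/package-repository","version":"1.0.0"}'
REPO_IMAGE=$(echo "$json" | jq -r '.repositoryImage')
VERSION=$(echo "$json" | jq -r '.version')
# Combined, this is the --url value for "tanzu package repository add"
echo "${REPO_IMAGE}:${VERSION}"
```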
Create the tmc-local namespace and add the Tanzu package repository to the workload cluster where you want to install Tanzu Mission Control Self-Managed.
If your cluster was created using vSphere with Tanzu on v7.x, do the steps in Prepare a Workload Cluster Created by Using vSphere with Tanzu to Run Packages.
If your cluster was deployed on a Supervisor on vSphere 8.x using TKG or by a standalone TKG management cluster, you can skip this step, because kapp-controller is installed by default in the cluster.
Create the tmc-local namespace. All the artifacts for the Tanzu Mission Control Self-Managed service are installed to this namespace.
kubectl create namespace tmc-local
Add the Tanzu package repository to your cluster in the tmc-local namespace.
tanzu package repository add tanzu-mission-control-packages --url "{{repositoryImage}}:{{version}}" --namespace tmc-local
Use the exact repositoryImage and version values from the output after you pushed the images to the registry in Download and stage the installation images.
Wait for kapp-controller to reconcile the Tanzu packages in the repository.
You can check the reconciliation status using the following command.
tanzu package repository list --namespace tmc-local
Create a values.yaml file that contains the key-values for your configuration. For the complete list of key-values you can use, see Configuration key values for Tanzu Mission Control Self-Managed.
Note: In the password fields, enclose strings containing special characters, such as @ and #, in quotes. Otherwise, the special characters can cause the installation to fail. For example, userPassword: "ad#min".
Inspect the values schema.
tanzu package available get "tmc.tanzu.vmware.com/{{version}}" --namespace tmc-local --values-schema
Use the same version that you used in the previous section.
Enter values for each of the keys based on your configuration and save your changes.
Note: This step might require you to look up some of the configuration from Preparing your cluster for Tanzu Mission Control Self-Managed.
The following is a sample YAML that uses a preferred load balancer IP with Avi Kubernetes Operator and Okta as the OIDC IDP.
harborProject: harbor.tanzu.io/tmc
dnsZone: tmc.tanzu.io
clusterIssuer: local-issuer
postgres:
  userPassword: <postgres-admin-password>
  maxConnections: 300
minio:
  username: root
  password: <minio-admin-password>
contourEnvoy:
  serviceType: LoadBalancer
  serviceAnnotations: # needed only when specifying load balancer controller specific config like preferred IP
    ako.vmware.com/load-balancer-ip: "10.20.10.100"
  # when using an auto-assigned IP instead of a preferred IP, use the following key instead of the serviceAnnotations above
  # loadBalancerClass: local
oidc:
  issuerType: pinniped
  issuerURL: https://dev.okta.com/oauth2/default
  clientID: <okta-client-id>
  clientSecret: <okta-client-secret>
trustedCAs:
  local-ca.pem: | # root CA cert of the cluster issuer in cert-manager, if not a well-known CA
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  harbor-ca.pem: | # root CA cert of Harbor, if not a well-known CA and if different from the local-ca.pem
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  idp-ca.pem: | # root CA cert of the IDP, if not a well-known CA and if different from the local-ca.pem
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
alertmanager: # needed only if you want to turn on alerting
  criticalAlertReceiver:
    slack_configs:
      - send_resolved: false
        api_url: https://hooks.slack.com/services/...
        channel: '#<slack-channel-name>'
telemetry:
  ceipOptIn: true
  eanNumber: <vmware-ean> # if EAN is available
  ceipAgreement: true
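Before running the install, a quick local sanity check that the top-level keys you expect are present in your values file can save a failed reconcile. This is only an illustrative sketch: the key list is taken from the sample above, and a stand-in file is generated so the sketch runs anywhere; point VALUES at your real values.yaml instead.

```shell
# Sketch: confirm expected top-level keys appear in the values file
# (stand-in file generated here; replace VALUES with the path to your values.yaml)
VALUES=$(mktemp)
printf 'harborProject: harbor.tanzu.io/tmc\ndnsZone: tmc.tanzu.io\nclusterIssuer: local-issuer\noidc:\n' > "$VALUES"
for key in harborProject dnsZone clusterIssuer oidc; do
  grep -q "^${key}:" "$VALUES" && echo "found: ${key}"
done
rm -f "$VALUES"
```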
After downloading and staging the installation images and creating the values.yaml file, launch the installer using the tanzu CLI.
Use the following command to initiate the installation:
tanzu package install tanzu-mission-control -p "tmc.tanzu.vmware.com" --version "{{version}}" --values-file "{{/path-to/my-values.yaml}}" --namespace tmc-local
where:
version is the version of the Tanzu package repository with the Tanzu Mission Control Self-Managed images from step 3.4 under Download and stage the installation images.
/path-to/my-values.yaml is the path to the values.yaml file that you created in the previous section.
If you have already configured the DNS type A records for Tanzu Mission Control Self-Managed as part of Configure a DNS Zone, you can skip the following step.
If you have not configured the DNS type A records, you need to configure them depending on your DNS server. The list of type A records to configure can be found under Configure a DNS Zone. The load balancer IP for Tanzu Mission Control Self-Managed can be found by running the following command:
kubectl get svc contour-envoy -n tmc-local -oyaml | yq -e '.status.loadBalancer.ingress[0].ip'
Note: Until the DNS records are configured, some services might not start up successfully and remain in a CrashLoopBackOff state.
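Given the load balancer IP, you can print the zone-file entries to hand to your DNS administrator. This is only a hedged sketch: the two hostnames shown are illustrative placeholders (the authoritative record list is in Configure a DNS Zone), my-dns-zone stands in for your zone, and the IP is the sample value from the configuration above.

```shell
# Sketch: print DNS A records to create for the load balancer IP
# (hostnames and zone are illustrative; see Configure a DNS Zone for the real list)
LB_IP="10.20.10.100"   # substitute the IP returned by the kubectl command above
for host in tmc.my-dns-zone pinniped-supervisor.my-dns-zone; do
  printf '%s. IN A %s\n' "$host" "$LB_IP"
done
```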
After the installation completes, you can open the Tanzu Mission Control console in a browser.
Important: The first user to log in to your Tanzu Mission Control Self-Managed deployment must belong to the tmc:admin group.
Open a browser and go to the URL of your Tanzu Mission Control Self-Managed deployment. The URL contains the DNS zone that you defined when you prepared the cluster for deployment, something like this:
https://tmc.<my-dns-zone>
For example, if you named the DNS zone tanzu.io, then the URL for the Tanzu Mission Control console looks like this:
https://tmc.tanzu.io
The start page of the Tanzu Mission Control console prompts you to sign in.
Click Sign In. You are redirected to your upstream IDP to log in.
Log in with your IDP credentials.
NoteWhen you log out of the TMC Self-Managed console, only the TMC Self-Managed cookies are cleared. The upstream IDP cookies are not cleared.
If you want to use the command line to interact with Tanzu Mission Control Self-Managed, download the CLI binary to your bootstrap computer.
Note: You must have an Internet connection to download the CLI binary, and you must have logged in at least once using the UI before logging in using the CLI.
Follow the instructions in Log In with the Tanzu Mission Control CLI to download and install the tmc CLI.
Alternatively, you can use the following CLI commands.
Locate the supported binary for your system using curl.
Replace my-dns-zone with the DNS zone you defined.
curl https://my-dns-zone/v1alpha1/system/binaries
You will see an output similar to the following output:
{"latestVersion":"0.5.3-defeac11","versions":{"0.5.3-defeac11":{
"darwinX64":"https://tmc-cli.s3-us-west-2.amazonaws.com/tmc/0.5.3-defeac11/darwin/x64/tmc",
"linuxX64":"https://tmc-cli.s3-us-west-2.amazonaws.com/tmc/0.5.3-defeac11/linux/x64/tmc",
"windowsX64":"https://tmc-cli.s3-us-west-2.amazonaws.com/tmc/0.5.3-defeac11/windows/x64/tmc"}}}
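To avoid copying the URL by hand, you can extract the download link for your platform from this response, assuming jq is installed. The sketch below reproduces a trimmed copy of the sample response so it runs anywhere; in practice you would pipe the curl output into jq instead.

```shell
# Sketch: pick the Linux x64 download URL out of the binaries response
# (assumes jq; the string below reproduces the sample response shown above)
response='{"latestVersion":"0.5.3-defeac11","versions":{"0.5.3-defeac11":{"linuxX64":"https://tmc-cli.s3-us-west-2.amazonaws.com/tmc/0.5.3-defeac11/linux/x64/tmc"}}}'
latest=$(echo "$response" | jq -r '.latestVersion')
echo "$response" | jq -r --arg v "$latest" '.versions[$v].linuxX64'
```

For macOS or Windows, select the darwinX64 or windowsX64 key instead.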
Download the binary using wget.
Replace <my-os-binary> with the quoted URL for your operating system.
wget -O /usr/local/bin/tmc <my-os-binary>
For example:
wget -O /usr/local/bin/tmc "https://tmc-cli.s3-us-west-2.amazonaws.com/tmc/0.5.3-defeac11/linux/x64/tmc"
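wget does not set the execute bit on the downloaded file, so you typically need to mark the binary executable before using it. The sketch below demonstrates the step on a stand-in temp file so it runs anywhere without root; on the bootstrap computer the target would be /usr/local/bin/tmc.

```shell
# Sketch: mark the downloaded binary executable and verify
# (stand-in temp file used here; on the bootstrap machine use /usr/local/bin/tmc)
TMC_BIN=$(mktemp)            # stand-in for /usr/local/bin/tmc
chmod +x "$TMC_BIN"
test -x "$TMC_BIN" && echo "executable"
rm -f "$TMC_BIN"
```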
After you have successfully installed Tanzu Mission Control Self-Managed, you can copy the Tanzu Standard package and the third-party Sonobuoy inspection scan images to your private image registry. For more information, see Copying Tanzu Standard and conformance images.
To remove Tanzu Mission Control Self-Managed and its artifacts from your cluster, use the tanzu CLI.
Back up any data that you do not want to lose.
Run the following commands:
tanzu package installed delete tanzu-mission-control --namespace tmc-local
tanzu package repository delete tanzu-mission-control-packages --namespace tmc-local
If necessary, delete residual resources.
The above commands clean up most of the resources that were created by the tanzu-mission-control Tanzu package. However, there are some resources that you have to remove manually.
Alternatively, you can delete the tmc-local namespace. When you delete the tmc-local namespace, the persistent volume claims associated with the namespace are also deleted. Make sure you have already backed up any data that you don't want to lose.