After you have prepared your cluster to run Tanzu Mission Control Self-Managed, as described in Preparing Your Cluster to Host Tanzu Mission Control Self-Managed, you can proceed with the installation.
Note: If you are running a Beta deployment, you cannot upgrade the Beta deployment to GA. The GA installation must be a fresh installation.
Download, extract, and stage the installer for Tanzu Mission Control Self-Managed.
Download the installer to the bootstrap computer.
You can download the installer from the Customer Connect download site.
Create a directory on the bootstrap computer and extract the tarball into the directory. For example:
mkdir tanzumc
tar -xf tmc-self-managed-1.1.0.tar -C ./tanzumc
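You can optionally verify the extraction before continuing, for example by listing the directory (the exact contents vary by release):
ls -lh ./tanzumc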
Stage the installation images in your container registry.
Create a public project in your container registry with at least 10 GB of storage quota. If possible, set the quota to unlimited for ease of testing.
For example: my-harbor-instance/harbor-project
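If you prefer the command line, a project with these settings can also be created through the Harbor REST API. The following is a sketch assuming a Harbor v2.x registry; the host, credentials, and project name are placeholders:
curl -u 'admin:<harbor-admin-password>' -X POST "https://my-harbor-instance/api/v2.0/projects" \
  -H "Content-Type: application/json" \
  -d '{"project_name": "harbor-project", "metadata": {"public": "true"}, "storage_limit": -1}'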
If the certificate of your Harbor instance is not signed by a well-known CA, copy the Harbor CA certificate to the /etc/ssl/certs path of the jumpbox for system-wide use. This enables the image push to the Harbor repository.
To see the available options for the image push command, run:
tanzumc/tmc-sm push-images harbor --help
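For example, to stage the Harbor CA certificate described above (assuming it was saved on the jumpbox as harbor-ca.crt):
sudo cp ./harbor-ca.crt /etc/ssl/certs/
On Debian and Ubuntu systems, you may instead need to place the file under /usr/local/share/ca-certificates/ and run sudo update-ca-certificates.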
Push the images to the registry.
tanzumc/tmc-sm push-images harbor --project {{harbor-instance}}/{{harbor-project}} --username {{username}} --password {{password}}
After the command runs successfully, you will see the pushed-package-repository.json file created in the tanzumc directory with the following contents:
{"repositoryImage":"{{harbor-instance}}/{{harbor-project}}/package-repository","version":"1.1.0"}
You will need the repositoryImage and version from the JSON file to stage the TMC Self-Managed Tanzu packages on your cluster.
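For example, assuming jq is installed on the bootstrap computer, you can read both values directly:
jq -r '.repositoryImage, .version' tanzumc/pushed-package-repository.json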
Create the tmc-local namespace and add the Tanzu package repository to the workload cluster on which you want to install Tanzu Mission Control Self-Managed.
If your cluster was created using vSphere with Tanzu on v7.x, do the steps in Prepare a Workload Cluster Created by Using vSphere with Tanzu to Run Packages.
If your cluster was deployed on a Supervisor on vSphere 8.x using TKG or by a standalone TKG management cluster, you can skip this step, because kapp-controller is installed by default in the cluster.
Create the tmc-local namespace. All the artifacts for the Tanzu Mission Control Self-Managed service will be installed in this namespace.
kubectl create namespace tmc-local
kubectl label ns tmc-local pod-security.kubernetes.io/enforce=privileged
(Optional) If you are importing your own certificates, create the required Kubernetes secrets for each of the secrets listed in Importing certificates in Set up TLS.
kubectl create secret tls <SECRET_NAME> --key="KEY-FILE-NAME.key" --cert="CERT-FILE-NAME.crt" -n tmc-local
Where:
KEY-FILE-NAME is the name of the key file that your certificate issuer gave you, corresponding to the appropriate secret.
CERT-FILE-NAME is the name of the crt file that your certificate issuer gave you, corresponding to the appropriate secret.
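For example, to create a hypothetical secret named server-tls from files server.key and server.crt (placeholder names; use the secret names listed in Importing certificates):
kubectl create secret tls server-tls --key="server.key" --cert="server.crt" -n tmc-local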
Add the Tanzu package repository to your cluster in the tmc-local namespace.
tanzu package repository add tanzu-mission-control-packages --url "{{repositoryImage}}:{{version}}" --namespace tmc-local
Use the exact repositoryImage and version from the output after you pushed the images to the registry in Download and stage the installation images.
Wait for the kapp-controller to reconcile the Tanzu packages in the repository.
You can check the reconciliation status using the following command.
tanzu package repository list --namespace tmc-local
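You can also inspect the reconciliation details of the repository itself. For example:
tanzu package repository get tanzu-mission-control-packages --namespace tmc-local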
Create a values.yaml file that contains the key-values for your configuration. For the complete list of key-values you can use, see Configuration key values for Tanzu Mission Control Self-Managed.
Note: In the password fields, enclose strings containing special characters, such as @ and #, in quotes. Otherwise, the special characters can cause the installation to fail. For example, userPassword: "ad#min".
Inspect the values schema.
tanzu package available get "tmc.tanzu.vmware.com/{{version}}" --namespace tmc-local --values-schema
Use the same version used in the previous section.
Enter values for each of the keys based on your configuration and save your changes.
Note: This step might require you to look up some of the configuration from Preparing your cluster for Tanzu Mission Control Self-Managed.
The following is a sample YAML that uses a preferred load balancer IP with Avi Kubernetes Operator and Okta as the OIDC IdP. If you are authenticating using Active Directory (AD) or OpenLDAP, see the examples in Authentication with AD or OpenLDAP.
harborProject: harbor.tanzu.io/tmc
dnsZone: tmc.tanzu.io
clusterIssuer: local-issuer
postgres:
  userPassword: <postgres-admin-password>
  maxConnections: 300
minio:
  username: root
  password: <minio-admin-password>
contourEnvoy:
  serviceType: LoadBalancer
  serviceAnnotations: # needed only when specifying load balancer controller specific config like preferred IP
    ako.vmware.com/load-balancer-ip: "10.20.10.100"
  # when using an auto-assigned IP instead of a preferred IP, use the following key instead of the serviceAnnotations above
  # loadBalancerClass: local
oidc:
  issuerType: pinniped
  issuerURL: https://dev.okta.com/oauth2/default
  clientID: <okta-client-id>
  clientSecret: <okta-client-secret>
trustedCAs:
  local-ca.pem: | # root CA cert of the cluster issuer in cert-manager, if not a well-known CA
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  harbor-ca.pem: | # root CA cert of Harbor, if not a well-known CA and if different from the local-ca.pem
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  idp-ca.pem: | # root CA cert of the IDP, if not a well-known CA and if different from the local-ca.pem
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
alertmanager: # needed only if you want to turn on alerting
  criticalAlertReceiver:
    slack_configs:
    - send_resolved: false
      api_url: https://hooks.slack.com/services/...
      channel: '#<slack-channel-name>'
telemetry:
  ceipOptIn: true
  eanNumber: <vmware-ean> # if EAN is available
ceipAgreement: true
size: small
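Before installing, you can optionally confirm that the file parses as valid YAML, for example with yq (used elsewhere in this procedure); a parse error is cheaper to catch now than during the installation:
yq e 'keys' values.yaml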
If you are using your organization's Active Directory (AD) or OpenLDAP credentials, add the key-value authenticationType: ldap and replace the OIDC IdP configuration with the configuration for your AD or OpenLDAP.
Important: Make sure that the oidc.issuerURL points to the Pinniped supervisor endpoint in your DNS zone, as shown in the examples below (https://pinniped-supervisor.<my-dns-zone>/provider/pinniped).
The following example uses OpenLDAP.
authenticationType: ldap
oidc:
  issuerType: pinniped
  issuerURL: https://pinniped-supervisor.tmc.tanzu.io/provider/pinniped
ldap:
  type: "ldap"
  host: "ldap.openldap.svc.cluster.local"
  username: "cn=pinniped guest,dc=pinniped,dc=dev"
  password: "somevalue123"
  domainName: "in-cluster-openldap"
  userBaseDN: "ou=users,dc=pinniped,dc=dev"
  userSearchFilter: "(&(objectClass=person)(sAMAccountName={}))"
  groupBaseDN: "ou=users,dc=pinniped,dc=dev"
  groupSearchFilter: "(&(objectClass=group)(member={}))"
  rootCA: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
The following example uses AD.
authenticationType: ldap
oidc:
  issuerType: pinniped
  issuerURL: https://pinniped-supervisor.tmc.tanzu.io/provider/pinniped
idpGroupRoles:
  admin: tmc-admin
  member: tmc-member
ldap:
  type: activedirectory
  host: "dc01.tanzu.io"
  username: "CN=Pinniped SvcAcct,OU=tmcsm,DC=tanzu,DC=io"
  password: "somevalue123!"
  domainName: "acme-active-directory"
  userBaseDN: "DC=tanzu,DC=io"
  userSearchFilter: "(&(objectClass=person)(sAMAccountName={}))"
  groupBaseDN: "DC=tanzu,DC=io"
  groupSearchFilter: "(&(objectClass=group)(member={}))"
  rootCA: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
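Before installing, you can optionally verify the bind credentials and search base from outside the cluster. The following is a sketch using ldapsearch with the AD values above; jdoe is a hypothetical user name:
ldapsearch -H ldaps://dc01.tanzu.io -D "CN=Pinniped SvcAcct,OU=tmcsm,DC=tanzu,DC=io" -W -b "DC=tanzu,DC=io" "(sAMAccountName=jdoe)"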
To troubleshoot AD and OpenLDAP authentication issues, see Troubleshooting AD and OpenLDAP authentication.
If you are importing your own TLS certificates, add the following to the values.yaml file:
Set certificateImport: true.
Add the root CA certificate to the trustedCAs map.
Specifying a clusterIssuer is optional. The following example shows the required values.
certificateImport: true
trustedCAs:
  my-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
# Additional configuration below this line as needed
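Optionally, you can confirm that an imported server certificate was issued by the CA you are adding to the trustedCAs map. The following is a sketch using openssl, with placeholder file names:
openssl verify -CAfile my-ca.pem server.crt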
After downloading and staging the installation images and creating the values.yaml file, launch the installer using the tanzu CLI.
Use the following command to initiate the installation:
tanzu package install tanzu-mission-control -p "tmc.tanzu.vmware.com" --version "{{version}}" --values-file "{{/path-to/my-values.yaml}}" --namespace tmc-local
Where:
version is the version of the Tanzu package repository with the Tanzu Mission Control Self-Managed images from step 3.4 under Download and stage the installation images.
/path-to/my-values.yaml is the path to the values.yaml file that you created in the previous section.
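The installation can take a while to reconcile. You can monitor its progress with, for example:
tanzu package installed get tanzu-mission-control --namespace tmc-local
kubectl get pods -n tmc-local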
If you already configured the DNS type A records for Tanzu Mission Control Self-Managed as part of Configure a DNS Zone, you can skip this step.
If you have not configured the DNS type A records, you need to configure them depending on your DNS server. The list of type A records to configure can be found under Configure a DNS Zone. The load balancer IP for Tanzu Mission Control Self-Managed can be found by running the following command:
kubectl get svc contour-envoy -n tmc-local -oyaml | yq -e '.status.loadBalancer.ingress[0].ip'
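After the records are in place, you can optionally confirm that each host name resolves to the load balancer IP. For example, assuming the dig utility is installed:
dig +short tmc.<my-dns-zone>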
Note: If the DNS records are not configured, some services might not start up successfully and will be in CrashLoopBackOff state.
After the installation completes, you can open the Tanzu Mission Control console in a browser.
Important: The first user to log in to your Tanzu Mission Control Self-Managed deployment must belong to the tmc:admin group.
Open a browser and go to the URL of your Tanzu Mission Control Self-Managed deployment. The URL contains the DNS zone that you defined when you prepared the cluster for deployment, something like this:
https://tmc.<my-dns-zone>
For example, if you named the DNS zone tanzu.io, then the URL for the Tanzu Mission Control console looks like this:
https://tmc.tanzu.io
The start page of the Tanzu Mission Control console prompts you to sign in.
Click Sign In. You are redirected to your upstream IdP to log in.
Log in with your IDP credentials.
Note: When you log out of the TMC Self-Managed console, only the TMC Self-Managed cookies are cleared. The upstream IdP cookies are not cleared.
The Tanzu Mission Control plug-ins for the Tanzu CLI allow you to create and manage Kubernetes clusters and cluster groups, namespaces and workspaces, data protection, and policies. For more information, see About VMware Tanzu CLI - Tanzu Mission Control Plug-ins.
Note: Tanzu CLI 1.0 and later versions are supported.
Instructions for installing the Tanzu CLI and the Tanzu Mission Control plug-ins are provided in Install and Configure the Tanzu CLI and Tanzu Mission Control Plug-ins. However, there are some slight variations for a Tanzu Mission Control Self-Managed deployment. Therefore, use the following sequence of steps.
Install the tanzu CLI.
Stage the Tanzu Mission Control plug-ins in your local repository.
To stage the plug-ins, log in to the local repository and run the following commands:
tanzu plugin upload-bundle --tar tmc/tanzu-cli-plugins/tmc.tar.gz --to-repo "local-repo-name"
tanzu plugin source update default -u "local-repo-name"
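You can verify that the plug-ins are now discoverable from the local repository. For example:
tanzu plugin search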
Note: You do not need an API token to create a context in a TMC Self-Managed deployment.
You can skip this step if you installed the Tanzu CLI and Tanzu Mission Control plug-ins.
If you want to use the command line to interact with Tanzu Mission Control Self-Managed, download the CLI binary to your bootstrap computer.
Note: You must have an Internet connection to download the CLI binary, and you must have logged in at least once using the UI before logging in using the CLI.
Follow the instructions in Log In with the Tanzu Mission Control CLI to download and install the tmc CLI.
Alternatively, you can use the following CLI commands.
Locate the supported binary for your system using curl.
Replace my-dns-zone with the DNS zone you defined.
curl https://my-dns-zone/v1alpha1/system/binaries
You will see output similar to the following:
{"latestVersion":"0.5.3-defeac11","versions":{"0.5.3-defeac11":{
"darwinX64":"https://tmc-cli.s3-us-west-2.amazonaws.com/tmc/0.5.3-defeac11/darwin/x64/tmc",
"linuxX64":"https://tmc-cli.s3-us-west-2.amazonaws.com/tmc/0.5.3-defeac11/linux/x64/tmc",
"windowsX64":"https://tmc-cli.s3-us-west-2.amazonaws.com/tmc/0.5.3-defeac11/windows/x64/tmc"}}}
Download the binary using wget.
Replace <my-os-binary> with the quoted URL for your operating system.
wget -O /usr/local/bin/tmc <my-os-binary>
For example:
wget -O /usr/local/bin/tmc "https://tmc-cli.s3-us-west-2.amazonaws.com/tmc/0.5.3-defeac11/linux/x64/tmc"
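The downloaded file is not executable by default, so mark it executable:
chmod +x /usr/local/bin/tmc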
To remove Tanzu Mission Control Self-Managed and its artifacts from your cluster, use the tanzu CLI.
Back up any data that you do not want to lose.
Run the following commands:
tanzu package installed delete tanzu-mission-control --namespace tmc-local
tanzu package repository delete tanzu-mission-control-packages --namespace tmc-local
If necessary, delete residual resources.
The above commands clean up most of the resources that were created by the tanzu-mission-control Tanzu package. However, there are some resources, such as the persistent volume claims in the tmc-local namespace, that you have to remove manually.
Alternatively, you can delete the tmc-local namespace. When you delete the tmc-local namespace, the persistent volume claims associated with the namespace are deleted. Make sure you have already backed up any data that you don't want to lose.