When you deploy Tanzu Kubernetes (workload) clusters to vSphere, you must specify options in the cluster configuration file to connect to vCenter Server and identify the vSphere resources that the cluster will use. You can also specify standard sizes for the control plane and worker node VMs, or configure the CPU, memory, and disk sizes for control plane and worker nodes explicitly. If you use custom image templates, you can identify which template to use to create node VMs.
For the basic process of deploying workload clusters, see Deploy a Workload Cluster: Basic Process.
After you deploy a workload cluster to vSphere, you must configure the IP addresses of its control plane endpoint and its nodes to be static, as described in Configure DHCP Reservations for the Control Plane Endpoint and All Nodes (vSphere Only).
For the full list of options that you must specify when deploying workload clusters to vSphere, see the Tanzu CLI Configuration File Variable Reference.
The template below includes all of the options that are relevant to deploying workload clusters on vSphere. You can copy this template and update it to deploy workload clusters to vSphere.
Mandatory options are uncommented. Optional settings are commented out. Default values are included where applicable.
Except for the options described in the sections below the template, you configure the vSphere-specific variables for workload clusters in the same way as for management clusters. For information about how to configure the variables, see Create a Management Cluster Configuration File and Management Cluster Configuration for vSphere.
#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------
# CLUSTER_NAME:
CLUSTER_PLAN: dev
NAMESPACE: default
# CLUSTER_API_SERVER_PORT: # For deployments without NSX Advanced Load Balancer
CNI: antrea
IDENTITY_MANAGEMENT_TYPE: oidc
#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------
# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:
# VSPHERE_NUM_CPUS: 2
# VSPHERE_DISK_GIB: 40
# VSPHERE_MEM_MIB: 4096
# VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
# VSPHERE_CONTROL_PLANE_DISK_GIB: 40
# VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
# VSPHERE_WORKER_NUM_CPUS: 2
# VSPHERE_WORKER_DISK_GIB: 40
# VSPHERE_WORKER_MEM_MIB: 4096
# CONTROL_PLANE_MACHINE_COUNT: 1
# WORKER_MACHINE_COUNT: 1
# WORKER_MACHINE_COUNT_0:
# WORKER_MACHINE_COUNT_1:
# WORKER_MACHINE_COUNT_2:
#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------
VSPHERE_NETWORK: VM Network
# VSPHERE_TEMPLATE:
VSPHERE_SSH_AUTHORIZED_KEY:
VSPHERE_USERNAME:
VSPHERE_PASSWORD:
VSPHERE_SERVER:
VSPHERE_DATACENTER:
VSPHERE_RESOURCE_POOL:
VSPHERE_DATASTORE:
# VSPHERE_STORAGE_POLICY_ID:
VSPHERE_FOLDER:
VSPHERE_TLS_THUMBPRINT:
VSPHERE_INSECURE: false
# VSPHERE_CONTROL_PLANE_ENDPOINT: # Required for Kube-Vip
# VSPHERE_CONTROL_PLANE_ENDPOINT_PORT: 6443
AVI_CONTROL_PLANE_HA_PROVIDER:
#! ---------------------------------------------------------------------
#! NSX specific configuration for enabling NSX routable pods
#! ---------------------------------------------------------------------
# NSXT_POD_ROUTING_ENABLED: false
# NSXT_ROUTER_PATH: ""
# NSXT_USERNAME: ""
# NSXT_PASSWORD: ""
# NSXT_MANAGER_HOST: ""
# NSXT_ALLOW_UNVERIFIED_SSL: false
# NSXT_REMOTE_AUTH: false
# NSXT_VMC_ACCESS_TOKEN: ""
# NSXT_VMC_AUTH_HOST: ""
# NSXT_CLIENT_CERT_KEY_DATA: ""
# NSXT_CLIENT_CERT_DATA: ""
# NSXT_ROOT_CA_DATA: ""
# NSXT_SECRET_NAME: "cloud-provider-vsphere-nsxt-credentials"
# NSXT_SECRET_NAMESPACE: "kube-system"
#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------
ENABLE_MHC:
ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m
#! ---------------------------------------------------------------------
#! Common configuration
#! ---------------------------------------------------------------------
# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""
# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""
ENABLE_AUDIT_LOGGING: true
ENABLE_DEFAULT_STORAGE_CLASS: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""
#! ---------------------------------------------------------------------
#! Autoscaler configuration
#! ---------------------------------------------------------------------
ENABLE_AUTOSCALER: false
# AUTOSCALER_MAX_NODES_TOTAL: "0"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: "10m"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: "10s"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: "3m"
# AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: "10m"
# AUTOSCALER_MAX_NODE_PROVISION_TIME: "15m"
# AUTOSCALER_MIN_SIZE_0:
# AUTOSCALER_MAX_SIZE_0:
# AUTOSCALER_MIN_SIZE_1:
# AUTOSCALER_MAX_SIZE_1:
# AUTOSCALER_MIN_SIZE_2:
# AUTOSCALER_MAX_SIZE_2:
#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------
# ANTREA_NO_SNAT: false
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_PROXY: false
# ANTREA_POLICY: true
# ANTREA_TRACEFLOW: false
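For reference, the following abbreviated sketch shows how the mandatory connection settings might look after you update the template. Every value here is a placeholder to replace with data from your own environment, and the sketch assumes Kube-Vip rather than NSX Advanced Load Balancer for the control plane endpoint:
CLUSTER_PLAN: dev
NAMESPACE: default
CNI: antrea
IDENTITY_MANAGEMENT_TYPE: oidc
VSPHERE_NETWORK: VM Network
VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAA[...] administrator@example.com"
VSPHERE_USERNAME: tkg-user@vsphere.local
VSPHERE_PASSWORD: "PLACEHOLDER-PASSWORD"
VSPHERE_SERVER: vcenter.example.com
VSPHERE_DATACENTER: /dc0
VSPHERE_RESOURCE_POOL: /dc0/host/cluster0/Resources/tkg
VSPHERE_DATASTORE: /dc0/datastore/datastore1
VSPHERE_FOLDER: /dc0/vm/tkg
VSPHERE_TLS_THUMBPRINT: "AA:BB:[...]"
VSPHERE_INSECURE: false
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.10.10.10
AVI_CONTROL_PLANE_HA_PROVIDER: false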
If you are using a single custom OVA image for each version of Kubernetes to deploy clusters on one operating system, follow Deploy Tanzu Kubernetes Clusters with Different Kubernetes Versions. In that procedure, you import the OVA into vSphere and then specify it for tanzu cluster create with the --tkr option.
If you are using multiple custom OVA images for the same Kubernetes version, then the --tkr value is ambiguous. This happens when OVAs for the same Kubernetes version are built for different operating systems, for example with make build-node-ova-vsphere-ubuntu-1804, make build-node-ova-vsphere-photon-3, and make build-node-ova-vsphere-rhel-7. To resolve this ambiguity, set the VSPHERE_TEMPLATE option to the desired OVA image before you run tanzu cluster create.
If the OVA template image name is unique, set VSPHERE_TEMPLATE to just the image name.
If multiple images share the same name, set VSPHERE_TEMPLATE to the full inventory path of the image in vCenter. This path follows the form /MY-DC/vm/MY-FOLDER-PATH/MY-IMAGE, where:
* MY-DC is the datacenter containing the OVA template image.
* MY-FOLDER-PATH is the path to the image from the datacenter, as shown in the vCenter VMs and Templates view.
* MY-IMAGE is the image name.
For example:
VSPHERE_TEMPLATE: "/TKG_DC/vm/TKG_IMAGES/ubuntu-2004-kube-v1.22.9-vmware.1"
You can determine the image's full vCenter inventory path manually or use the govc CLI:
* Install govc. For installation instructions, see the govmomi repository on GitHub.
* Set environment variables for govc to access your vCenter:
export GOVC_USERNAME=VCENTER-USERNAME
export GOVC_PASSWORD=VCENTER-PASSWORD
export GOVC_URL=VCENTER-URL
export GOVC_INSECURE=1
* Run govc find / -type m and find the image name in the output, which lists objects by their complete inventory paths.
For more information about custom OVA images, see Build Machine Images.
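For example, assuming the govc environment variables above are set, you can narrow the listing from the final step to templates whose names match your image; the image name here is a placeholder:
govc find / -type m | grep ubuntu-2004-kube-v1.22.9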
After you add a vSphere with Tanzu Supervisor Cluster to the Tanzu CLI, as described in Add a vSphere with Tanzu Supervisor Cluster as a Management Cluster, you can deploy Tanzu Kubernetes clusters optimized for vSphere 7 by configuring cluster parameters and creating the cluster as described in the sections below.
Configure the Tanzu Kubernetes clusters that the tanzu CLI calls the Supervisor Cluster to create:
Obtain information about the storage classes that are defined in the Supervisor Cluster.
kubectl get storageclasses
Set variables to define the storage classes, VM classes, service domain, namespace, and other required values with which to create your cluster. For information about all of the configuration parameters that you can set when deploying Tanzu Kubernetes clusters to vSphere with Tanzu, see Configuration Parameters for Provisioning Tanzu Kubernetes Clusters in the vSphere with Tanzu documentation.
The following table lists the required variables:
Required Variables

| Variable | Value Type or Example | Description |
|---|---|---|
| INFRASTRUCTURE_PROVIDER | tkg-service-vsphere | Always tkg-service-vsphere for TanzuKubernetesCluster objects on vSphere with Tanzu. |
| CLUSTER_PLAN | dev, prod, or a custom plan | Sets node counts. |
| CLUSTER_CIDR | CIDR range | The CIDR range to use for pods. The recommended range is 100.96.0.0/11. Change this value only if the recommended range is unavailable. |
| SERVICE_CIDR | CIDR range | The CIDR range to use for the Kubernetes services. The recommended range is 100.64.0.0/13. Change this value only if the recommended range is unavailable. |
| SERVICE_DOMAIN | Domain | For example my.example.com, or cluster.local if there is no DNS. If you are going to assign FQDNs to the nodes, DNS lookup is required. |
| CONTROL_PLANE_VM_CLASS | A standard VM class for vSphere with Tanzu, for example guaranteed-large. See Virtual Machine Class Types for Tanzu Kubernetes Clusters in the vSphere with Tanzu documentation. | VM class for control plane nodes |
| WORKER_VM_CLASS | A standard VM class for vSphere with Tanzu, for example guaranteed-large. | VM class for worker nodes |

Optional Variables

| Variable | Value Type or Example | Description |
|---|---|---|
| CLUSTER_NAME | String | Overridden by the CLUSTER_NAME argument passed to tanzu cluster create. This name must comply with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123, and must be 42 characters or less. |
| NAMESPACE | Namespace; defaults to default | The namespace in which to deploy the cluster. |
| CONTROL_PLANE_MACHINE_COUNT | Integer; must be an odd number. Defaults to 1 for dev and 3 for prod, as set by CLUSTER_PLAN. | Deploy a workload cluster with more control plane nodes than the dev or prod plan default. |
| WORKER_MACHINE_COUNT | Integer. Defaults to 1 for dev and 3 for prod, as set by CLUSTER_PLAN. | Deploy a workload cluster with more worker nodes than the dev or prod plan default. |
| STORAGE_CLASSES | Empty string "" lets clusters use any storage classes in the namespace, or a comma-separated list string of values from kubectl get storageclasses, for example "SC-1,SC-2,SC-3" | Storage classes available for node customization |
| DEFAULT_STORAGE_CLASS | Empty string "" for no default, or a value from the CLI, as above | Default storage class for control plane or workers |
| CONTROL_PLANE_STORAGE_CLASS | Value returned from kubectl get storageclasses | Default storage class for control plane nodes |
| WORKER_STORAGE_CLASS | Value returned from kubectl get storageclasses | Default storage class for worker nodes |
| NODE_POOL_0_NAME | String | Node pool name, labels, and taints. A TanzuKubernetesCluster can only have one node pool. |
| NODE_POOL_0_LABELS | JSON list of strings, for example ["label1", "label2"] | |
| NODE_POOL_0_TAINTS | JSON list of key-value pair strings, for example [{"key1": "value1"}, {"key2": "value2"}] | |
You can set the variables above by doing either of the following:
Include them in the cluster configuration file that you pass to the tanzu CLI --file option. For example (a fuller sketch of these settings appears after this list):
CONTROL_PLANE_VM_CLASS: guaranteed-large
From the command line, set them as local environment variables by running export (on Linux and macOS) or SET (on Windows). For example:
export CONTROL_PLANE_VM_CLASS=guaranteed-large
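For example, a cluster configuration file for vSphere with Tanzu might set the required variables as follows; the service domain, VM classes, and namespace are placeholder values to replace with ones from your own environment:
INFRASTRUCTURE_PROVIDER: tkg-service-vsphere
CLUSTER_PLAN: dev
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
SERVICE_DOMAIN: cluster.local
CONTROL_PLANE_VM_CLASS: guaranteed-large
WORKER_VM_CLASS: guaranteed-large
NAMESPACE: test-gc-e2e-demo-ns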
Note: If you want to configure unique proxy settings for a Tanzu Kubernetes cluster, you can set TKG_HTTP_PROXY, TKG_HTTPS_PROXY, and NO_PROXY as environment variables and then use the Tanzu CLI to create the cluster. These variables take precedence over your existing proxy configuration in vSphere with Tanzu.
Run tanzu cluster create to create the vSphere 7 cluster:
Determine the versioned Tanzu Kubernetes release (TKr) for the cluster:
Obtain the list of TKr that are available in the Supervisor Cluster.
tanzu kubernetes-release get
From the command output, record the desired value listed under NAME, for example v1.22.9---vmware.1-tkg.1.a87f261. The tkr NAME is the same as its VERSION but with + changed to ---.
Determine the namespace for the cluster.
Obtain the list of namespaces.
kubectl get namespaces
From the command output, record the namespace that includes the Supervisor Cluster, for example test-gc-e2e-demo-ns.
Decide on the cluster plan: dev, prod, or a custom plan. Custom plans are based on files in the ~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere directory. See Configure Tanzu Kubernetes Plans and Clusters for details.
Run tanzu cluster create with the namespace and tkr NAME values above to create a Tanzu Kubernetes cluster:
tanzu cluster create my-vsphere7-cluster --tkr=TKR-NAME
The output of this command might show errors related to running machinehealthcheck and accessing the clusterresourceset resources. You can ignore those errors.
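For example, using the namespace and tkr values recorded above as placeholders, and passing the namespace through the NAMESPACE configuration variable set as an environment variable:
export NAMESPACE=test-gc-e2e-demo-ns
tanzu cluster create my-vsphere7-cluster --tkr=v1.22.9---vmware.1-tkg.1.a87f261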
You can specify a region and zone for your workload cluster, to integrate it with region and zone tags configured for vSphere CSI (Container Storage Interface). For clusters that span multiple zones, this lets worker nodes find and use shared storage, even if they run in zones that have no storage pods, for example in a telecommunications Radio Access Network (RAN).
To deploy a workload cluster with region and zone tags that enable shared storage with vSphere CSI:
Create tags on vCenter Server:
* Create the tag categories k8s-region and k8s-zone.
* Follow Create a Tag to create tags within the region and zone categories in the datacenter, as shown in this table:
| Category | Tags |
|---|---|
| k8s-zone | zone-a, zone-b, zone-c |
| k8s-region | region-1 |
Attach the corresponding tags to the clusters and to the datacenter by following Assign or Remove a Tag, as indicated in this table:
| vSphere Objects | Tags |
|---|---|
| datacenter | region-1 |
| cluster1 | zone-a |
| cluster2 | zone-b |
| cluster3 | zone-c |
To enable custom regions and zones for a vSphere workload cluster's CSI driver, set the variables VSPHERE_REGION and VSPHERE_ZONE in the cluster configuration file to the tags above. For example:
VSPHERE_REGION: region-1
VSPHERE_ZONE: zone-a
When the tanzu CLI creates a workload cluster with these variables set, it labels each cluster node with the topology keys failure-domain.beta.kubernetes.io/zone and failure-domain.beta.kubernetes.io/region.
Run tanzu cluster create to create the workload cluster, as described in Deploy Tanzu Kubernetes Clusters.
After you create the cluster, and with the kubectl context set to the cluster, you can check the region and zone labels by doing one of the following:
* Run kubectl get nodes -L failure-domain.beta.kubernetes.io/zone -L failure-domain.beta.kubernetes.io/region and confirm that the output lists the cluster nodes.
* Run kubectl get csinodes -o jsonpath='{range .items[*]}{.metadata.name} {.spec}{"\n"}{end}' and confirm that the region and zone are enabled on vsphere-csi.
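As an illustrative sketch only, not a required part of this procedure, a StorageClass that limits volume provisioning to one of the tagged zones could reference those topology keys with the vSphere CSI provisioner; the class name and zone value here are placeholders:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zone-a-storage   # hypothetical class name
provisioner: csi.vsphere.vmware.com
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - zone-a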
For more information on configuring vSphere CSI, see vSphere CSI Driver - Deployment with Topology.
Tanzu Kubernetes Grid can run workload clusters on multiple infrastructure provider accounts, for example to split cloud usage among different teams or apply different security profiles to production, staging, and development workloads.
To deploy workload clusters to an alternative vSphere account, different from the one used to deploy their management cluster, do the following:
Set the context of kubectl to your management cluster:
kubectl config use-context MY-MGMT-CLUSTER@MY-MGMT-CLUSTER
Where MY-MGMT-CLUSTER is the name of your management cluster.
Create a secret.yaml file with the following contents:
apiVersion: v1
kind: Secret
metadata:
name: SECRET-NAME
namespace: CAPV-MANAGER-NAMESPACE
stringData:
username: VSPHERE-USERNAME
password: VSPHERE-PASSWORD
Where:
* SECRET-NAME is a name that you give to the client secret.
* CAPV-MANAGER-NAMESPACE is the namespace where the capv-manager pod is running. Default: capv-system.
* VSPHERE-USERNAME and VSPHERE-PASSWORD are login credentials that enable access to the alternative vSphere account.
Use the file to create the Secret object:
kubectl apply -f secret.yaml
Create an identity.yaml file with the following contents:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereClusterIdentity
metadata:
name: EXAMPLE-IDENTITY
spec:
secretName: SECRET-NAME
allowedNamespaces:
selector:
matchLabels: {}
Where:
* EXAMPLE-IDENTITY is the name to use for the VSphereClusterIdentity object.
* SECRET-NAME is the name that you gave to the client secret, above.
Use the file to create the VSphereClusterIdentity object:
kubectl apply -f identity.yaml
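Optionally, you can confirm that the identity object exists; this check assumes the VSphereClusterIdentity custom resource definition that Cluster API Provider vSphere installs in the management cluster:
kubectl get vsphereclusteridentity EXAMPLE-IDENTITY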
The management cluster can now deploy workload clusters to the alternative account. To deploy a workload cluster to the account:
Create a cluster manifest by running tanzu cluster create --dry-run as described in Create Tanzu Kubernetes Cluster Manifest Files.
Edit the VSphereCluster definition in the manifest to set the spec.identityRef.name value for its VSphereClusterIdentity object to the EXAMPLE-IDENTITY you created above:
...
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereCluster
metadata:
name: new-workload-cluster
spec:
identityRef:
kind: VSphereClusterIdentity
name: EXAMPLE-IDENTITY
...
Run kubectl apply -f my-cluster-manifest.yaml to create the workload cluster.
After you create the workload cluster, log in to vSphere with the alternative account credentials, and you should see it running.
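As one possible check that reuses the govc setup shown earlier, you can list the new cluster's node VMs using the alternative account's credentials; the variable values and cluster name below are placeholders:
export GOVC_USERNAME=ALT-VSPHERE-USERNAME
export GOVC_PASSWORD=ALT-VSPHERE-PASSWORD
export GOVC_URL=VCENTER-URL
export GOVC_INSECURE=1
govc find / -type m | grep new-workload-cluster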
For more information, see Identity Management in the Cluster API Provider vSphere repository.
To enable a workload cluster to use a datastore cluster instead of a single datastore, set up a storage policy that targets all of the datastores within the datastore cluster, as follows:
Create a tag and associate it with the relevant datastores, specifying Datastore as an associable object type for the tag category.
Follow Create a VM Storage Policy for Tag-Based Placement to create a tag-based storage policy.
In the cluster configuration file:
* Set VSPHERE_STORAGE_POLICY_ID to the name of the storage policy created in the previous step.
* Ensure that VSPHERE_DATASTORE is not set. A VSPHERE_DATASTORE setting would override the storage policy setting.
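For example, assuming a hypothetical storage policy named tkg-datastore-cluster-policy created in the previous step, the relevant configuration file lines would look like this:
VSPHERE_STORAGE_POLICY_ID: "tkg-datastore-cluster-policy"
# VSPHERE_DATASTORE is left unset so that the storage policy controls datastore placement.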
To deploy a workload cluster with Windows worker nodes, create a custom machine image, deploy the cluster, and add any necessary overlays as follows:
Create a Windows machine image by following all the procedures in Windows Custom Machine Images.
Deploy the Windows cluster by following the procedure in any of the sections above, or by running:
tanzu cluster create WINDOWS-CLUSTER --file CLUSTER-CONFIG.yaml -v 9
Where:
* WINDOWS-CLUSTER is the name of the Windows cluster.
* CLUSTER-CONFIG is the name of the configuration file.
If your Windows cluster uses an external identity provider, you will see an error similar to the following:
Reconcile failed: Error (see .status.usefulErrorMessage for details)
pinniped-supervisor pinniped-post-deploy-job - Waiting to complete (1 active, 0 3h failed, 0 succeeded)
Follow the procedure in Add Pinniped Overlay below.
(TKG v1.5.2 and earlier) If your Windows cluster uses NSX Advanced Load Balancer, you will see an error similar to the following:
Reconcile failed: Error (see .status.usefulErrorMessage for details)
avi-system ako-0 - Pending: Unschedulable (message: 0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {os: windows}, that the pod didn't tolerate.)
Follow the procedure in Add AKO Pod Overlay below.
If your Windows cluster uses an external identity provider, add a tolerations setting to the Pinniped secret by following these steps.
Connect the Kubernetes CLI to your management cluster by running:
kubectl config use-context MGMT-CLUSTER-NAME-admin@MGMT-CLUSTER-NAME
Where MGMT-CLUSTER-NAME is the name of the management cluster.
Create a YAML file named pinniped-toleration-overlay.yaml with the following configuration:
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Job", "metadata": {"namespace": "pinniped-supervisor"}})
---
spec:
template:
spec:
#@overlay/match missing_ok=True
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
Update the P_OVERLAY variable with the base64-encoded overlay. This command differs depending on the OS of your CLI:
export P_OVERLAY=`cat pinniped-toleration-overlay.yaml | base64`
export P_OVERLAY=`cat pinniped-toleration-overlay.yaml | base64 -w 0`
Patch the Pinniped add-on secret:
kubectl patch secret CLUSTER-NAME-pinniped-addon -p '{"data": {"overlays.yaml": "'$P_OVERLAY'"}}'
Where CLUSTER-NAME is the name of your Windows workload cluster.
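If you want to confirm that the overlay is now stored in the secret before continuing, one possible check is to decode the overlays.yaml key back to plain text:
kubectl get secret CLUSTER-NAME-pinniped-addon -o jsonpath='{.data.overlays\.yaml}' | base64 --decode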
Wait about 15 seconds, then proceed to the next step.
Connect the Kubernetes CLI to your Windows workload cluster by running:
kubectl config use-context CLUSTER-NAME-admin@CLUSTER-NAME
Ensure the Pinniped app reconciled successfully:
kubectl get app pinniped -n tkg-system
The output is similar to the following:
NAME DESCRIPTION SINCE-DEPLOY AGE
pinniped Reconcile succeeded 32s 3h49m
Ensure the pinniped-post-deploy-job ran successfully:
kubectl get job,po -n pinniped-supervisor
The output is similar to the following:
NAME COMPLETIONS DURATION AGE
job.batch/pinniped-post-deploy-job 1/1 7s 94m
NAME READY STATUS RESTARTS AGE
pod/pinniped-post-deploy-job--1-lnx46 0/1 Completed 0 94m
If you are running TKG v1.5.2 or a prior patch release of v1.5, and your Windows cluster uses NSX Advanced Load Balancer, add an overlay for the necessary Avi Kubernetes Operator (AKO) pod tolerations by following these steps.
The open-source component name for AKO is load-balancer-and-ingress-service.
Connect the Kubernetes CLI to your management cluster by running:
kubectl config use-context MGMT-CLUSTER-NAME-admin@MGMT-CLUSTER-NAME
Where MGMT-CLUSTER-NAME is the name of the management cluster.
Create a YAML file named ako-toleration-overlay.yaml with the following configuration:
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "StatefulSet", "metadata": {"namespace": "avi-system"}})
---
spec:
template:
spec:
#@overlay/match missing_ok=True
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
Update the AKO_OVERLAY variable with the base64-encoded overlay. This command differs depending on the OS of your CLI:
export AKO_OVERLAY=`cat ako-toleration-overlay.yaml | base64`
export AKO_OVERLAY=`cat ako-toleration-overlay.yaml | base64 -w 0`
Patch the AKO add-on secret:
kubectl patch secret CLUSTER-NAME-load-balancer-and-ingress-service-addon -p '{"data": {"overlays.yaml": "'$AKO_OVERLAY'"}}'
Where CLUSTER-NAME is the name of your Windows workload cluster.
Wait about 15 seconds, then proceed to the next step.
Connect the Kubernetes CLI to your Windows workload cluster by running:
kubectl config use-context CLUSTER-NAME-admin@CLUSTER-NAME
Where CLUSTER-NAME is the name of your Windows workload cluster.
Ensure the AKO (load-balancer-and-ingress-service) app reconciled successfully:
kubectl get app load-balancer-and-ingress-service -n tkg-system
The output is similar to the following:
NAME DESCRIPTION SINCE-DEPLOY AGE
load-balancer-and-ingress-service Reconciling 13s 26m
Ensure the AKO pod is running:
kubectl get sts,po -n avi-system
The output is similar to the following:
NAME READY AGE
statefulset.apps/ako 1/1 79s
NAME READY STATUS RESTARTS AGE
pod/ako-0 1/1 Running 0 79s
To understand how to deploy an application on your workload cluster, expose it publicly, and access it online, see Tutorial: Example Application Deployment.
For Tanzu CLI commands and options to perform cluster lifecycle management operations, see Manage Clusters.
Advanced options that are applicable to all infrastructure providers are described in the following topics: