This reference lists all the variables that you can specify to provide configuration options to the Tanzu CLI.
To set these variables in a YAML configuration file, leave a space between the colon (:) and the variable value. For example:
CLUSTER_NAME: my-cluster
Line order in the configuration file does not matter. Options are presented here in alphabetical order.
This section lists variables that are common to all infrastructure providers. These variables may apply to management clusters, Tanzu Kubernetes clusters, or both. For more information, see Configure Basic Management Cluster Creation Information in Create a Management Cluster Configuration File. For the variables that are specific to workload clusters, see Deploy Tanzu Kubernetes Clusters.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
CLUSTER_CIDR |
✔ | ✔ | Optional, set if you want to override the default value. The CIDR range to use for pods. By default, this range is set to 100.96.0.0/11 . Change the default value only if the recommended range is unavailable. |
CLUSTER_NAME |
✔ | ✔ | This name must comply with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123, and must be 42 characters or less. For workload clusters, this setting is overridden by the CLUSTER_NAME argument passed to tanzu cluster create .For management clusters, if you do not specify CLUSTER_NAME , a unique name is generated. |
CLUSTER_PLAN |
✔ | ✔ | Required. Set to dev , prod , or a custom plan as exemplified in New Plan nginx . The dev plan deploys a cluster with a single control plane node. The prod plan deploys a highly available cluster with three control plane nodes. |
CNI |
✖ | ✔ | Optional, set if you want to override the default value. Do not override the default value for management clusters. Container network interface. By default, CNI is set to antrea . If you want to customize your Antrea configuration, see Antrea CNI Configuration below. For Tanzu Kubernetes clusters, you can set CNI to antrea , calico , or none . Setting none allows you to provide your own CNI. For more information about CNI options, see Deploy a Cluster with a Non-Default CNI. |
ENABLE_AUDIT_LOGGING |
✔ | ✔ | Optional, set if you want to override the default value. Audit logging for the Kubernetes API server. The default value is false . To enable audit logging, set the variable to true . Tanzu Kubernetes Grid writes these logs to /var/log/kubernetes/audit.log . For more information, see Audit Logging. |
ENABLE_AUTOSCALER |
✖ | ✔ | Optional, set if you want to override the default value. The default value is false . If set to true , you must include additional variables. |
ENABLE_CEIP_PARTICIPATION |
✔ | ✖ | Optional, set if you want to override the default value. The default value is true . false opts out of the VMware Customer Experience Improvement Program. You can also opt in or out of the program after deploying the management cluster. For information, see Opt In or Out of the VMware CEIP in Managing Participation in CEIP and Customer Experience Improvement Program (“CEIP”). |
ENABLE_DEFAULT_STORAGE_CLASS |
✖ | ✔ | Optional, set if you want to override the default value. The default value is true . For information about storage classes, see Create Persistent Volumes with Storage Classes. |
ENABLE_MHC |
✔ | ✔ | Optional, set if you want to override the default value. The default value is true . See Machine Health Checks below. |
IDENTITY_MANAGEMENT_TYPE |
✔ | ✔ | Required. Set to either oidc or ldap . Additional OIDC or LDAP settings are required. For more information, see Identity Providers below. Set to none to disable identity management. Enabling identity management is strongly recommended for production deployments. In workload cluster configuration files, replicate the variable setting from the management cluster configuration. |
INFRASTRUCTURE_PROVIDER |
✔ | ✔ | Required. Set to vsphere , aws , or azure . |
NAMESPACE |
✖ | ✔ | Optional, set if you want to override the default value. By default, Tanzu Kubernetes Grid deploys Tanzu Kubernetes clusters to the default namespace. |
SERVICE_CIDR |
✔ | ✔ | Optional, set if you want to override the default value. The CIDR range to use for the Kubernetes services. By default, this range is set to 100.64.0.0/13 . Change this value only if the recommended range is unavailable. |
TMC_REGISTRATION_URL |
✔ | ✖ | Optional. Set if you want to register your management cluster with Tanzu Mission Control. For more information, see Register Your Management Cluster with Tanzu Mission Control. |
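For orientation, the following is a minimal, hypothetical sketch of these common settings in a management cluster configuration file. All values shown are placeholders, not recommendations; replace them with settings for your environment.

```yaml
# Illustrative values only; adjust for your environment.
CLUSTER_NAME: my-mgmt-cluster        # optional for management clusters; must meet RFC 952/1123 and be 42 characters or less
CLUSTER_PLAN: prod                   # dev = one control plane node, prod = three
INFRASTRUCTURE_PROVIDER: vsphere     # or aws, azure
CNI: antrea                          # default; do not override for management clusters
IDENTITY_MANAGEMENT_TYPE: oidc       # or ldap; none disables identity management
CLUSTER_CIDR: 100.96.0.0/11          # default pod CIDR
SERVICE_CIDR: 100.64.0.0/13          # default service CIDR
ENABLE_AUDIT_LOGGING: true           # writes audit logs to /var/log/kubernetes/audit.log
```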
If you set IDENTITY_MANAGEMENT_TYPE: oidc , set the following variables to configure an OIDC identity provider. For more information, see Configure Identity Management in Create a Management Cluster Configuration File.
Tanzu Kubernetes Grid integrates with OIDC using Pinniped, as described in Enabling Identity Management in Tanzu Kubernetes Grid.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
IDENTITY_MANAGEMENT_TYPE |
✔ | ✖ | Enter oidc . |
CERT_DURATION |
✔ | ✖ | Optional. Default 2160h . Set this variable if you configure Pinniped and Dex to use self-signed certificates managed by cert-manager . |
CERT_RENEW_BEFORE |
✔ | ✖ | Optional. Default 360h . Set this variable if you configure Pinniped and Dex to use self-signed certificates managed by cert-manager . |
OIDC_IDENTITY_PROVIDER_CLIENT_ID |
✔ | ✖ | Required. The client_id value that you obtain from your OIDC provider. For example, if your provider is Okta, log in to Okta, create a Web application, and select the Client Credentials option to get a client_id and secret . |
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET |
✔ | ✖ | Required. The Base64 secret value that you obtain from your OIDC provider. |
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM |
✔ | ✖ | Required. The name of your groups claim. This is used to set a user’s group in the JSON Web Token (JWT) claim. The default value is groups . |
OIDC_IDENTITY_PROVIDER_ISSUER_URL |
✔ | ✖ | Required. The IP or DNS address of your OIDC server. |
OIDC_IDENTITY_PROVIDER_SCOPES |
✔ | ✖ | Required. A comma-separated list of additional scopes to request in the token response. For example, "email,offline_access" . |
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM |
✔ | ✖ | Required. The name of your username claim. This is used to set a user’s username in the JWT claim. Depending on your provider, enter claims such as user_name , email , or code . |
SUPERVISOR_ISSUER_URL |
✔ | ✖ | Do not modify. This variable is automatically updated in the configuration file when you run the tanzu cluster create command. |
SUPERVISOR_ISSUER_CA_BUNDLE_DATA_B64 |
✔ | ✖ | Do not modify. This variable is automatically updated in the configuration file when you run the tanzu cluster create command. |
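To illustrate how these OIDC variables fit together, here is a hypothetical sketch for a management cluster that uses Okta; the issuer URL, client ID, and secret are placeholders that you obtain from your own provider.

```yaml
IDENTITY_MANAGEMENT_TYPE: oidc
OIDC_IDENTITY_PROVIDER_ISSUER_URL: https://dev-000000.okta.com   # placeholder issuer address
OIDC_IDENTITY_PROVIDER_CLIENT_ID: 0oa0example                    # client_id from your provider
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: BASE64-ENCODED-SECRET      # placeholder
OIDC_IDENTITY_PROVIDER_SCOPES: "email,offline_access"
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email
# SUPERVISOR_ISSUER_URL and SUPERVISOR_ISSUER_CA_BUNDLE_DATA_B64 are populated automatically; do not set them.
```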
If you set IDENTITY_MANAGEMENT_TYPE: ldap , set the following variables to configure an LDAP identity provider. For more information, see Enabling Identity Management in Tanzu Kubernetes Grid and Configure Identity Management in Create a Management Cluster Configuration File.
Tanzu Kubernetes Grid integrates with LDAP using Pinniped, as described in Enabling Identity Management in Tanzu Kubernetes Grid.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
LDAP_BIND_DN |
✔ | ✖ | Optional. The DN for an application service account. The connector uses these credentials to search for users and groups. Not required if the LDAP server provides access for anonymous authentication. |
LDAP_BIND_PASSWORD |
✔ | ✖ | Optional. The password for an application service account, if LDAP_BIND_DN is set. |
LDAP_GROUP_SEARCH_BASE_DN |
✔ | ✖ | Optional. The point from which to start the LDAP search. For example, OU=Groups,OU=domain,DC=io . |
LDAP_GROUP_SEARCH_FILTER |
✔ | ✖ | Optional. The filter to use for the LDAP group search. |
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE |
✔ | ✖ | Optional. The attribute of the group record that holds the user/member information. For example, member . |
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE |
✔ | ✖ | Optional. The LDAP attribute that holds the name of the group. For example, cn . |
LDAP_GROUP_SEARCH_USER_ATTRIBUTE |
✔ | ✖ | Optional. The attribute of the user record that is used as the value of the membership attribute of the group record. For example, DN , distinguishedName . The DN setting is case-sensitive and should always be capitalized; see Authentication Through LDAP > Configuration in the dex documentation. |
LDAP_HOST |
✔ | ✖ | Required. The IP or DNS address of your LDAP server. If the LDAP server is listening on the default port 636, which is the secured configuration, you do not need to specify the port. If the LDAP server is listening on a different port, provide the address and port of the LDAP server, in the form “host:port” . |
LDAP_ROOT_CA_DATA_B64 |
✔ | ✖ | Optional. If you are using an LDAPS endpoint, paste the base64 encoded contents of the LDAP server certificate. |
LDAP_USER_SEARCH_BASE_DN |
✔ | ✖ | Optional. The point from which to start the LDAP search. For example, OU=Users,OU=domain,DC=io . |
LDAP_USER_SEARCH_EMAIL_ATTRIBUTE |
✔ | ✖ | Optional. The LDAP attribute that holds the email address. For example, email , userPrincipalName . |
LDAP_USER_SEARCH_FILTER |
✔ | ✖ | Optional. The filter to use for the LDAP user search. |
LDAP_USER_SEARCH_ID_ATTRIBUTE |
✔ | ✖ | Optional. The LDAP attribute that contains the user ID. Similar to LDAP_USER_SEARCH_USERNAME . |
LDAP_USER_SEARCH_NAME_ATTRIBUTE |
✔ | ✖ | Optional. The LDAP attribute that holds the given name of the user. For example, givenName . This variable is not exposed in the installer interface. |
LDAP_USER_SEARCH_USERNAME |
✔ | ✖ | Optional. The LDAP attribute that contains the user ID. For example, uid , sAMAccountName . |
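The following is a minimal, hypothetical LDAP sketch for a management cluster that binds with a service account against an LDAPS endpoint; the host, DNs, password, and certificate data are placeholders.

```yaml
IDENTITY_MANAGEMENT_TYPE: ldap
LDAP_HOST: ldaps.example.com:636                     # port can be omitted when using the default 636
LDAP_ROOT_CA_DATA_B64: LS0tLS1CRUdJTi...             # placeholder; base64-encoded LDAPS server certificate
LDAP_BIND_DN: CN=tkg-bind,OU=Users,OU=domain,DC=io   # omit for anonymous authentication
LDAP_BIND_PASSWORD: "example-password"               # placeholder
LDAP_USER_SEARCH_BASE_DN: OU=Users,OU=domain,DC=io
LDAP_USER_SEARCH_USERNAME: uid
LDAP_GROUP_SEARCH_BASE_DN: OU=Groups,OU=domain,DC=io
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
```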
Configure the size and number of control plane and worker nodes, and the operating system that the node instances run. For more information, see Configure Node Settings in Create a Management Cluster Configuration File.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
CONTROL_PLANE_MACHINE_COUNT |
✖ | ✔ | Optional. Deploy a Tanzu Kubernetes cluster with more control plane nodes than the dev and prod plans define by default. The number of control plane nodes that you specify must be odd. |
CONTROLPLANE_SIZE |
✔ | ✔ | Optional. Size for the control plane node VMs. Overrides the VSPHERE_CONTROL_PLANE_* parameters. See SIZE for possible values. |
NODE_STARTUP_TIMEOUT |
✔ | ✔ | Optional, set if you want to override the default value. The default value is 20m . |
OS_ARCH |
✔ | ✔ | Optional. Architecture for node VM OS. Default and only current choice is amd64 . |
OS_NAME |
✔ | ✔ | Optional. Node VM OS. Defaults to ubuntu for Ubuntu LTS. Can also be photon for Photon OS on vSphere or amazon for Amazon Linux on Amazon EC2. |
OS_VERSION |
✔ | ✔ | Optional. Version for OS_NAME OS, above. Defaults to 20.04 for Ubuntu. Can be 3 for Photon on vSphere and 2 for Amazon Linux on Amazon EC2. |
SIZE |
✔ | ✔ | Optional. Size for both control plane and worker node VMs. Overrides the CONTROLPLANE_SIZE and WORKER_SIZE parameters. For vSphere, set small , medium , large , or extra-large . For Amazon EC2, set an instance type, for example, t3.small . For Azure, set an instance type, for example, Standard_D2s_v3 . |
WORKER_MACHINE_COUNT |
✖ | ✔ | Optional. Deploy a Tanzu Kubernetes cluster with more worker nodes than the dev and prod plans define by default. |
WORKER_SIZE |
✔ | ✔ | Optional. Size for the worker node VMs. Overrides the VSPHERE_WORKER_* parameters. See SIZE for possible values. |
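To show how the size and count variables interact, the following is a hypothetical workload cluster sketch for vSphere; on Amazon EC2 or Azure, SIZE , CONTROLPLANE_SIZE , and WORKER_SIZE take instance types such as t3.small or Standard_D2s_v3 instead. The counts and sizes shown are placeholders.

```yaml
SIZE: medium                      # applies to both control plane and worker nodes
# CONTROLPLANE_SIZE: large        # alternatively, size the two node types separately
# WORKER_SIZE: extra-large
CONTROL_PLANE_MACHINE_COUNT: 5    # workload clusters only; must be an odd number
WORKER_MACHINE_COUNT: 10          # workload clusters only
OS_NAME: photon                   # Photon OS on vSphere; default is ubuntu
OS_VERSION: "3"
```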
Additional variables to set if ENABLE_AUTOSCALER is set to true . For information about Cluster Autoscaler, see Scale Tanzu Kubernetes Clusters.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
AUTOSCALER_MAX_NODES_TOTAL |
✖ | ✔ | Maximum total number of nodes in the cluster, worker plus control plane. Cluster Autoscaler does not attempt to scale your cluster beyond this limit. If set to 0 , Cluster Autoscaler makes scaling decisions based on the minimum and maximum values that you configure for each machine deployment. Default 0 . See below. |
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD |
✖ | ✔ | Amount of time that Cluster Autoscaler waits after a scale-up operation and then resumes scale-down scans. Default 10m . |
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE |
✖ | ✔ | Amount of time that Cluster Autoscaler waits after deleting a node and then resumes scale-down scans. Default 10s . |
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE |
✖ | ✔ | Amount of time that Cluster Autoscaler waits after a scale-down failure and then resumes scale-down scans. Default 3m . |
AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME |
✖ | ✔ | Amount of time that Cluster Autoscaler must wait before scaling down an eligible node. Default 10m . |
AUTOSCALER_MAX_NODE_PROVISION_TIME |
✖ | ✔ | Maximum amount of time Cluster Autoscaler waits for a node to be provisioned. Default 15m . |
AUTOSCALER_MIN_SIZE_0 |
✖ | ✔ | Required, all IaaSes. Minimum number of worker nodes. Cluster Autoscaler does not attempt to scale down the nodes below this limit. For prod clusters on Amazon EC2, AUTOSCALER_MIN_SIZE_0 sets the minimum number of worker nodes in the first AZ. If not set, defaults to the value of WORKER_MACHINE_COUNT for clusters with a single machine deployment or WORKER_MACHINE_COUNT_0 for clusters with multiple machine deployments. |
AUTOSCALER_MAX_SIZE_0 |
✖ | ✔ | Required, all IaaSes. Maximum number of worker nodes. Cluster Autoscaler does not attempt to scale up the nodes beyond this limit. For prod clusters on Amazon EC2, AUTOSCALER_MAX_SIZE_0 sets the maximum number of worker nodes in the first AZ. If not set, defaults to the value of WORKER_MACHINE_COUNT for clusters with a single machine deployment or WORKER_MACHINE_COUNT_0 for clusters with multiple machine deployments. |
AUTOSCALER_MIN_SIZE_1 |
✖ | ✔ | Required, use only for prod clusters on Amazon EC2. Minimum number of worker nodes in the second AZ. Cluster Autoscaler does not attempt to scale down the nodes below this limit. If not set, defaults to the value of WORKER_MACHINE_COUNT_1 . |
AUTOSCALER_MAX_SIZE_1 |
✖ | ✔ | Required, use only for prod clusters on Amazon EC2. Maximum number of worker nodes in the second AZ. Cluster Autoscaler does not attempt to scale up the nodes beyond this limit. If not set, defaults to the value of WORKER_MACHINE_COUNT_1 . |
AUTOSCALER_MIN_SIZE_2 |
✖ | ✔ | Required, use only for prod clusters on Amazon EC2. Minimum number of worker nodes in the third AZ. Cluster Autoscaler does not attempt to scale down the nodes below this limit. If not set, defaults to the value of WORKER_MACHINE_COUNT_2 . |
AUTOSCALER_MAX_SIZE_2 |
✖ | ✔ | Required, use only for prod clusters on Amazon EC2. Maximum number of worker nodes in the third AZ. Cluster Autoscaler does not attempt to scale up the nodes beyond this limit. If not set, defaults to the value of WORKER_MACHINE_COUNT_2 . |
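A hypothetical workload cluster sketch that turns on Cluster Autoscaler with explicit bounds follows; the node counts are placeholders, and the _1 and _2 variables apply only to prod clusters on Amazon EC2.

```yaml
ENABLE_AUTOSCALER: true
AUTOSCALER_MIN_SIZE_0: 3                     # never scale below 3 workers (first AZ on EC2 prod)
AUTOSCALER_MAX_SIZE_0: 10                    # never scale above 10 workers
AUTOSCALER_MAX_NODES_TOTAL: 0                # 0 = use the per-machine-deployment min/max values
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: 10m
AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: 10m
AUTOSCALER_MAX_NODE_PROVISION_TIME: 15m
# For prod clusters on Amazon EC2, also bound the second and third AZs:
# AUTOSCALER_MIN_SIZE_1: 3
# AUTOSCALER_MAX_SIZE_1: 10
# AUTOSCALER_MIN_SIZE_2: 3
# AUTOSCALER_MAX_SIZE_2: 10
```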
If your environment includes proxies, you can optionally configure Tanzu Kubernetes Grid to send outgoing HTTP and HTTPS traffic from kubelet , containerd , and the control plane to your proxies. You can enable proxies for the management cluster, for individual Tanzu Kubernetes clusters, or for both.
For more information, see Configure Proxies in Create a Management Cluster Configuration File.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
TKG_HTTP_PROXY |
✔ | ✔ | Optional, set if you want to configure a proxy; to disable your proxy configuration for an individual cluster, set this to "" (an empty string). The URL of your HTTP proxy. The URL must start with http:// . For example, http://proxy.example.com:3128 . |
TKG_HTTPS_PROXY |
✔ | ✔ | Optional, set if you want to configure a proxy. The URL of your HTTPS proxy. You can set this variable to the same value as TKG_HTTP_PROXY or provide a different value. The URL must start with http:// . If you set TKG_HTTPS_PROXY , you must also set TKG_HTTP_PROXY . |
TKG_NO_PROXY |
✔ | ✔ | Optional. One or more comma-separated network CIDRs or hostnames that must bypass the HTTP(S) proxy. For example, noproxy.yourdomain.com,192.168.0.0/24 . Internally, Tanzu Kubernetes Grid appends localhost , 127.0.0.1 , the values of CLUSTER_CIDR and SERVICE_CIDR , .svc , and .svc.cluster.local to the value that you set in TKG_NO_PROXY . Important: If the cluster VMs need to communicate with external services and infrastructure endpoints in your Tanzu Kubernetes Grid environment, ensure that those endpoints are reachable by your proxies or add them to TKG_NO_PROXY . |
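As a sketch, assuming a single corporate proxy at a hypothetical address, the three variables are typically set together like this:

```yaml
TKG_HTTP_PROXY: http://proxy.example.com:3128     # placeholder proxy URL; must start with http://
TKG_HTTPS_PROXY: http://proxy.example.com:3128    # can match TKG_HTTP_PROXY or differ
TKG_NO_PROXY: noproxy.example.com,10.0.0.0/8,192.168.0.0/16   # endpoints that must bypass the proxy
```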
Additional optional variables to set if CNI is set to antrea . For more information, see Configure Antrea CNI in Create a Management Cluster Configuration File.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
ANTREA_NO_SNAT |
✔ | ✔ | Optional. Default false . Set to true to disable Source Network Address Translation (SNAT). |
ANTREA_TRAFFIC_ENCAP_MODE |
✔ | ✔ | Optional. Default “encap” . Set to either noEncap , hybrid , or NetworkPolicyOnly . For information about using NoEncap or Hybrid traffic modes, see NoEncap and Hybrid Traffic Modes of Antrea in the Antrea documentation. |
ANTREA_PROXY |
✔ | ✔ | Optional. Default false . Enables or disables AntreaProxy , to replace kube-proxy for pod-to-ClusterIP Service traffic, for better performance and lower latency. Note that kube-proxy is still used for other types of Service traffic. |
ANTREA_POLICY |
✔ | ✔ | Optional. Default true . Enables or disables the Antrea-native policy API, a set of policy CRDs specific to Antrea. The implementation of Kubernetes Network Policies remains active when this variable is enabled. For information about using network policies, see Antrea Network Policy CRDs in the Antrea documentation. |
ANTREA_TRACEFLOW |
✔ | ✔ | Optional. Default false . Set to true to enable Traceflow. For information about using Traceflow, see the Traceflow User Guide in the Antrea documentation. |
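The Antrea variables are independent toggles; a hypothetical sketch that keeps the default encapsulation but enables AntreaProxy and Traceflow might look like this:

```yaml
CNI: antrea
ANTREA_TRAFFIC_ENCAP_MODE: "encap"   # default; noEncap, hybrid, and NetworkPolicyOnly are the alternatives
ANTREA_NO_SNAT: false
ANTREA_PROXY: true                   # AntreaProxy handles pod-to-ClusterIP Service traffic
ANTREA_POLICY: true                  # Antrea-native policy CRDs (default)
ANTREA_TRACEFLOW: true               # enable Traceflow
```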
If you want to configure machine health checks for management and Tanzu Kubernetes clusters, set the following variables. For more information, see Configure Machine Health Checks in Create a Management Cluster Configuration File. For information about how to perform Machine Health Check operations after cluster deployment, see Configure Machine Health Checks for Tanzu Kubernetes Clusters.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
ENABLE_MHC |
✔ | ✔ | Optional, set if you want to override the default value. The default value is true . This variable enables or disables the MachineHealthCheck controller, which provides node health monitoring and node auto-repair for worker nodes in management and Tanzu Kubernetes clusters. You can also enable or disable MachineHealthCheck after deployment by using the CLI. For instructions, see Configure Machine Health Checks for Tanzu Kubernetes Clusters. |
MHC_UNKNOWN_STATUS_TIMEOUT |
✔ | ✔ | Optional, set if you want to override the default value. The default value is 5m . By default, if the Ready condition of a node remains Unknown for longer than 5m , MachineHealthCheck considers the machine unhealthy and recreates it. |
MHC_FALSE_STATUS_TIMEOUT |
✔ | ✔ | Optional, set if you want to override the default value. The default value is 5m . By default, if the Ready condition of a node remains False for longer than 5m , MachineHealthCheck considers the machine unhealthy and recreates it. |
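For example, a sketch that keeps MachineHealthCheck enabled but lengthens one timeout; the values are illustrative overrides, not recommendations.

```yaml
ENABLE_MHC: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m    # default: recreate a node whose Ready condition stays Unknown for 5m
MHC_FALSE_STATUS_TIMEOUT: 10m     # override: tolerate Ready=False for 10m before recreating the node
```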
If you deploy Tanzu Kubernetes Grid management clusters and Kubernetes clusters in environments that are not connected to the Internet, you need to set up a private image repository within your firewall and populate it with the Tanzu Kubernetes Grid images. For information about setting up a private image repository, see Deploying Tanzu Kubernetes Grid in an Internet-Restricted Environment and Deploy Harbor Registry as a Shared Service.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
TKG_CUSTOM_IMAGE_REPOSITORY |
✔ | ✔ | Required if you deploy Tanzu Kubernetes Grid in an Internet-restricted environment. Provide the IP address or FQDN of your private registry. For example, custom-image-repository.io/yourproject . |
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY |
✔ | ✔ | Optional. Set to true if your private image registry uses a self-signed certificate and you do not use TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE . Because the Tanzu connectivity webhook injects the Harbor CA certificate into cluster nodes, TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY should always be set to false when using Harbor as a shared service. |
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE |
✔ | ✔ | Optional. Set if your private image registry uses a self-signed certificate. Provide the CA certificate in base64 encoded format, for example TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: "LS0t[…]tLS0tLQ==" . |
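Assuming a private registry with a self-signed certificate, the three variables combine as in this sketch; the repository path and certificate string are placeholders.

```yaml
TKG_CUSTOM_IMAGE_REPOSITORY: custom-image-repository.io/yourproject
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false                  # keep false when providing the CA certificate
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: "LS0t[...]tLS0tLQ=="    # placeholder base64-encoded CA certificate
```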
The options in the table below are the minimum options that you specify in the cluster configuration file when deploying Tanzu Kubernetes clusters to vSphere. Most of these options are the same for both the Tanzu Kubernetes cluster and the management cluster that you use to deploy it.
For more information about the configuration files for vSphere, see Management Cluster Configuration for vSphere and Deploy Tanzu Kubernetes Clusters to vSphere.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
DEPLOY_TKG_ON_VSPHERE7 |
✔ | ✔ | Optional. If deploying to vSphere 7, set to true to skip the prompt about deployment on vSphere 7, or false . See Management Clusters on vSphere with Tanzu. |
ENABLE_TKGS_ON_VSPHERE7 |
✔ | ✔ | Optional if deploying to vSphere 7. Set to true to be redirected to the vSphere with Tanzu enablement UI page, or false . See Management Clusters on vSphere with Tanzu. |
VIP_NETWORK_INTERFACE |
✔ | ✔ | Optional. The network interface name, for example, an Ethernet interface such as eth0 or eth1 . Defaults to eth0 . |
VSPHERE_CONTROL_PLANE_DISK_GIB |
✔ | ✔ | Optional. The size in gigabytes of the disk for the control plane node VMs. Include the quotes ("" ). For example, "30" . |
VSPHERE_CONTROL_PLANE_ENDPOINT |
✔ | ✔ | Required. Static virtual IP address for API requests to the Tanzu Kubernetes cluster. If you mapped a fully qualified domain name (FQDN) to the VIP address, you can specify the FQDN instead of the VIP address. |
VSPHERE_CONTROL_PLANE_ENDPOINT_PORT |
✔ | ✖ | Optional, set if you want to override the Kubernetes API server port for deployments on vSphere with NSX Advanced Load Balancer. The default port is 6443 . |
VSPHERE_CONTROL_PLANE_MEM_MIB |
✔ | ✔ | Optional. The amount of memory in megabytes for the control plane node VMs. Include the quotes ("" ). For example, "2048" . |
VSPHERE_CONTROL_PLANE_NUM_CPUS |
✔ | ✔ | Optional. The number of CPUs for the control plane node VMs. Include the quotes ("" ). Must be at least 2. For example, "2" . |
VSPHERE_DATACENTER |
✔ | ✔ | Required. The name of the datacenter in which to deploy the cluster, as it appears in the vSphere inventory. For example, /MY-DATACENTER . |
VSPHERE_DATASTORE |
✔ | ✔ | Required. The name of the vSphere datastore for the cluster to use, as it appears in the vSphere inventory. For example, /MY-DATACENTER/datastore/MyDatastore . |
VSPHERE_FOLDER |
✔ | ✔ | Required. The name of an existing VM folder in which to place Tanzu Kubernetes Grid VMs, as it appears in the vSphere inventory. For example, if you created a folder named TKG , the path is /MY-DATACENTER/vm/TKG . |
VSPHERE_INSECURE |
✔ | ✔ | Optional. Set to true to bypass thumbprint verification or false to enforce it. If false , set VSPHERE_TLS_THUMBPRINT . |
VSPHERE_NETWORK |
✔ | ✔ | Required. The name of an existing vSphere network to use as the Kubernetes service network, as it appears in the vSphere inventory. For example, VM Network . |
VSPHERE_PASSWORD |
✔ | ✔ | Required. The password for the vSphere user account. This value is base64-encoded when you run tanzu cluster create . |
VSPHERE_RESOURCE_POOL |
✔ | ✔ | Required. The name of an existing resource pool in which to place this Tanzu Kubernetes Grid instance, as it appears in the vSphere inventory. To use the root resource pool for a cluster, enter the full path. For example, for a cluster named cluster0 in datacenter MY-DATACENTER , the full path is /MY-DATACENTER/host/cluster0/Resources . |
VSPHERE_SERVER |
✔ | ✔ | Required. The IP address or FQDN of the vCenter Server instance on which to deploy the Tanzu Kubernetes cluster. |
VSPHERE_SSH_AUTHORIZED_KEY |
✔ | ✔ | Required. Paste in the contents of the SSH public key that you created in Deploy a Management Cluster to vSphere. For example, "ssh-rsa NzaC1yc2EA […] hnng2OYYSl+8ZyNz3fmRGX8uPYqw== email@example.com". |
VSPHERE_STORAGE_POLICY_ID |
✔ | ✔ | Optional. The name of a VM storage policy for the management cluster, as it appears in Policies and Profiles > VM Storage Policies. If VSPHERE_DATASTORE is set, the storage policy must include it. Otherwise, the cluster creation process chooses a datastore that is compatible with the policy. |
VSPHERE_TEMPLATE |
✖ | ✔ | Optional. Specify the path to an OVA file if you are using multiple custom OVA images for the same Kubernetes version, in the format /MY-DC/vm/MY-FOLDER-PATH/MY-IMAGE . For more information, see Deploy a Cluster with a Custom OVA Image. |
VSPHERE_TLS_THUMBPRINT |
✔ | ✔ | Required if VSPHERE_INSECURE is false . The thumbprint of the vCenter Server certificate. For information about how to obtain the vCenter Server certificate thumbprint, see Obtain vSphere Certificate Thumbprints. You can skip this value if you want to use an insecure connection by setting VSPHERE_INSECURE to true . |
VSPHERE_USERNAME |
✔ | ✔ | Required. A vSphere user account, including the domain name, with the required privileges for Tanzu Kubernetes Grid operation. For example, tkg-user@vsphere.local . |
VSPHERE_WORKER_DISK_GIB |
✔ | ✔ | Optional. The size in gigabytes of the disk for the worker node VMs. Include the quotes ("" ). For example, "50" . |
VSPHERE_WORKER_MEM_MIB |
✔ | ✔ | Optional. The amount of memory in megabytes for the worker node VMs. Include the quotes ("" ). For example, "4096" . |
VSPHERE_WORKER_NUM_CPUS |
✔ | ✔ | Optional. The number of CPUs for the worker node VMs. Include the quotes ("" ). Must be at least 2. For example, "2" . |
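Bringing the required vSphere variables together, the following is a hypothetical configuration sketch; every address, path, credential, and thumbprint shown is a placeholder for values from your own vSphere inventory.

```yaml
INFRASTRUCTURE_PROVIDER: vsphere
VSPHERE_SERVER: vcenter.example.com
VSPHERE_USERNAME: tkg-user@vsphere.local
VSPHERE_PASSWORD: "example-password"            # base64-encoded when you run tanzu cluster create
VSPHERE_DATACENTER: /MY-DATACENTER
VSPHERE_DATASTORE: /MY-DATACENTER/datastore/MyDatastore
VSPHERE_FOLDER: /MY-DATACENTER/vm/TKG
VSPHERE_RESOURCE_POOL: /MY-DATACENTER/host/cluster0/Resources
VSPHERE_NETWORK: VM Network
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.10.10.10     # placeholder static VIP or mapped FQDN
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAA[...] email@example.com   # placeholder public key
VSPHERE_INSECURE: false
VSPHERE_TLS_THUMBPRINT: "XX:XX:XX:[...]"        # placeholder certificate thumbprint
```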
For information about how to deploy NSX Advanced Load Balancer, see Install VMware NSX Advanced Load Balancer on a vSphere Distributed Switch.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
AVI_ENABLE |
✔ | ✖ | Optional. Set to true or false . Enables NSX Advanced Load Balancer. If true , you must set the required variables listed in NSX Advanced Load Balancer below. Defaults to false . |
AVI_ADMIN_CREDENTIAL_NAME |
✔ | ✖ | Optional. The name of the Kubernetes Secret that contains the NSX Advanced Load Balancer controller admin username and password. Default avi-controller-credentials . |
AVI_AKO_IMAGE_PULL_POLICY |
✔ | ✖ | Optional. Default IfNotPresent . |
AVI_CA_DATA_B64 |
✔ | ✖ | Required. The contents of the Controller Certificate Authority that is used to sign the Controller certificate. It must be base64 encoded. |
AVI_CA_NAME |
✔ | ✖ | Optional. The name of the Kubernetes Secret that holds the NSX Advanced Load Balancer Controller Certificate Authority. Default avi-controller-ca . |
AVI_CLOUD_NAME |
✔ | ✖ | Required. The cloud that you created in your NSX Advanced Load Balancer deployment. For example, Default-Cloud . |
AVI_CONTROLLER |
✔ | ✖ | Required. The IP or hostname of the NSX Advanced Load Balancer controller. |
AVI_DATA_NETWORK |
✔ | ✖ | Required. The name of the Network on which the Load Balancer floating IP subnet or IP Pool is configured. This Network must be present in the same vCenter Server instance as the Kubernetes network that Tanzu Kubernetes Grid uses, that you specify in the SERVICE_CIDR variable. This allows NSX Advanced Load Balancer to discover the Kubernetes network in vCenter Server and to deploy and configure Service Engines. |
AVI_DATA_NETWORK_CIDR |
✔ | ✖ | Required. The CIDR of the subnet to use for the load balancer VIP. This comes from one of the VIP network’s configured subnets. You can see the subnet CIDR for a particular Network in the Infrastructure - Networks view of the NSX Advanced Load Balancer interface. |
AVI_DISABLE_INGRESS_CLASS |
✔ | ✖ | Optional. Disable Ingress Class. Default false . |
AVI_INGRESS_DEFAULT_INGRESS_CONTROLLER |
✔ | ✖ | Optional. Use AKO as the default Ingress Controller. Default false . |
AVI_INGRESS_SERVICE_TYPE |
✔ | ✖ | Optional. Specifies whether AKO functions in ClusterIP mode or NodePort mode. Defaults to NodePort . |
AVI_INGRESS_SHARD_VS_SIZE |
✔ | ✖ | Optional. AKO uses sharding logic for Layer 7 ingress objects. A sharded VS hosts multiple insecure or secure ingresses on one virtual IP (VIP). Set to LARGE , MEDIUM , or SMALL . Default SMALL . Use this to control the Layer 7 VS numbers. This applies to both secure and insecure VSes but does not apply to passthrough. |
AVI_LABELS |
✔ | ✖ | Optional. Labels in the format key: value . When set, NSX Advanced Load Balancer is enabled only on workload clusters that have this label. For example, team: tkg . Caution: Do not set AVI_LABELS in this version of Tanzu Kubernetes Grid. |
AVI_NAMESPACE |
✔ | ✖ | Optional. The namespace for AKO operator. Default “tkg-system-networking” . |
AVI_PASSWORD |
✔ | ✖ | Required. The password that you set for the Controller admin when you deployed it. |
AVI_SERVICE_ENGINE_GROUP |
✔ | ✖ | Required. Name of the Service Engine Group. For example, Default-Group . |
AVI_USERNAME |
✔ | ✖ | Required. The admin username that you set for the Controller host when you deployed it. |
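A hypothetical sketch of the required NSX Advanced Load Balancer settings follows; the controller address, credentials, CA data, and network names are placeholders from an assumed deployment.

```yaml
AVI_ENABLE: true
AVI_CONTROLLER: avi-controller.example.com   # placeholder Controller IP or hostname
AVI_USERNAME: admin
AVI_PASSWORD: "example-password"             # placeholder
AVI_CA_DATA_B64: LS0tLS1CRUdJTi...           # placeholder base64-encoded Controller CA
AVI_CLOUD_NAME: Default-Cloud
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_DATA_NETWORK: VIP-Network                # placeholder VIP network name
AVI_DATA_NETWORK_CIDR: 10.10.20.0/24         # placeholder VIP subnet CIDR
```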
These variables configure workload pods with routable IP addresses, as described in Deploy a Cluster with Routable-IP Pods. All variables are strings in double quotes, for example "true" .
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
NSXT_POD_ROUTING_ENABLED |
✖ | ✔ | Optional. “true” enables NSX-T routable pods with the variables below. Default is “false” . See Deploy a Cluster with Routable-IP Pods. |
NSXT_MANAGER_HOST |
✖ | ✔ | Required if NSXT_POD_ROUTING_ENABLED= “true” . IP address of NSX-T Manager. |
NSXT_ROUTER_PATH |
✖ | ✔ | Required if NSXT_POD_ROUTING_ENABLED= “true” . T1 router path shown in NSX-T Manager. |
For username/password authentication to NSX-T: | |||
NSXT_USERNAME |
✖ | ✔ | Username for logging in to NSX-T Manager. |
NSXT_PASSWORD |
✖ | ✔ | Password for logging in to NSX-T Manager. |
For authenticating to NSX-T using credentials and storing them in a Kubernetes secret (also set NSXT_USERNAME and NSXT_PASSWORD above): | |||
NSXT_SECRET_NAMESPACE |
✖ | ✔ | The namespace with the secret containing NSX-T username and password. Default is “kube-system” . |
NSXT_SECRET_NAME |
✖ | ✔ | The name of the secret containing NSX-T username and password. Default is “cloud-provider-vsphere-nsxt-credentials” . |
For certificate authentication to NSX-T: | |||
NSXT_ALLOW_UNVERIFIED_SSL |
✖ | ✔ | Set this to “true” if NSX-T uses a self-signed certificate. Default is false . |
NSXT_ROOT_CA_DATA_B64 |
✖ | ✔ | Required if NSXT_ALLOW_UNVERIFIED_SSL= “false” . Base64-encoded Certificate Authority root certificate string that NSX-T uses for LDAP authentication. |
NSXT_CLIENT_CERT_KEY_DATA |
✖ | ✔ | Base64-encoded cert key file string for local client certificate. |
NSXT_CLIENT_CERT_DATA |
✖ | ✔ | Base64-encoded cert file string for local client certificate. |
For remote authentication to NSX-T with VMware Identity Manager, on VMware Cloud (VMC): | |||
NSXT_REMOTE_AUTH |
✖ | ✔ | Set this to “true” for remote authentication to NSX-T with VMware Identity Manager, on VMware Cloud (VMC). Default is “false” . |
NSXT_VMC_AUTH_HOST |
✖ | ✔ | VMC authentication host. Default is empty. |
NSXT_VMC_ACCESS_TOKEN |
✖ | ✔ | VMC authentication access token. Default is empty. |
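A hypothetical routable-pods sketch using username/password authentication follows; per the note above, every value is a double-quoted string, and the Manager address and router path are placeholders.

```yaml
NSXT_POD_ROUTING_ENABLED: "true"
NSXT_MANAGER_HOST: "192.168.110.20"              # placeholder NSX-T Manager address
NSXT_ROUTER_PATH: "/infra/tier-1s/t1-example"    # placeholder T1 router path from NSX-T Manager
NSXT_USERNAME: "admin"                           # placeholder
NSXT_PASSWORD: "example-password"                # placeholder
NSXT_ALLOW_UNVERIFIED_SSL: "true"                # only if NSX-T uses a self-signed certificate
```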
The variables in the table below are the options that you specify in the cluster configuration file when deploying Tanzu Kubernetes clusters to Amazon EC2. Many of these options are the same for both the Tanzu Kubernetes cluster and the management cluster that you use to deploy it.
For more information about the configuration files for Amazon EC2, see Management Cluster Configuration for Amazon EC2 and Deploy Tanzu Kubernetes Clusters to Amazon EC2.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
AWS_ACCESS_KEY_ID |
✔ | ✔ | Required. The access key ID for your AWS account. Alternatively, you can specify account credentials as local environment variables or in your AWS default credential provider chain. |
AWS_NODE_AZ |
✔ | ✔ | Required. The name of the AWS availability zone in your chosen region that you want to use as the availability zone for this management cluster. Availability zone names are the same as the AWS region name, with a single lower-case letter suffix, such as a , b , c . For example, us-west-2a . To deploy a prod management cluster with three control plane nodes, you must also set AWS_NODE_AZ_1 and AWS_NODE_AZ_2 . The letter suffix in each of these availability zones must be unique. For example, us-west-2a , us-west-2b , and us-west-2c . |
AWS_NODE_AZ_1 and AWS_NODE_AZ_2 |
✔ | ✔ | Optional. Set these variables if you want to deploy a prod management cluster with three control plane nodes. Both availability zones must be in the same region as AWS_NODE_AZ . See AWS_NODE_AZ above for more information. For example, us-west-2a , ap-northeast-2b , etc. |
AWS_PRIVATE_NODE_CIDR |
✔ | ✔ | Optional. Set this variable if you set AWS_VPC_CIDR . If the recommended range of 10.0.0.0/24 is not available, enter a different IP range in CIDR format for private nodes to use. When Tanzu Kubernetes Grid deploys your management cluster, it creates this subnetwork in AWS_NODE_AZ . To deploy a prod management cluster with three control plane nodes, you must also set AWS_PRIVATE_NODE_CIDR_1 and AWS_PRIVATE_NODE_CIDR_2 . For example, 10.0.0.0/24 |
AWS_PRIVATE_NODE_CIDR_1 |
✔ | ✔ | Optional. If the recommended range of 10.0.2.0/24 is not available, enter a different IP range in CIDR format. When Tanzu Kubernetes Grid deploys your management cluster, it creates this subnetwork in AWS_NODE_AZ_1 . See AWS_PRIVATE_NODE_CIDR above for more information. |
AWS_PRIVATE_NODE_CIDR_2 |
✔ | ✔ | Optional. If the recommended range of 10.0.4.0/24 is not available, enter a different IP range in CIDR format. When Tanzu Kubernetes Grid deploys your management cluster, it creates this subnetwork in AWS_NODE_AZ_2 . See AWS_PRIVATE_NODE_CIDR above for more information. |
AWS_PUBLIC_NODE_CIDR |
✔ | ✔ | Optional. Set this variable if you set AWS_VPC_CIDR . If the recommended range of 10.0.1.0/24 is not available, enter a different IP range in CIDR format for public nodes to use. When Tanzu Kubernetes Grid deploys your management cluster, it creates this subnetwork in AWS_NODE_AZ . To deploy a prod management cluster with three control plane nodes, you must also set AWS_PUBLIC_NODE_CIDR_1 and AWS_PUBLIC_NODE_CIDR_2 . |
AWS_PUBLIC_NODE_CIDR_1 |
✔ | ✔ | Optional. If the recommended range of 10.0.3.0/24 is not available, enter a different IP range in CIDR format. When Tanzu Kubernetes Grid deploys your management cluster, it creates this subnetwork in AWS_NODE_AZ_1 . See AWS_PUBLIC_NODE_CIDR above for more information. |
AWS_PUBLIC_NODE_CIDR_2 |
✔ | ✔ | Optional. If the recommended range of 10.0.5.0/24 is not available, enter a different IP range in CIDR format. When Tanzu Kubernetes Grid deploys your management cluster, it creates this subnetwork in AWS_NODE_AZ_2 . See AWS_PUBLIC_NODE_CIDR above for more information. |
AWS_PRIVATE_SUBNET_ID |
✔ | ✔ | Optional. If you set AWS_VPC_ID to use an existing VPC, enter the ID of a private subnet that already exists in AWS_NODE_AZ . This setting is optional. If you do not set it, tanzu management-cluster create identifies the private subnet automatically. To deploy a prod management cluster with three control plane nodes, you must also set AWS_PRIVATE_SUBNET_ID_1 and AWS_PRIVATE_SUBNET_ID_2 . |
AWS_PRIVATE_SUBNET_ID_1 |
✔ | ✔ | Optional. The ID of a private subnet that exists in AWS_NODE_AZ_1 . If you do not set this variable, tanzu management-cluster create identifies the private subnet automatically. See AWS_PRIVATE_SUBNET_ID above for more information. |
AWS_PRIVATE_SUBNET_ID_2 |
✔ | ✔ | Optional. The ID of a private subnet that exists in AWS_NODE_AZ_2 . If you do not set this variable, tanzu management-cluster create identifies the private subnet automatically. See AWS_PRIVATE_SUBNET_ID above for more information. |
AWS_PUBLIC_SUBNET_ID |
✔ | ✔ | Optional. If you set AWS_VPC_ID to use an existing VPC, enter the ID of a public subnet that already exists in AWS_NODE_AZ . This setting is optional. If you do not set it, tanzu management-cluster create identifies the public subnet automatically. To deploy a prod management cluster with three control plane nodes, you must also set AWS_PUBLIC_SUBNET_ID_1 and AWS_PUBLIC_SUBNET_ID_2 . |
AWS_PUBLIC_SUBNET_ID_1 |
✔ | ✔ | Optional. The ID of a public subnet that exists in AWS_NODE_AZ_1 . If you do not set this variable, tanzu management-cluster create identifies the public subnet automatically. See AWS_PUBLIC_SUBNET_ID above for more information. |
AWS_PUBLIC_SUBNET_ID_2 |
✔ | ✔ | Optional. The ID of a public subnet that exists in AWS_NODE_AZ_2 . If you do not set this variable, tanzu management-cluster create identifies the public subnet automatically. See AWS_PUBLIC_SUBNET_ID above for more information. |
AWS_REGION |
✔ | ✔ | Required. The name of the AWS region in which to deploy the cluster. For example, us-west-2 or ap-northeast-2 . You can also specify the us-gov-east and us-gov-west regions in AWS GovCloud. If you have already set a different region as an environment variable, for example, in Deploy Management Clusters to Amazon EC2, you must unset that environment variable. |
AWS_SECRET_ACCESS_KEY |
✔ | ✔ | Required. The secret access key for your AWS account. Alternatively, you can specify account credentials as an environment variable with the same name or in your AWS default credential provider chain. |
AWS_SESSION_TOKEN |
✔ | ✔ | Optional. Provide the AWS session token granted to your account if you are required to use a temporary access key. For more information about using temporary access keys, see Understanding and getting your AWS credentials. Alternatively, you can specify account credentials as local environment variables or in your AWS default credential provider chain. |
AWS_SSH_KEY_NAME |
✔ | ✔ | Required. The name of the SSH private key that you registered with your AWS account. |
AWS_VPC_ID |
✔ | ✔ | Optional. To use a VPC that already exists in your selected AWS region, enter the ID of the VPC and then set AWS_PUBLIC_SUBNET_ID and AWS_PRIVATE_SUBNET_ID . Set either AWS_VPC_ID or AWS_VPC_CIDR , but not both. |
AWS_VPC_CIDR |
✔ | ✔ | Optional. The default value is 10.0.0.0/16 . If you want Tanzu Kubernetes Grid to create a new VPC in the selected region, set the AWS_VPC_CIDR , AWS_PUBLIC_NODE_CIDR , and AWS_PRIVATE_NODE_CIDR variables. If the recommended range of 10.0.0.0/16 is not available, enter a different IP range in CIDR format in AWS_VPC_CIDR for the management cluster to use. Set either AWS_VPC_CIDR or AWS_VPC_ID , but not both. |
BASTION_HOST_ENABLED |
✔ | ✔ | Optional. By default this option is set to "true" in the global Tanzu Kubernetes Grid configuration. Specify "true" to deploy an AWS bastion host or "false" to reuse an existing bastion host. If no bastion host exists in your availability zone(s) and you set AWS_VPC_ID to use an existing VPC, set BASTION_HOST_ENABLED to "true" . |
CONTROL_PLANE_MACHINE_TYPE |
✔ | ✔ | Required if cloud-agnostic SIZE or CONTROLPLANE_SIZE are not set. The Amazon EC2 instance type to use for cluster control plane nodes, for example t3.small or m5.large . |
NODE_MACHINE_TYPE |
✔ | ✔ | Required if cloud-agnostic SIZE or WORKER_SIZE are not set. The Amazon EC2 instance type to use for cluster worker nodes, for example t3.small or m5.large . |
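For Amazon EC2, a hypothetical sketch that lets Tanzu Kubernetes Grid create a new VPC follows; the credentials, key name, and instance types are placeholders, and the credentials can instead come from environment variables or the AWS default credential provider chain.

```yaml
INFRASTRUCTURE_PROVIDER: aws
AWS_REGION: us-west-2
AWS_NODE_AZ: us-west-2a
AWS_ACCESS_KEY_ID: AKIAEXAMPLEKEY         # placeholder
AWS_SECRET_ACCESS_KEY: "example-secret"   # placeholder
AWS_SSH_KEY_NAME: my-aws-key              # placeholder registered key name
AWS_VPC_CIDR: 10.0.0.0/16                 # create a new VPC; set AWS_VPC_ID instead to reuse one
AWS_PRIVATE_NODE_CIDR: 10.0.0.0/24
AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24
BASTION_HOST_ENABLED: "true"
CONTROL_PLANE_MACHINE_TYPE: t3.large      # placeholder instance type
NODE_MACHINE_TYPE: m5.large               # placeholder instance type
```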
The variables in the table below are the options that you specify in the cluster configuration file when deploying Tanzu Kubernetes clusters to Azure. Many of these options are the same for both the Tanzu Kubernetes cluster and the management cluster that you use to deploy it.
For more information about the configuration files for Azure, see Management Cluster Configuration for Azure and Deploy Tanzu Kubernetes Clusters to Azure.
Variable | Management cluster YAML | Tanzu Kubernetes cluster YAML | Description
---|---|---|---
AZURE_CLIENT_ID |
✔ | ✔ | Required. The client ID of the app for Tanzu Kubernetes Grid that you registered with Azure. |
AZURE_CLIENT_SECRET |
✔ | ✔ | Required. Your Azure client secret from Register a Tanzu Kubernetes Grid App on Azure. |
AZURE_CUSTOM_TAGS |
✔ | ✔ | Optional. Comma-separated list of tags to apply to Azure resources created for the cluster. A tag is a key-value pair, for example, “foo=bar, plan=prod” . For more information about tagging Azure resources, see Use tags to organize your Azure resources and management hierarchy and Tag support for Azure resources in the Microsoft Azure documentation. |
AZURE_ENVIRONMENT |
✔ | ✔ | Optional, set if you want to override the default value. The default value is AzurePublicCloud . Supported clouds are AzurePublicCloud , AzureChinaCloud , AzureGermanCloud , AzureUSGovernmentCloud . |
AZURE_LOCATION |
✔ | ✔ | Required. The name of the Azure region in which to deploy the cluster. For example, eastus . |
AZURE_RESOURCE_GROUP |
✔ | ✔ | Optional. The name of the Azure resource group that you want to use for the cluster. Defaults to the CLUSTER_NAME . Must be unique to each cluster. AZURE_RESOURCE_GROUP and AZURE_VNET_RESOURCE_GROUP are the same by default. |
AZURE_SSH_PUBLIC_KEY_B64 |
✔ | ✔ | Required. Your SSH public key, created in Deploy a Management Cluster to Microsoft Azure, converted into base64 with newlines removed. For example, c3NoLXJzYSBB […] vdGFsLmlv . |
AZURE_SUBSCRIPTION_ID |
✔ | ✔ | Required. The subscription ID of your Azure subscription. |
AZURE_TENANT_ID |
✔ | ✔ | Required. The tenant ID of your Azure account. |
Networking | |||
AZURE_ENABLE_ACCELERATED_NETWORKING |
✔ | ✔ | Reserved for future use. Set to true to enable Azure accelerated networking on VMs based on compatible Azure Tanzu Kubernetes release (TKr) images. Currently, Azure TKr do not support Azure accelerated networking. |
AZURE_ENABLE_PRIVATE_CLUSTER |
✔ | ✔ | Optional. Set this to true to configure the cluster as private and use an Azure Internal Load Balancer (ILB) for its incoming traffic. For more information, see Azure Private Clusters. |
AZURE_FRONTEND_PRIVATE_IP |
✔ | ✔ | Optional. Set this if AZURE_ENABLE_PRIVATE_CLUSTER is true and you want to override the default internal load balancer address of 10.0.0.100 . |
AZURE_VNET_CIDR |
✔ | ✔ | Optional, set if you want to deploy the cluster to a new VNET and subnets and override the default values. By default, AZURE_VNET_CIDR is set to 10.0.0.0/16 , AZURE_CONTROL_PLANE_SUBNET_CIDR to 10.0.0.0/24 , and AZURE_NODE_SUBNET_CIDR to 10.0.1.0/24 . |
AZURE_CONTROL_PLANE_SUBNET_CIDR |
|||
AZURE_NODE_SUBNET_CIDR |
|||
AZURE_VNET_NAME |
✔ | ✔ | Optional, set if you want to deploy the cluster to an existing VNET and subnets or assign names to a new VNET and subnets. |
AZURE_CONTROL_PLANE_SUBNET_NAME |
|||
AZURE_NODE_SUBNET_NAME |
|||
AZURE_VNET_RESOURCE_GROUP |
✔ | ✔ | Optional, set if you want to override the default value. The default value is set to the value of AZURE_RESOURCE_GROUP . |
Control Plane VMs | |||
AZURE_CONTROL_PLANE_DATA_DISK_SIZE_GIB |
✔ | ✔ | Optional. Size of data disk and OS disk, as described in Azure documentation Disk roles, for control plane VMs, in GB. Examples: 128 , 256 . Control plane nodes are always provisioned with a data disk. |
AZURE_CONTROL_PLANE_OS_DISK_SIZE_GIB |
|||
AZURE_CONTROL_PLANE_MACHINE_TYPE |
✔ | ✔ | Optional, set if you want to override the default value. An Azure VM size for the control plane node VMs, chosen to fit expected workloads. The default value is Standard_D2s_v3 . The minimum requirement for Azure instance types is 2 CPUs and 8 GB memory. For possible values, see the Tanzu Kubernetes Grid installer interface. |
AZURE_CONTROL_PLANE_OS_DISK_STORAGE_ACCOUNT_TYPE |
✔ | ✔ | Optional. Type of Azure storage account for control plane VM disks. Example: Premium_LRS . |
Worker Node VMs | |||
AZURE_ENABLE_NODE_DATA_DISK |
✔ | ✔ | Optional. Set to true to provision a data disk for each worker node VM, as described in Azure documentation Disk roles. Default: false . |
AZURE_NODE_DATA_DISK_SIZE_GIB |
✔ | ✔ | Optional. Set this variable if AZURE_ENABLE_NODE_DATA_DISK is true . Size of data disk, as described in Azure documentation Disk roles, for worker VMs, in GB. Examples: 128 , 256 . |
AZURE_NODE_OS_DISK_SIZE_GIB |
✔ | ✔ | Optional. Size of OS disk, as described in Azure documentation Disk roles, for worker VMs, in GB. Examples: 128 , 256 . |
AZURE_NODE_MACHINE_TYPE |
✔ | ✔ | Optional, set if you want to override the default value. An Azure VM size for the worker node VMs, chosen to fit expected workloads. The default value is Standard_D2s_v3 . For possible values, see the Tanzu Kubernetes Grid installer interface. |
AZURE_NODE_OS_DISK_STORAGE_ACCOUNT_TYPE |
✔ | ✔ | Optional. Set this variable if AZURE_ENABLE_NODE_DATA_DISK is true . Type of Azure storage account for worker VM disks. Example: Premium_LRS . |
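Finally, a hypothetical Azure sketch that accepts the default VNET ranges and VM sizes follows; the IDs, secret, and key are placeholders from an assumed app registration.

```yaml
INFRASTRUCTURE_PROVIDER: azure
AZURE_LOCATION: eastus
AZURE_TENANT_ID: 00000000-0000-0000-0000-000000000000         # placeholder
AZURE_SUBSCRIPTION_ID: 00000000-0000-0000-0000-000000000000   # placeholder
AZURE_CLIENT_ID: 00000000-0000-0000-0000-000000000000         # placeholder registered app ID
AZURE_CLIENT_SECRET: "example-secret"                         # placeholder
AZURE_SSH_PUBLIC_KEY_B64: c3NoLXJzYSBB[...]                   # placeholder base64 public key, newlines removed
AZURE_CONTROL_PLANE_MACHINE_TYPE: Standard_D2s_v3             # default
AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3                      # default
AZURE_VNET_CIDR: 10.0.0.0/16                                  # defaults shown; omit to accept them
AZURE_CONTROL_PLANE_SUBNET_CIDR: 10.0.0.0/24
AZURE_NODE_SUBNET_CIDR: 10.0.1.0/24
```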