Configuration File Variable Reference

This reference lists all the variables that you can specify to provide cluster configuration options to the Tanzu CLI.

To set these variables in a YAML configuration file, leave a space between the colon (:) and the variable value. For example:

CLUSTER_NAME: my-cluster

Line order in the configuration file does not matter. Options are presented here in alphabetical order. Also, see the Configuration Value Precedence section below.

Common Variables for All Target Platforms

This section lists variables that are common to all target platforms. These variables may apply to management clusters, workload clusters, or both. For more information, see Configure Basic Management Cluster Creation Information in Create a Management Cluster Configuration File.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
APISERVER_EVENT_RATE_LIMIT_CONF_BASE64 Class-based clusters only. Enables and configures an EventRateLimit admission controller to moderate traffic to the Kubernetes API server. Set this property by creating an EventRateLimit configuration file config.yaml as described in the Kubernetes documentation and then converting it into a string value by running base64 -w 0 config.yaml
CAPBK_BOOTSTRAP_TOKEN_TTL Optional; defaults to 15m, 15 minutes. The TTL value of the bootstrap token used by kubeadm during cluster init operations.
CLUSTER_API_SERVER_PORT Optional; defaults to 6443. This variable is ignored if you enable NSX Advanced Load Balancer (vSphere); to override the default Kubernetes API server port for deployments with NSX Advanced Load Balancer, set VSPHERE_CONTROL_PLANE_ENDPOINT_PORT.
CLUSTER_CIDR Optional; defaults to 100.96.0.0/11. The CIDR range to use for pods. Change the default value only if the default range is unavailable.
CLUSTER_NAME This name must comply with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123, and must be 42 characters or less.
For workload clusters, this setting is overridden by the CLUSTER_NAME argument passed to tanzu cluster create.
For management clusters, if you do not specify CLUSTER_NAME, a unique name is generated.
CLUSTER_PLAN Required. Set to dev, prod, or a custom plan as exemplified in New Plan nginx.
The dev plan deploys a cluster with a single control plane node. The prod plan deploys a highly available cluster with three control plane nodes.
CNI Optional; defaults to antrea. Container network interface. Do not override the default value for management clusters. For workload clusters, you can set CNI to antrea, calico, or none. The calico option is not supported on Windows. For more information, see Deploy a Cluster with a Non-Default CNI. To customize your Antrea configuration, see Antrea CNI Configuration below.
CONTROLPLANE_CERTIFICATE_ROTATION_ENABLED Optional; defaults to false. Set to true if you want control plane node VM certificates to auto-renew before they expire. For more information, see Control Plane Node Certificate Auto-Renewal.
CONTROLPLANE_CERTIFICATE_ROTATION_DAYS_BEFORE Optional; defaults to 90. If CONTROLPLANE_CERTIFICATE_ROTATION_ENABLED is true, set to the number of days before the certificate expiration date to automatically renew cluster node certificates. For more information, see Control Plane Node Certificate Auto-Renewal.
ENABLE_AUDIT_LOGGING Optional; defaults to false. Audit logging for the Kubernetes API server. To enable audit logging, set to true. Tanzu Kubernetes Grid writes these logs to /var/log/kubernetes/audit.log. For more information, see Audit Logging.
Enabling Kubernetes auditing can result in very high log volumes. To handle this quantity, VMware recommends using a log forwarder such as Fluent Bit.
ENABLE_AUTOSCALER Optional; defaults to false. If set to true, you must set additional variables in the Cluster Autoscaler table below.
ENABLE_CEIP_PARTICIPATION Optional; defaults to true. Set to false to opt out of the VMware Customer Experience Improvement Program. You can also opt in or out of the program after deploying the management cluster. For information, see Opt In or Out of the VMware CEIP in Manage Participation in CEIP and Customer Experience Improvement Program (“CEIP”).
ENABLE_DEFAULT_STORAGE_CLASS Optional; defaults to true. For information about storage classes, see Dynamic Storage.
ENABLE_MHC etc. Optional; see Machine Health Checks below for the full set of MHC variables and how they work.
For workload clusters created by vSphere with Tanzu, set ENABLE_MHC to false.
IDENTITY_MANAGEMENT_TYPE Optional; defaults to none.

For management clusters: Set to oidc or ldap to enable identity management, with additional OIDC or LDAP settings required as listed in the Identity Providers - OIDC and LDAP tables below.
Omit or set to none to deactivate identity management. It is strongly recommended to enable identity management for production deployments.
Note You do not need to configure OIDC or LDAP settings in the configuration file for workload clusters.

INFRASTRUCTURE_PROVIDER Required. Set to vsphere, aws, azure, or tkg-service-vsphere.
NAMESPACE Optional; defaults to default, to deploy workload clusters to a namespace named default.
SERVICE_CIDR Optional; defaults to 100.64.0.0/13. The CIDR range to use for the Kubernetes services. Change this value only if the default range is unavailable.
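
For example, a minimal set of these common settings in a cluster configuration file might look like the following; the values shown are illustrative only and mostly mirror the defaults described above:

# Illustrative values; adjust for your environment.
CLUSTER_NAME: my-cluster
CLUSTER_PLAN: dev
INFRASTRUCTURE_PROVIDER: vsphere
CNI: antrea
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
ENABLE_AUDIT_LOGGING: true
IDENTITY_MANAGEMENT_TYPE: none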

Identity Providers - OIDC

If you set IDENTITY_MANAGEMENT_TYPE: oidc, set the following variables to configure an OIDC identity provider. For more information, see Configure Identity Management in Create a Management Cluster Configuration File.

Tanzu Kubernetes Grid integrates with OIDC using Pinniped, as described in About Identity and Access Management.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
CERT_DURATION Optional; defaults to 2160h. Set this variable if you configure Pinniped and Dex to use self-signed certificates managed by cert-manager.
CERT_RENEW_BEFORE Optional; defaults to 360h. Set this variable if you configure Pinniped and Dex to use self-signed certificates managed by cert-manager.
OIDC_IDENTITY_PROVIDER_CLIENT_ID Required. The client_id value that you obtain from your OIDC provider. For example, if your provider is Okta, log in to Okta, create a Web application, and select the Client Credentials options in order to get a client_id and secret.
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET Required. The secret value that you obtain from your OIDC provider. Do not base64-encode this value.
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM Optional. The name of your groups claim. This is used to set a user’s group in the JSON Web Token (JWT) claim. The default value is groups.
OIDC_IDENTITY_PROVIDER_ISSUER_URL Required. The IP or DNS address of your OIDC server.
OIDC_IDENTITY_PROVIDER_SCOPES Required. A comma separated list of additional scopes to request in the token response. For example, "email,offline_access".
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM Required. The name of your username claim. This is used to set a user’s username in the JWT claim. Depending on your provider, enter claims such as user_name, email, or code.
SUPERVISOR_ISSUER_URL Do not modify. This variable is automatically updated in the configuration file when you run the tanzu cluster create command.
SUPERVISOR_ISSUER_CA_BUNDLE_DATA_B64 Do not modify. This variable is automatically updated in the configuration file when you run the tanzu cluster create command.
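
For example, a management cluster configuration that uses an OIDC provider such as Okta might include settings like the following; the issuer URL, client ID, and client secret are placeholders that you replace with values from your provider:

# Placeholder values; replace with the settings from your OIDC provider.
IDENTITY_MANAGEMENT_TYPE: oidc
OIDC_IDENTITY_PROVIDER_ISSUER_URL: https://my-org.okta.com
OIDC_IDENTITY_PROVIDER_CLIENT_ID: MY-CLIENT-ID
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: MY-CLIENT-SECRET
OIDC_IDENTITY_PROVIDER_SCOPES: "email,offline_access"
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups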

Identity Providers - LDAP

If you set IDENTITY_MANAGEMENT_TYPE: ldap, set the following variables to configure an LDAP identity provider. For more information, see Configure Identity Management in Create a Management Cluster Configuration File.

Tanzu Kubernetes Grid integrates with LDAP using Pinniped, as described in About Identity and Access Management.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
LDAP_BIND_DN Optional. The DN for an application service account. The connector uses these credentials to search for users and groups. Not required if the LDAP server provides access for anonymous authentication.
For security, a future version of TKG will require this setting.
LDAP_BIND_PASSWORD Optional. The password for an application service account, if LDAP_BIND_DN is set.
For security, a future version of TKG will require this setting.
LDAP_GROUP_SEARCH_BASE_DN Optional. The point from which to start the LDAP search. For example, OU=Groups,OU=domain,DC=io.
LDAP_GROUP_SEARCH_FILTER Optional. The filter to be used by the LDAP search.
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE Optional. The attribute of the group record that holds the user/member information. For example, member.
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE Optional. The LDAP attribute that holds the name of the group. For example, cn.
LDAP_GROUP_SEARCH_USER_ATTRIBUTE Optional. The attribute of the user record that is used as the value of the membership attribute of the group record. For example, DN, distinguishedName.
The DN setting is case-sensitive and should always be capitalized; see Authentication Through LDAP > Configuration in the Dex documentation.
LDAP_HOST Required. The IP or DNS address of your LDAP server. If the LDAP server is listening on the default port 636, which is the secured configuration, you do not need to specify the port. If the LDAP server is listening on a different port, provide the address and port of the LDAP server, in the form “host:port”.
LDAP_ROOT_CA_DATA_B64 Optional. If you are using an LDAPS endpoint, paste the base64 encoded contents of the LDAP server certificate.
LDAP_USER_SEARCH_BASE_DN Optional. The point from which to start the LDAP search. For example, OU=Users,OU=domain,DC=io.
LDAP_USER_SEARCH_EMAIL_ATTRIBUTE Optional. The LDAP attribute that holds the email address. For example, email, userPrincipalName.
LDAP_USER_SEARCH_FILTER Optional. The filter to be used by the LDAP search.
LDAP_USER_SEARCH_ID_ATTRIBUTE Optional. The LDAP attribute that contains the user ID. Similar to LDAP_USER_SEARCH_USERNAME.
LDAP_USER_SEARCH_NAME_ATTRIBUTE Optional. The LDAP attribute that holds the given name of the user. For example, givenName. This variable is not exposed in the installer interface.
LDAP_USER_SEARCH_USERNAME Optional. The LDAP attribute that contains the user ID. For example, uid, sAMAccountName.
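
For example, an LDAP configuration against an Active Directory server might look like the following; the host, bind DN, password, and search base DNs are placeholders, and the attribute values come from the examples in the table above:

# Placeholder values; replace with the settings for your LDAP server.
IDENTITY_MANAGEMENT_TYPE: ldap
LDAP_HOST: ldaps.example.com:636
LDAP_BIND_DN: CN=bind-user,OU=Users,OU=domain,DC=io
LDAP_BIND_PASSWORD: MY-BIND-PASSWORD
LDAP_USER_SEARCH_BASE_DN: OU=Users,OU=domain,DC=io
LDAP_USER_SEARCH_USERNAME: sAMAccountName
LDAP_GROUP_SEARCH_BASE_DN: OU=Groups,OU=domain,DC=io
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: member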

Node Configuration

Configure the control plane and worker nodes, and the operating system that the node instances run. For more information, see Configure Node Settings in Create a Management Cluster Configuration File.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
CONTROL_PLANE_MACHINE_COUNT Optional. Deploy a workload cluster with more control plane nodes than the dev and prod plans define by default. The number of control plane nodes that you specify must be odd.
CONTROL_PLANE_NODE_LABELS Class-based clusters only. Assign custom persistent labels to the control plane nodes, for example CONTROL_PLANE_NODE_LABELS: ‘key1=value1,key2=value2’. To configure this in a legacy, plan-based cluster, use a ytt overlay as described in Custom Node Labels.
Worker node labels are set in their node pool as described in Manage Node Pools of Different VM Types.
CONTROL_PLANE_NODE_SEARCH_DOMAINS Class-based clusters only. Configures the .local search domains for cluster nodes, for example CONTROL_PLANE_NODE_SEARCH_DOMAINS: corp.local. To configure this in a legacy, plan-based cluster on vSphere, use a ytt overlay as described in Resolve .local Domain.
CONTROLPLANE_SIZE Optional. Size for control plane node VMs. Overrides the SIZE and VSPHERE_CONTROL_PLANE_* parameters. See SIZE for possible values.
OS_ARCH Optional. Architecture for node VM OS. Default and only current choice is amd64.
OS_NAME Optional. Node VM OS. Defaults to ubuntu for Ubuntu LTS. Can also be photon for Photon OS on vSphere or amazon for Amazon Linux on AWS.
OS_VERSION Optional. Version for OS_NAME OS, above. Defaults to 20.04 for Ubuntu. Can be 3 for Photon on vSphere and 2 for Amazon Linux on AWS.
SIZE Optional. Size for both control plane and worker node VMs. Overrides the VSPHERE_CONTROL_PLANE_* and VSPHERE_WORKER_* parameters. For vSphere, set small, medium, large, or extra-large as described in Predefined Node Sizes. For AWS, set an instance type, for example, t3.small. For Azure, set an instance type, for example, Standard_D2s_v3.
WORKER_MACHINE_COUNT Optional. Deploy a workload cluster with more worker nodes than the dev and prod plans define by default.
WORKER_NODE_SEARCH_DOMAINS Class-based clusters only. Configures the .local search domains for cluster nodes, for example WORKER_NODE_SEARCH_DOMAINS: corp.local. To configure this in a legacy, plan-based cluster on vSphere, use a ytt overlay as described in Resolve .local Domain.
WORKER_SIZE Optional. Size for worker node VMs. Overrides the SIZE and VSPHERE_WORKER_* parameters. See SIZE for possible values.
CUSTOM_TDNF_REPOSITORY_CERTIFICATE (Technical Preview)

Optional. Set if you use a customized tdnf repository server with a self-signed certificate, rather than the default Photon tdnf repository server. Enter the contents of the base64-encoded certificate of the tdnf repository server.

Setting this variable deletes all repositories under /etc/yum.repos.d/ and configures the TKG node to trust the certificate.
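
For example, the following settings deploy three medium control plane nodes and five large Photon OS worker nodes; the sizes and counts are illustrative and should be adjusted for your environment:

# Illustrative values; adjust sizes and counts for your workload.
OS_NAME: photon
OS_VERSION: 3
CONTROL_PLANE_MACHINE_COUNT: 3
WORKER_MACHINE_COUNT: 5
CONTROLPLANE_SIZE: medium
WORKER_SIZE: large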

Kubernetes Tuning (Class-based clusters only)

The Kubernetes API server, kubelet, and other components expose various configuration flags for tuning, for example, the --tls-cipher-suites flag to configure cipher suites for security hardening, or the --election-timeout flag to increase the etcd timeout in a large cluster.

Important

These Kubernetes configuration variables are for advanced users. VMware does not guarantee the functionality of clusters configured with arbitrary combinations of these settings.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
APISERVER_EXTRA_ARGS Class-based clusters only. Specify kube-apiserver flags in the format “key1=value1;key2=value2”. For example, set cipher suites with APISERVER_EXTRA_ARGS: “tls-min-version=VersionTLS12;tls-cipher-suites=TLS_RSA_WITH_AES_256_GCM_SHA384”
CONTROLPLANE_KUBELET_EXTRA_ARGS Class-based clusters only. Specify control plane kubelet flags in the format “key1=value1;key2=value2”. For example, limit the number of control plane pods with CONTROLPLANE_KUBELET_EXTRA_ARGS: “max-pods=50”
ETCD_EXTRA_ARGS Class-based clusters only. Specify etcd flags in the format “key1=value1;key2=value2”. For example, if the cluster has more than 500 nodes or the storage has bad performance, you can increase heartbeat-interval and election-timeout with ETCD_EXTRA_ARGS: “heartbeat-interval=300;election-timeout=2000”
KUBE_CONTROLLER_MANAGER_EXTRA_ARGS Class-based clusters only. Specify kube-controller-manager flags in the format “key1=value1;key2=value2”. For example, turn off performance profiling with KUBE_CONTROLLER_MANAGER_EXTRA_ARGS: “profiling=false”
KUBE_SCHEDULER_EXTRA_ARGS Class-based clusters only. Specify kube-scheduler flags in the format “key1=value1;key2=value2”. For example, enable Single Pod Access Mode with KUBE_SCHEDULER_EXTRA_ARGS: “feature-gates=ReadWriteOncePod=true”
WORKER_KUBELET_EXTRA_ARGS Class-based clusters only. Specify worker kubelet flags in the format “key1=value1;key2=value2”. For example, limit the number of worker pods with WORKER_KUBELET_EXTRA_ARGS: “max-pods=50”
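
For example, a class-based cluster configuration might harden the API server, cap pod density on worker nodes, and relax etcd timeouts as follows; the flag values are taken from the examples above and should be validated for your own environment:

# Example flag values from the table above; validate before using in production.
APISERVER_EXTRA_ARGS: "tls-min-version=VersionTLS12;tls-cipher-suites=TLS_RSA_WITH_AES_256_GCM_SHA384"
WORKER_KUBELET_EXTRA_ARGS: "max-pods=50"
ETCD_EXTRA_ARGS: "heartbeat-interval=300;election-timeout=2000"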

Cluster Autoscaler

The following additional variables are available to configure if ENABLE_AUTOSCALER is set to true. For information about Cluster Autoscaler, see Scale Workload Clusters.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
AUTOSCALER_MAX_NODES_TOTAL Optional; defaults to 0. Maximum total number of nodes in the cluster, worker plus control plane. Sets the value for the Cluster Autoscaler parameter max-nodes-total. Cluster Autoscaler does not attempt to scale your cluster beyond this limit. If set to 0, Cluster Autoscaler makes scaling decisions based on the minimum and maximum SIZE settings that you configure.
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD Optional; defaults to 10m. Sets the value for the Cluster Autoscaler parameter scale-down-delay-after-add. Amount of time that Cluster Autoscaler waits after a scale-up operation and then resumes scale-down scans.
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE Optional; defaults to 10s. Sets the value for the Cluster Autoscaler parameter scale-down-delay-after-delete. Amount of time that Cluster Autoscaler waits after deleting a node and then resumes scale-down scans.
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE Optional; defaults to 3m. Sets the value for the Cluster Autoscaler parameter scale-down-delay-after-failure. Amount of time that Cluster Autoscaler waits after a scale-down failure and then resumes scale-down scans.
AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME Optional; defaults to 10m. Sets the value for the Cluster Autoscaler parameter scale-down-unneeded-time. Amount of time that Cluster Autoscaler must wait before scaling down an eligible node.
AUTOSCALER_MAX_NODE_PROVISION_TIME Optional; defaults to 15m. Sets the value for the Cluster Autoscaler parameter max-node-provision-time. Maximum amount of time Cluster Autoscaler waits for a node to be provisioned.
AUTOSCALER_MIN_SIZE_0 Required, all IaaSes. Minimum number of worker nodes. Cluster Autoscaler does not attempt to scale down the nodes below this limit. For prod clusters on AWS, AUTOSCALER_MIN_SIZE_0 sets the minimum number of worker nodes in the first AZ. If not set, defaults to the value of WORKER_MACHINE_COUNT for clusters with a single worker node or WORKER_MACHINE_COUNT_0 for clusters with multiple worker nodes.
AUTOSCALER_MAX_SIZE_0 Required, all IaaSes. Maximum number of worker nodes. Cluster Autoscaler does not attempt to scale up the nodes beyond this limit. For prod clusters on AWS, AUTOSCALER_MAX_SIZE_0 sets the maximum number of worker nodes in the first AZ. If not set, defaults to the value of WORKER_MACHINE_COUNT for clusters with a single worker node or WORKER_MACHINE_COUNT_0 for clusters with multiple worker nodes.
AUTOSCALER_MIN_SIZE_1 Required, use only for prod clusters on AWS. Minimum number of worker nodes in the second AZ. Cluster Autoscaler does not attempt to scale down the nodes below this limit. If not set, defaults to the value of WORKER_MACHINE_COUNT_1.
AUTOSCALER_MAX_SIZE_1 Required, use only for prod clusters on AWS. Maximum number of worker nodes in the second AZ. Cluster Autoscaler does not attempt to scale up the nodes beyond this limit. If not set, defaults to the value of WORKER_MACHINE_COUNT_1.
AUTOSCALER_MIN_SIZE_2 Required, use only for prod clusters on AWS. Minimum number of worker nodes in the third AZ. Cluster Autoscaler does not attempt to scale down the nodes below this limit. If not set, defaults to the value of WORKER_MACHINE_COUNT_2.
AUTOSCALER_MAX_SIZE_2 Required, use only for prod clusters on AWS. Maximum number of worker nodes in the third AZ. Cluster Autoscaler does not attempt to scale up the nodes beyond this limit. If not set, defaults to the value of WORKER_MACHINE_COUNT_2.
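
For example, the following settings enable Cluster Autoscaler on a single-AZ cluster and let it scale the worker pool between 3 and 10 nodes; the timing values shown are the defaults listed above:

# Illustrative scaling limits; timing values are the documented defaults.
ENABLE_AUTOSCALER: true
AUTOSCALER_MIN_SIZE_0: 3
AUTOSCALER_MAX_SIZE_0: 10
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: 10m
AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: 10m
AUTOSCALER_MAX_NODE_PROVISION_TIME: 15m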

Proxy Configuration

If your environment is internet-restricted or otherwise includes proxies, you can optionally configure Tanzu Kubernetes Grid to send outgoing HTTP and HTTPS traffic from kubelet, containerd, and the control plane to your proxies.

Tanzu Kubernetes Grid allows you to enable proxies for any of the following:

  • For both the management cluster and one or more workload clusters
  • For the management cluster only
  • For one or more workload clusters

For more information, see Configure Proxies in Create a Management Cluster Configuration File.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
TKG_HTTP_PROXY_ENABLED

Optional. To send outgoing HTTP(S) traffic from the management cluster to a proxy, for example in an internet-restricted environment, set this to true.

TKG_HTTP_PROXY

Optional. Set if you want to configure a proxy; to deactivate your proxy configuration for an individual cluster, set this to "". Only applicable when TKG_HTTP_PROXY_ENABLED = true. The URL of your HTTP proxy, formatted as follows: PROTOCOL://USERNAME:PASSWORD@FQDN-OR-IP:PORT, where:

  • Required. PROTOCOL is http.
  • Optional. USERNAME and PASSWORD are your HTTP proxy username and password. Include these if the proxy requires authentication.
  • Required. FQDN-OR-IP and PORT are the FQDN or IP address and port number of your HTTP proxy.

For example, http://user:[email protected]:1234 or http://myproxy.com:1234. If you set TKG_HTTP_PROXY, you must also set TKG_HTTPS_PROXY.

TKG_HTTPS_PROXY Optional, set if you want to configure a proxy. Only applicable when TKG_HTTP_PROXY_ENABLED = true. The URL of your HTTPS proxy. You can set this variable to the same value as TKG_HTTP_PROXY or provide a different value. The URL must start with http://. If you set TKG_HTTPS_PROXY, you must also set TKG_HTTP_PROXY.
TKG_NO_PROXY

Optional. Only applicable when TKG_HTTP_PROXY_ENABLED = true.

One or more network CIDRs or hostnames that must bypass the HTTP(S) proxy, comma-separated and listed without spaces or wildcard characters (*). List hostname suffixes as .example.com, not *.example.com.

For example, .noproxy.example.com,noproxy.example.com,192.168.0.0/24.

Do not include: You need not include localhost, 127.0.0.1, the values of CLUSTER_CIDR and SERVICE_CIDR, .svc, or .svc.cluster.local. On Azure, you need not include your Azure VNET CIDR, 169.254.0.0/16, or 168.63.129.16. Tanzu Kubernetes Grid automatically appends these values to the value that you set in TKG_NO_PROXY.

Include:

  • For vSphere, include the CIDR of VSPHERE_NETWORK, which includes the IP address of your control plane endpoint.
  • If you set VSPHERE_CONTROL_PLANE_ENDPOINT to an FQDN, include both the FQDN and your VSPHERE_NETWORK setting.
  • If you are using Tanzu Application Platform or Tanzu Build Service, include .svc.cluster.local. with a . character at the end.

Important: In environments where Tanzu Kubernetes Grid runs behind a proxy, TKG_NO_PROXY lets cluster VMs communicate directly with infrastructure that runs on the same network, behind the same proxy. This may include, but is not limited to, your infrastructure, OIDC or LDAP server, Harbor, and VMware NSX and NSX Advanced Load Balancer (vSphere). Set TKG_NO_PROXY to include all such endpoints that clusters must access but that are not reachable by your proxies.

TKG_PROXY_CA_CERT Optional. Set if your proxy server uses a self-signed certificate. Provide the CA certificate in base64 encoded format, for example TKG_PROXY_CA_CERT: "LS0t[…]tLS0tLQ==".
TKG_NODE_SYSTEM_WIDE_PROXY (Technical Preview)

Optional. Puts the following settings into /etc/environment and /etc/profile:

export HTTP_PROXY=$TKG_HTTP_PROXY
export HTTPS_PROXY=$TKG_HTTPS_PROXY
export NO_PROXY=$TKG_NO_PROXY

When users SSH in to the TKG node and run commands, by default, the commands use the defined variables. Systemd processes are not impacted.
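
For example, the following settings route cluster traffic through an authenticated HTTP proxy while exempting internal networks; the proxy address, credentials, and exempted names and CIDRs are placeholders:

# Placeholder proxy settings; replace with your proxy and internal networks.
TKG_HTTP_PROXY_ENABLED: true
TKG_HTTP_PROXY: http://user:[email protected]:1234
TKG_HTTPS_PROXY: http://user:[email protected]:1234
TKG_NO_PROXY: .noproxy.example.com,noproxy.example.com,192.168.0.0/24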

Antrea CNI Configuration

Additional optional variables to set if CNI is set to antrea. For more information, see Configure Antrea CNI in Create a Management Cluster Configuration File.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD Optional; defaults to false. Set to true to deactivate Antrea’s UDP checksum offloading. Setting this variable to true avoids known issues with underlay network and physical NIC network drivers.
ANTREA_EGRESS Optional; defaults to true. Enable this variable to define the SNAT IP that is used for Pod traffic egressing the cluster. For more information, see Egress in the Antrea documentation.
ANTREA_EGRESS_EXCEPT_CIDRS Optional. The CIDR ranges that egresses will not SNAT for outgoing Pod traffic. Include the quotes (""). For example, "10.0.0.0/6".
ANTREA_ENABLE_USAGE_REPORTING Optional; defaults to false. Activates or deactivates usage reporting (telemetry).
ANTREA_FLOWEXPORTER Optional; defaults to false. Set to true for visibility of network flow. Flow exporter polls the conntrack flows periodically and exports the flows as IPFIX flow records. For more information, see Network Flow Visibility in Antrea in the Antrea documentation.
ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT

Optional; defaults to "60s". This variable provides the active flow export timeout, which is the timeout after which a flow record is sent to the collector for active flows. Include the quotes ("").

Note: The valid time units are ns, us (or µs), ms, s, m, h.

ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS Optional. This variable provides the IPFIX collector address. Include the quotes (""). The default value is "flow-aggregator.flow-aggregator.svc:4739:tls". For more information, see Network Flow Visibility in Antrea in the Antrea documentation.
ANTREA_FLOWEXPORTER_IDLE_TIMEOUT

Optional; defaults to "15s". This variable provides the idle flow export timeout, which is the timeout after which a flow record is sent to the collector for idle flows. Include the quotes ("").

Note: The valid time units are ns, us (or µs), ms, s, m, h.

ANTREA_FLOWEXPORTER_POLL_INTERVAL

Optional; defaults to "5s". This variable determines the frequency at which the flow exporter dumps connections from the conntrack module. Include the quotes ("").

Note: The valid time units are ns, us (or µs), ms, s, m, h.

ANTREA_IPAM Optional; defaults to false. Set to true to allocate IP addresses from IPPools. The desired set of IP ranges, optionally with VLANs, are defined with IPPool CRD. For more information, see Antrea IPAM Capabilities in the Antrea documentation.
ANTREA_KUBE_APISERVER_OVERRIDE Optional, Experimental. Specify the address of the workload cluster’s Kubernetes API server, as CONTROL-PLANE-VIP:PORT. This address should be either maintained by kube-vip or a static IP for a control plane node.
Warning: removing kube-proxy and setting ANTREA_KUBE_APISERVER_OVERRIDE so that Antrea alone serves all traffic is Experimental. For more information, see Configure Antrea CNI and Removing kube-proxy in the Antrea documentation.
ANTREA_MULTICAST Optional; defaults to false. Set to true to enable multicast traffic within the cluster network (between pods), and between the external network and the cluster network.
ANTREA_MULTICAST_INTERFACES Optional. The names of the interfaces on nodes that are used to forward multicast traffic. Include the quotes (""). For example, "eth0".
ANTREA_NETWORKPOLICY_STATS Optional; defaults to true. Enable this variable to collect NetworkPolicy statistics. The statistical data includes the total number of sessions, packets, and bytes allowed or denied by a NetworkPolicy.
ANTREA_NO_SNAT Optional; defaults to false. Set to true to deactivate Source Network Address Translation (SNAT).
ANTREA_NODEPORTLOCAL Optional; defaults to true. Set to false to deactivate the NodePortLocal mode. For more information, see NodePortLocal (NPL) in the Antrea documentation.
ANTREA_NODEPORTLOCAL_ENABLED Optional. Set to true to enable NodePortLocal mode, as described in NodePortLocal (NPL) in the Antrea documentation.
ANTREA_NODEPORTLOCAL_PORTRANGE Optional; defaults to 61000-62000. For more information, see NodePortLocal (NPL) in the Antrea documentation.
ANTREA_POLICY Optional; defaults to true. Activates or deactivates the Antrea-native policy API, which are policy CRDs specific to Antrea. Also, the implementation of Kubernetes Network Policies remains active when this variable is enabled. For information about using network policies, see Antrea Network Policy CRDs in the Antrea documentation.
ANTREA_PROXY Optional; defaults to true. Activates or deactivates AntreaProxy, to replace kube-proxy for pod-to-ClusterIP service traffic, for better performance and lower latency. Note that kube-proxy is still used for other types of service traffic. For more information, see Configure Antrea CNI and AntreaProxy in the Antrea documentation.
ANTREA_PROXY_ALL Optional; defaults to false. Activates or deactivates AntreaProxy to handle all types of service traffic (ClusterIP, NodePort, and LoadBalancer) from all nodes and pods. If present, kube-proxy also redundantly handles all service traffic from nodes and hostNetwork pods, including Antrea components, typically before AntreaProxy does. For more information, see Configure Antrea CNI and AntreaProxy in the Antrea documentation.
ANTREA_PROXY_LOAD_BALANCER_IPS Optional; defaults to true. Set to false to direct load-balancing traffic to the external LoadBalancer.
ANTREA_PROXY_NODEPORT_ADDRS Optional. The IPv4 or IPv6 address for NodePort. Include the quotes ("").
ANTREA_PROXY_SKIP_SERVICES Optional. An array of string values that can be used to indicate a list of services that AntreaProxy should ignore. Traffic to these services will not be load-balanced. Include the quotes (""). Valid values include a ClusterIP, such as 10.11.1.2, or a service name with a namespace, such as kube-system/kube-dns. For more information, see Special use cases in the Antrea documentation.
ANTREA_SERVICE_EXTERNALIP Optional; defaults to false. Set to true, with ANTREA_PROXY_ALL also true, to enable a controller to allocate external IPs from an ExternalIPPool resource for services with type LoadBalancer. For more information on how to implement services of type LoadBalancer, see Service of type LoadBalancer in the Antrea documentation.
ANTREA_TRACEFLOW Optional; defaults to true. For information about using Traceflow, see the Traceflow User Guide in the Antrea documentation.
ANTREA_TRAFFIC_ENCAP_MODE

Optional; defaults to “encap”. Set to noEncap, hybrid, or NetworkPolicyOnly. For information about using NoEncap or Hybrid traffic modes, see NoEncap and Hybrid Traffic Modes of Antrea in the Antrea documentation.

Note: noEncap and hybrid modes require specific configuration of the node network to prevent them from disabling pod communication between nodes.

noEncap mode is supported on vSphere only and can deactivate pod communication between nodes. Do not set ANTREA_TRAFFIC_ENCAP_MODE to noEncap in clusters where pods need to communicate across nodes.

ANTREA_TRANSPORT_INTERFACE Optional. The name of the interface on a node that is used for tunneling or routing the traffic. Include the quotes (""). For example, "eth0".
ANTREA_TRANSPORT_INTERFACE_CIDRS Optional. The network CIDRs of the interface on a node that is used for tunneling or routing the traffic. Include the quotes (""). For example, "10.0.0.2/24".
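
For example, a workload cluster that keeps the default Antrea settings but additionally enables flow export and multicast might include the following; the collector address shown is the default value from the table above:

# Illustrative non-default Antrea options; collector address is the documented default.
CNI: antrea
ANTREA_FLOWEXPORTER: true
ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.flow-aggregator.svc:4739:tls"
ANTREA_MULTICAST: true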

Machine Health Checks

If you want to configure machine health checks for management and workload clusters, set the following variables. For more information, see Configure Machine Health Checks in Create a Management Cluster Configuration File. For information about how to perform machine health check operations after cluster deployment, see Configure Machine Health Checks for Workload Clusters.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
ENABLE_MHC class-based: ✖
plan-based: ✔
Optional. Set to true or false to activate or deactivate the MachineHealthCheck controller on both the control plane and worker nodes of the target management or workload cluster. Or leave empty and set ENABLE_MHC_CONTROL_PLANE and ENABLE_MHC_WORKER_NODE separately for control plane and worker nodes. Default is empty.
The controller provides machine health monitoring and auto-repair.
For workload clusters created by vSphere with Tanzu, set to false.
ENABLE_MHC_CONTROL_PLANE Optional; defaults to true. For more information, see the table below.
ENABLE_MHC_WORKER_NODE Optional; defaults to true. For more information, see the table below.
MHC_MAX_UNHEALTHY_CONTROL_PLANE Optional; class-based clusters only. Defaults to 100%. If the number of unhealthy machines exceeds the value you set, the MachineHealthCheck controller does not perform remediation.
MHC_MAX_UNHEALTHY_WORKER_NODE Optional; class-based clusters only. Defaults to 100%. If the number of unhealthy machines exceeds the value you set, the MachineHealthCheck controller does not perform remediation.
MHC_FALSE_STATUS_TIMEOUT class-based: ✖
plan-based: ✔
Optional; defaults to 12m. How long the MachineHealthCheck controller allows a node’s Ready condition to remain False before considering the machine unhealthy and recreating it.
MHC_UNKNOWN_STATUS_TIMEOUT Optional; defaults to 5m. How long the MachineHealthCheck controller allows a node’s Ready condition to remain Unknown before considering the machine unhealthy and recreating it.
NODE_STARTUP_TIMEOUT Optional; defaults to 20m. How long the MachineHealthCheck controller waits for a node to join a cluster before considering the machine unhealthy and recreating it.

Use the table below to determine how to configure the ENABLE_MHC, ENABLE_MHC_CONTROL_PLANE, and ENABLE_MHC_WORKER_NODE variables.

Value of ENABLE_MHC | Value of ENABLE_MHC_CONTROL_PLANE | Value of ENABLE_MHC_WORKER_NODE | Control plane remediation enabled? | Worker node remediation enabled?
true / Empty | true / false / Empty | true / false / Empty | Yes | Yes
false | true / Empty | true / Empty | Yes | Yes
false | true / Empty | false | Yes | No
false | false | true / Empty | No | Yes
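
For example, per the table above, the following settings keep control plane remediation on while turning worker node remediation off, and extend the node startup timeout beyond its default:

# Illustrative MHC settings; the timeout value is an example, not the default.
ENABLE_MHC: false
ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: false
NODE_STARTUP_TIMEOUT: 30m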

Private Image Registry Configuration

If you deploy Tanzu Kubernetes Grid management clusters and Kubernetes clusters in environments that are not connected to the Internet, you need to set up a private image repository within your firewall and populate it with the Tanzu Kubernetes Grid images. For information about setting up a private image repository, see Prepare an Internet-Restricted Environment.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
ADDITIONAL_IMAGE_REGISTRY_1, ADDITIONAL_IMAGE_REGISTRY_2, ADDITIONAL_IMAGE_REGISTRY_3 Class-based workload and standalone management clusters only. The IP addresses or FQDNs of up to three trusted private registries, in addition to the primary image registry set by TKG_CUSTOM_IMAGE_REPOSITORY, for workload cluster nodes to access. See Trusted Registries for a Class-Based Cluster.
ADDITIONAL_IMAGE_REGISTRY_1_CA_CERTIFICATE, ADDITIONAL_IMAGE_REGISTRY_2_CA_CERTIFICATE, ADDITIONAL_IMAGE_REGISTRY_3_CA_CERTIFICATE Class-based workload and standalone management clusters only. The CA certificates, in base64-encoded format, of the private image registries configured with the ADDITIONAL_IMAGE_REGISTRY variables above. For example, ADDITIONAL_IMAGE_REGISTRY_1_CA_CERTIFICATE: "LS0t[…]tLS0tLQ==".
ADDITIONAL_IMAGE_REGISTRY_1_SKIP_TLS_VERIFY, ADDITIONAL_IMAGE_REGISTRY_2_SKIP_TLS_VERIFY, ADDITIONAL_IMAGE_REGISTRY_3_SKIP_TLS_VERIFY Class-based workload and standalone management clusters only. Set to true for any private image registries configured with the ADDITIONAL_IMAGE_REGISTRY variables above that use a self-signed certificate but do not use ADDITIONAL_IMAGE_REGISTRY_CA_CERTIFICATE. Because the Tanzu connectivity webhook injects the Harbor CA certificate into cluster nodes, ADDITIONAL_IMAGE_REGISTRY_SKIP_TLS_VERIFY should always be set to false when using Harbor.
TKG_CUSTOM_IMAGE_REPOSITORY Required if you deploy Tanzu Kubernetes Grid in an Internet-restricted environment. Provide the IP address or FQDN of the private registry that contains TKG system images that clusters bootstrap from, called the primary image registry below. For example, custom-image-repository.io/yourproject.
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY Optional. Set to true if your private primary image registry uses a self-signed certificate and you do not use TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE. Because the Tanzu connectivity webhook injects the Harbor CA certificate into cluster nodes, TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY should always be set to false when using Harbor.
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE Optional. Set if your private primary image registry uses a self-signed certificate. Provide the CA certificate in base64 encoded format, for example TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: “LS0t[…]tLS0tLQ==”.
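
For example, an internet-restricted deployment that pulls system images from a private registry and also trusts one additional registry might include the following; the registry addresses, project name, and certificates are placeholders:

# Placeholder registry settings; replace with your registries and certificates.
TKG_CUSTOM_IMAGE_REPOSITORY: custom-image-repository.io/yourproject
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: "LS0t[…]tLS0tLQ=="
ADDITIONAL_IMAGE_REGISTRY_1: registry.example.com
ADDITIONAL_IMAGE_REGISTRY_1_CA_CERTIFICATE: "LS0t[…]tLS0tLQ=="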

vSphere

The options in the table below are the minimum options that you specify in the cluster configuration file when deploying workload clusters to vSphere. Most of these options are the same for both the workload cluster and the management cluster that you use to deploy it.

For more information about the configuration files for vSphere, see Management Cluster Configuration for vSphere and Deploy Workload Clusters to vSphere.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
DEPLOY_TKG_ON_VSPHERE7 Optional. If deploying to vSphere 7 or 8, set to true to skip the prompt about deployment on vSphere 7 or 8, or false. See Management Clusters on vSphere with Tanzu.
ENABLE_TKGS_ON_VSPHERE7 Optional. If deploying to vSphere 7 or 8, set to true to be redirected to the vSphere with Tanzu enablement UI page, or false. See Management Clusters on vSphere with Tanzu.
NTP_SERVERS Class-based clusters only. Configures the cluster’s NTP server if you are deploying clusters in a vSphere environment that lacks DHCP Option 42, for example NTP_SERVERS: time.google.com. To configure this in a legacy, plan-based cluster, use a ytt overlay as described in Configuring NTP without DHCP Option 42 (vSphere).
TKG_IP_FAMILY Optional. To deploy a cluster to vSphere 7 or 8 with pure IPv6, set this to ipv6. For dual-stack networking (experimental), see Dual-Stack Clusters.
VIP_NETWORK_INTERFACE Optional. Set to eth0, eth1, etc. Network interface name, for example an Ethernet interface. Defaults to eth0.
VSPHERE_CONTROL_PLANE_DISK_GIB Optional. The size in gigabytes of the disk for the control plane node VMs. Include the quotes (""). For example, "30".
VSPHERE_AZ_0, VSPHERE_AZ_1, VSPHERE_AZ_2 Optional. The availability zones to which machine deployments in a cluster are deployed. See Deploy Workload Clusters to Multiple Availability Zones.
VSPHERE_CONTROL_PLANE_ENDPOINT Required for Kube-Vip. Static virtual IP address, or fully qualified domain name (FQDN) mapped to a static address, for API requests to the cluster. This setting is required if you are using Kube-Vip for your API server endpoint, as configured by setting AVI_CONTROL_PLANE_HA_PROVIDER to false.
If you use NSX Advanced Load Balancer, AVI_CONTROL_PLANE_HA_PROVIDER: true, you can:
  • Leave this field blank if you do not need a specific address as the control plane endpoint
  • Set this to a specific address within the IPAM Profile’s VIP Network range, and manually add the address to the Static IP pool
  • Set this field to an FQDN
VSPHERE_CONTROL_PLANE_ENDPOINT_PORT Optional, set if you want to override the Kubernetes API server port for deployments on vSphere with NSX Advanced Load Balancer. The default port is 6443. To override the Kubernetes API server port for deployments on vSphere without NSX Advanced Load Balancer, set CLUSTER_API_SERVER_PORT.
VSPHERE_CONTROL_PLANE_MEM_MIB Optional. The amount of memory in megabytes for the control plane node VMs. Include the quotes (""). For example, "2048".
VSPHERE_CONTROL_PLANE_NUM_CPUS Optional. The number of CPUs for the control plane node VMs. Include the quotes (""). Must be at least 2. For example, "2".
VSPHERE_DATACENTER Required. The name of the datacenter in which to deploy the cluster, as it appears in the vSphere inventory. For example, /MY-DATACENTER.
VSPHERE_DATASTORE Required. The name of the vSphere datastore for the cluster to use, as it appears in the vSphere inventory. For example, /MY-DATACENTER/datastore/MyDatastore.
VSPHERE_FOLDER Required. The name of an existing VM folder in which to place Tanzu Kubernetes Grid VMs, as it appears in the vSphere inventory. For example, if you created a folder named TKG, the path is /MY-DATACENTER/vm/TKG.
VSPHERE_INSECURE Optional. Set to true to bypass thumbprint verification. If false, set VSPHERE_TLS_THUMBPRINT.
VSPHERE_NETWORK Required. The name of an existing vSphere network to use as the Kubernetes service network, as it appears in the vSphere inventory. For example, VM Network.
VSPHERE_PASSWORD Required. The password for the vSphere user account. This value is base64-encoded when you run tanzu cluster create.
VSPHERE_REGION Optional. Tag for region in vCenter, used to assign CSI storage in an environment with multiple data centers or host clusters. See Deploy a Cluster with Region and Zone Tags for CSI
VSPHERE_RESOURCE_POOL Required. The name of an existing resource pool in which to place this Tanzu Kubernetes Grid instance, as it appears in the vSphere inventory. To use the root resource pool for a cluster, enter the full path, for example for a cluster named cluster0 in datacenter MY-DATACENTER, the full path is /MY-DATACENTER/host/cluster0/Resources.
VSPHERE_SERVER Required. The IP address or FQDN of the vCenter Server instance on which to deploy the workload cluster.
VSPHERE_SSH_AUTHORIZED_KEY Required. Paste in the contents of the SSH public key that you created in Deploy a Management Cluster to vSphere. For example, "ssh-rsa NzaC1yc2EA […] hnng2OYYSl+8ZyNz3fmRGX8uPYqw== [email protected]".
VSPHERE_STORAGE_POLICY_ID Optional. The name of a VM storage policy for the management cluster, as it appears in Policies and Profiles > VM Storage Policies.
If VSPHERE_DATASTORE is set, the storage policy must include it. Otherwise, the cluster creation process chooses a datastore that is compatible with the policy.
VSPHERE_TEMPLATE Optional. Specify the path to an OVA file if you are using multiple custom OVA images for the same Kubernetes version, in the format /MY-DC/vm/MY-FOLDER-PATH/MY-IMAGE. For more information, see Deploy a Cluster with a Custom OVA Image.
VSPHERE_TLS_THUMBPRINT Required if VSPHERE_INSECURE is false. The thumbprint of the vCenter Server certificate. For information about how to obtain the vCenter Server certificate thumbprint, see Obtain vSphere Certificate Thumbprints. You can skip this value if you want to use an insecure connection by setting VSPHERE_INSECURE to true.
VSPHERE_USERNAME Required. A vSphere user account, including the domain name, with the required privileges for Tanzu Kubernetes Grid operation. For example, [email protected].
VSPHERE_WORKER_DISK_GIB Optional. The size in gigabytes of the disk for the worker node VMs. Include the quotes (""). For example, "50".
VSPHERE_WORKER_MEM_MIB Optional. The amount of memory in megabytes for the worker node VMs. Include the quotes (""). For example, "4096".
VSPHERE_WORKER_NUM_CPUS Optional. The number of CPUs for the worker node VMs. Include the quotes (""). Must be at least 2. For example, "2".
VSPHERE_ZONE Optional. Tag for zone in vCenter, used to assign CSI storage in an environment with multiple data centers or host clusters. See Deploy a Cluster with Region and Zone Tags for CSI
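
For example, a minimal vSphere workload cluster configuration that uses Kube-Vip for the control plane endpoint might look like the following; the vCenter inventory paths, credentials, endpoint address, and thumbprint are placeholders:

# Placeholder vSphere settings; replace with values from your vCenter inventory.
INFRASTRUCTURE_PROVIDER: vsphere
VSPHERE_SERVER: vcenter.example.com
VSPHERE_USERNAME: [email protected]
VSPHERE_PASSWORD: MY-PASSWORD
VSPHERE_DATACENTER: /MY-DATACENTER
VSPHERE_DATASTORE: /MY-DATACENTER/datastore/MyDatastore
VSPHERE_NETWORK: VM Network
VSPHERE_RESOURCE_POOL: /MY-DATACENTER/host/cluster0/Resources
VSPHERE_FOLDER: /MY-DATACENTER/vm/TKG
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.10.10.10
VSPHERE_TLS_THUMBPRINT: MY-VCENTER-CERTIFICATE-THUMBPRINT
AVI_CONTROL_PLANE_HA_PROVIDER: false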

Kube-VIP Load Balancer (Technical Preview)

For information about how to configure Kube-VIP as an L4 load balancer service, see Kube-VIP Load Balancer.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
KUBEVIP_LOADBALANCER_ENABLE Optional; defaults to false. Set to true or false. Enables Kube-VIP as a load balancer for workloads. If true, you must set one of the variables below.
KUBEVIP_LOADBALANCER_IP_RANGES A list of non-overlapping IP ranges to allocate for LoadBalancer type service IP. For example: 10.0.0.1-10.0.0.23,10.0.2.1-10.0.2.24.
KUBEVIP_LOADBALANCER_CIDRS A list of non-overlapping CIDRs to allocate for LoadBalancer type service IP. For example: 10.0.0.0/24,10.0.2.0/24. Overrides the setting for KUBEVIP_LOADBALANCER_IP_RANGES.
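
For example, to enable the Kube-VIP load balancer with a dedicated range of service IPs, you might set the following; the IP range is a placeholder for addresses routable in your environment:

# Placeholder IP range; replace with addresses routable in your network.
KUBEVIP_LOADBALANCER_ENABLE: true
KUBEVIP_LOADBALANCER_IP_RANGES: 10.0.0.1-10.0.0.23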

NSX Advanced Load Balancer

For information about how to deploy NSX Advanced Load Balancer, see Install NSX Advanced Load Balancer.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
AVI_ENABLE Optional; defaults to false. Set to true or false. Enables NSX Advanced Load Balancer as a load balancer for workloads. If true, you must set the required variables listed in NSX Advanced Load Balancer below.
AVI_ADMIN_CREDENTIAL_NAME Optional; defaults to avi-controller-credentials. The name of the Kubernetes Secret that contains the NSX Advanced Load Balancer controller admin username and password.
AVI_AKO_IMAGE_PULL_POLICY Optional; defaults to IfNotPresent.
AVI_CA_DATA_B64 Required. The contents of the Controller Certificate Authority that is used to sign the Controller certificate. It must be base64 encoded. Retrieve unencoded custom certificate contents as described in Avi Controller Setup: Custom Certificate.
AVI_CA_NAME Optional; defaults to avi-controller-ca. The name of the Kubernetes Secret that holds the NSX Advanced Load Balancer Controller Certificate Authority.
AVI_CLOUD_NAME Required. The cloud that you created in your NSX Advanced Load Balancer deployment. For example, Default-Cloud.
AVI_CONTROLLER Required. The IP or hostname of the NSX Advanced Load Balancer controller.
AVI_CONTROL_PLANE_HA_PROVIDER Required. Set to true to enable NSX Advanced Load Balancer as the control plane API server endpoint, or false to use Kube-Vip as the control plane endpoint.
AVI_CONTROL_PLANE_NETWORK Optional. Defines the VIP network of the workload cluster’s control plane. Use when you want to configure a separate VIP network for the workload clusters. This field is optional, and if it is left empty, it will use the same network as AVI_DATA_NETWORK.
AVI_CONTROL_PLANE_NETWORK_CIDR Optional; defaults to the same network as AVI_DATA_NETWORK_CIDR. The CIDR of the subnet to use for the workload cluster’s control plane. Use when you want to configure a separate VIP network for the workload clusters. You can see the subnet CIDR for a particular network in the Infrastructure - Networks view of the NSX Advanced Load Balancer interface.
AVI_DATA_NETWORK Required. The network’s name on which the floating IP subnet or IP Pool is assigned to a load balancer for traffic to applications hosted on workload clusters. This network must be present in the same vCenter Server instance as the Kubernetes network that Tanzu Kubernetes Grid uses, that you specify in the SERVICE_CIDR variable. This allows NSX Advanced Load Balancer to discover the Kubernetes network in vCenter Server and to deploy and configure service engines.
AVI_DATA_NETWORK_CIDR Required. The CIDR of the subnet to use for the load balancer VIP. This comes from one of the VIP network’s configured subnets. You can see the subnet CIDR for a particular network in the Infrastructure - Networks view of the NSX Advanced Load Balancer interface.
AVI_DISABLE_INGRESS_CLASS Optional; defaults to false. Deactivates the Ingress class.
AVI_DISABLE_STATIC_ROUTE_SYNC Optional; defaults to false. Set to true if the pod networks are reachable from the NSX ALB service engine.
AVI_INGRESS_DEFAULT_INGRESS_CONTROLLER Optional; defaults to false. Use AKO as the default Ingress Controller.
AVI_INGRESS_NODE_NETWORK_LIST Specifies the name of the port group (PG) network that your nodes are a part of and the associated CIDR that the CNI allocates to each node for that node to assign to its pods. It is best to update this in the AKODeploymentConfig file, see L7 Ingress in ClusterIP Mode, but if you do use the cluster configuration file, the format is similar to: '[{"networkName": "vsphere-portgroup","cidrs": ["100.64.42.0/24"]}]'
AVI_INGRESS_SERVICE_TYPE Optional. Specifies whether the AKO functions in ClusterIP, NodePort, or NodePortLocal mode. Defaults to NodePort.
AVI_INGRESS_SHARD_VS_SIZE Optional. AKO uses a sharding logic for Layer 7 ingress objects. A sharded VS involves hosting multiple insecure or secure ingresses hosted by one virtual IP or VIP. Set to LARGE, MEDIUM, or SMALL. Defaults to SMALL. Use this to control the number of Layer 7 virtual services. This applies to both secure and insecure virtual services but does not apply to passthrough.
AVI_LABELS Optional. When set, NSX Advanced Load Balancer is enabled only on workload clusters that have this label. Include the quotes (""). For example, AVI_LABELS: "{foo: 'bar'}".
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_CIDR Optional. The CIDR of the subnet to use for the management cluster’s control plane. Use when you want to configure a separate VIP network for the management cluster’s control plane. You can see the subnet CIDR for a particular network in the Infrastructure - Networks view of the NSX Advanced Load Balancer interface. This field is optional, and if it is left empty, it uses the same CIDR as AVI_DATA_NETWORK_CIDR.
AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME Optional. Defines the VIP network of the management cluster’s control plane. Use when you want to configure a separate VIP network for the management cluster’s control plane. This field is optional, and if it is left empty, it uses the same network as AVI_DATA_NETWORK.
AVI_MANAGEMENT_CLUSTER_SERVICE_ENGINE_GROUP Optional. Specifies the name of the Service Engine Group that is to be used by AKO in the management cluster. This field is optional, and if it is left empty, it uses the same Service Engine Group as AVI_SERVICE_ENGINE_GROUP.
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME Optional; defaults to the same network as AVI_DATA_NETWORK. The network’s name where you assign a floating IP subnet or IP pool to a load balancer for management cluster and workload cluster control plane (if using NSX ALB to provide control plane HA). This network must be present in the same vCenter Server instance as the Kubernetes network that Tanzu Kubernetes Grid uses, which you specify in the SERVICE_CIDR variable for the management cluster. This allows NSX Advanced Load Balancer to discover the Kubernetes network in vCenter Server and to deploy and configure service engines.
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR Optional; defaults to the same network as AVI_DATA_NETWORK_CIDR. The CIDR of the subnet to use for the management cluster and workload cluster’s control plane (if using NSX ALB to provide control plane HA) load balancer VIP. This comes from one of the VIP network’s configured subnets. You can see the subnet CIDR for a particular network in the Infrastructure - Networks view of the NSX Advanced Load Balancer interface.
AVI_NAMESPACE Optional; defaults to "tkg-system-networking". The namespace for the AKO operator.
AVI_NSXT_T1LR Optional. The NSX T1 router path that you configured for your management cluster in the Avi Controller UI, under NSX Cloud. This is required when you use NSX cloud on your NSX ALB controller.
To use a different T1 for your workload clusters, modify the AKODeploymentConfig object install-ako-for-all after the management cluster is created.
AVI_PASSWORD Required. The password that you set for the Controller admin when you deployed it.
AVI_SERVICE_ENGINE_GROUP Required. Name of the Service Engine Group. For example, Default-Group.
AVI_USERNAME Required. The admin username that you set for the Controller host when you deployed it.
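
For example, a management cluster configuration that enables NSX Advanced Load Balancer for workloads and for control plane HA might include the following; the controller address, credentials, certificate, and network names are placeholders that must match your Avi Controller setup:

# Placeholder NSX ALB settings; replace with values from your Avi Controller.
AVI_ENABLE: true
AVI_CONTROL_PLANE_HA_PROVIDER: true
AVI_CONTROLLER: avi-controller.example.com
AVI_USERNAME: admin
AVI_PASSWORD: MY-AVI-PASSWORD
AVI_CA_DATA_B64: LS0t[…]tLS0tLQ==
AVI_CLOUD_NAME: Default-Cloud
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_DATA_NETWORK: VIP-Network
AVI_DATA_NETWORK_CIDR: 10.10.20.0/24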

NSX Pod Routing

These variables configure routable-IP address workload pods, as described in Deploy a Cluster with Routable-IP Pods. All variables are strings in double-quotes, for example "true".

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
NSXT_POD_ROUTING_ENABLED Optional; defaults to “false”. Set to “true” to enable NSX routable pods with the variables below. See Deploy a Cluster with Routable-IP Pods.
NSXT_MANAGER_HOST Required if NSXT_POD_ROUTING_ENABLED = "true".
IP address of NSX Manager.
NSXT_ROUTER_PATH Required if NSXT_POD_ROUTING_ENABLED = "true". T1 router path shown in NSX Manager.
For username/password authentication to NSX:
NSXT_USERNAME Username for logging in to NSX Manager.
NSXT_PASSWORD Password for logging in to NSX Manager.
For authenticating to NSX using credentials and storing them in a Kubernetes secret (also set NSXT_USERNAME and NSXT_PASSWORD above):
NSXT_SECRET_NAMESPACE Optional; defaults to “kube-system”. The namespace with the secret containing NSX username and password.
NSXT_SECRET_NAME Optional; defaults to “cloud-provider-vsphere-nsxt-credentials”. The name of the secret containing NSX username and password.
For certificate authentication to NSX:
NSXT_ALLOW_UNVERIFIED_SSL Optional; defaults to false. Set this to “true” if NSX uses a self-signed certificate.
NSXT_ROOT_CA_DATA_B64 Required if NSXT_ALLOW_UNVERIFIED_SSL = "false".
Base64-encoded Certificate Authority root certificate string that NSX uses for LDAP authentication.
NSXT_CLIENT_CERT_KEY_DATA Base64-encoded cert key file string for local client certificate.
NSXT_CLIENT_CERT_DATA Base64-encoded cert file string for local client certificate.
For remote authentication to NSX with VMware Identity Manager, on VMware Cloud (VMC):
NSXT_REMOTE_AUTH Optional; defaults to false. Set this to “true” for remote authentication to NSX with VMware Identity Manager, on VMware Cloud (VMC).
NSXT_VMC_AUTH_HOST Optional; default is empty. VMC authentication host.
NSXT_VMC_ACCESS_TOKEN Optional; default is empty. VMC authentication access token.
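
For example, to enable routable pods with username/password authentication to NSX, you might set the following; per the note above, all values are quoted strings, and the manager address, router path, and credentials are placeholders:

# Placeholder NSX settings; replace with your NSX Manager details.
NSXT_POD_ROUTING_ENABLED: "true"
NSXT_MANAGER_HOST: "192.168.1.10"
NSXT_ROUTER_PATH: "/infra/tier-1s/MY-T1-ROUTER"
NSXT_USERNAME: "admin"
NSXT_PASSWORD: "MY-NSX-PASSWORD"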

GPU-Enabled Clusters

These variables configure GPU-enabled workload clusters in PCI passthrough mode, as described in Deploy a GPU-Enabled Workload Cluster. All variables are strings in double-quotes, for example "true".

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
VSPHERE_CONTROL_PLANE_CUSTOM_VMX_KEYS Sets custom VMX keys on all control plane machines. Use the form Key1=Value1,Key2=Value2. See Deploy a GPU-Enabled Workload Cluster.
VSPHERE_CONTROL_PLANE_HARDWARE_VERSION The hardware version for the control plane VM with the GPU device in PCI passthrough mode. The minimum version required is 17. The format of the value is vmx-17, where the trailing digits are the hardware version of the virtual machine. For feature support on different hardware versions, see Hardware Features Available with Virtual Machine Compatibility Settings in the vSphere documentation.
VSPHERE_CONTROL_PLANE_PCI_DEVICES Configures PCI passthrough on all control plane machines. Use the format <vendor_id>:<device_id>. For example, VSPHERE_CONTROL_PLANE_PCI_DEVICES: "0x10DE:0x1EB8". To find the vendor and device IDs, see Deploy a GPU-Enabled Workload Cluster.
VSPHERE_IGNORE_PCI_DEVICES_ALLOW_LIST Set to false if you are using the NVIDIA Tesla T4 GPU and true if you are using the NVIDIA V100 GPU.
VSPHERE_WORKER_CUSTOM_VMX_KEYS Sets custom VMX keys on all worker machines. Use the form Key1=Value1,Key2=Value2. For an example, see Deploy a GPU-Enabled Workload Cluster.
VSPHERE_WORKER_HARDWARE_VERSION The hardware version for the worker VM with the GPU device in PCI passthrough mode. The minimum version required is 17. The format of the value is vmx-17, where the trailing digits are the hardware version of the virtual machine. For feature support on different hardware versions, see Hardware Features Available with Virtual Machine Compatibility Settings in the vSphere documentation.
VSPHERE_WORKER_PCI_DEVICES Configures PCI passthrough on all worker machines. Use the format <vendor_id>:<device_id>. For example, VSPHERE_WORKER_PCI_DEVICES: "0x10DE:0x1EB8". To find the vendor and device IDs, see Deploy a GPU-Enabled Workload Cluster.
WORKER_ROLLOUT_STRATEGY Optional. Configures the MachineDeployment rollout strategy. Default is RollingUpdate. If set to OnDelete, then on updates, the existing worker machines will be deleted first, before the replacement worker machines are created. It is imperative to set this to OnDelete if all the available PCI devices are in use by the worker nodes.
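
For example, a GPU-enabled workload cluster that passes an NVIDIA Tesla T4 through to its worker nodes might include the following; the vendor and device IDs come from the example above and should be verified against your hardware as described in Deploy a GPU-Enabled Workload Cluster:

# Illustrative GPU passthrough settings; verify device IDs for your hardware.
VSPHERE_WORKER_HARDWARE_VERSION: "vmx-17"
VSPHERE_WORKER_PCI_DEVICES: "0x10DE:0x1EB8"
VSPHERE_IGNORE_PCI_DEVICES_ALLOW_LIST: "false"
WORKER_ROLLOUT_STRATEGY: "OnDelete"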

AWS

The variables in the table below are the options that you specify in the cluster configuration file when deploying workload clusters to AWS. Many of these options are the same for both the workload cluster and the management cluster that you use to deploy it.

For more information about the configuration files for AWS, see Management Cluster Configuration for AWS and AWS Cluster Configuration Files.

Variable | Can be set in… (Management cluster YAML / Workload cluster YAML) | Description
AWS_ACCESS_KEY_ID Optional. The access key ID for your AWS account. This is one option for authenticating to AWS. See Configure AWS Account Credentials.
Only use AWS_PROFILE or a combination of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, but not both.
AWS_CONTROL_PLANE_OS_DISK_SIZE_GIB Optional. Disk size for control plane nodes. Overrides the default disk size for VM type set by CONTROL_PLANE_MACHINE_TYPE, SIZE, or CONTROLPLANE_SIZE.
AWS_LOAD_BALANCER_SCHEME_INTERNAL Optional. For management clusters, sets the load balancer scheme to be internal, preventing access over the Internet. For workload clusters, ensures that the management cluster accesses the workload cluster’s load balancer internally when they share a VPC or are peered. See Use Internal Load Balancer for Kubernetes API Server.
AWS_NODE_AZ Required. The name of the AWS availability zone in your chosen region that you want to use as the availability zone for this management cluster. Availability zone names are the same as the AWS region name, with a single lower-case letter suffix, such as a, b, c. For example, us-west-2a. To deploy a prod management cluster with three control plane nodes, you must also set AWS_NODE_AZ_1 and AWS_NODE_AZ_2. The letter suffix in each of these availability zones must be unique. For example, us-west-2a, us-west-2b, and us-west-2c.
AWS_NODE_OS_DISK_SIZE_GIB Optional. Disk size for worker nodes. Overrides the default disk size for VM type set by NODE_MACHINE_TYPE, SIZE, or WORKER_SIZE.
AWS_NODE_AZ_1 and AWS_NODE_AZ_2 Optional. Set these variables if you want to deploy a prod management cluster with three control plane nodes. Both availability zones must be in the same region as AWS_NODE_AZ. See AWS_NODE_AZ above for more information. For example, us-west-2b and us-west-2c.
AWS_PRIVATE_NODE_CIDR_1 Optional. If the recommended range of 10.0.2.0/24 is not available, enter a different IP range in CIDR format. When Tanzu Kubernetes Grid deploys your management cluster, it creates this subnetwork in AWS_NODE_AZ_1. See AWS_PRIVATE_NODE_CIDR above for more information.
AWS_PRIVATE_NODE_CIDR_2 Optional. If the recommended range of 10.0.4.0/24 is not available, enter a different IP range in CIDR format. When Tanzu Kubernetes Grid deploys your management cluster, it creates this subnetwork in AWS_NODE_AZ_2. See AWS_PRIVATE_NODE_CIDR above for more information.
AWS_PROFILE Optional. The AWS credential profile, managed by aws configure, that Tanzu Kubernetes Grid uses to access its AWS account. This is the preferred option for authenticating to AWS. See Configure AWS Account Credentials.
Use either AWS_PROFILE or the combination of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, but not both.
AWS_PUBLIC_NODE_CIDR_1 Optional. If the recommended range of 10.0.3.0/24 is not available, enter a different IP range in CIDR format. When Tanzu Kubernetes Grid deploys your management cluster, it creates this subnetwork in AWS_NODE_AZ_1. See AWS_PUBLIC_NODE_CIDR above for more information.
AWS_PUBLIC_NODE_CIDR_2 Optional. If the recommended range of 10.0.5.0/24 is not available, enter a different IP range in CIDR format. When Tanzu Kubernetes Grid deploys your management cluster, it creates this subnetwork in AWS_NODE_AZ_2. See AWS_PUBLIC_NODE_CIDR above for more information.
AWS_PRIVATE_SUBNET_ID Optional. If you set AWS_VPC_ID to use an existing VPC, enter the ID of a private subnet that already exists in AWS_NODE_AZ. If you do not set it, tanzu management-cluster create identifies the private subnet automatically. To deploy a prod management cluster with three control plane nodes, you must also set AWS_PRIVATE_SUBNET_ID_1 and AWS_PRIVATE_SUBNET_ID_2.
AWS_PRIVATE_SUBNET_ID_1 Optional. The ID of a private subnet that exists in AWS_NODE_AZ_1. If you do not set this variable, tanzu management-cluster create identifies the private subnet automatically. See AWS_PRIVATE_SUBNET_ID above for more information.
AWS_PRIVATE_SUBNET_ID_2 Optional. The ID of a private subnet that exists in AWS_NODE_AZ_2. If you do not set this variable, tanzu management-cluster create identifies the private subnet automatically. See AWS_PRIVATE_SUBNET_ID above for more information.
AWS_PUBLIC_SUBNET_ID Optional. If you set AWS_VPC_ID to use an existing VPC, enter the ID of a public subnet that already exists in AWS_NODE_AZ. If you do not set it, tanzu management-cluster create identifies the public subnet automatically. To deploy a prod management cluster with three control plane nodes, you must also set AWS_PUBLIC_SUBNET_ID_1 and AWS_PUBLIC_SUBNET_ID_2.
AWS_PUBLIC_SUBNET_ID_1 Optional. The ID of a public subnet that exists in AWS_NODE_AZ_1. If you do not set this variable, tanzu management-cluster create identifies the public subnet automatically. See AWS_PUBLIC_SUBNET_ID above for more information.
AWS_PUBLIC_SUBNET_ID_2 Optional. The ID of a public subnet that exists in AWS_NODE_AZ_2. If you do not set this variable, tanzu management-cluster create identifies the public subnet automatically. See AWS_PUBLIC_SUBNET_ID above for more information.
AWS_REGION Required. The name of the AWS region in which to deploy the cluster, for example, us-west-2 or ap-northeast-2. You can also specify the us-gov-east and us-gov-west regions in AWS GovCloud. If you have already set a different region as an environment variable, for example, in Deploy Management Clusters to AWS, you must unset that environment variable.
AWS_SECRET_ACCESS_KEY Optional. The secret access key for your AWS account. This is one option for authenticating to AWS. See Configure AWS Account Credentials.
Use either AWS_PROFILE or the combination of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, but not both.
AWS_SECURITY_GROUP_APISERVER_LB Optional. Custom security groups with custom rules to apply to the cluster. See Configure Custom Security Groups.
AWS_SECURITY_GROUP_BASTION
AWS_SECURITY_GROUP_CONTROLPLANE
AWS_SECURITY_GROUP_NODE
AWS_SESSION_TOKEN Optional. Provide the AWS session token granted to your account if you are required to use a temporary access key. For more information about using temporary access keys, see Understanding and getting your AWS credentials. Alternatively, you can specify account credentials as local environment variables or in your AWS default credential provider chain.
AWS_SSH_KEY_NAME Required. The name of the SSH private key that you registered with your AWS account.
AWS_VPC_ID Optional. To use a VPC that already exists in your selected AWS region, enter the ID of the VPC and then set AWS_PUBLIC_SUBNET_ID and AWS_PRIVATE_SUBNET_ID.
BASTION_HOST_ENABLED Optional. By default this option is set to "true" in the global Tanzu Kubernetes Grid configuration. Specify "true" to deploy an AWS bastion host or "false" to reuse an existing bastion host. If no bastion host exists in your availability zone(s) and you set AWS_VPC_ID to use an existing VPC, set BASTION_HOST_ENABLED to "true".
CONTROL_PLANE_MACHINE_TYPE Required if cloud-agnostic SIZE or CONTROLPLANE_SIZE are not set. The Amazon EC2 instance type to use for cluster control plane nodes, for example t3.small or m5.large.
NODE_MACHINE_TYPE Required if cloud-agnostic SIZE or WORKER_SIZE are not set. The Amazon EC2 instance type to use for cluster worker nodes, for example t3.small or m5.large.
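
For example, a minimal sketch of the AWS-specific settings in a cluster configuration file might look like the following; the region, availability zone, key pair name, and instance types are placeholder assumptions that you would replace with values from your own account:

AWS_REGION: us-west-2
AWS_NODE_AZ: us-west-2a
AWS_SSH_KEY_NAME: my-aws-keypair
CONTROL_PLANE_MACHINE_TYPE: t3.small
NODE_MACHINE_TYPE: m5.large
BASTION_HOST_ENABLED: "true"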

Microsoft Azure

The variables in the table below are the options that you specify in the cluster configuration file when deploying workload clusters to Azure. Many of these options are the same for both the workload cluster and the management cluster that you use to deploy it.

For more information about the configuration files for Azure, see Management Cluster Configuration for Azure and Azure Cluster Configuration Files.

Variable Can be set in… Description
Management cluster YAML Workload cluster YAML
AZURE_CLIENT_ID Required. The client ID of the app for Tanzu Kubernetes Grid that you registered with Azure.
AZURE_CLIENT_SECRET Required. Your Azure client secret from Register a Tanzu Kubernetes Grid App on Azure.
AZURE_CUSTOM_TAGS Optional. Comma-separated list of tags to apply to Azure resources created for the cluster. A tag is a key-value pair, for example, "foo=bar, plan=prod". For more information about tagging Azure resources, see Use tags to organize your Azure resources and management hierarchy and Tag support for Azure resources in the Microsoft Azure documentation.
AZURE_ENVIRONMENT Optional; defaults to AzurePublicCloud. Supported clouds are AzurePublicCloud, AzureChinaCloud, AzureGermanCloud, AzureUSGovernmentCloud.
AZURE_IDENTITY_NAME Optional, set if you want to use clusters on different Azure accounts. The name to use for the AzureClusterIdentity that you create to use clusters on different Azure accounts. For more information, see Clusters on Different Azure Accounts in Deploy Workload Clusters to Azure.
AZURE_IDENTITY_NAMESPACE Optional, set if you want to use clusters on different Azure accounts. The namespace for your AzureClusterIdentity that you create to use clusters on different Azure accounts. For more information, see Clusters on Different Azure Accounts in Deploy Workload Clusters to Azure.
AZURE_LOCATION Required. The name of the Azure region in which to deploy the cluster. For example, eastus.
AZURE_RESOURCE_GROUP Optional; defaults to the CLUSTER_NAME setting. The name of the Azure resource group that you want to use for the cluster. Must be unique to each cluster. AZURE_RESOURCE_GROUP and AZURE_VNET_RESOURCE_GROUP are the same by default.
AZURE_SSH_PUBLIC_KEY_B64 Required. Your SSH public key, created in Deploy a Management Cluster to Microsoft Azure, converted into base64 with newlines removed. For example, c3NoLXJzYSBB […] vdGFsLmlv.
AZURE_SUBSCRIPTION_ID Required. The subscription ID of your Azure subscription.
AZURE_TENANT_ID Required. The tenant ID of your Azure account.
Networking
AZURE_CONTROL_PLANE_OUTBOUND_LB_FRONTEND_IP_COUNT Optional; defaults to 1. Set this to add multiple frontend IP addresses to the control plane load balancer in environments with high numbers of expected outbound connections.
AZURE_ENABLE_ACCELERATED_NETWORKING Optional; defaults to true. Set to false to deactivate Azure accelerated networking on VMs with more than 4 CPUs.
AZURE_ENABLE_CONTROL_PLANE_OUTBOUND_LB Optional. Set this to true if AZURE_ENABLE_PRIVATE_CLUSTER is true and you want to enable a Public IP address on the outbound load balancer for the control plane.
AZURE_ENABLE_NODE_OUTBOUND_LB Optional. Set this to true if AZURE_ENABLE_PRIVATE_CLUSTER is true and you want to enable a Public IP address on the outbound load balancer for the worker nodes.
AZURE_ENABLE_PRIVATE_CLUSTER Optional. Set this to true to configure the cluster as private and use an Azure Internal Load Balancer (ILB) for its incoming traffic. For more information, see Azure Private Clusters.
AZURE_FRONTEND_PRIVATE_IP Optional. Set this if AZURE_ENABLE_PRIVATE_CLUSTER is true and you want to override the default internal load balancer address of 10.0.0.100.
AZURE_NODE_OUTBOUND_LB_FRONTEND_IP_COUNT Optional; defaults to 1. Set this to add multiple frontend IP addresses to the worker node load balancer in environments with high numbers of expected outbound connections.
AZURE_NODE_OUTBOUND_LB_IDLE_TIMEOUT_IN_MINUTES Optional; defaults to 4. Set this to specify the number of minutes outbound TCP connections are kept open via the worker node outbound load balancer.
AZURE_VNET_CIDR Optional, set if you want to deploy the cluster to a new VNet and subnets and override the default values. By default, AZURE_VNET_CIDR is set to 10.0.0.0/16, AZURE_CONTROL_PLANE_SUBNET_CIDR to 10.0.0.0/24, and AZURE_NODE_SUBNET_CIDR to 10.0.1.0/24.
AZURE_CONTROL_PLANE_SUBNET_CIDR
AZURE_NODE_SUBNET_CIDR
AZURE_VNET_NAME Optional, set if you want to deploy the cluster to an existing VNet and subnets or assign names to a new VNet and subnets.
AZURE_CONTROL_PLANE_SUBNET_NAME
AZURE_NODE_SUBNET_NAME
AZURE_VNET_RESOURCE_GROUP Optional; defaults to the value of AZURE_RESOURCE_GROUP.
Control Plane VMs
AZURE_CONTROL_PLANE_DATA_DISK_SIZE_GIB Optional. Size of data disk and OS disk, as described in Azure documentation Disk roles, for control plane VMs, in GB. Examples: 128, 256. Control plane nodes are always provisioned with a data disk.
AZURE_CONTROL_PLANE_OS_DISK_SIZE_GIB
AZURE_CONTROL_PLANE_MACHINE_TYPE Optional; defaults to Standard_D2s_v3. An Azure VM size for the control plane node VMs, chosen to fit expected workloads. The minimum requirement for Azure instance types is 2 CPUs and 8 GB memory. For possible values, see the Tanzu Kubernetes Grid installer interface.
AZURE_CONTROL_PLANE_OS_DISK_STORAGE_ACCOUNT_TYPE Optional. Type of Azure storage account for control plane VM disks. Example: Premium_LRS.
Worker Node VMs
AZURE_ENABLE_NODE_DATA_DISK Optional; defaults to false. Set to true to provision a data disk for each worker node VM, as described in Azure documentation Disk roles.
AZURE_NODE_DATA_DISK_SIZE_GIB Optional. Set this variable if AZURE_ENABLE_NODE_DATA_DISK is true. Size of data disk, as described in Azure documentation Disk roles, for worker VMs, in GB. Examples: 128, 256.
AZURE_NODE_OS_DISK_SIZE_GIB Optional. Size of OS disk, as described in Azure documentation Disk roles, for worker VMs, in GB. Examples: 128, 256.
AZURE_NODE_MACHINE_TYPE Optional; defaults to Standard_D2s_v3. An Azure VM size for the worker node VMs, chosen to fit expected workloads. For possible values, see the Tanzu Kubernetes Grid installer interface.
AZURE_NODE_OS_DISK_STORAGE_ACCOUNT_TYPE Optional. Set this variable if AZURE_ENABLE_NODE_DATA_DISK is true. Type of Azure storage account for worker VM disks. Example: Premium_LRS.
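
For example, a minimal sketch of the Azure-specific settings in a cluster configuration file might look like the following; the placeholder IDs, secret, and public key are assumptions that you would replace with the values from your own Azure app registration and subscription:

AZURE_TENANT_ID: <your-tenant-id>
AZURE_SUBSCRIPTION_ID: <your-subscription-id>
AZURE_CLIENT_ID: <your-client-id>
AZURE_CLIENT_SECRET: <your-client-secret>
AZURE_LOCATION: eastus
AZURE_SSH_PUBLIC_KEY_B64: <your-base64-encoded-ssh-public-key>
AZURE_CONTROL_PLANE_MACHINE_TYPE: Standard_D2s_v3
AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3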

Configuration Value Precedence

When the Tanzu CLI creates a cluster, it reads in values for the variables listed in this topic from multiple sources. If those sources conflict, it resolves conflicts in the following order of descending precedence:

Processing layers, ordered by descending precedence Source Examples
1. Management cluster configuration variables set in the installer interface Entered in the installer interface launched by the --ui option, and written to cluster configuration files. File location defaults to ~/.config/tanzu/tkg/clusterconfigs/. Worker Node Instance Type dropdown with Standard_D2s_v3 selected
2. Cluster configuration variables set in your local environment Set in shell. export AZURE_NODE_MACHINE_TYPE=Standard_D2s_v3
3. Cluster configuration variables set in the Tanzu CLI, with tanzu config set env. Set in shell; saved in the global Tanzu CLI configuration file, ~/.config/tanzu/config.yaml. tanzu config set env.AZURE_NODE_MACHINE_TYPE Standard_D2s_v3
4. Cluster configuration variables set in the cluster configuration file Set in the file passed to the --file option of tanzu management-cluster create or tanzu cluster create. File defaults to ~/.config/tanzu/tkg/cluster-config.yaml. AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3
5. Factory default configuration values Set in providers/config_default.yaml. Do not modify this file. AZURE_NODE_MACHINE_TYPE: "Standard_D2s_v3"
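
For example, assume the same variable is set at three of these layers with different, purely illustrative values. The cluster uses Standard_D8s_v3, because the local environment variable (layer 2) takes precedence over the Tanzu CLI setting (layer 3) and the cluster configuration file (layer 4):

export AZURE_NODE_MACHINE_TYPE=Standard_D8s_v3                 # layer 2: local environment variable
tanzu config set env.AZURE_NODE_MACHINE_TYPE Standard_D4s_v3   # layer 3: Tanzu CLI configuration
AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3                       # layer 4: cluster configuration file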
