This topic explains how to use a flat configuration file or Kubernetes-style object spec to configure a Tanzu Kubernetes Grid (TKG) workload cluster before deploying it to vSphere with a standalone management cluster. To configure a workload cluster for deployment to vSphere with Tanzu, see vSphere with Supervisor Cluster Configuration Files.
For general information about how to configure workload clusters by using configuration files and object specs, see Configuration Files and Object Specs.
To use vSphere-specific workload cluster features that require some configuration outside of the cluster’s configuration file or object spec, see Clusters on vSphere.
To configure a workload cluster before deploying it to vSphere, you create a cluster configuration file or a Kubernetes-style object spec file. When you pass either of these files to the -f option of tanzu cluster create, the Tanzu CLI uses the configuration information defined in the file to connect to your vSphere account and create the resources that the cluster will use. For example, you can specify standard sizes for the control plane and worker node VMs, or configure the CPU, memory, and disk sizes for control plane and worker nodes explicitly. If you use custom image templates, you can identify which template to use to create node VMs.
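For example, a minimal sizing excerpt from a configuration file might look like the following; the values shown are illustrative, not recommendations:
CONTROLPLANE_SIZE: large
WORKER_SIZE: extra-large
#! Or, to size the worker nodes explicitly instead:
# VSPHERE_WORKER_NUM_CPUS: 4
# VSPHERE_WORKER_DISK_GIB: 80
# VSPHERE_WORKER_MEM_MIB: 8192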
For the full list of options that you must specify when deploying workload clusters to vSphere, see the Configuration File Variable Reference.
To create a cluster configuration file, you can use the template in Workload Cluster Template below. After creating the configuration file, proceed to Create Workload Clusters.
The template below includes all of the options that are relevant to deploying workload clusters on vSphere. You can copy this template and update it to deploy workload clusters to vSphere.
Mandatory options are uncommented. Optional settings are commented out. Default values are included where applicable.
Except for the options described in the sections below the template, you configure the vSphere-specific variables identically for management clusters and workload clusters. For information about how to configure the variables, see Deploy Management Clusters from a Configuration File and Management Cluster Configuration for vSphere.
#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------
# CLUSTER_NAME:
CLUSTER_PLAN: dev
NAMESPACE: default
# CLUSTER_API_SERVER_PORT: # For deployments without NSX Advanced Load Balancer
CNI: antrea
#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------
# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:
# VSPHERE_NUM_CPUS: 2
# VSPHERE_DISK_GIB: 40
# VSPHERE_MEM_MIB: 4096
# VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
# VSPHERE_CONTROL_PLANE_DISK_GIB: 40
# VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
# VSPHERE_WORKER_NUM_CPUS: 2
# VSPHERE_WORKER_DISK_GIB: 40
# VSPHERE_WORKER_MEM_MIB: 4096
# CONTROL_PLANE_MACHINE_COUNT:
# WORKER_MACHINE_COUNT:
# WORKER_MACHINE_COUNT_0:
# WORKER_MACHINE_COUNT_1:
# WORKER_MACHINE_COUNT_2:
#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------
# VSPHERE_CLONE_MODE: "fullClone"
VSPHERE_NETWORK: VM Network
# VSPHERE_TEMPLATE:
# VSPHERE_TEMPLATE_MOID:
# IS_WINDOWS_WORKLOAD_CLUSTER: false
# VIP_NETWORK_INTERFACE: "eth0"
VSPHERE_SSH_AUTHORIZED_KEY:
VSPHERE_USERNAME:
VSPHERE_PASSWORD:
# VSPHERE_REGION:
# VSPHERE_ZONE:
# VSPHERE_AZ_0:
# VSPHERE_AZ_1:
# VSPHERE_AZ_2:
# VSPHERE_AZ_CONTROL_PLANE_MATCHING_LABELS:
VSPHERE_SERVER:
VSPHERE_DATACENTER:
VSPHERE_RESOURCE_POOL:
VSPHERE_DATASTORE:
VSPHERE_FOLDER:
# VSPHERE_MTU:
# VSPHERE_STORAGE_POLICY_ID:
# VSPHERE_WORKER_PCI_DEVICES:
# VSPHERE_CONTROL_PLANE_PCI_DEVICES:
# VSPHERE_IGNORE_PCI_DEVICES_ALLOW_LIST:
# VSPHERE_CONTROL_PLANE_CUSTOM_VMX_KEYS:
# VSPHERE_WORKER_CUSTOM_VMX_KEYS:
# WORKER_ROLLOUT_STRATEGY: "RollingUpdate"
# VSPHERE_CONTROL_PLANE_HARDWARE_VERSION:
# VSPHERE_WORKER_HARDWARE_VERSION:
VSPHERE_TLS_THUMBPRINT:
VSPHERE_INSECURE: false
# VSPHERE_CONTROL_PLANE_ENDPOINT: # Required for Kube-Vip
# VSPHERE_CONTROL_PLANE_ENDPOINT_PORT: 6443
# VSPHERE_ADDITIONAL_FQDN:
AVI_CONTROL_PLANE_HA_PROVIDER: false
#! ---------------------------------------------------------------------
#! NSX specific configuration for enabling NSX routable pods
#! ---------------------------------------------------------------------
# NSXT_POD_ROUTING_ENABLED: false
# NSXT_ROUTER_PATH: ""
# NSXT_USERNAME: ""
# NSXT_PASSWORD: ""
# NSXT_MANAGER_HOST: ""
# NSXT_ALLOW_UNVERIFIED_SSL: false
# NSXT_REMOTE_AUTH: false
# NSXT_VMC_ACCESS_TOKEN: ""
# NSXT_VMC_AUTH_HOST: ""
# NSXT_CLIENT_CERT_KEY_DATA: ""
# NSXT_CLIENT_CERT_DATA: ""
# NSXT_ROOT_CA_DATA: ""
# NSXT_SECRET_NAME: "cloud-provider-vsphere-nsxt-credentials"
# NSXT_SECRET_NAMESPACE: "kube-system"
#! ---------------------------------------------------------------------
#! Common configuration
#! ---------------------------------------------------------------------
# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""
# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""
# TKG_PROXY_CA_CERT: ""
ENABLE_AUDIT_LOGGING: false
ENABLE_DEFAULT_STORAGE_CLASS: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""
#! ---------------------------------------------------------------------
#! Autoscaler configuration
#! ---------------------------------------------------------------------
ENABLE_AUTOSCALER: false
# AUTOSCALER_MAX_NODES_TOTAL: "0"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: "10m"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: "10s"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: "3m"
# AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: "10m"
# AUTOSCALER_MAX_NODE_PROVISION_TIME: "15m"
# AUTOSCALER_MIN_SIZE_0:
# AUTOSCALER_MAX_SIZE_0:
# AUTOSCALER_MIN_SIZE_1:
# AUTOSCALER_MAX_SIZE_1:
# AUTOSCALER_MIN_SIZE_2:
# AUTOSCALER_MAX_SIZE_2:
#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------
# ANTREA_NO_SNAT: false
# ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_EGRESS_EXCEPT_CIDRS: ""
# ANTREA_NODEPORTLOCAL_ENABLED: true
# ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000
# ANTREA_PROXY: true
# ANTREA_PROXY_ALL: false
# ANTREA_PROXY_NODEPORT_ADDRS: ""
# ANTREA_PROXY_SKIP_SERVICES: ""
# ANTREA_PROXY_LOAD_BALANCER_IPS: false
# ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.flow-aggregator.svc:4739:tls"
# ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s"
# ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "30s"
# ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s"
# ANTREA_KUBE_APISERVER_OVERRIDE:
# ANTREA_TRANSPORT_INTERFACE:
# ANTREA_TRANSPORT_INTERFACE_CIDRS: ""
# ANTREA_MULTICAST_INTERFACES: ""
# ANTREA_MULTICAST_IGMPQUERY_INTERVAL: "125s"
# ANTREA_TUNNEL_TYPE: geneve
# ANTREA_ENABLE_USAGE_REPORTING: false
# ANTREA_ENABLE_BRIDGING_MODE: false
# ANTREA_DISABLE_TXCHECKSUM_OFFLOAD: false
# ANTREA_DNS_SERVER_OVERRIDE: ""
# ANTREA_MULTICLUSTER_ENABLE: false
# ANTREA_MULTICLUSTER_NAMESPACE: ""
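After you have filled in the template, you pass the file to tanzu cluster create as described in Create Workload Clusters; the cluster and file names below are placeholders:
tanzu cluster create my-vsphere-cluster --file my-vsphere-cluster-config.yaml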
If you are deploying the cluster to vSphere and using the default Kube-Vip load balancer for the cluster's control plane API, you must specify its endpoint by setting VSPHERE_CONTROL_PLANE_ENDPOINT. If you are using NSX Advanced Load Balancer (ALB), do not set VSPHERE_CONTROL_PLANE_ENDPOINT unless you need the control plane endpoint to be a specific address. If you do, use a static address within the NSX ALB IPAM Profile's VIP Network range that you have manually added to the Static IP pool.
No two clusters, including any management cluster and workload cluster, can have the same VSPHERE_CONTROL_PLANE_ENDPOINT address.
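For example, a Kube-Vip configuration might include the following settings; the endpoint address is a placeholder that you would replace with an unused static address in your network:
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.10.10.100
VSPHERE_CONTROL_PLANE_ENDPOINT_PORT: 6443
AVI_CONTROL_PLANE_HA_PROVIDER: false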
You can use the Tanzu CLI to convert a cluster configuration file into a Kubernetes-style object spec file for a class-based workload cluster without deploying the cluster:
To generate an object spec file for every class-based cluster that you create with tanzu cluster create, ensure that the auto-apply-generated-clusterclass-based-configuration feature is set to false in the configuration of the Tanzu CLI. This feature is set to false by default. When auto-apply-generated-clusterclass-based-configuration is set to false and you run tanzu cluster create with the --file flag, the command converts your cluster configuration file into an object spec file and exits without creating the cluster. After reviewing the configuration, you re-run tanzu cluster create with the object spec file that the Tanzu CLI generated. If you have changed this setting and need to restore it to false, run:
tanzu config set features.cluster.auto-apply-generated-clusterclass-based-configuration false
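Conversely, if you want tanzu cluster create to apply the generated object spec and create the cluster in a single step, you can set the same feature to true:
tanzu config set features.cluster.auto-apply-generated-clusterclass-based-configuration true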
To generate an object spec file for a single cluster, pass the --dry-run option to tanzu cluster create and save the output to a file. Use the same --file and other options that you would use if you were creating the cluster, for example:
tanzu cluster create my-cluster --file my-cluster-config.yaml --dry-run > my-cluster-spec.yaml
You can then use this object spec to deploy a cluster as described in Create a Class-Based Cluster from the Object Spec.
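For example, to create the cluster from the object spec generated in the previous step, you might run the following; the file name matches the earlier example:
tanzu cluster create --file my-cluster-spec.yaml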
For an example object spec file, see Example Cluster Object and Its Subordinate Objects.
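In broad strokes, a class-based Cluster object spec is a Cluster API Cluster resource whose spec.topology selects a cluster class and Kubernetes version. The following is an illustrative sketch only; the class, version, and other values are assumptions, so refer to the example in the linked topic for an authoritative spec:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["100.96.0.0/11"]
    services:
      cidrBlocks: ["100.64.0.0/13"]
  topology:
    class: tkg-vsphere-default-v1.0.0  # cluster class; name varies by TKG version
    version: v1.25.7+vmware.2          # illustrative Kubernetes version
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: tkg-worker
        name: md-0
        replicas: 1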
Proceed to Create Workload Clusters. After you deploy a workload cluster to vSphere, you must configure its node DHCP reservations and endpoint DNS as described in Configure Node DHCP Reservations and Endpoint DNS Record (vSphere Only).