This topic explains how to use a flat configuration file or Kubernetes-style object spec to configure a Tanzu Kubernetes Grid (TKG) workload cluster before deploying it to vSphere 8 with Tanzu. To configure a workload cluster for deployment to vSphere with a standalone management cluster, see vSphere with Standalone Management Cluster Configuration Files.
For general information about how to configure workload clusters by using configuration files and object specs, see Configuration Files and Object Specs.
To use vSphere-specific workload cluster features that require some configuration outside of the cluster’s configuration file or object spec, see Clusters on vSphere.
To configure a workload cluster before deploying it to vSphere with Tanzu, you create a Kubernetes-style object spec file if you are configuring a class-based cluster, or a cluster configuration file if you are configuring a TKC cluster. When you pass either of these files to the `-f` option of `tanzu cluster create`, the Tanzu CLI uses the configuration information defined in the file to connect to your vSphere account and create the resources that the cluster will use.
To configure either type of cluster, follow the sections below. For information about these cluster types, see Workload Cluster Types.
To configure a workload cluster for deployment to vSphere 8 with Tanzu:

1. Create or adapt a `Cluster` object spec. The vSphere 8 documentation has example `Cluster` object specs to work from.
2. Set VM types, scale, and other basic cluster configurations in the `topology` block of the spec file. For information about the `topology` block, see Class-Based Cluster Object and Topology Structure and ClusterClass Topology Variables below.
3. (Optional) To customize attributes that are not settable in the `Cluster` object itself, for example, one-time container network interface settings in the cluster infrastructure, see Configure One-Time Infrastructure Settings.
The object properties for a `Cluster` object with type `tanzukubernetescluster` are as follows. The configurable settings are the ones under `spec.topology`. See ClusterClass Topology Variables for the variables that you can configure:
```
spec:
  clusterNetwork
    apiServerPort
    services
      cidrBlocks
    pods
      cidrBlocks
    serviceDomain
  controlPlaneEndpoint
    host
    port
  topology
    class
    version
    rolloutAfter
    controlPlane
      metadata
      replicas
      nodeDrainTimeout
      nodeDeletionTimeout
      machineHealthCheck
        maxUnhealthy
        nodeStartupTimeout
        unhealthyConditions
    workers
      machineDeployments
        metadata
        - class
          name
          failureDomain
          replicas
          nodeDrainTimeout
          nodeDeletionTimeout
          machineHealthCheck
            maxUnhealthy
            nodeStartupTimeout
            unhealthyConditions
          variables
            name
            value
    variables
      name
      value
```
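For orientation, the following is a minimal sketch of a class-based `Cluster` object spec that uses this structure. It is illustrative only: the cluster name, namespace, TKR version, VM class, and storage class shown here are hypothetical placeholders, so substitute values that exist in your environment.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster              # hypothetical cluster name
  namespace: my-namespace       # hypothetical vSphere Namespace on your Supervisor
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["100.64.0.0/13"]
    pods:
      cidrBlocks: ["100.96.0.0/11"]
    serviceDomain: cluster.local
  topology:
    class: tanzukubernetescluster        # the default ClusterClass on vSphere with Tanzu
    version: v1.26.5+vmware.2-tkg.1      # example TKR version; use one available in your environment
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
      - class: node-pool
        name: node-pool-1                # hypothetical node pool name
        replicas: 3
    variables:
    - name: vmClass
      value: guaranteed-medium           # hypothetical VM class defined in your environment
    - name: storageClass
      value: my-storage-policy           # hypothetical storage class available in the namespace
```

You would pass a file like this to the `-f` option of `tanzu cluster create`.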
These fields are set in the `Cluster` object type specification: cluster_types.go. The `json:` setting for each field indicates whether the field is optional; optional fields have the `omitempty` setting. The `Topology` struct contains `*Workers` in the type spec, so `workers` is indented under `topology` in the object spec.

The `class` and `variables` options are defined in the `Cluster` object's cluster class, which is set as the cluster's `spec.topology.class` value. For example, on vSphere with Tanzu, this is a `ClusterClass` object named `tanzukubernetescluster`, which is different from a `TanzuKubernetesCluster` object, as explained in Workload Cluster Types.
Configurable `variables` include `vmClass`, `storageClass`, `proxy`, `nodeLabels`, `extensionCert`, and many others, as listed in ClusterClass Topology Variables below. These variables configure and override settings in objects that underlie the cluster object, such as `KubeadmConfig` and `Machine` objects.
The `tanzukubernetescluster` cluster class, the default `ClusterClass` for TKG on vSphere with Tanzu workload clusters, supports the following variables, set in `topology.variables` and `topology.workers.machineDeployments.variables`. Variable settings specific to machine deployments, such as node pools, override global settings; see the sketch after the variable list below.
These variables configure and override settings in objects that underlie the cluster object, such as the `vmClass`, `storageClass`, and `proxy` settings in `KubeadmConfig` and `Machine` objects. This enables users to configure a cluster completely within the `Cluster` object spec, without having to edit lower-level object specs:
clusterEncryptionConfigYaml
controlPlaneVolumes
defaultRegistrySecret
defaultStorageClass
extensionCert
nodePoolLabels
nodePoolTaints
nodePoolVolumes
ntp
proxy
storageClass
storageClasses
TKR_DATA
trust
user
vmClass
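As an illustration of the precedence rule described above, the following sketch sets `vmClass` globally and overrides it for a single node pool. All names and values are hypothetical placeholders, and the `variables.overrides` nesting under a machine deployment follows the Cluster API v1beta1 convention, so confirm it against the API version in your environment.

```yaml
spec:
  topology:
    class: tanzukubernetescluster
    version: v1.26.5+vmware.2-tkg.1      # example TKR version; use one from your environment
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
      - class: node-pool
        name: node-pool-1
        replicas: 3
        variables:
          overrides:                     # per-node-pool settings override the global ones
          - name: vmClass
            value: guaranteed-large      # hypothetical VM class for this node pool only
    variables:                           # global settings for the whole cluster
    - name: vmClass
      value: guaranteed-medium           # hypothetical default VM class
    - name: storageClass
      value: my-storage-policy           # hypothetical storage class
```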
The following topics in the vSphere 8 documentation describe how to reconfigure a running cluster by changing its `storageClass` and `vmClass` settings.
To create a cluster configuration file for a TKC (legacy) workload cluster on vSphere 8, you can copy an existing configuration file for a previous deployment to vSphere with Tanzu and update it. Alternatively, you can create a file from scratch by using an empty template.
To configure a workload cluster deployed by a vSphere with Tanzu Supervisor, you set variables to define the storage classes, VM classes, service domain, namespace, and other required values with which to create your cluster. The following table lists the variables that you can include in the configuration file for a TKC-based cluster. Alternatively, you can set them as local environment variables.
**Required variables**

| Variable | Value type or example | Description |
|---|---|---|
| `INFRASTRUCTURE_PROVIDER` | `tkg-service-vsphere` | Always `tkg-service-vsphere` for `TanzuKubernetesCluster` objects on vSphere with Tanzu. |
| `CLUSTER_PLAN` | `dev`, `prod`, or a custom plan | Sets node counts. |
| `CLUSTER_CIDR` | CIDR range | The CIDR range to use for pods. The recommended range is `100.96.0.0/11`. Change this value only if the recommended range is unavailable. |
| `SERVICE_CIDR` | CIDR range | The CIDR range to use for the Kubernetes services. The recommended range is `100.64.0.0/13`. Change this value only if the recommended range is unavailable. |
| `SERVICE_DOMAIN` | Domain | For example, `my.example.com`, or `cluster.local` if there is no DNS. If you are going to assign FQDNs to the nodes, DNS lookup is required. |
| `CONTROL_PLANE_VM_CLASS` | A standard VM class for vSphere with Tanzu, for example `guaranteed-large`. See Using Virtual Machine Classes with TKG Clusters on Supervisor in the vSphere with Tanzu documentation. | VM class for control plane nodes. |
| `WORKER_VM_CLASS` | A standard VM class for vSphere with Tanzu, as above. | VM class for worker nodes. |

**Optional variables**

| Variable | Value type or example | Description |
|---|---|---|
| `CLUSTER_NAME` | String | Overridden by the `CLUSTER-NAME` argument passed to `tanzu cluster create`. This name must comply with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123, and must be 42 characters or less. **Note**: You must provide a unique name for all workload clusters across all namespaces. If you specify a cluster name that is in use in another namespace, cluster deployment fails with an error. |
| `NAMESPACE` | Namespace; defaults to `default` | The namespace in which to deploy the cluster. To find the namespace of the Supervisor, run `kubectl get namespaces`. |
| `CNI` | `antrea` or `calico`; defaults to `antrea` | Container networking interface for hosted workloads, either Antrea or Calico. |
| `CONTROL_PLANE_MACHINE_COUNT` | Integer; must be an odd number. Defaults to `1` for `dev` and `3` for `prod`, as set by `CLUSTER_PLAN`. | Deploy a workload cluster with more control plane nodes than the `dev` or `prod` plan default. |
| `WORKER_MACHINE_COUNT` | Integer; defaults set by `CLUSTER_PLAN`. | Deploy a workload cluster with more worker nodes than the `dev` or `prod` plan default. |
| `STORAGE_CLASSES` | Empty string `""` lets clusters use any storage classes in the namespace, or a comma-separated list of values from `kubectl get storageclasses`, for example `"SC-1,SC-2,SC-3"`. | Storage classes available for node customization. |
| `DEFAULT_STORAGE_CLASS` | Empty string `""` for no default, or a value from the CLI, as above. | Default storage class for control plane or workers. |
| `CONTROL_PLANE_STORAGE_CLASS` | Value returned from `kubectl get storageclasses` | Default storage class for control plane nodes. |
| `WORKER_STORAGE_CLASS` | Value returned from `kubectl get storageclasses` | Default storage class for worker nodes. |
| `NODE_POOL_0_NAME` | String | Node pool name, labels, and taints. A `TanzuKubernetesCluster` can have only one node pool. |
| `NODE_POOL_0_LABELS` | JSON list of strings, for example `["label1", "label2"]` | As above. |
| `NODE_POOL_0_TAINTS` | JSON list of key-value pair strings, for example `[{"key1": "value1"}, {"key2": "value2"}]` | As above. |
You can set the variables above by doing either of the following:

- Include them in the cluster configuration file passed to the Tanzu CLI `--file` option. For example:

  ```
  CONTROL_PLANE_VM_CLASS: guaranteed-large
  ```

- From the command line, set them as local environment variables by running `export` (on Linux and macOS) or `SET` (on Windows). For example:

  ```
  export CONTROL_PLANE_VM_CLASS=guaranteed-large
  ```
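Putting these together, a complete TKC cluster configuration file might look like the following sketch. All names and values here are illustrative placeholders drawn from the table above, so replace them with the namespace, VM classes, and storage classes that exist in your environment. You would then pass the file to `tanzu cluster create` with the `--file` option.

```yaml
# Hypothetical TKC cluster configuration file; adjust all values to your environment.
INFRASTRUCTURE_PROVIDER: tkg-service-vsphere
CLUSTER_NAME: my-tkc-cluster          # placeholder; must be unique across all namespaces
CLUSTER_PLAN: dev
NAMESPACE: my-namespace               # placeholder; a namespace from kubectl get namespaces
CNI: antrea
CONTROL_PLANE_VM_CLASS: guaranteed-large
WORKER_VM_CLASS: guaranteed-large
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 3
SERVICE_DOMAIN: cluster.local
SERVICE_CIDR: 100.64.0.0/13
CLUSTER_CIDR: 100.96.0.0/11
CONTROL_PLANE_STORAGE_CLASS: my-storage-policy   # placeholder; from kubectl get storageclasses
WORKER_STORAGE_CLASS: my-storage-policy          # placeholder; from kubectl get storageclasses
```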
> **Note**: If you want to configure unique proxy settings for a workload cluster, you can set `TKG_HTTP_PROXY`, `TKG_HTTPS_PROXY`, and `NO_PROXY` as environment variables and then use the Tanzu CLI to create the cluster. These variables take precedence over your existing proxy configuration in vSphere with Tanzu.
Proceed to Create Workload Clusters. After you deploy a workload cluster to vSphere, you must configure its node DHCP reservations and endpoint DNS as described in Configure Node DHCP Reservations and Endpoint DNS Record (vSphere Only).