This topic explains where Tanzu Kubernetes (workload) cluster plan configuration values come from, and the order of precedence among their multiple sources. It also explains how you can customize the dev and prod plans for workload clusters on each cloud infrastructure, and how you can use ytt overlays to customize cluster plans and clusters, and create new custom plans, while preserving the original configuration code.
When the tanzu CLI creates a cluster, it combines configuration values from the following sources:
- Live input at invocation, entered on the command line or as environment variables
- The cluster configuration file ~/.config/tanzu/tkg/cluster-config.yaml, or another file passed to the CLI --file option
- Plan configuration files in ~/.config/tanzu/tkg/providers, as described in Plan Configuration Files below
- ytt overlays in ~/.config/tanzu/tkg/providers
Live input applies configuration values that are unique to each invocation, environment variables persist them over a terminal session, and configuration files and overlays persist them indefinitely. You can customize clusters through any of these sources, with recommendations and caveats described below.
See Configuration Value Precedence for how the tanzu CLI derives specific cluster configuration values from these multiple sources where they may conflict.
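For example, to use a dedicated configuration file instead of the default cluster-config.yaml, pass it to the CLI with --file. The cluster name and file path below are hypothetical:

# Create a workload cluster from a dedicated configuration file
tanzu cluster create my-workload-cluster --file ~/clusterconfigs/my-workload-cluster.yaml

# Where supported, --dry-run writes the generated manifest instead of creating the cluster
tanzu cluster create my-workload-cluster --file ~/clusterconfigs/my-workload-cluster.yaml --dry-run > my-workload-cluster-manifest.yaml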
The ~/.config/tanzu/tkg/providers directory contains workload cluster plan configuration files in the following subdirectories, based on the cloud infrastructure that deploys the clusters:
| Clusters deployed by… | ~/.config/tanzu/tkg/providers Directory |
|---|---|
| Management cluster on vSphere | /infrastructure-vsphere |
| vSphere with Tanzu Supervisor Cluster | /infrastructure-tkg-service-vsphere |
| Management cluster on AWS | /infrastructure-aws |
| Management cluster on Azure | /infrastructure-azure |
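For example, the following listing (a sketch; the versioned subdirectory name varies by release) shows where the per-plan definition files for the vSphere provider live:

# List the plan definition files under the vSphere provider directory
ls ~/.config/tanzu/tkg/providers/infrastructure-vsphere/*/cluster-template-definition-*.yaml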
These plan configuration files are named cluster-template-definition-PLAN.yaml. The configuration values for each plan come from these files and from the files that they list under spec.paths:
- Files that ship with the tanzu CLI
- Custom files that users create and add to the spec.paths list
To customize cluster plans via YAML, you edit files under ~/.config/tanzu/tkg/providers/, but you should avoid changing other files.
Files to Edit
Workload cluster plan configuration file paths follow the form ~/.config/tanzu/tkg/providers/infrastructure-INFRASTRUCTURE/VERSION/cluster-template-definition-PLAN.yaml, where:
- INFRASTRUCTURE is vsphere, aws, or azure.
- VERSION is the version of the Cluster API Provider module that the configuration uses.
- PLAN is dev, prod, or a custom plan as created in the New nginx Workload Plan example.
Each plan configuration file has a spec.paths section that lists source files and ytt directories that configure the cluster plan. For example:
spec:
  paths:
    - path: providers/infrastructure-aws/v0.5.5/ytt
    - path: providers/infrastructure-aws/ytt
    - path: providers/ytt
    - path: bom
      filemark: text-plain
    - path: providers/config_default.yaml
These files are processed in the order listed. If the same configuration field is set in multiple files, the last-processed setting is the one that the tanzu CLI uses.
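As a hypothetical illustration (both file paths and the variable name are invented for this sketch), if two files in the spec.paths list set the same field, the value from the file listed later is used:

# providers/infrastructure-aws/custom/settings-a.yaml (listed earlier in spec.paths)
EXAMPLE_SETTING: value-a

# providers/infrastructure-aws/custom/settings-b.yaml (listed later; the tanzu CLI uses this value)
EXAMPLE_SETTING: value-b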
To customize your cluster configuration, you can:
- Create new configuration files and add them to the spec.paths list (see the sketch after this list).
- Modify ytt overlay files, as described in ytt Overlays below.
- Customize the base configuration source code with ytt.
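The sketch below shows the first option, assuming a hypothetical custom file named my-custom-settings.yaml added to the AWS plan definition from the earlier example:

spec:
  paths:
    - path: providers/infrastructure-aws/v0.5.5/ytt
    - path: providers/infrastructure-aws/ytt
    # Hypothetical custom file added to the plan's configuration sources
    - path: providers/infrastructure-aws/my-custom-settings.yaml
    - path: providers/ytt
    - path: bom
      filemark: text-plain
    - path: providers/config_default.yaml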
Files to Leave Alone
VMware discourages changing the following files under ~/.config/tanzu/tkg/providers, except as directed:
- base-template.yaml files, in ytt directories. Instead of changing these base template files, customize them by creating or modifying the overlay.yaml file in the same ytt directory.
- ~/.config/tanzu/tkg/providers/config_default.yaml - Append only. You can append customizations in the User Customizations section at the end; otherwise, customize cluster configurations in files that you pass to the --file option of tanzu cluster create and tanzu mc create.
- ~/.config/tanzu/tkg/providers/config.yaml. The tanzu CLI uses this file as a reference for all providers present in the /providers directory, and their default versions.
ytt Overlays
Tanzu Kubernetes Grid supports customizing workload cluster configurations by adding or modifying configuration files directly, but using ytt overlays instead lets you customize configurations at different scopes and manage multiple, modular configuration files, without destructively editing upstream and inherited configuration values.
Limitations:
- You can only use ytt overlays to modify workload clusters. Using ytt overlays to modify management clusters is not supported.
For more information, see Clusters and Cluster Plans in Customizing Clusters, Plans, and Packages with ytt Overlays.
The following examples show how to use configuration overlay files to customize workload clusters and create a new cluster plan.
For an overlay that customizes cluster certificates, see Trust Custom CA Certificates on Cluster Nodes in the Manage Cluster Secrets and Certificates topic.
This example adds one or more custom nameservers to worker and control plane nodes in Tanzu Kubernetes Grid clusters on vSphere. It disables DNS resolution from DHCP so that the custom nameservers take precedence.
Two overlay files apply to control plane nodes, and the other two apply to worker nodes. You add all four files into your ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory.
The last line of each overlay-dns file sets the nameserver addresses. The code below shows a single nameserver, but you can specify multiple nameservers as a list, for example nameservers: ["1.2.3.4","5.6.7.8"].
File vsphere-overlay-dns-control-plane.yaml:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate", "metadata": {"name": data.values.CLUSTER_NAME+"-control-plane"}})
---
spec:
template:
spec:
network:
devices:
#@overlay/match by=overlay.all, expects="1+"
-
#@overlay/match missing_ok=True
nameservers: ["8.8.8.8"]
File vsphere-overlay-dhcp-control-plane.yaml:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
kubeadmConfigSpec:
preKubeadmCommands:
#! disable dns from being emitted by dhcp client
#@overlay/append
- echo '[DHCPv4]' >> /etc/systemd/network/10-id0.network
#@overlay/append
- echo 'UseDNS=no' >> /etc/systemd/network/10-id0.network
#@overlay/append
- '/usr/bin/systemctl restart systemd-networkd 2>/dev/null'
File vsphere-overlay-dns-workers.yaml:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate", "metadata": {"name": data.values.CLUSTER_NAME+"-worker"}})
---
spec:
template:
spec:
network:
devices:
#@overlay/match by=overlay.all, expects="1+"
-
#@overlay/match missing_ok=True
nameservers: ["8.8.8.8"]
File vsphere-overlay-dhcp-workers.yaml:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}),expects="1+"
---
spec:
template:
spec:
#@overlay/match missing_ok=True
preKubeadmCommands:
#! disable dns from being emitted by dhcp client
#@overlay/append
- echo '[DHCPv4]' >> /etc/systemd/network/10-id0.network
#@overlay/append
- echo 'UseDNS=no' >> /etc/systemd/network/10-id0.network
#@overlay/append
- '/usr/bin/systemctl restart systemd-networkd 2>/dev/null'
To create workload clusters on vSphere with NSX Advanced Load Balancer that are configured with VSPHERE_CONTROL_PLANE_ENDPOINT set to an FQDN rather than an IP address, create an overlay file in your ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory, such as fqdn-cert-api.yaml, with the following content:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"}), expects="1+"
#@overlay/match-child-defaults missing_ok=True
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
name: #@ "{}-control-plane".format(data.values.CLUSTER_NAME)
spec:
kubeadmConfigSpec:
clusterConfiguration:
apiServer:
certSANs:
- CONTROLPLANE-FQDN
Where CONTROLPLANE-FQDN is the FQDN for your workload cluster control plane.
With the overlay in place, create the cluster.
After creating the cluster, follow the procedure Configure DHCP Reservations for the Control Plane Endpoint and All Nodes (vSphere Only) to make the DHCP reservation permanent.
Before creating each additional cluster with an FQDN endpoint, modify the CONTROLPLANE-FQDN setting in the overlay as needed.
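A minimal sketch of the matching cluster configuration setting, with a hypothetical FQDN that must match the CONTROLPLANE-FQDN value in the overlay:

# In the cluster configuration file passed to tanzu cluster create --file
VSPHERE_CONTROL_PLANE_ENDPOINT: cp.workload-1.example.com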
In modern Linux systems, attempts to resolve hostnames that have a domain suffix that ends in .local can fail. This issue occurs because systemd-resolved, the DNS resolver in most Linux distributions, attempts to resolve the .local domain via multicast DNS (mDNS), not via standard DNS servers.
To work around this known issue, add a searchDomains line with your local domain suffix at the end of the vsphere-overlay-dns-control-plane.yaml and vsphere-overlay-dns-workers.yaml files in the ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory.
Example for the vsphere-overlay-dns-control-plane.yaml file:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate", "metadata": {"name": data.values.CLUSTER_NAME+"-control-plane"}})
---
spec:
template:
spec:
network:
devices:
#@overlay/match by=overlay.all, expects="1+"
-
#@overlay/match missing_ok=True
nameservers: ["8.8.8.8"]
searchDomains: ["corp.local"]
Example for the vsphere-overlay-dns-workers.yaml file:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate", "metadata": {"name": data.values.CLUSTER_NAME+"-worker"}})
---
spec:
template:
spec:
network:
devices:
#@overlay/match by=overlay.all, expects="1+"
-
#@overlay/match missing_ok=True
nameservers: ["8.8.8.8"]
searchDomains: ["corp.local"]
TLS authentication within Tanzu Kubernetes Grid clusters requires precise time synchronization. In most DHCP-based environments, you can configure synchronization using DHCP Option 42.
If you are deploying clusters in a vSphere environment that lacks DHCP Option 42, use overlay code as follows to have Tanzu Kubernetes Grid create clusters with NTP servers that maintain synchronization:
In the ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory, create a new .yaml file or augment an existing overlay file with the following code, changing the example time.google.com to the desired NTP server:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
kubeadmConfigSpec:
#@overlay/match missing_ok=True
ntp:
#@overlay/match missing_ok=True
enabled: true
#@overlay/match missing_ok=True
servers:
- time.google.com
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}),expects="1+"
---
spec:
template:
spec:
#@overlay/match missing_ok=True
ntp:
#@overlay/match missing_ok=True
enabled: true
#@overlay/match missing_ok=True
servers:
- time.google.com
This overlay assigns persistent labels to cluster nodes when the cluster is created. This is useful because labels applied manually via kubectl do not persist through node replacement.
See Extra Variables in ytt Overlays.
In the ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory, create a new .yaml file or augment an existing overlay file with the following code.
For control plane node labels, configure both the initConfiguration and joinConfiguration sections so that labels are applied to the first node created and to all nodes that join after it:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
kubeadmConfigSpec:
initConfiguration:
nodeRegistration:
kubeletExtraArgs:
#@overlay/match missing_ok=True
node-labels: NODE-LABELS
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
#@overlay/match missing_ok=True
node-labels: NODE-LABELS
Where NODE-LABELS is a comma-separated list of label key/value strings that includes node-type=control-plane, for example "examplekey1=labelvalue1,examplekey2=labelvalue2".
For worker node labels:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}),expects="1+"
---
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
#@overlay/match missing_ok=True
node-labels: NODE-LABELS
Where NODE-LABELS is a comma-separated list of label key/value strings that includes node-type=worker, for example "examplekey1=labelvalue1,examplekey2=labelvalue2".
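After the cluster is created, you can verify the labels with a generic kubectl check (not part of the original procedure), run against the workload cluster:

# Show all node labels, including the ones applied through the overlay
kubectl get nodes --show-labels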
For an example overlay that disables the Bastion host for workload clusters on AWS, see Disable Bastion Server on AWS in the TKG Lab repository.
New nginx Workload Plan
This example adds and configures a new workload cluster plan nginx that runs an nginx server. It uses the Cluster Resource Set (CRS) to deploy the nginx server to vSphere clusters created with the vSphere Cluster API provider version v0.7.6.
In ~/.config/tanzu/tkg/providers/infrastructure-vsphere/v0.7.6/, add a new file cluster-template-definition-nginx.yaml with contents identical to the cluster-template-definition-dev.yaml and cluster-template-definition-prod.yaml files:
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TemplateDefinition
spec:
  paths:
    - path: providers/infrastructure-vsphere/v0.7.6/ytt
    - path: providers/infrastructure-vsphere/ytt
    - path: providers/ytt
    - path: bom
      filemark: text-plain
    - path: providers/config_default.yaml
The presence of this file creates a new plan, and the tanzu CLI parses its filename to create the option nginx to pass to tanzu cluster create --plan.
In ~/.config/tanzu/tkg/providers/ytt/04_user_customizations/, create a new file deploy_service.yaml containing:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@ load("@ytt:yaml", "yaml")
#@ def nginx_deployment():
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
#@ end
#@ if data.values.TKG_CLUSTER_ROLE == "workload" and data.values.CLUSTER_PLAN == "nginx":
---
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
name: #@ "{}-nginx-deployment".format(data.values.CLUSTER_NAME)
labels:
cluster.x-k8s.io/cluster-name: #@ data.values.CLUSTER_NAME
spec:
strategy: "ApplyOnce"
clusterSelector:
matchLabels:
tkg.tanzu.vmware.com/cluster-name: #@ data.values.CLUSTER_NAME
resources:
- name: #@ "{}-nginx-deployment".format(data.values.CLUSTER_NAME)
kind: ConfigMap
---
apiVersion: v1
kind: ConfigMap
metadata:
name: #@ "{}-nginx-deployment".format(data.values.CLUSTER_NAME)
type: addons.cluster.x-k8s.io/resource-set
stringData:
value: #@ yaml.encode(nginx_deployment())
#@ end
In this file, the conditional #@ if data.values.TKG_CLUSTER_ROLE == "workload" and data.values.CLUSTER_PLAN == "nginx": applies the overlay that follows to workload clusters with the plan nginx.
If the 04_user_customizations directory does not already exist under the top-level ytt directory, create it.
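With both files in place, the following is a sketch of creating and verifying a cluster that uses the new plan. The cluster name and configuration file path are hypothetical, and the kubectl checks are generic rather than prescribed by this procedure:

# Create a workload cluster with the custom nginx plan
tanzu cluster create my-nginx-cluster --plan nginx --file ~/clusterconfigs/my-nginx-cluster.yaml

# On the management cluster: confirm the ClusterResourceSet and its ConfigMap were created
kubectl get clusterresourcesets,configmaps

# On the workload cluster: confirm the nginx deployment is running
kubectl get deployment nginx-deployment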
Extra Variables in ytt Overlays
Overlays apply their configurations to all newly-created clusters. To customize clusters individually with different overlay settings, you can combine overlays with custom variables that you add to the cluster configuration.
This example shows how to use extra cluster configuration variables to set custom node labels for different clusters.
Note: After upgrading the Tanzu CLI, you need to re-apply these changes to your new ~/.config/tanzu/tkg/providers directory. The previous version is renamed as a timestamped backup.
Adding a WORKER_NODE_LABELS variable to the default configuration and cluster configuration files enables new clusters to be created with different worker node labels.
Edit ~/.config/tanzu/tkg/providers/config_default.yaml and add the custom variable default at the bottom:
#! ---------------------------------------------------------------------
#! Custom variables
#! ---------------------------------------------------------------------
WORKER_NODE_LABELS: ""
Setting this default prevents unwanted labels from being added to a cluster if its configuration file lacks this variable.
Add a line near the end of ~/.config/tanzu/tkg/providers/ytt/lib/config_variable_association.star, above the final closing bracket, that associates the new variable with a provider type:
"WORKER_NODE_LABELS": ["vsphere", "aws", "azure"],
}
end
In the ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory, create a new .yaml file or augment an existing overlay file with the following code, which adds the WORKER_NODE_LABELS variable as a data value:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}),expects="1+"
---
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
#@overlay/match missing_ok=True
node-labels: #@ data.values.WORKER_NODE_LABELS
For any new workload cluster, you can now set WORKER_NODE_LABELS in its cluster configuration variable file to apply its value as a label to every worker node.
WORKER_NODE_LABELS: "workload-classification=production"
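To confirm the result on the running cluster, a generic kubectl check (not part of the original procedure) displays the label value as a column:

# Show the workload-classification label value for each node
kubectl get nodes -L workload-classification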
ytt overlays only apply to new workload clusters created using the Tanzu CLI. To modify the resources of a cluster that already exists, you need to modify them in the management cluster and push them out to the workload cluster as described below.
This procedure gives existing clusters the same modification that the Configuring NTP without DHCP Option 42 (vSphere) overlay applies to new clusters.
Modifying the NTP settings on an existing cluster requires:
- Creating a new KubeadmConfigTemplate resource to reflect the new settings
- Updating the MachineDeployment for the worker nodes to point to the new resource
- Updating the KubeadmControlPlane resource to update the control plane nodes
To modify NTP settings on an existing cluster, run the following from the management cluster, and from the namespace containing the cluster to be modified, named cluster1 in this example:
Create a new KubeadmConfigTemplate resource and update the MachineDeployment for each KubeadmConfigTemplate / MachineDeployment pair. For a prod plan cluster, you do this three times:
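To see which pairs exist before you start, you can list both resource types; this listing is a generic kubectl step rather than part of the original procedure:

# List the KubeadmConfigTemplate / MachineDeployment pairs in the current namespace
kubectl get kubeadmconfigtemplates,machinedeployments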
Create the new KubeadmConfigTemplate resource for the worker nodes.
Find the existing KubeadmConfigTemplate and export it to a yaml file for editing.
kubectl get KubeadmConfigTemplate
kubectl get KubeadmConfigTemplate cluster1-md-0 -o yaml > cluster1-md-0.yaml
Edit the exported file by adding an ntp section underneath the existing spec.template.spec section, and appending -v1 to the name field under metadata, assuming this is the first update:
metadata:
  ...
  name: cluster1-md-0-v1  # from cluster1-md-0
  ...
spec:
  template:
    spec:
      ...
      ntp:
        enabled: true
        servers:
          - time.google.com
Apply the updated yaml file to create the new resource.
kubectl apply -f cluster1-md-0.yaml
Update the MachineDeployment resource to point to the newly-created KubeadmConfigTemplate resource.
Find and edit the existing MachineDeployment resource for the cluster.
kubectl get MachineDeployment
kubectl edit MachineDeployment cluster1-md-0
Edit the spec.template.spec.bootstrap.configRef.name value to the new name of the KubeadmConfigTemplate resource.
spec:
  template:
    ...
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: cluster1-md-0-v1  # from cluster1-md-0
Save and exit the file, which triggers re-creation of the worker nodes.
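You can watch the replacement from the management cluster; this observation step is generic kubectl usage, not part of the original procedure:

# Watch old worker machines being deleted and new ones created
kubectl get machines -w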
Edit the KubeadmControlPlane resource for the control plane nodes to include the NTP servers.
Find and edit the existing KubeadmControlPlane resource for the cluster.
kubectl get KubeadmControlPlane
kubectl edit KubeadmControlPlane cluster1-control-plane
Edit the spec.kubeadmConfigSpec section by adding a new ntp section underneath it. Save and exit the file.
kubeadmConfigSpec:
  ntp:
    enabled: true
    servers:
      - time.google.com
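Saving the edit triggers a rolling update of the control plane machines. As a generic check (not part of the original procedure), you can watch the KubeadmControlPlane resource until its replicas report as updated and ready:

# Watch the control plane roll out the new configuration
kubectl get kubeadmcontrolplane cluster1-control-plane -w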