This topic describes how to use ytt overlays to configure legacy TKC and plan-based workload clusters deployed by Tanzu Kubernetes Grid (TKG), to set cluster and underlying object properties that are not settable by configuration variables in a cluster configuration file, as listed in Configure a Supervisor-Deployed Cluster with a Configuration File for TKC clusters or Configuration File Variable Reference for plan-based clusters.
For information about how to download and install ytt, see Install the Carvel Tools.
ytt overlays are not needed and not supported for class-based clusters, for which configurable properties are settable at a high level within the simplified Cluster object itself. If the Tanzu CLI detects ytt on your machine when you run tanzu cluster create for a class-based cluster, it outputs the error message It seems like you have done some customizations to the template overlays.
For advanced configuration of TKC and plan-based workload clusters, to set object properties that are not settable by cluster configuration variables, you can customize TKC, plan-based cluster, and cluster plan configuration files in the local ~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere directory of any bootstrap machine.
You can customize these configurations by adding or modifying configuration files directly, or by using ytt overlays.
Directly customizing configuration files is simpler, but if you are comfortable with ytt overlays, they let you customize configurations at different scopes and manage multiple, modular configuration files, without destructively editing upstream and inherited configuration values. ytt overlays only apply to new TKC and plan-based workload clusters created using the Tanzu CLI.
For more information about how various forms of cluster configuration work and take precedence, see About Legacy TKC and Plan-Based Cluster Configuration in About Tanzu Kubernetes Grid.
ytt Behavior and Conventions

Behaviors and conventions for ytt processing include:
Precedence: ytt traverses directories depth-first in filename alphabetical order, and overwrites duplicate settings as it proceeds. When there are duplicate definitions, the one that ytt processes last takes precedence.
Overlay Types: different ytt overlay types change or set different things:
Data values files set or modify configuration values without modifying the structures of objects. These include Bill of Materials (BoM) files and, by convention, files with data in their filenames.
Overlay files make changes to object structure definitions. By convention, these files have overlay in their filenames.
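For illustration only, a minimal sketch of the two types might look like the following. The file names and values are hypothetical and do not ship with Tanzu Kubernetes Grid; the data values file only sets values, while the overlay file changes object structure:

#! custom-data-values.yaml (data values file: sets or modifies values only)
#@data/values
---
CLUSTER_NAME: example-cluster

#! custom-overlay.yaml (overlay file: changes object structure definitions)
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate"}), expects="1+"
---
metadata:
  #@overlay/match missing_ok=True
  annotations:
    example.com/owner: platform-team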
For more information about ytt, including overlay examples and an interactive validator tool, see:
ytt > Interactive Playground

For TKC and plan-based clusters, the bootstrap machine's ~/.config/tanzu/tkg/providers/ directory includes ytt directories and overlay.yaml files at different levels, which lets you scope configuration settings at each level:

- Provider- and version-specific ytt directories. For example, ~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere/v1.1.0.
  - The base-template.yaml file contains all-caps placeholders such as "${CLUSTER_NAME}" and should not be edited.
  - The overlay.yaml file is tailored to overlay values into base-template.yaml.
- Provider-wide ytt directories. For example, ~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere/ytt.
- Top-level ytt directory, ~/.config/tanzu/tkg/providers/infrastructure-tkg-service-vsphere/ytt.
  - You can create a /04_user_customizations subdirectory for configurations that take precedence over any in lower-numbered ytt subdirectories.

Plan-Based Cluster ytt Overlays

This section contains overlays for customizing plan-based workload clusters deployed by a standalone management cluster and for creating new cluster plans.
Limitations:
- You can only use ytt overlays to modify workload clusters. Using ytt overlays to modify standalone management clusters is not supported.

The following examples show how to use configuration overlay files to customize workload clusters and create a new cluster plan.
For an overlay that customizes the trusted certificates in a cluster, see Configure Clusters with Multiple Trusted Registries in the Manage Cluster Secrets and Certificates topic.
This example adds one or more custom nameservers to worker and control plane nodes in legacy Tanzu Kubernetes Grid clusters on vSphere. It deactivates DNS resolution from DHCP so that the custom nameservers take precedence.
To configure custom nameservers in a class-based cluster, use the configuration variables CONTROL_PLANE_NODE_NAMESERVERS and WORKER_NODE_NAMESERVERS.
Two overlay files apply to control plane nodes, and the other two apply to worker nodes. You add all four files to your ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory.
The overlay files differ depending on whether your nodes are based on Ubuntu or Photon machine images, and you do not need the DHCP overlay files for Ubuntu.
One line in each overlay-dns file sets the nameserver addresses. The code below shows a single nameserver, but you can specify multiple nameservers as a list, for example nameservers: ["1.2.3.4","5.6.7.8"].
File vsphere-overlay-dns-control-plane.yaml:
Ubuntu:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate", "metadata": {"name": data.values.CLUSTER_NAME+"-control-plane"}})
---
spec:
template:
spec:
network:
devices:
#@overlay/match by=overlay.all, expects="1+"
-
#@overlay/match missing_ok=True
nameservers: ["8.8.8.8"]
#@overlay/match missing_ok=True
dhcp4Overrides:
useDNS: false
Photon:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate", "metadata": {"name": data.values.CLUSTER_NAME+"-control-plane"}})
---
spec:
template:
spec:
network:
devices:
#@overlay/match by=overlay.all, expects="1+"
-
#@overlay/match missing_ok=True
nameservers: ["8.8.8.8"]
File vsphere-overlay-dns-workers.yaml:
Ubuntu:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate", "metadata": {"name": data.values.CLUSTER_NAME+"-worker"}})
---
spec:
template:
spec:
network:
devices:
#@overlay/match by=overlay.all, expects="1+"
-
#@overlay/match missing_ok=True
nameservers: ["8.8.8.8"]
#@overlay/match missing_ok=True
dhcp4Overrides:
useDNS: false
Photon:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate", "metadata": {"name": data.values.CLUSTER_NAME+"-worker"}})
---
spec:
template:
spec:
network:
devices:
#@overlay/match by=overlay.all, expects="1+"
-
#@overlay/match missing_ok=True
nameservers: ["8.8.8.8"]
File vsphere-overlay-dhcp-control-plane.yaml (Photon only):
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
kubeadmConfigSpec:
preKubeadmCommands:
#! disable dns from being emitted by dhcp client
#@overlay/append
- echo '[DHCPv4]' >> /etc/systemd/network/10-cloud-init-eth0.network
#@overlay/append
- echo 'UseDNS=false' >> /etc/systemd/network/10-cloud-init-eth0.network
#@overlay/append
- '/usr/bin/systemctl restart systemd-networkd 2>/dev/null'
File vsphere-overlay-dhcp-workers.yaml (Photon only):
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}),expects="1+"
---
spec:
template:
spec:
#@overlay/match missing_ok=True
preKubeadmCommands:
#! disable dns from being emitted by dhcp client
#@overlay/append
- echo '[DHCPv4]' >> /etc/systemd/network/10-cloud-init-eth0.network
#@overlay/append
- echo 'UseDNS=false' >> /etc/systemd/network/10-cloud-init-eth0.network
#@overlay/append
- '/usr/bin/systemctl restart systemd-networkd 2>/dev/null'
To create workload clusters on vSphere with NSX Advanced Load Balancer that are configured with VSPHERE_CONTROL_PLANE_ENDPOINT set to an FQDN rather than an IP address, create an overlay file in your ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory, such as fqdn-cert-api.yaml, with the following content:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"}), expects="1+"
#@overlay/match-child-defaults missing_ok=True
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
name: #@ "{}-control-plane".format(data.values.CLUSTER_NAME)
spec:
kubeadmConfigSpec:
clusterConfiguration:
apiServer:
certSANs:
- CONTROLPLANE-FQDN
Where CONTROLPLANE-FQDN is the FQDN for your workload cluster control plane.
With the overlay in place, create the cluster.
After creating the cluster, follow the procedure Configure Node DHCP Reservations and Endpoint DNS Record (vSphere Only) to create a DNS record.
Before creating each additional cluster with an FQDN endpoint, modify the CONTROLPLANE-FQDN setting in the overlay as needed.
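For example, assuming the overlay's CONTROLPLANE-FQDN is set to cluster1-cp.example.com, the matching entry in that cluster's configuration file and the create command might look like this (the cluster name, file name, and domain name are hypothetical):

VSPHERE_CONTROL_PLANE_ENDPOINT: cluster1-cp.example.com

tanzu cluster create cluster1 --file cluster1-config.yaml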
In modern Linux systems, attempts to resolve hostnames that have a domain suffix that ends in .local can fail. This issue occurs because systemd-resolved, the DNS resolver in most Linux distributions, attempts to resolve the .local domain via multicast DNS (mDNS), not via standard DNS servers.
To configure .local domain resolution in a class-based cluster, use the configuration variables CONTROL_PLANE_NODE_SEARCH_DOMAINS and WORKER_NODE_SEARCH_DOMAINS.
To work around this known issue in legacy clusters, add a searchDomains line with your local domain suffix at the end of the vsphere-overlay-dns-control-plane.yaml and vsphere-overlay-dns-workers.yaml files in the ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory.
Example for the vsphere-overlay-dns-control-plane.yaml file:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate", "metadata": {"name": data.values.CLUSTER_NAME+"-control-plane"}})
---
spec:
template:
spec:
network:
devices:
#@overlay/match by=overlay.all, expects="1+"
-
#@overlay/match missing_ok=True
nameservers: ["8.8.8.8"]
searchDomains: ["corp.local"]
Example for the vsphere-overlay-dns-workers.yaml file:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"VSphereMachineTemplate", "metadata": {"name": data.values.CLUSTER_NAME+"-worker"}})
---
spec:
template:
spec:
network:
devices:
#@overlay/match by=overlay.all, expects="1+"
-
#@overlay/match missing_ok=True
nameservers: ["8.8.8.8"]
searchDomains: ["corp.local"]
TLS authentication within Tanzu Kubernetes Grid clusters requires precise time synchronization. In most DHCP-based environments, you can configure synchronization using DHCP Option 42.
If you are deploying legacy clusters in a vSphere environment that lacks DHCP Option 42, use the following overlay code to have Tanzu Kubernetes Grid create clusters with NTP servers that maintain synchronization.
To configure NTP in a class-based cluster, use the configuration variable NTP_SERVERS.
In the ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory, create a new .yaml file or augment an existing overlay file with the following code, changing the example time.google.com to the desired NTP server:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
kubeadmConfigSpec:
#@overlay/match missing_ok=True
ntp:
#@overlay/match missing_ok=True
enabled: true
#@overlay/match missing_ok=True
servers:
- time.google.com
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}),expects="1+"
---
spec:
template:
spec:
#@overlay/match missing_ok=True
ntp:
#@overlay/match missing_ok=True
enabled: true
#@overlay/match missing_ok=True
servers:
- time.google.com
This overlay assigns persistent labels to cluster nodes when the legacy cluster is created. This is useful because labels applied manually via kubectl do not persist through node replacement.
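For example, a label applied manually with kubectl, as in the following command with a hypothetical node name, disappears as soon as Cluster API replaces that node:

kubectl label node cluster1-md-0-5c8b9 examplekey1=labelvalue1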
To set node labels that differ from cluster to cluster, see Extra Variables in ytt Overlays.
To configure custom node labels for control plane nodes in a class-based cluster, use the configuration variable CONTROL_PLANE_NODE_LABELS.
In the ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory, create a new .yaml file or augment an existing overlay file with the following code.
For control plane node labels, configure both initConfiguration and joinConfiguration sections so that labels are applied to the first node created and all nodes that join after:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
kubeadmConfigSpec:
initConfiguration:
nodeRegistration:
kubeletExtraArgs:
#@overlay/match missing_ok=True
node-labels: NODE-LABELS
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
#@overlay/match missing_ok=True
node-labels: NODE-LABELS
Where NODE-LABELS is a comma-separated list of label key/value strings that includes node-type=control-plane, for example "examplekey1=labelvalue1,examplekey2=labelvalue2".
For worker node labels:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}),expects="1+"
---
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
#@overlay/match missing_ok=True
node-labels: NODE-LABELS
Where NODE-LABELS is a comma-separated list of label key/value strings that includes node-type=worker, for example "examplekey1=labelvalue1,examplekey2=labelvalue2".
For an example overlay that deactivates the Bastion host for workload clusters on AWS, see Deactivate Bastion Server on AWS in the TKG Lab repository.
New Plan nginx

This example adds and configures a new workload cluster plan nginx that runs an nginx server. It uses the Cluster Resource Set (CRS) to deploy the nginx server to vSphere clusters created with the vSphere Cluster API provider version v0.7.6.
In .tkg/providers/infrastructure-vsphere/v0.7.6/, add a new file cluster-template-definition-nginx.yaml with contents identical to the cluster-template-definition-dev.yaml and cluster-template-definition-prod.yaml files:
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TemplateDefinition
spec:
  paths:
    - path: providers/infrastructure-vsphere/v0.7.6/ytt
    - path: providers/infrastructure-vsphere/ytt
    - path: providers/ytt
    - path: bom
      filemark: text-plain
    - path: providers/config_default.yaml
The presence of this file creates a new plan, and the Tanzu CLI parses its filename to create the option nginx to pass to tanzu cluster create --plan.
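For example, after you complete the remaining steps in this procedure, you might deploy a cluster that uses the new plan as follows (the cluster and configuration file names are hypothetical):

tanzu cluster create my-nginx-cluster --plan nginx --file my-nginx-cluster-config.yaml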
In ~/.config/tanzu/tkg/providers/ytt/04_user_customizations/, create a new file deploy_service.yaml containing:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@ load("@ytt:yaml", "yaml")
#@ def nginx_deployment():
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
#@ end
#@ if data.values.TKG_CLUSTER_ROLE == "workload" and data.values.CLUSTER_PLAN == "nginx":
---
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
name: #@ "{}-nginx-deployment".format(data.values.CLUSTER_NAME)
labels:
cluster.x-k8s.io/cluster-name: #@ data.values.CLUSTER_NAME
spec:
strategy: "ApplyOnce"
clusterSelector:
matchLabels:
tkg.tanzu.vmware.com/cluster-name: #@ data.values.CLUSTER_NAME
resources:
- name: #@ "{}-nginx-deployment".format(data.values.CLUSTER_NAME)
kind: ConfigMap
---
apiVersion: v1
kind: ConfigMap
metadata:
name: #@ "{}-nginx-deployment".format(data.values.CLUSTER_NAME)
type: addons.cluster.x-k8s.io/resource-set
stringData:
value: #@ yaml.encode(nginx_deployment())
#@ end
In this file, the conditional #@ if data.values.TKG_CLUSTER_ROLE == "workload" and data.values.CLUSTER_PLAN == "nginx": applies the overlay that follows to workload clusters with the plan nginx.
If the 04_user_customizations directory does not already exist under the top-level ytt directory, create it.
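For example, on the bootstrap machine:

mkdir -p ~/.config/tanzu/tkg/providers/ytt/04_user_customizations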
Extra Variables in ytt Overlays

Overlays apply their configurations to all newly created clusters. To customize clusters individually with different overlay settings, you can combine overlays with custom variables that you add to the cluster configuration.
This example shows how to use extra cluster configuration variables to set custom node labels for different clusters.
Note: After upgrading the Tanzu CLI, you need to re-apply these changes to your new ~/.config/tanzu/tkg/providers directory. The previous version is renamed as a timestamped backup.
Adding a WORKER_NODE_LABELS variable to the default configuration and cluster configuration files enables new clusters to be created with different worker node labels.
Edit ~/.config/tanzu/tkg/providers/config_default.yaml and add the custom variable default at the bottom:
#! ---------------------------------------------------------------------
#! Custom variables
#! ---------------------------------------------------------------------
WORKER_NODE_LABELS: ""
Setting this default prevents unwanted labels from being added to a cluster if its configuration file lacks this variable.
Add a line near the end of ~/.config/tanzu/tkg/providers/ytt/lib/config_variable_association.star, above the final closing bracket, that associates the new variable with a provider type.
"WORKER_NODE_LABELS": ["vsphere", "aws", "azure"],
}
end
In the ~/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt/ directory, create a new .yaml file or augment an existing overlay file with the following code, which adds the WORKER_NODE_LABELS variable as a data value:
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}),expects="1+"
---
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
#@overlay/match missing_ok=True
node-labels: #@ data.values.WORKER_NODE_LABELS
For any new workload cluster, you can now set WORKER_NODE_LABELS in its cluster configuration file to apply its value as a label to every worker node.
WORKER_NODE_LABELS: "workload-classification=production"
ytt overlays only apply to new plan-based workload clusters that you deploy using the Tanzu CLI logged in to a standalone management cluster. To modify the resources of a cluster that already exists, you need to modify them in the standalone management cluster and push them out to the workload cluster as described below.
This procedure gives existing clusters the same modification that the Configuring NTP without DHCP Option 42 (vSphere) overlay applies to new clusters.
Modifying the NTP settings on an existing cluster requires:
- Creating a new KubeadmConfigTemplate resource to reflect the new settings
- Updating the MachineDeployment for the worker nodes to point to the new resource
- Updating the KubeadmControlPlane resource to update the control plane nodes
To modify NTP settings on an existing cluster, run the following from the management cluster, and from the namespace containing the cluster to be modified, named cluster1 in this example:
Create a new KubeadmConfigTemplate resource and update the MachineDeployment for each KubeadmConfigTemplate / MachineDeployment pair. For a prod plan cluster, you do this three times:
Create the new KubeadmConfigTemplate resource for the worker nodes.
Find the existing KubeadmConfigTemplate and export it to a yaml file for editing.
kubectl get KubeadmConfigTemplate
kubectl get KubeadmConfigTemplate cluster1-md-0 -o yaml > cluster1-md-0.yaml
Edit the exported file by adding an ntp section underneath the existing spec.template.spec section, and appending -v1 to the name field under metadata, assuming this is the first update:
metadata:
  ...
  name: cluster1-md-0-v1 # from cluster1-md-0
  ...
spec:
  template:
    spec:
      ...
      ntp:
        enabled: true
        servers:
          - time.google.com
Apply the updated yaml file to create the new resource.
kubectl apply -f cluster1-md-0.yaml
Update the MachineDeployment resource to point to the newly-created KubeadmConfigTemplate resource.
Find and edit the existing MachineDeployment resource for the cluster.
kubectl get MachineDeployment
kubectl edit MachineDeployment cluster1-md-0
Edit the spec.template.spec.bootstrap.configRef.name value to the new name of the KubeadmConfigTemplate resource.
spec:
  template:
    ...
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: cluster1-md-0-v1 # from cluster1-md-0
Save and exit the file, which triggers re-creation of the worker nodes.
Edit the KubeadmControlPlane resource for the control plane nodes to include the NTP servers.
Find and edit the existing KubeadmControlPlane resource for the cluster.
kubectl get KubeadmControlPlane
kubectl edit KubeadmControlPlane cluster1-control-plane
Edit the spec.kubeadmConfigSpec section by adding a new ntp section underneath. Save and exit the file.
kubeadmConfigSpec:
  ntp:
    enabled: true
    servers:
      - time.google.com
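Saving the KubeadmControlPlane edit triggers re-creation of the control plane nodes with the new NTP settings. As an optional check, you can watch the machines roll out from the management cluster, assuming the example cluster name cluster1 used in this procedure:

kubectl get machines -l cluster.x-k8s.io/cluster-name=cluster1 -w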