This topic explains how to customize the dev and prod plans and create new custom plans for Tanzu Kubernetes clusters on each cloud infrastructure. It also explains where plan configuration values come from and the order of precedence among their multiple sources.

Where Cluster Configuration Values Come From

When the tkg CLI creates a cluster, it combines configuration values from the following:

  • Live input at invocation
    • CLI input
    • UI input, when deploying a management cluster with the installer
  • Environment variables
  • ~/.tkg/config.yaml or other file passed to the CLI --config option
  • Cluster plan YAML configuration files in ~/.tkg/providers, as described in Plan Configuration Files below.
  • Other, non-plan YAML configuration files under ~/.tkg/providers

Live input applies configuration values that are unique to each invocation, environment variables persist them over a terminal session, and configuration files and overlays persist them indefinitely. You can customize clusters through any of these sources, with recommendations and caveats described below.
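
For example, a value that you want to persist indefinitely, such as the AZURE_NODE_MACHINE_TYPE setting that appears in the precedence table below, can be written to the top-level config file rather than supplied at each invocation. A minimal excerpt, assuming you deploy clusters to Azure:

#! ~/.tkg/config.yaml (excerpt)
AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3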

See Configuration Precedence Order for how the tkg CLI derives specific cluster configuration values from these multiple sources where they may conflict.

Plan Configuration Files

The ~/.tkg/providers directory contains workload cluster plan configuration files in the following subdirectories, based on the cloud infrastructure that deploys the clusters:

Clusters deployed by...               ~/.tkg/providers Directory
Management cluster on vSphere         /infrastructure-vsphere
vSphere 7 Supervisor cluster          /infrastructure-tkg-service-vsphere
Management cluster on Amazon EC2      /infrastructure-aws
Management cluster on Azure           /infrastructure-azure

These plan configuration files are named cluster-template-definition-PLAN.yaml. The configuration values for each plan come from these files and from the files that they list under spec.paths:

  • Config files that ship with the tkg CLI
  • Custom files that users create and add to the spec.paths list
  • ytt Overlays that users create or edit to overwrite values in other configuration files

Files to Edit, Files to Leave Alone

To customize cluster plans via YAML, you edit certain files under ~/.tkg/providers/ and leave others unchanged, as described below.

Files to Edit

Workload cluster plan configuration file paths follow the form ~/.tkg/providers/infrastructure-INFRASTRUCTURE/VERSION/cluster-template-definition-PLAN.yaml, where:

  • INFRASTRUCTURE is vsphere, aws, or azure.
  • VERSION is the version of the Cluster API Provider module that the configuration uses.
  • PLAN is dev, prod, or a custom plan as created in the New nginx Workload Plan example.

Each plan configuration file has a spec.paths section that lists source files and ytt directories that configure the cluster plan. For example:

spec:
  paths:
    - path: providers/infrastructure-aws/v0.5.5/ytt
    - path: providers/infrastructure-aws/ytt
    - path: providers/ytt
    - path: bom
      filemark: text-plain
    - path: providers/config_default.yaml

These files are processed in the order listed. If the same configuration field is set in multiple files, the last-processed setting is the one that the tkg CLI uses.

To customize your cluster configuration, you can:

  • Create new configuration files and add them to the spec.paths list.
    • This is the easier method; see the sketch after this list.
  • Modify existing ytt overlay files as described in ytt Overlays below.
    • This is the more powerful method, for people who are comfortable with ytt.
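
For example, a sketch of the first approach, assuming you have saved your own settings in a hypothetical providers/my-custom-overlays directory, adds that path to the plan's spec.paths list; its position in the list determines when it is processed:

spec:
  paths:
    - path: providers/infrastructure-vsphere/v0.7.1/ytt
    - path: providers/infrastructure-vsphere/ytt
    - path: providers/ytt
    - path: providers/my-custom-overlays  #! hypothetical directory of user-created ytt files
    - path: bom
      filemark: text-plain
    - path: providers/config_default.yaml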

Files to Leave Alone

VMware discourages changing the following files under ~/.tkg/providers, except as directed:

  • base-template.yaml files, in ytt directories

    • These configuration files take their values from the Cluster API provider repositories for vSphere, AWS, and Azure under Kubernetes SIGs and from other upstream, open-source projects, and are best kept intact.
    • Instead of editing them, create new configuration files, or set values in the overlay.yaml file in the same ytt directory as described in ytt Overlays below.
  • ~/.tkg/providers/config_default.yaml - Append only

    • This file contains system-wide defaults for Tanzu Kubernetes Grid on all cloud infrastructures.
    • Do not modify existing values in this file. Instead, override them in the top-level ~/.tkg/config.yaml or in another file that you pass to the CLI --config option.
    • You can append a User Customizations section at the end of the file, as in the Privileged Containers for Workloads example below.
  • ~/.tkg/providers/config.yaml

    • The tkg CLI uses this file as a reference for all providers present in the /providers directory, and their default versions.

ytt Overlays

Tanzu Kubernetes Grid supports customizing configurations by adding or modifying configuration files directly, or by using ytt overlays.

Directly customizing configuration files is simpler, but if you are comfortable with ytt overlays, they let you customize configurations at different scopes and manage multiple, modular configuration files, without destructively editing upstream and inherited configuration values.

The ~/.tkg/providers/ directory includes ytt directories and overlay.yaml files at different levels, to scope configuration settings at each level (see the directory sketch after this list):

  • Provider- and version-specific ytt directories
    • Configurations specific to a particular provider version.
    • The base-template.yaml file contains all-caps placeholders such as "${CLUSTER_NAME}" and should not be edited.
    • The overlay.yaml file is tailored to overlay values into base-template.yaml.
  • Provider-wide ytt directories
    • Provider-wide configurations that apply to all versions
  • Top-level ytt directory
    • Cross-provider configurations
    • Organized into numbered directories, and processed in number order
    • ytt traverses these directories in alphabetical order and overwrites duplicate settings, so you can create a /04_user_customizations subdirectory for configurations that take precedence over any in lower-numbered ytt subdirectories. The Privileged Containers for Workloads and New nginx Workload Plan examples below use this directory.
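
A sketch of this layout for vSphere follows; entries not named elsewhere in this topic, such as the lower-numbered subdirectories under the top-level ytt directory, are illustrative:

~/.tkg/providers/
├── config.yaml                       # provider reference; leave alone
├── config_default.yaml               # system-wide defaults; append only
├── infrastructure-vsphere/
│   ├── v0.7.1/
│   │   ├── cluster-template-definition-dev.yaml
│   │   ├── cluster-template-definition-prod.yaml
│   │   └── ytt/                      # provider- and version-specific overlays
│   │       ├── base-template.yaml
│   │       └── overlay.yaml
│   └── ytt/                          # provider-wide overlays
└── ytt/                              # top-level, cross-provider overlays
    ├── .../                          # numbered directories shipped with TKG
    └── 04_user_customizations/       # create this for your own overlays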

Configuration Precedence Order

When the tkg CLI creates a cluster, it reads in configuration values from multiple sources that may conflict. It resolves conflicts by using values in the following order of descending precedence:

  9. TKG CLI values supplied by user
     Example: tkg create cluster myclustername ... --control-plane-count 5
     On-the-fly values supplied as command-line arguments.

  8. User-specific data values, from or written to the top-level config file
     Example: AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3
     The main source of workload (and management) cluster parameters is ~/.tkg/config.yaml or another file passed to the CLI --config option. The installer interface writes values into this file.

  7. Factory-default data values shipped with TKG
     Example: config_default.yaml
     The supported cluster template configuration "knobs", with documentation and their default settings where applicable.

  6. BOM metadata data values
     Example: bom-1.2.0+vmware.1.yaml
     One per Kubernetes version released by TKG.

  5U. User-provided customizations, in customizable ytt files
     Example: myhacks.yaml
     Topmost layer of ytt processing files before the data values layers; takes precedence over the layers below it.

  5. Additional processing YAMLs
     Examples: rm-bastion.yaml, rm-mhc.yaml, custom-resource-annotations.yaml

  4. Add-on YAMLs and overlays
     Examples: calico.yaml, antrea.yaml
     A specific class of customization representing one or more resources to be applied to the cluster post-creation.

  3. Plan-specific processing YAMLs
     Examples: prod.yaml, dev.yaml
     Plan-specific customizations.

  2. Overlay YAML
     Example: ytt/overlay.yaml
     Defines what in the base template is overridable, using legacy "KEY_NAME:value" style entries.

  1. Base cluster template YAML
     Example: ytt/base-template.yaml
     Base CAPI template with actual default values and no ytt annotations.
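
For example, if a default control plane node count ships in config_default.yaml (layer 7) and you set a different count in ~/.tkg/config.yaml (layer 8), the value from ~/.tkg/config.yaml applies; passing --control-plane-count 5 on the tkg create cluster command line (layer 9) overrides both.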

Plan Configuration Examples

The following examples give use cases for customizing existing workload cluster plans and creating a new plan.

Control Plane HTTP Proxy and Nameserver on vSphere

This example customizes the HTTP proxy and nameserver for workload cluster control plane nodes on vSphere only. It appends commands to the KubeadmControlPlane object that add an HTTP_PROXY value to the containerd configuration and a nameserver to the nodes' resolv.conf.

In ~/.tkg/providers/infrastructure-vsphere/ytt/, create a new file proxy_nameserver.yaml containing:

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add HTTP_PROXY to containerd configuration file
    #@overlay/append
    - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
    #! Add nameserver to all k8s nodes
    #@overlay/append
    - echo 'nameserver 10.107.192.101' >> /usr/lib/systemd/resolv.conf

Adding this file into the infrastructure-vsphere/ytt/ directory applies it to all bundled versions of the vSphere Cluster API provider, and therefore all vSphere clusters created by tkg create cluster. The file needs no explicit reference; it is parsed with the other files in its directory.

Privileged Containers for Workloads

This example lets you configure the --allow-privileged option for a workload cluster's kube-apiserver by setting the variable ALLOW_PRIVILEGED_CONTAINERS in a config file or local environment.

  1. Modify ~/.tkg/providers/config_default.yaml by adding a new User customizations section at the end of the file, after the Additional Internal Config Values section:

    #! User customizations
    ALLOW_PRIVILEGED_CONTAINERS: false
    
  2. In ~/.tkg/providers/ytt/04_user_customizations/, create a new file allow_privileged_container.yaml containing:

    #@ load("@ytt:overlay", "overlay")
    #@ load("@ytt:data", "data")
    
    #@ if data.values.TKG_CLUSTER_ROLE == "workload" and data.values.ALLOW_PRIVILEGED_CONTAINERS:
    
    #@overlay/match missing_ok=True,by=overlay.subset({"kind":"KubeadmControlPlane"})
    ---
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              #@overlay/match missing_ok=True
              allow-privileged: true
    
    #@ end
    

    If the 04_user_customizations directory does not already exist under the top-level ytt directory, create it.
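
With both changes in place, you can enable the option for new workload clusters by overriding the default, for example in ~/.tkg/config.yaml, which takes precedence over config_default.yaml per the table above, or by setting the variable in your local environment:

#! ~/.tkg/config.yaml (excerpt)
ALLOW_PRIVILEGED_CONTAINERS: true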

Custom Certificate Authority (vSphere)

This example modifies a cluster configuration overlay file to customize the certificate authority (CA) used by all nodes in clusters created by a Tanzu Kubernetes Grid management cluster on vSphere.

The overlay works by adding the CA certificate within preKubeadmCommands blocks, which specify extra commands that run before kubeadm runs.

  1. Choose whether to apply the custom CA to all new vSphere clusters, or just the ones created with specific Cluster API provider versions, such as Cluster API Provider vSphere v0.7.1.

  2. In your ~/.tkg/providers/ directory, find the ytt/ overlay files that cover your chosen scope. For example, infrastructure-vsphere/ytt/vsphere-overlay.yaml applies to all Cluster API Provider vSphere versions.

  3. Using the adjacent base-template.yaml file as a guide, edit each overlay file to append the custom certificate to the preKubeadmCommands definition blocks.

    • Control plane nodes are configured in the apiVersion: controlplane.cluster.x-k8s.io/v1alpha3 / kind: KubeadmControlPlane section.
    • Worker nodes are configured in the apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3 / kind: KubeadmConfigTemplate section; a sketch of a worker-node stanza appears after this procedure.
    • To see a complete overlay.yaml file, search the VMware {code} Sample Exchange for TKG Custom CA Example.

    For example, to append the certificate in the KubeadmControlPlane section:
    #@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
    ---
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    ...
      preKubeadmCommands:
      #! Adding your custom CA certificate
      #@overlay/append
      - |
        cat <<EOF > /etc/ssl/certs/myca.pem
        -----BEGIN CERTIFICATE-----
        MIIFGDCCAwACCQCN5wYlscWbuTANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJV
        ...
        6yWIMDY6InMD3EAG
        -----END CERTIFICATE-----
        EOF
      #@overlay/append
      - openssl x509 -in /etc/ssl/certs/myca.pem -text  -fingerprint >>/etc/pki/tls/certs/ca-bundle.crt
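
A parallel stanza, shown as a sketch below with the certificate body elided, appends the same commands to the worker nodes' KubeadmConfigTemplate, whose kubeadm configuration sits under spec.template.spec:

#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      #! Adding your custom CA certificate to worker nodes
      #@overlay/append
      - |
        cat <<EOF > /etc/ssl/certs/myca.pem
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        EOF
      #@overlay/append
      - openssl x509 -in /etc/ssl/certs/myca.pem -text -fingerprint >> /etc/pki/tls/certs/ca-bundle.crt

As with the control plane stanza, use the adjacent base-template.yaml as a guide; if preKubeadmCommands is not already defined there, add a #@overlay/match missing_ok=True annotation on that key.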
    

New nginx Workload Plan

This example adds and configures a new workload cluster plan nginx that runs an nginx server. It uses the Cluster Resource Set (CRS) to deploy the nginx server to vSphere clusters created with the vSphere Cluster API provider version v0.7.1.

  1. In ~/.tkg/providers/infrastructure-vsphere/v0.7.1/, add a new file cluster-template-definition-nginx.yaml with contents identical to the cluster-template-definition-dev.yaml and cluster-template-definition-prod.yaml files in the same directory:

    apiVersion: run.tanzu.vmware.com/v1alpha1
    kind: TemplateDefinition
    spec:
      paths:
        - path: providers/infrastructure-vsphere/v0.7.1/ytt
        - path: providers/infrastructure-vsphere/ytt
        - path: providers/ytt
        - path: bom
          filemark: text-plain
        - path: providers/config_default.yaml
    

    The presence of this file creates a new plan, and the tkg CLI parses its filename to create a new option, nginx, that you can pass to tkg create cluster --plan.

  2. In ~/.tkg/providers/ytt/04_user_customizations/, create a new file deploy_service.yaml containing:

    #@ load("@ytt:overlay", "overlay")
    #@ load("@ytt:data", "data")
    #@ load("@ytt:yaml", "yaml")
    
    #@ def nginx_deployment():
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 2
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.14.2
            ports:
            - containerPort: 80
    #@ end
    
    #@ if data.values.TKG_CLUSTER_ROLE == "workload" and data.values.CLUSTER_PLAN == "nginx":
    
    ---
    apiVersion: addons.cluster.x-k8s.io/v1alpha3
    kind: ClusterResourceSet
    metadata:
      name: #@ "{}-nginx-deployment".format(data.values.CLUSTER_NAME)
      labels:
        cluster.x-k8s.io/cluster-name: #@ data.values.CLUSTER_NAME
    spec:
      strategy: "ApplyOnce"
      clusterSelector:
        matchLabels:
          tkg.tanzu.vmware.com/cluster-name: #@ data.values.CLUSTER_NAME
      resources:
      - name: #@ "{}-nginx-deployment".format(data.values.CLUSTER_NAME)
        kind: ConfigMap
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: #@ "{}-nginx-deployment".format(data.values.CLUSTER_NAME)
    type: addons.cluster.x-k8s.io/resource-set
    stringData:
      value: #@ yaml.encode(nginx_deployment())
    
    #@ end
    

    In this file, the conditional #@ if data.values.TKG_CLUSTER_ROLE == "workload" and data.values.CLUSTER_PLAN == "nginx": applies the overlay that follows to workload clusters with the plan nginx.

    If the 04_user_customizations directory does not already exist under the top-level ytt directory, create it.
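
After both files are in place, you can deploy a workload cluster that uses the new plan with a command like the following; the cluster name is illustrative, and any additional options required by your infrastructure are omitted:

tkg create cluster my-nginx-cluster --plan=nginx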
