You can customize the Tanzu Kubernetes Grid Service with global settings for key features, including the container network interface (CNI), proxy server, and TLS certificates. Be aware of trade-offs and considerations when implementing global versus per-cluster functionality.

TkgServiceConfiguration v1alpha2 Specification

The TkgServiceConfiguration specification provides fields for configuring the Tanzu Kubernetes Grid Service instance.
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration-spec
spec:
  defaultCNI: string
  proxy:
    httpProxy: string
    httpsProxy: string
    noProxy: [string]
  trust:
    additionalTrustedCAs:
      - name: string
        data: string
  defaultNodeDrainTimeout: time
Caution: Configuring the Tanzu Kubernetes Grid Service is a global operation. Any change you make to the TkgServiceConfiguration specification applies to all Tanzu Kubernetes clusters provisioned by that service. If a rolling update is initiated, either manually or by an upgrade, clusters are updated with the changed service specification.

Annotated TkgServiceConfiguration v1alpha2 Specification

The following YAML lists and describes the configurable fields of the TkgServiceConfiguration specification. For examples, see Examples for Configuring the Tanzu Kubernetes Grid Service v1alpha1 API.
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration-spec
spec:
  #defaultCNI is the default CNI for all Tanzu Kubernetes 
  #clusters to use unless overridden on a per-cluster basis 
  #supported values are antrea, calico, antrea-nsx-routed
  #defaults to antrea
  defaultCNI: string
  #proxy configures a proxy server to be used inside all 
  #clusters provisioned by this TKGS instance
  #if implemented, all fields are required
  #if omitted no proxy is configured 
  proxy:
    #httpProxy is the proxy URI for HTTP connections 
    #to endpoints outside the clusters
    #takes the form http://<user>:<pwd>@<ip>:<port>
    httpProxy: string
    #httpsProxy is the proxy URI for HTTPS connections 
    #to endpoints outside the clusters
    #takes the form http://<user>:<pwd>@<ip>:<port>
    httpsProxy: string
    #noProxy is the list of destination domain names, domains, 
    #IP addresses, and other network CIDRs to exclude from proxying
    #must include Supervisor Cluster Pod, Egress, Ingress CIDRs
    noProxy: [string]
  #trust configures additional trusted certificates 
  #for the clusters provisioned by this TKGS instance
  #if omitted no additional certificate is configured
  trust: 
    #additionalTrustedCAs are additional trusted certificates 
    #can be additional CAs or end certificates
    additionalTrustedCAs:
      #name is the name of the additional trusted certificate
      #must match the name used in the filename
      - name: string
        #data holds the contents of the additional trusted cert 
        #PEM Public Certificate data encoded as a base64 string
        data: string
  #defaultNodeDrainTimeout is the total amount of time the
  #controller spends draining a node; default is undefined
  #which is the value of 0, meaning the node is drained 
  #without any time limitations; note that `nodeDrainTimeout` 
  #is different from `kubectl drain --timeout`
  defaultNodeDrainTimeout: time
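
For reference, the following is a minimal filled-in example of the specification. It is a sketch only: the CNI choice, certificate name, and certificate data are placeholders, and the proxy and defaultNodeDrainTimeout fields are omitted. Substitute values that are valid for your environment.
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration-spec
spec:
  #use Calico instead of the default Antrea CNI for new clusters
  defaultCNI: calico
  trust:
    additionalTrustedCAs:
      #placeholder certificate name and truncated base64-encoded PEM data
      - name: CompanyInternalCA-1
        data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...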

No Proxy Field Requirements

You get the required spec.proxy.noProxy CIDR values from the Workload Network on the Supervisor Cluster. The Pod, Ingress, and Egress CIDRs must not be proxied; exclude them by adding them to the noProxy field, as shown in the sketch after the following list.

Keep in mind the following when you are configuring the noProxy field:
  • You do not need to include the Supervisor Cluster Services CIDR in the noProxy field. Tanzu Kubernetes clusters do not interact with these services.
  • The endpoints localhost and 127.0.0.1 are automatically not proxied. You do not need to add them to the noProxy field.
  • The Pod and Service CIDRs for Tanzu Kubernetes clusters are automatically not proxied. You do not need to add them to the noProxy field.
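
The following fragment is a sketch of the proxy block only. The proxy URIs and the three CIDR values are placeholders; replace them with the Pod, Egress, and Ingress CIDRs from the Workload Network on your Supervisor Cluster.
spec:
  proxy:
    #proxy URIs take the form http://<user>:<pwd>@<ip>:<port>
    httpProxy: http://<user>:<pwd>@<proxy-ip>:<port>
    httpsProxy: http://<user>:<pwd>@<proxy-ip>:<port>
    #placeholder values for the Supervisor Cluster Pod, Egress, and Ingress CIDRs
    noProxy: [10.246.0.0/16,192.168.144.0/20,192.168.128.0/20]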

When To Use Global or Per-Cluster Configuration Options

The TkgServiceConfiguration is a global specification that impacts all Tanzu Kubernetes clusters provisioned by the Tanzu Kubernetes Grid Service instance.

Before editing the TkgServiceConfiguration specification, be aware of the per-cluster alternatives that might satisfy your use case instead of a global configuration.
Table 1. Global vs. Per-Cluster Configuration Options
  • Default CNI
    Global option: Edit the TkgServiceConfiguration spec. See Examples for Configuring the Tanzu Kubernetes Grid Service v1alpha1 API.
    Per-cluster option: Specify the CNI in the cluster specification. For example, Antrea is the default CNI. To use Calico, specify it in the cluster YAML. See Examples for Provisioning Tanzu Kubernetes Clusters Using the Tanzu Kubernetes Grid Service v1alpha1 API.
  • Proxy Server
    Global option: Edit the TkgServiceConfiguration spec. See Examples for Configuring the Tanzu Kubernetes Grid Service v1alpha1 API.
    Per-cluster option: Include the proxy server configuration parameters in the cluster spec. See Examples for Provisioning Tanzu Kubernetes Clusters Using the Tanzu Kubernetes Grid Service v1alpha1 API.
  • Trust Certificates
    Global option: Edit the TkgServiceConfiguration spec. There are two use cases: configuring an external container registry and certificate-based proxy configuration. See Examples for Configuring the Tanzu Kubernetes Grid Service v1alpha1 API.
    Per-cluster option: Include custom certificates on a per-cluster basis, or override the globally set trust settings in the cluster specification. See Examples for Provisioning Tanzu Kubernetes Clusters Using the Tanzu Kubernetes Grid Service v1alpha1 API.
Note: If a global proxy is configured in the TkgServiceConfiguration, that proxy information is propagated to the cluster manifest after the initial deployment of the cluster. The global proxy configuration is added to the cluster manifest only if no proxy configuration fields are present when the cluster is created. In other words, per-cluster configuration takes precedence and overrides the global proxy configuration. For more information, see Configuration Parameters for the Tanzu Kubernetes Grid Service v1alpha1 API.
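
For illustration, a per-cluster proxy override in a v1alpha1 TanzuKubernetesCluster specification might look like the following fragment. The structure is inferred from the spec.settings.network.proxy path used by the patch commands later in this topic; the cluster name, namespace, proxy URIs, and CIDR values are placeholders.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-1
  namespace: tkgs-cluster-ns
spec:
  #distribution, topology, and storage fields omitted from this fragment
  settings:
    network:
      proxy:
        #per-cluster values take precedence over the global TkgServiceConfiguration proxy
        httpProxy: http://<user>:<pwd>@<cluster-proxy-ip>:<port>
        httpsProxy: http://<user>:<pwd>@<cluster-proxy-ip>:<port>
        noProxy: [10.246.0.0/16,192.168.144.0/20,192.168.128.0/20]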

Before editing the TkgServiceConfiguration specification, be aware of the ramifications of applying the setting at the global level.

  • defaultCNI
    Applied: Globally
    Impact on existing clusters if added/changed: None
    Per-cluster overriding on cluster creation: Yes, you can override the global setting on cluster creation.
    Per-cluster overriding on cluster update: No, you cannot change the CNI for an existing cluster; if you used the globally set default CNI on cluster creation, it cannot be changed.
  • proxy
    Applied: Globally
    Impact on existing clusters if added/changed: None
    Per-cluster overriding on cluster creation: Yes, you can override the global setting on cluster creation.
    Per-cluster overriding on cluster update: Yes, with U2 and later you can override the global setting on cluster update.
  • trust
    Applied: Globally
    Impact on existing clusters if added/changed: None
    Per-cluster overriding on cluster creation: Yes, you can override the global setting on cluster creation.
    Per-cluster overriding on cluster update: Yes, with U2 and later you can override the global setting on cluster update.
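
For instance, overriding the globally configured default CNI at cluster creation is done in the cluster specification rather than in the TkgServiceConfiguration. The fragment below is a sketch that assumes the v1alpha1 TanzuKubernetesCluster schema with a settings.network.cni field; the cluster name, namespace, and the choice of calico are placeholders.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-calico
  namespace: tkgs-cluster-ns
spec:
  #distribution, topology, and storage fields omitted from this fragment
  settings:
    network:
      #select Calico for this cluster instead of the global default CNI
      cni:
        name: calico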

Propagating Global Configuration Changes to Existing Clusters

Settings made at the global level in the TkgServiceConfiguration are not automatically propagated to existing clusters. For example, if you make a change to the proxy or trust settings in the TkgServiceConfiguration, the change does not affect clusters that are already provisioned.

To propagate a global change to an existing cluster, you must patch the Tanzu Kubernetes cluster to make the cluster inherit the changes made to the TkgServiceConfiguration.

For example, the following commands set the per-cluster proxy and trust fields to null so that the cluster inherits the global values from the TkgServiceConfiguration:
kubectl patch tkc <CLUSTER_NAME> -n <NAMESPACE> --type merge -p "{\"spec\":{\"settings\":{\"network\":{\"proxy\": null}}}}"
kubectl patch tkc <CLUSTER_NAME> -n <NAMESPACE> --type merge -p "{\"spec\":{\"settings\":{\"network\":{\"trust\": null}}}}"