Azure Cluster Configuration Files

This topic explains how to use a flat configuration file or Kubernetes-style object spec to configure a Tanzu Kubernetes Grid (TKG) workload cluster before deploying it to Microsoft Azure with a standalone management cluster.

For general information about how to configure workload clusters by using configuration files and object specs, see Configuration Files and Object Specs.

To use Azure-specific workload cluster features that require some configuration outside of the cluster’s configuration file or object spec, see Clusters on Azure.

Important

Tanzu Kubernetes Grid v2.4.x is the last version of TKG that supports the creation of TKG workload clusters on Azure. The ability to create TKG workload clusters on Azure will be removed in the Tanzu Kubernetes Grid v2.5 release.

Going forward, VMware recommends that you use Tanzu Mission Control to create native Azure AKS clusters instead of creating new TKG workload clusters on Azure. For information about how to create native Azure AKS clusters with Tanzu Mission Control, see Managing the Lifecycle of Azure AKS Clusters in the Tanzu Mission Control documentation.

For more information, see Deprecation of TKG Management and Workload Clusters on AWS and Azure in the VMware Tanzu Kubernetes Grid v2.4 Release Notes.

Overview

To configure a workload cluster before deploying it to Azure, you create a cluster configuration file or a Kubernetes-style object spec file. When you pass either of these files to the -f option of tanzu cluster create, the Tanzu CLI uses the configuration information defined in the file to connect to your Azure account and create the resources that the cluster will use.
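
For example, assuming a configuration file named my-azure-cluster-config.yaml and a cluster named my-azure-cluster (both placeholder names), the command looks like this:

    tanzu cluster create my-azure-cluster -f my-azure-cluster-config.yaml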

For the full list of options that you must specify when deploying workload clusters to Azure, see the Configuration File Variable Reference.

Create a Configuration File

To create a cluster configuration file, you can use the template in Workload Cluster Template below. After creating the configuration file, proceed to Create Workload Clusters.

Workload Cluster Template

The template below includes all of the options that are relevant to deploying workload clusters on Azure. You can copy and update this template to deploy workload clusters to Azure.

Mandatory options are uncommented. Optional settings are commented out. Default values are included where applicable.

You configure the Azure-specific variables in the same way for both workload clusters and management clusters. For information about how to configure the variables, see Deploy Management Clusters from a Configuration File and Management Cluster Configuration for Azure.
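
For example, AZURE_SSH_PUBLIC_KEY_B64 takes your SSH public key, base64-encoded into a single line. A minimal sketch, assuming your public key is at ~/.ssh/id_rsa.pub (adjust the path to your own key):

    # Base64-encode the SSH public key and strip line breaks
    export AZURE_SSH_PUBLIC_KEY_B64="$(base64 < ~/.ssh/id_rsa.pub | tr -d '\r\n')"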

#! ---------------------------------------------------------------------
#! Cluster creation basic configuration
#! ---------------------------------------------------------------------

# CLUSTER_NAME:
CLUSTER_PLAN: dev
NAMESPACE: default
# CLUSTER_API_SERVER_PORT:
CNI: antrea

#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------

# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:
# AZURE_CONTROL_PLANE_MACHINE_TYPE: "Standard_D2s_v3"
# AZURE_NODE_MACHINE_TYPE: "Standard_D2s_v3"
# CONTROL_PLANE_MACHINE_COUNT: 1
# WORKER_MACHINE_COUNT: 1
# WORKER_MACHINE_COUNT_0:
# WORKER_MACHINE_COUNT_1:
# WORKER_MACHINE_COUNT_2:
# AZURE_CONTROL_PLANE_OS_DISK_SIZE_GIB: 128
# AZURE_CONTROL_PLANE_OS_DISK_STORAGE_ACCOUNT_TYPE: Premium_LRS
# AZURE_NODE_OS_DISK_SIZE_GIB: 128
# AZURE_NODE_OS_DISK_STORAGE_ACCOUNT_TYPE: Premium_LRS
# AZURE_CONTROL_PLANE_DATA_DISK_SIZE_GIB: 256
# AZURE_ENABLE_NODE_DATA_DISK: false
# AZURE_NODE_DATA_DISK_SIZE_GIB: 256

#! ---------------------------------------------------------------------
#! Azure Configuration
#! ---------------------------------------------------------------------

AZURE_ENVIRONMENT: "AzurePublicCloud"
AZURE_TENANT_ID:
AZURE_SUBSCRIPTION_ID:
AZURE_CLIENT_ID:
AZURE_CLIENT_SECRET:
AZURE_LOCATION:
AZURE_SSH_PUBLIC_KEY_B64:
# AZURE_ENABLE_ACCELERATED_NETWORKING: true
# AZURE_RESOURCE_GROUP: ""
# AZURE_VNET_RESOURCE_GROUP: ""
# AZURE_VNET_NAME: ""
# AZURE_VNET_CIDR: "10.0.0.0/16"
# AZURE_CONTROL_PLANE_SUBNET_NAME: ""
# AZURE_CONTROL_PLANE_SUBNET_CIDR: "10.0.0.0/24"
# AZURE_CONTROL_PLANE_SUBNET_SECURITY_GROUP: ""
# AZURE_NODE_SUBNET_NAME: ""
# AZURE_NODE_SUBNET_CIDR: "10.0.1.0/24"
# AZURE_NODE_SUBNET_SECURITY_GROUP: ""
# AZURE_NODE_AZ: ""
# AZURE_NODE_AZ_1: ""
# AZURE_NODE_AZ_2: ""
# AZURE_CUSTOM_TAGS:
# AZURE_ENABLE_PRIVATE_CLUSTER: false
# AZURE_FRONTEND_PRIVATE_IP: "10.0.0.100"
# AZURE_ENABLE_CONTROL_PLANE_OUTBOUND_LB: false
# AZURE_ENABLE_NODE_OUTBOUND_LB: false
# AZURE_CONTROL_PLANE_OUTBOUND_LB_FRONTEND_IP_COUNT: 1
# AZURE_NODE_OUTBOUND_LB_FRONTEND_IP_COUNT: 1
# AZURE_NODE_OUTBOUND_LB_IDLE_TIMEOUT_IN_MINUTES: 4
# AZURE_IMAGE_ID:
# AZURE_IMAGE_RESOURCE_GROUP:
# AZURE_IMAGE_NAME:
# AZURE_IMAGE_SUBSCRIPTION_ID:
# AZURE_IMAGE_GALLERY:
# AZURE_IMAGE_PUBLISHER:
# AZURE_IMAGE_OFFER:
# AZURE_IMAGE_SKU:
# AZURE_IMAGE_THIRD_PARTY:
# AZURE_IMAGE_VERSION:
# AZURE_IDENTITY_NAME:
# AZURE_IDENTITY_NAMESPACE:

#! ---------------------------------------------------------------------
#! Common configuration
#! ---------------------------------------------------------------------

# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""

# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""
# TKG_PROXY_CA_CERT: ""

ENABLE_AUDIT_LOGGING: false
ENABLE_DEFAULT_STORAGE_CLASS: true

CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13

# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""

#! ---------------------------------------------------------------------
#! Autoscaler configuration
#! ---------------------------------------------------------------------

ENABLE_AUTOSCALER: false
# AUTOSCALER_MAX_NODES_TOTAL: "0"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: "10m"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: "10s"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: "3m"
# AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: "10m"
# AUTOSCALER_MAX_NODE_PROVISION_TIME: "15m"
# AUTOSCALER_MIN_SIZE_0:
# AUTOSCALER_MAX_SIZE_0:
# AUTOSCALER_MIN_SIZE_1:
# AUTOSCALER_MAX_SIZE_1:
# AUTOSCALER_MIN_SIZE_2:
# AUTOSCALER_MAX_SIZE_2:

#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------

# ANTREA_NO_SNAT: false
# ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_EGRESS_EXCEPT_CIDRS: ""
# ANTREA_NODEPORTLOCAL_ENABLED: true
# ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000
# ANTREA_PROXY: true
# ANTREA_PROXY_ALL: false
# ANTREA_PROXY_NODEPORT_ADDRS: ""
# ANTREA_PROXY_SKIP_SERVICES: ""
# ANTREA_PROXY_LOAD_BALANCER_IPS: false
# ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.flow-aggregator.svc:4739:tls"
# ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s"
# ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "30s"
# ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s"
# ANTREA_KUBE_APISERVER_OVERRIDE:
# ANTREA_TRANSPORT_INTERFACE:
# ANTREA_TRANSPORT_INTERFACE_CIDRS: ""
# ANTREA_MULTICAST_INTERFACES: ""
# ANTREA_MULTICAST_IGMPQUERY_INTERVAL: "125s"
# ANTREA_TUNNEL_TYPE: geneve
# ANTREA_ENABLE_USAGE_REPORTING: false
# ANTREA_ENABLE_BRIDGING_MODE: false
# ANTREA_DISABLE_TXCHECKSUM_OFFLOAD: false
# ANTREA_DNS_SERVER_OVERRIDE: ""
# ANTREA_MULTICLUSTER_ENABLE: false
# ANTREA_MULTICLUSTER_NAMESPACE: ""
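
For example, a minimal configuration that sets only the mandatory options plus a cluster name might look like the following. All values shown are placeholders that you replace with your own Azure account details:

    CLUSTER_NAME: my-azure-cluster
    CLUSTER_PLAN: dev
    NAMESPACE: default
    CNI: antrea
    AZURE_ENVIRONMENT: "AzurePublicCloud"
    AZURE_TENANT_ID: 00000000-0000-0000-0000-000000000000
    AZURE_SUBSCRIPTION_ID: 00000000-0000-0000-0000-000000000000
    AZURE_CLIENT_ID: 00000000-0000-0000-0000-000000000000
    AZURE_CLIENT_SECRET: <client-secret>   # placeholder; do not store real secrets in shared files
    AZURE_LOCATION: westus2
    AZURE_SSH_PUBLIC_KEY_B64: <base64-encoded-ssh-public-key>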

Creating Azure NSGs for Existing VNet

Tanzu Kubernetes Grid management and workload clusters on Azure require two Network Security Groups (NSGs) to be defined on the cluster’s VNet and in its VNet resource group:

  • An NSG named CLUSTER-NAME-controlplane-nsg and associated with the cluster’s control plane subnet
  • An NSG named CLUSTER-NAME-node-nsg and associated with the cluster’s worker node subnet

    Where CLUSTER-NAME is the name of the cluster.

If you specify an existing VNet for a workload cluster, which you do by setting AZURE_VNET_NAME in its configuration file, you must create these NSGs in Azure yourself, for example with the Azure CLI as sketched below.

If you do not specify an existing VNet for the cluster, the deployment process creates a new VNet and the required NSGs.

See the Microsoft Azure table in the Configuration File Variable Reference for how to configure the cluster’s VNet, resource groups, and subnets.
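
A minimal sketch of creating and attaching the NSGs with the Azure CLI, assuming a cluster named my-cluster, an existing VNet named my-vnet in resource group my-vnet-rg, and subnets named my-control-plane-subnet and my-node-subnet (all placeholder names; any NSG rules that your environment requires are not shown):

    # Create the two NSGs in the VNet's resource group
    az network nsg create --resource-group my-vnet-rg --name my-cluster-controlplane-nsg
    az network nsg create --resource-group my-vnet-rg --name my-cluster-node-nsg

    # Associate each NSG with the corresponding subnet
    az network vnet subnet update --resource-group my-vnet-rg --vnet-name my-vnet \
      --name my-control-plane-subnet --network-security-group my-cluster-controlplane-nsg
    az network vnet subnet update --resource-group my-vnet-rg --vnet-name my-vnet \
      --name my-node-subnet --network-security-group my-cluster-node-nsg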

Create an Object Spec File

You can use the Tanzu CLI to convert a cluster configuration file into a Kubernetes-style object spec file for a class-based workload cluster without deploying the cluster:

  • To generate an object spec file for every class-based cluster that you create with tanzu cluster create, ensure that the auto-apply-generated-clusterclass-based-configuration feature is set to false in the Tanzu CLI configuration. This feature is set to false by default. When auto-apply-generated-clusterclass-based-configuration is set to false and you run tanzu cluster create with the --file flag, the command converts your cluster configuration file into an object spec file and exits without creating the cluster. After reviewing the configuration, you re-run tanzu cluster create with the object spec file generated by the Tanzu CLI. If you previously changed this default, run the following command to set the feature back to false:

    tanzu config set features.cluster.auto-apply-generated-clusterclass-based-configuration false
    
  • To generate an object spec file for a single cluster, pass the --dry-run option to tanzu cluster create and save the output to a file. Use the same options and the same configuration --file that you would use to create the cluster, for example:

    tanzu cluster create my-cluster --file my-cluster-config.yaml --dry-run > my-cluster-spec.yaml
    

    You can then use this object spec to deploy a cluster as described in Create a Class-Based Cluster from the Object Spec.
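
    After you have reviewed the generated spec, a command along these lines deploys the cluster from it, using the placeholder filename from the example above:

    tanzu cluster create -f my-cluster-spec.yaml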

What to Do Next

Proceed to Create Workload Clusters.
