To create a cluster configuration file, you can copy an existing configuration file for a previous deployment to Azure and update it. Alternatively, you can create a file from scratch by using an empty template.
Important: Tanzu Kubernetes Grid v2.4.x is the last version of TKG that supports the creation of standalone TKG management clusters on Azure. The ability to create standalone TKG management clusters on Azure will be removed in the Tanzu Kubernetes Grid v2.5 release.
Going forward, VMware recommends that you use Tanzu Mission Control to create native Azure AKS clusters instead of creating new TKG management clusters on Azure. For information about how to create native Azure AKS clusters with Tanzu Mission Control, see Managing the Lifecycle of Azure AKS Clusters in the Tanzu Mission Control documentation.
For more information, see Deprecation of TKG Management and Workload Clusters on AWS and Azure in the VMware Tanzu Kubernetes Grid v2.4 Release Notes.
The template below includes all of the options that are relevant to deploying management clusters on Azure. You can copy this template and update it to use as your cluster configuration file.
Mandatory options are uncommented. Optional settings are commented out. Default values are included where applicable.
#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------
CLUSTER_NAME:
CLUSTER_PLAN: dev
INFRASTRUCTURE_PROVIDER: azure
# CLUSTER_API_SERVER_PORT:
ENABLE_CEIP_PARTICIPATION: true
ENABLE_AUDIT_LOGGING: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
# CAPBK_BOOTSTRAP_TOKEN_TTL: 30m
#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------
# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:
# AZURE_CONTROL_PLANE_MACHINE_TYPE: "Standard_D2s_v3"
# AZURE_NODE_MACHINE_TYPE: "Standard_D2s_v3"
# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""
# AZURE_CONTROL_PLANE_DATA_DISK_SIZE_GIB: ""
# AZURE_CONTROL_PLANE_OS_DISK_SIZE_GIB: ""
# AZURE_CONTROL_PLANE_OS_DISK_STORAGE_ACCOUNT_TYPE: ""
# AZURE_ENABLE_NODE_DATA_DISK: ""
# AZURE_NODE_DATA_DISK_SIZE_GIB: ""
# AZURE_NODE_OS_DISK_SIZE_GIB: ""
# AZURE_NODE_OS_DISK_STORAGE_ACCOUNT_TYPE: ""
#! ---------------------------------------------------------------------
#! Azure configuration
#! ---------------------------------------------------------------------
AZURE_ENVIRONMENT: "AzurePublicCloud"
AZURE_TENANT_ID:
AZURE_SUBSCRIPTION_ID:
AZURE_CLIENT_ID:
AZURE_CLIENT_SECRET:
AZURE_LOCATION:
AZURE_SSH_PUBLIC_KEY_B64:
# AZURE_RESOURCE_GROUP: ""
# AZURE_VNET_RESOURCE_GROUP: ""
# AZURE_VNET_NAME: ""
# AZURE_VNET_CIDR: ""
# AZURE_CONTROL_PLANE_SUBNET_NAME: ""
# AZURE_CONTROL_PLANE_SUBNET_CIDR: ""
# AZURE_NODE_SUBNET_NAME: ""
# AZURE_NODE_SUBNET_CIDR: ""
# AZURE_CUSTOM_TAGS: ""
# AZURE_ENABLE_PRIVATE_CLUSTER: ""
# AZURE_FRONTEND_PRIVATE_IP: ""
# AZURE_ENABLE_ACCELERATED_NETWORKING: ""
#! ---------------------------------------------------------------------
#! Image repository configuration
#! ---------------------------------------------------------------------
# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""
#! ---------------------------------------------------------------------
#! Proxy configuration
#! ---------------------------------------------------------------------
# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""
#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------
ENABLE_MHC:
ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m
#! ---------------------------------------------------------------------
#! Identity management configuration
#! ---------------------------------------------------------------------
IDENTITY_MANAGEMENT_TYPE: none
#! Settings for IDENTITY_MANAGEMENT_TYPE: "oidc"
# CERT_DURATION: 2160h
# CERT_RENEW_BEFORE: 360h
# OIDC_IDENTITY_PROVIDER_CLIENT_ID:
# OIDC_IDENTITY_PROVIDER_CLIENT_SECRET:
# OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups
# OIDC_IDENTITY_PROVIDER_ISSUER_URL:
# OIDC_IDENTITY_PROVIDER_SCOPES: "email,profile,groups,offline_access"
# OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email
#! The following two variables are used to configure Pinniped JWTAuthenticator for workload clusters
# SUPERVISOR_ISSUER_URL:
# SUPERVISOR_ISSUER_CA_BUNDLE_DATA:
#! Settings for IDENTITY_MANAGEMENT_TYPE: "ldap"
# LDAP_BIND_DN:
# LDAP_BIND_PASSWORD:
# LDAP_HOST:
# LDAP_USER_SEARCH_BASE_DN:
# LDAP_USER_SEARCH_FILTER:
# LDAP_USER_SEARCH_ID_ATTRIBUTE: dn
# LDAP_USER_SEARCH_NAME_ATTRIBUTE:
# LDAP_GROUP_SEARCH_BASE_DN:
# LDAP_GROUP_SEARCH_FILTER:
# LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: dn
# LDAP_GROUP_SEARCH_USER_ATTRIBUTE: dn
# LDAP_ROOT_CA_DATA_B64:
#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------
# ANTREA_NO_SNAT: true
# ANTREA_NODEPORTLOCAL: true
# ANTREA_NODEPORTLOCAL_ENABLED: true
# ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_PROXY: true
# ANTREA_PROXY_ALL: false
# ANTREA_PROXY_LOAD_BALANCER_IPS: false
# ANTREA_PROXY_NODEPORT_ADDRS:
# ANTREA_PROXY_SKIP_SERVICES: ""
# ANTREA_POLICY: true
# ANTREA_TRACEFLOW: true
# ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false
# ANTREA_ENABLE_USAGE_REPORTING: false
# ANTREA_EGRESS: true
# ANTREA_EGRESS_EXCEPT_CIDRS: ""
# ANTREA_FLOWEXPORTER: false
# ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.flow-aggregator.svc:4739:tls"
# ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s"
# ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "5s"
# ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s"
# ANTREA_IPAM: false
# ANTREA_KUBE_APISERVER_OVERRIDE: ""
# ANTREA_MULTICAST: false
# ANTREA_MULTICAST_INTERFACES: ""
# ANTREA_NETWORKPOLICY_STATS: true
# ANTREA_SERVICE_EXTERNALIP: true
# ANTREA_TRANSPORT_INTERFACE: ""
# ANTREA_TRANSPORT_INTERFACE_CIDRS: ""
Specify information about your Azure account and the region in which you want to deploy the cluster.
For example:
AZURE_ENVIRONMENT: "AzurePublicCloud"
AZURE_TENANT_ID: b39138ca-[...]-d9dd62f0
AZURE_SUBSCRIPTION_ID: 3b511ccd-[...]-08a6d1a75d78
AZURE_CLIENT_ID: <encoded:M2ZkYTU4NGM[...]tZmViZjMxOGEyNmU1>
AZURE_CLIENT_SECRET: <encoded:bjVxLUpIUE[...]EN+d0RCd28wfg==>
AZURE_LOCATION: westeurope
AZURE_SSH_PUBLIC_KEY_B64: c3NoLXJzYSBBQUFBQjN[...]XJlLmNvbQ==
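If you need to look up or encode any of these values, standard CLI tools can help. As a sketch, assuming you are logged in to the Azure CLI and your public key is at ~/.ssh/id_rsa.pub (adjust the path to your setup):
# Retrieve the tenant ID and subscription ID for the current account.
az account show --query '{tenantId: tenantId, subscriptionId: id}' --output table
# Base64-encode the SSH public key for AZURE_SSH_PUBLIC_KEY_B64.
# On Linux, pass -w 0 to disable line wrapping; on macOS, plain base64 does not wrap.
base64 -w 0 < ~/.ssh/id_rsa.pub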
The Tanzu CLI creates the individual nodes of workload clusters according to settings that you provide in the configuration file. On Azure, you can configure all node VMs to have the same predefined configurations or set different predefined configurations for control plane and worker nodes. By using these settings, you can create workload clusters that have nodes with different configurations from the management cluster nodes. You can also create clusters in which the control plane nodes and worker nodes have different configurations.
When you created the management cluster, the instance types for the node machines were set in the AZURE_CONTROL_PLANE_MACHINE_TYPE and AZURE_NODE_MACHINE_TYPE options. By default, these settings are also used for workload clusters. The minimum configuration is 2 CPUs and 8 GB of memory. The list of compatible instance types varies by region.
AZURE_CONTROL_PLANE_MACHINE_TYPE: "Standard_D2s_v3"
AZURE_NODE_MACHINE_TYPE: "Standard_D2s_v3"
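Because availability of a given instance type varies by region, it can be worth checking what your target region offers before you choose. A sketch using the Azure CLI, assuming the westeurope region from the earlier example:
# List the VM sizes that are available in the target region.
az vm list-sizes --location westeurope --output table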
You can override these settings by using the SIZE, CONTROLPLANE_SIZE, and WORKER_SIZE options. To create a workload cluster in which all of the control plane and worker node VMs are the same size, specify the SIZE variable. If you set the SIZE variable, all nodes are created with the configuration that you set. Set it to Standard_D2s_v3, Standard_D4s_v3, and so on. For information about node instances for Azure, see Sizes for virtual machines in Azure.
SIZE: Standard_D2s_v3
To create a workload cluster in which the control plane and worker node VMs are different sizes, specify the CONTROLPLANE_SIZE and WORKER_SIZE options.
CONTROLPLANE_SIZE: Standard_D2s_v3
WORKER_SIZE: Standard_D4s_v3
You can combine the CONTROLPLANE_SIZE and WORKER_SIZE options with the SIZE option. For example, if you specify SIZE: "Standard_D2s_v3" with WORKER_SIZE: "Standard_D4s_v3", the control plane nodes are set to Standard_D2s_v3 and the worker nodes are set to Standard_D4s_v3.
SIZE: Standard_D2s_v3
WORKER_SIZE: Standard_D4s_v3
To specify custom subnets (IP ranges) for the nodes in a cluster, set the following variables before you create the cluster. You can define them as environment variables before you run tanzu cluster create, or include them in the cluster configuration file that you pass in with the --file option.
To specify a custom subnet (IP range) for the control plane node in a cluster:

- To use an existing subnet, set AZURE_CONTROL_PLANE_SUBNET_NAME to the subnet name.
- To create a new subnet, set AZURE_CONTROL_PLANE_SUBNET_NAME to a name for the new subnet, and optionally set AZURE_CONTROL_PLANE_SUBNET_CIDR to a CIDR range within the configured Azure VNet. If you omit AZURE_CONTROL_PLANE_SUBNET_CIDR, a CIDR is generated automatically.

To specify a custom subnet for the worker nodes in a cluster, set the AZURE_NODE_SUBNET_NAME and AZURE_NODE_SUBNET_CIDR variables following the same rules as for the control plane node, above.
For example:
AZURE_CONTROL_PLANE_SUBNET_CIDR: 10.0.0.0/24
AZURE_CONTROL_PLANE_SUBNET_NAME: my-cp-subnet
AZURE_NODE_SUBNET_CIDR: 10.0.1.0/24
AZURE_NODE_SUBNET_NAME: my-worker-subnet
AZURE_RESOURCE_GROUP: my-rg
AZURE_VNET_CIDR: 10.0.0.0/16
AZURE_VNET_NAME: my-vnet
AZURE_VNET_RESOURCE_GROUP: my-rg
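Alternatively, as noted above, you can export the same values as environment variables instead of setting them in the configuration file. A minimal sketch, in which the cluster name and file name are placeholders:
export AZURE_CONTROL_PLANE_SUBNET_NAME=my-cp-subnet
export AZURE_CONTROL_PLANE_SUBNET_CIDR=10.0.0.0/24
export AZURE_NODE_SUBNET_NAME=my-worker-subnet
export AZURE_NODE_SUBNET_CIDR=10.0.1.0/24
tanzu cluster create my-cluster --file my-cluster-config.yaml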
After you have finished updating the management cluster configuration file, create the management cluster by following the instructions in Deploy Management Clusters from a Configuration File.
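For example, a minimal invocation, assuming the configuration file shown above is saved as azure-mgmt-config.yaml (the file name is a placeholder):
tanzu management-cluster create --file azure-mgmt-config.yaml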
Important: If this is the first time that you are deploying a management cluster to Azure with a new version of Tanzu Kubernetes Grid, for example v2.4, make sure that you have accepted the base image license for that version. For information, see Accept the Base Image License in Prepare to Deploy Management Clusters to Microsoft Azure.