Management Cluster Configuration for vSphere

To create a cluster configuration file, you can copy an existing configuration file for a previous deployment to vSphere and update it. Alternatively, you can create a file from scratch by using an empty template.

Management Cluster Configuration Template

The template below includes all of the options that are relevant to deploying management clusters on vSphere. You can copy this template and use it to deploy management clusters to vSphere.

Mandatory options are uncommented. Optional settings are commented out. Default values are included where applicable.

#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------

CLUSTER_NAME:
CLUSTER_PLAN: dev
INFRASTRUCTURE_PROVIDER: vsphere
# CLUSTER_API_SERVER_PORT: # For deployments without NSX Advanced Load Balancer
ENABLE_CEIP_PARTICIPATION: true
ENABLE_AUDIT_LOGGING: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
# CAPBK_BOOTSTRAP_TOKEN_TTL: 30m

#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------

VSPHERE_SERVER:
VSPHERE_USERNAME:
VSPHERE_PASSWORD:
VSPHERE_DATACENTER:
VSPHERE_RESOURCE_POOL:
VSPHERE_DATASTORE:
VSPHERE_FOLDER:
VSPHERE_NETWORK: VM Network
# VSPHERE_CONTROL_PLANE_ENDPOINT: # Required for Kube-Vip
# VSPHERE_CONTROL_PLANE_ENDPOINT_PORT: 6443
VIP_NETWORK_INTERFACE: "eth0"
# VSPHERE_TEMPLATE:
VSPHERE_SSH_AUTHORIZED_KEY:
# VSPHERE_STORAGE_POLICY_ID: ""
VSPHERE_TLS_THUMBPRINT:
VSPHERE_INSECURE: false
DEPLOY_TKG_ON_VSPHERE7: false
ENABLE_TKGS_ON_VSPHERE7: false

#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------

# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:
# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""
# VSPHERE_NUM_CPUS: 2
# VSPHERE_DISK_GIB: 40
# VSPHERE_MEM_MIB: 4096
# VSPHERE_MTU:
# VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
# VSPHERE_CONTROL_PLANE_DISK_GIB: 40
# VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
# VSPHERE_WORKER_NUM_CPUS: 2
# VSPHERE_WORKER_DISK_GIB: 40
# VSPHERE_WORKER_MEM_MIB: 4096

#! ---------------------------------------------------------------------
#! VMware NSX specific configuration for enabling NSX routable pods
#! ---------------------------------------------------------------------

# NSXT_POD_ROUTING_ENABLED: false
# NSXT_ROUTER_PATH: ""
# NSXT_USERNAME: ""
# NSXT_PASSWORD: ""
# NSXT_MANAGER_HOST: ""
# NSXT_ALLOW_UNVERIFIED_SSL: false
# NSXT_REMOTE_AUTH: false
# NSXT_VMC_ACCESS_TOKEN: ""
# NSXT_VMC_AUTH_HOST: ""
# NSXT_CLIENT_CERT_KEY_DATA: ""
# NSXT_CLIENT_CERT_DATA: ""
# NSXT_ROOT_CA_DATA: ""
# NSXT_SECRET_NAME: "cloud-provider-vsphere-nsxt-credentials"
# NSXT_SECRET_NAMESPACE: "kube-system"

#! ---------------------------------------------------------------------
#! NSX Advanced Load Balancer configuration
#! ---------------------------------------------------------------------

AVI_ENABLE: false
AVI_CONTROL_PLANE_HA_PROVIDER: false
# AVI_NAMESPACE: "tkg-system-networking"
# AVI_DISABLE_INGRESS_CLASS: true
# AVI_AKO_IMAGE_PULL_POLICY: IfNotPresent
# AVI_ADMIN_CREDENTIAL_NAME: avi-controller-credentials
# AVI_CA_NAME: avi-controller-ca
# AVI_CONTROLLER:
# AVI_USERNAME: ""
# AVI_PASSWORD: ""
# AVI_CLOUD_NAME:
# AVI_SERVICE_ENGINE_GROUP:
# AVI_NSXT_T1LR: # Required for NSX ALB deployments on NSX Cloud.
# AVI_MANAGEMENT_CLUSTER_SERVICE_ENGINE_GROUP:
# AVI_DATA_NETWORK:
# AVI_DATA_NETWORK_CIDR:
# AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME:
# AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR:
# AVI_CA_DATA_B64: ""
# AVI_LABELS: ""
# AVI_DISABLE_STATIC_ROUTE_SYNC: true
# AVI_INGRESS_DEFAULT_INGRESS_CONTROLLER: false
# AVI_INGRESS_SHARD_VS_SIZE: ""
# AVI_INGRESS_SERVICE_TYPE: ""
# AVI_INGRESS_NODE_NETWORK_LIST: ""

#! ---------------------------------------------------------------------
#! Image repository configuration
#! ---------------------------------------------------------------------

# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""

#! ---------------------------------------------------------------------
#! Proxy configuration
#! ---------------------------------------------------------------------

# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""

#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------

ENABLE_MHC:
ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m

#! ---------------------------------------------------------------------
#! Identity management configuration
#! ---------------------------------------------------------------------

IDENTITY_MANAGEMENT_TYPE: "none"

#! Settings for IDENTITY_MANAGEMENT_TYPE: "oidc"
# CERT_DURATION: 2160h
# CERT_RENEW_BEFORE: 360h
# OIDC_IDENTITY_PROVIDER_CLIENT_ID:
# OIDC_IDENTITY_PROVIDER_CLIENT_SECRET:
# OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups
# OIDC_IDENTITY_PROVIDER_ISSUER_URL:
# OIDC_IDENTITY_PROVIDER_SCOPES: "email,profile,groups,offline_access"
# OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email

#! The following two variables are used to configure Pinniped JWTAuthenticator for workload clusters
# SUPERVISOR_ISSUER_URL:
# SUPERVISOR_ISSUER_CA_BUNDLE_DATA:

#! Settings for IDENTITY_MANAGEMENT_TYPE: "ldap"
# LDAP_BIND_DN:
# LDAP_BIND_PASSWORD:
# LDAP_HOST:
# LDAP_USER_SEARCH_BASE_DN:
# LDAP_USER_SEARCH_FILTER:
# LDAP_USER_SEARCH_ID_ATTRIBUTE: dn
# LDAP_USER_SEARCH_NAME_ATTRIBUTE:
# LDAP_GROUP_SEARCH_BASE_DN:
# LDAP_GROUP_SEARCH_FILTER:
# LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: dn
# LDAP_GROUP_SEARCH_USER_ATTRIBUTE: dn
# LDAP_ROOT_CA_DATA_B64:

#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------

# ANTREA_NO_SNAT: true
# ANTREA_NODEPORTLOCAL: true
# ANTREA_NODEPORTLOCAL_ENABLED: true
# ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_PROXY: true
# ANTREA_PROXY_ALL: false
# ANTREA_PROXY_LOAD_BALANCER_IPS: false
# ANTREA_PROXY_NODEPORT_ADDRS:
# ANTREA_PROXY_SKIP_SERVICES: ""
# ANTREA_POLICY: true
# ANTREA_TRACEFLOW: true
# ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false
# ANTREA_ENABLE_USAGE_REPORTING: false
# ANTREA_EGRESS: true
# ANTREA_EGRESS_EXCEPT_CIDRS: ""
# ANTREA_FLOWEXPORTER: false
# ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.flow-aggregator.svc:4739:tls"
# ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s"
# ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "5s"
# ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s"
# ANTREA_IPAM: false
# ANTREA_KUBE_APISERVER_OVERRIDE: ""
# ANTREA_MULTICAST: false
# ANTREA_MULTICAST_INTERFACES: ""
# ANTREA_NETWORKPOLICY_STATS: true
# ANTREA_SERVICE_EXTERNALIP: true
# ANTREA_TRANSPORT_INTERFACE: ""
# ANTREA_TRANSPORT_INTERFACE_CIDRS: ""

General vSphere Configuration

Provide information to allow Tanzu Kubernetes Grid to log in to vSphere, and to designate the resources that Tanzu Kubernetes Grid can use.

  • Update the VSPHERE_SERVER, VSPHERE_USERNAME, and VSPHERE_PASSWORD settings with the IP address or FQDN of the vCenter Server instance and the credentials to use to log in.
  • Provide the full paths to the vSphere datacenter, resource pool, datastore, and folder in which to deploy the management cluster:

    • VSPHERE_DATACENTER: /<MY-DATACENTER>
    • VSPHERE_RESOURCE_POOL: /<MY-DATACENTER>/host/<CLUSTER>/Resources
    • VSPHERE_DATASTORE: /<MY-DATACENTER>/datastore/<MY-DATASTORE>
    • VSPHERE_FOLDER: /<MY-DATACENTER>/vm/<FOLDER>
  • Depending on the HA provider for the cluster’s control plane API, set VSPHERE_CONTROL_PLANE_ENDPOINT or leave it blank:
    • Kube-VIP: Set to a static virtual IP address, or to a fully qualified domain name (FQDN) mapped to the VIP address.
    • NSX Advanced Load Balancer: Leave blank, unless you need to specify an endpoint. If so, use a static address within the IPAM Profile’s VIP Network range that you have manually added to the Static IP pool.
  • Specify a network and a network interface in VSPHERE_NETWORK and VIP_NETWORK_INTERFACE.
  • Optionally uncomment and update VSPHERE_TEMPLATE to specify the path to an OVA file if you are using multiple custom OVA images for the same Kubernetes version. Use the format /MY-DC/vm/MY-FOLDER-PATH/MY-IMAGE. For more information, see Deploy a Cluster with a Custom OVA Image in Creating and Managing TKG 2.3 Workload Clusters with the Tanzu CLI.
  • Provide your SSH key in the VSPHERE_SSH_AUTHORIZED_KEY option. For information about how to obtain an SSH key, see Prepare to Deploy Management Clusters to vSphere.
  • Provide the TLS thumbprint in the VSPHERE_TLS_THUMBPRINT variable, or set VSPHERE_INSECURE: true to skip thumbprint verification.
  • Optionally uncomment VSPHERE_STORAGE_POLICY_ID and specify the name of a storage policy that you have configured on vCenter Server for the management cluster VMs to use.

For example:

#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------

VSPHERE_SERVER: 10.185.12.154
VSPHERE_USERNAME: [email protected]
VSPHERE_PASSWORD: <encoded:QWRtaW4hMjM=>
VSPHERE_DATACENTER: /dc0
VSPHERE_RESOURCE_POOL: /dc0/host/cluster0/Resources/tanzu
VSPHERE_DATASTORE: /dc0/datastore/sharedVmfs-1
VSPHERE_FOLDER: /dc0/vm/tanzu
VSPHERE_NETWORK: "VM Network"
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.185.11.134
VIP_NETWORK_INTERFACE: "eth0"
VSPHERE_TEMPLATE: /dc0/vm/tanzu/my-image.ova
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAB3[...]tyaw== [email protected]
VSPHERE_TLS_THUMBPRINT: 47:F5:83:8E:5D:36:[...]:72:5A:89:7D:29:E5:DA
VSPHERE_INSECURE: false
VSPHERE_STORAGE_POLICY_ID: "My storage policy"

Configure Node Sizes

The Tanzu CLI creates the individual nodes of management clusters and workload clusters according to settings that you provide in the configuration file. On vSphere, you can configure all node VMs to have the same predefined configurations, set different predefined configurations for control plane and worker nodes, or customize the configurations of the nodes. By using these settings, you can create clusters that have nodes with different configurations from the management cluster nodes. You can also create clusters in which the control plane nodes and worker nodes have different configurations.

Predefined Node Sizes

The Tanzu CLI provides the following predefined configurations for cluster nodes:

  • small: 2 CPUs, 4 GB memory, 20 GB disk
  • medium: 2 CPUs, 8 GB memory, 40 GB disk
  • large: 4 CPUs, 16 GB memory, 40 GB disk
  • extra-large: 8 CPUs, 32 GB memory, 80 GB disk

To create a cluster in which all of the control plane and worker node VMs are the same size, specify the SIZE variable. If you set the SIZE variable, all nodes will be created with the configuration that you set.

SIZE: "large"

To create a cluster in which the control plane and worker node VMs are different sizes, specify the CONTROLPLANE_SIZE and WORKER_SIZE options.

CONTROLPLANE_SIZE: "medium"
WORKER_SIZE: "extra-large"

You can combine the CONTROLPLANE_SIZE and WORKER_SIZE options with the SIZE option. For example, if you specify SIZE: "large" with WORKER_SIZE: "extra-large", the control plane nodes will be set to large and worker nodes will be set to extra-large.

SIZE: "large"
WORKER_SIZE: "extra-large"

Custom Node Sizes

You can customize the configuration of the nodes rather than using the predefined configurations.

To use the same custom configuration for all nodes, specify the VSPHERE_NUM_CPUS, VSPHERE_DISK_GIB, and VSPHERE_MEM_MIB options.

VSPHERE_NUM_CPUS: 2
VSPHERE_DISK_GIB: 40
VSPHERE_MEM_MIB: 4096

To define different custom configurations for control plane nodes and worker nodes, specify the VSPHERE_CONTROL_PLANE_* and VSPHERE_WORKER_* options.

VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
VSPHERE_CONTROL_PLANE_DISK_GIB: 20
VSPHERE_CONTROL_PLANE_MEM_MIB: 8192
VSPHERE_WORKER_NUM_CPUS: 4
VSPHERE_WORKER_DISK_GIB: 40
VSPHERE_WORKER_MEM_MIB: 4096

You can override these settings by using the SIZE, CONTROLPLANE_SIZE, and WORKER_SIZE options.
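
For example, a minimal sketch with illustrative values that combines both styles; per the statement above, the predefined WORKER_SIZE setting overrides the custom values for the worker nodes:

VSPHERE_NUM_CPUS: 2
VSPHERE_DISK_GIB: 40
VSPHERE_MEM_MIB: 4096
WORKER_SIZE: "extra-large"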

Configure NSX Advanced Load Balancer

Important

The configuration variables AVI_DISABLE_INGRESS_CLASS, AVI_DISABLE_STATIC_ROUTE_SYNC, and AVI_INGRESS_DEFAULT_INGRESS_CONTROLLER do not work in TKG v2.3. To set any of them to true, their non-default value, see the workaround described in the known issue Some NSX ALB Configuration Variables Do Not Work in the release notes.

To use NSX Advanced Load Balancer, you must first deploy it in your vSphere environment. See Install NSX Advanced Load Balancer. After deploying NSX Advanced Load Balancer, configure a vSphere management cluster to use the load balancer.

For example:

AVI_ENABLE: true
AVI_CONTROL_PLANE_HA_PROVIDER: true
AVI_NAMESPACE: "tkg-system-networking"
AVI_DISABLE_INGRESS_CLASS: true
AVI_AKO_IMAGE_PULL_POLICY: IfNotPresent
AVI_ADMIN_CREDENTIAL_NAME: avi-controller-credentials
AVI_CA_NAME: avi-controller-ca
AVI_CONTROLLER: 10.185.10.217
AVI_USERNAME: "admin"
AVI_PASSWORD: "<password>"
AVI_CLOUD_NAME: "Default-Cloud"
AVI_SERVICE_ENGINE_GROUP: "Default-Group"
AVI_NSXT_T1LR: ""
AVI_DATA_NETWORK: nsx-alb-dvswitch
AVI_DATA_NETWORK_CIDR: 10.185.0.0/20
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: ""
AVI_CA_DATA_B64: LS0tLS1CRU[...]UtLS0tLQo=
AVI_LABELS: ""
AVI_DISABLE_STATIC_ROUTE_SYNC: true
AVI_INGRESS_DEFAULT_INGRESS_CONTROLLER: false
AVI_INGRESS_SHARD_VS_SIZE: ""
AVI_INGRESS_SERVICE_TYPE: ""
AVI_INGRESS_NODE_NETWORK_LIST: ""

By default, the management cluster and all workload clusters that it manages will use the load balancer. For information about how to configure the NSX Advanced Load Balancer variables, see NSX Advanced Load Balancer in the Configuration File Variable Reference.

NSX Advanced Load Balancer as a Control Plane Endpoint Provider

You can use NSX ALB as the control plane endpoint provider in Tanzu Kubernetes Grid. The following table describes the differences between NSX ALB and Kube-Vip, which is the default control plane endpoint provider in Tanzu Kubernetes Grid.

                                     Kube-Vip                      NSX ALB
Sends traffic to                     Single control plane node     Multiple control plane nodes
Requires configuring endpoint VIP    Yes                           No. Assigns the VIP from the NSX ALB static IP pool.
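
For example, a minimal sketch of the two alternatives; the endpoint address is illustrative:

# Kube-Vip (default): set a static endpoint VIP
AVI_CONTROL_PLANE_HA_PROVIDER: false
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.185.11.134

# NSX ALB: leave VSPHERE_CONTROL_PLANE_ENDPOINT unset; the VIP is assigned from the NSX ALB static IP pool
AVI_CONTROL_PLANE_HA_PROVIDER: true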

Configure NSX Routable Pods

If your vSphere environment uses NSX, you can configure it to implement routable, or NO_NAT, pods.

Note

NSX Routable Pods is an experimental feature in this release. Information about how to implement NSX Routable Pods will be added to this documentation soon.

#! ---------------------------------------------------------------------
#! NSX specific configuration for enabling NSX routable pods
#! ---------------------------------------------------------------------

# NSXT_POD_ROUTING_ENABLED: false
# NSXT_ROUTER_PATH: ""
# NSXT_USERNAME: ""
# NSXT_PASSWORD: ""
# NSXT_MANAGER_HOST: ""
# NSXT_ALLOW_UNVERIFIED_SSL: false
# NSXT_REMOTE_AUTH: false
# NSXT_VMC_ACCESS_TOKEN: ""
# NSXT_VMC_AUTH_HOST: ""
# NSXT_CLIENT_CERT_KEY_DATA: ""
# NSXT_CLIENT_CERT_DATA: ""
# NSXT_ROOT_CA_DATA: ""
# NSXT_SECRET_NAME: "cloud-provider-vsphere-nsxt-credentials"
# NSXT_SECRET_NAMESPACE: "kube-system"

Configure for IPv6

To deploy a management cluster that supports IPv6 in an IPv6 networking environment:

  1. Prepare the environment as described in (Optional) Set Variables and Rules for IPv6.

  2. Set the following variables in the configuration file for the management cluster, as shown in the example that follows.

    • Set TKG_IP_FAMILY to ipv6.
    • Set VSPHERE_CONTROL_PLANE_ENDPOINT to a static IPv6 address.
    • (Optional) Set CLUSTER_CIDR and SERVICE_CIDR. They default to fd00:100:64::/48 and fd00:100:96::/108, respectively.
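
    For example, a minimal sketch of these settings; the endpoint address is illustrative:

    TKG_IP_FAMILY: ipv6
    VSPHERE_CONTROL_PLANE_ENDPOINT: fd00:100:64:1::10
    CLUSTER_CIDR: fd00:100:64::/48
    SERVICE_CIDR: fd00:100:96::/108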

Configure Multiple Availability Zones

You can configure a management or workload cluster that runs nodes in multiple availability zones (AZs) as described in Running Clusters Across Multiple Availability Zones.

Prerequisites

To configure a cluster with nodes deployed across multiple AZs (a combined example follows this list):

  • Set VSPHERE_REGION and VSPHERE_ZONE to the region and zone tag categories, k8s-region and k8s-zone.
  • Set VSPHERE_AZ_0, VSPHERE_AZ_1, VSPHERE_AZ_2 with the names of the VsphereDeploymentZone objects where the machines need to be deployed.
    • The VsphereDeploymentZone associated with VSPHERE_AZ_0 is the VSphereFailureDomain in which the machine deployment ending with md-0 is deployed. Similarly, VSPHERE_AZ_1 corresponds to the VSphereFailureDomain for the machine deployment ending with md-1, and VSPHERE_AZ_2 to the VSphereFailureDomain for the machine deployment ending with md-2.
    • If any of the AZ configurations are not defined, that machine deployment is deployed without a VSphereFailureDomain.
  • WORKER_MACHINE_COUNT sets the total number of workers for the cluster. The total number of workers is distributed in a round-robin fashion across the specified AZs.
  • VSPHERE_AZ_CONTROL_PLANE_MATCHING_LABELS sets key/value selector labels for the AZs that cluster control plane nodes may deploy to.
    • Set this variable if VSPHERE_REGION and VSPHERE_ZONE are set.
    • The labels must exist in the VSphereDeploymentZone resources that you create.
    • These labels let you specify all AZs in a region and an environment without having to list them individually, for example: "region=us-west-1,environment=staging".
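
For example, a minimal sketch of a multi-AZ configuration; the VsphereDeploymentZone names (rack1, rack2, rack3) and label values are illustrative placeholders:

VSPHERE_REGION: k8s-region
VSPHERE_ZONE: k8s-zone
VSPHERE_AZ_0: rack1
VSPHERE_AZ_1: rack2
VSPHERE_AZ_2: rack3
# Six workers distributed round-robin across the three AZs, two per AZ
WORKER_MACHINE_COUNT: 6
VSPHERE_AZ_CONTROL_PLANE_MATCHING_LABELS: "region=us-west-1,environment=staging"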

For the full list of options that you must specify when deploying clusters to vSphere, see the Configuration File Variable Reference.

Configure Node IPAM

TKG supports in-cluster Node IPAM for standalone management clusters on vSphere and the class-based workload clusters that they manage. For more information and current limitations, see Node IPAM in Creating and Managing TKG 2.3 Workload Clusters with the Tanzu CLI.

You cannot deploy a management cluster with Node IPAM directly from the installer interface; you must deploy it from a configuration file. But you can create the configuration file by running the installer interface, clicking Review Configuration > Export Configuration, and then editing the generated configuration file as described below.

To deploy a management cluster that uses in-cluster IPAM for its nodes:

  1. As a prerequisite, gather IP addresses of nameservers to use for the cluster’s control plane and worker nodes. This is required because cluster nodes will no longer receive nameservers via DHCP to resolve names in vCenter.

  2. Edit the management cluster configuration file to include settings like the following, as described in the Node IPAM table in the Configuration File Variable Reference:

    MANAGEMENT_NODE_IPAM_IP_POOL_GATEWAY: "10.10.10.1"
    MANAGEMENT_NODE_IPAM_IP_POOL_ADDRESSES: "10.10.10.2-10.10.10.100,10.10.10.105"
    MANAGEMENT_NODE_IPAM_IP_POOL_SUBNET_PREFIX: "24"
    CONTROL_PLANE_NODE_NAMESERVERS: "10.10.10.10,10.10.10.11"
    WORKER_NODE_NAMESERVERS: "10.10.10.10,10.10.10.11"
    

    Where CONTROL_PLANE_NODE_NAMESERVERS and WORKER_NODE_NAMESERVERS are the addresses of the nameservers to use. You can specify either a single nameserver or two comma-separated nameservers. You might need more than one nameserver in environments that require redundancy; in this case, a node VM uses only one nameserver at a time and falls back to the secondary nameserver if the first one fails to resolve a name. You can also set different nameservers for control plane and worker nodes to independently control service resolution on the different types of nodes.

Configure Cluster Node MTU

To configure the maximum transmission unit (MTU) for management and workload cluster nodes, set the VSPHERE_MTU variable. The setting applies to both management clusters and workload clusters.

If not specified, the default vSphere node MTU is 1500. The maximum value is 9000. For information about MTUs, see About vSphere Networking in the vSphere 8 documentation.
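
For example, a one-line sketch that enables jumbo frames, assuming the underlying vSphere network supports an MTU of 9000:

VSPHERE_MTU: 9000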

Management Clusters on vSphere with Tanzu

On vSphere 7 and vSphere 8, vSphere with Tanzu provides a built-in Supervisor that serves as a management cluster and provides a better experience than a standalone management cluster. Deploying a Tanzu Kubernetes Grid management cluster to vSphere 7 or vSphere 8 when the Supervisor is not present is supported, but the preferred option is to enable vSphere with Tanzu and use the Supervisor if possible. Azure VMware Solution does not support a Supervisor Cluster, so you need to deploy a management cluster. For information, see vSphere with Tanzu Supervisor is a Management Cluster.

If vSphere with Tanzu is enabled, the installer interface states that you can use the TKG Service as the preferred way to run Kubernetes workloads, in which case you do not need a standalone management cluster. It presents a choice:

  • Configure vSphere with Tanzu opens the vSphere Client so you can configure your Supervisor as described in Configuring and Managing a Supervisor in the vSphere 8 documentation.

  • Deploy TKG Management Cluster allows you to continue deploying a standalone management cluster, for vSphere 7 or vSphere 8, and as required for Azure VMware Solution.

What to Do Next

After you have finished updating the management cluster configuration file, create the management cluster by following the instructions in Deploy Management Clusters from a Configuration File.
