As a vSphere administrator, you can enable the Workload Management platform on a vSphere cluster by configuring the vSphere networking stack to provide connectivity to workloads. A Supervisor Cluster that is configured with vSphere networking supports the deployment of Tanzu Kubernetes clusters created by using the Tanzu Kubernetes Grid Service. It does not support running vSphere Pods or using the embedded Harbor Registry.

Caution: Do not disable vSphere DRS after you configure the Supervisor Cluster. Having DRS enabled at all times is a mandatory prerequisite for running workloads on the Supervisor Cluster. Disabling DRS breaks your Tanzu Kubernetes clusters.



  1. From the home menu, select Workload Management.
  2. Select a licensing option for the Supervisor Cluster.
    • If you have a valid Tanzu Edition license, click Add License to add the license key to the license inventory of vSphere.

    • If you do not have a Tanzu Edition license yet, enter your contact details so that you can receive communication from VMware, and click Get Started.

    The evaluation period of a Supervisor Cluster lasts for 60 days. Within that period, you must assign a valid Tanzu Edition license to the cluster. If you added a Tanzu Edition license key, you can assign that key within the 60-day evaluation period after you complete the Supervisor Cluster setup.

  3. On the Workload Management screen, click Get Started again.
  4. Select a vCenter Server system, select vCenter Server Network, and click Next.
  5. Select a cluster from the list of compatible clusters.
  6. From the Control Plane Size page, select the size for the Kubernetes control plane VMs that will be created on each host from the cluster.

    The amount of resources that you allocate to control plane VMs determines the number of Kubernetes workloads that the Supervisor Cluster can manage.

  7. On the Load Balancer screen, select the load balancer you want to use. You can select NSX Advanced Load Balancer or HAProxy.
    • Enter the following settings for NSX Advanced Load Balancer:

      Option Description

      Name

      Enter a name for the NSX Advanced Load Balancer.

      Avi Controller IP

      The IP address of the NSX Advanced Load Balancer Controller. The default port is 443.

      User name

      The user name that is configured with the NSX Advanced Load Balancer. You use this user name to access the Controller.

      Password

      The password for the user name.

      Server Certificate Authority

      The certificate used by the Controller. You can provide the certificate that you assigned during the configuration. For more information, see Assign a Certificate to the Controller.

    • Enter the following settings for HAProxy:

      Option Description

      Name

      A user-friendly name for the load balancer.

      Data Plane API Address(es)

      The IP address and port of the HAProxy Data Plane API. This component controls the HAProxy server and runs inside the HAProxy VM. This is the management network IP address of the HAProxy appliance.

      User name

      The user name that is configured with the HAProxy OVA file. You use this name to authenticate with the HAProxy Data Plane API.

      Password

      The password for the user name.

      IP Address Ranges for Virtual Servers

      Range of IP addresses that is used in the Workload Network by Tanzu Kubernetes clusters. This IP range comes from the CIDR that you configured during the HAProxy appliance deployment. Typically this is the entire range specified in that deployment, but it can also be a subset of the CIDR, because you might create multiple Supervisor Clusters that use IPs from the same CIDR range. This range must not overlap with the IP range defined for the Workload Network in this wizard, nor with any DHCP scope on the Workload Network.

      Server Certificate Authority

      The certificate in PEM format that is signed or is a trusted root of the server certificate that the Data Plane API presents.

      • Option 1: If root access is enabled, SSH to the HAProxy VM as root and copy the contents of /etc/haproxy/ca.crt into the Server Certificate Authority field. Preserve the line breaks; do not replace them with \n escape sequences.

      • Option 2: Right-click the HAProxy VM and select Edit Settings. Copy the CA certificate from the appropriate field and decode it from Base64 with a conversion tool.

      • Option 3: Run the following PowerCLI script. Replace the variables $vc, $vc_user, and $vc_password with appropriate values.

        $vc = ""
        $vc_user = "administrator@vsphere.local"
        $vc_password = "PASSWORD"
        Connect-VIServer -User $vc_user -Password $vc_password -Server $vc
        $VMname = "haproxy-demo"
        $AdvancedSettingName = "guestinfo.dataplaneapi.cacert"
        $Base64cert = Get-VM $VMname | Get-AdvancedSetting -Name $AdvancedSettingName
        while ([string]::IsNullOrEmpty($Base64cert.Value)) {
            Write-Host "Waiting for CA Cert generation... This may take 5-10 minutes as the VM needs to boot and generate the CA Cert (if you haven't provided one already)."
            Start-Sleep -Seconds 2
            $Base64cert = Get-VM $VMname | Get-AdvancedSetting -Name $AdvancedSettingName
        }
        Write-Host "CA Cert found... Converting from Base64"
        $cert = [Text.Encoding]::Utf8.GetString([Convert]::FromBase64String($Base64cert.Value))
        Write-Host $cert
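        If you retrieve the Base64 value by another means, for example from the Edit Settings dialog in Option 2, the same Base64-to-PEM conversion that the PowerCLI script performs can be sketched in Python. The sample string below is a placeholder, not a real certificate:

        ```python
        import base64

        def decode_dataplane_ca(b64_value: str) -> str:
            """Decode the Base64-encoded CA certificate stored in the
            guestinfo.dataplaneapi.cacert advanced setting into PEM text."""
            return base64.b64decode(b64_value).decode("utf-8")

        # Placeholder value for illustration only; paste the real Base64 string here.
        sample = base64.b64encode(
            b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
        ).decode("ascii")

        pem = decode_dataplane_ca(sample)
        print(pem)  # Decoded PEM text, ready to paste into Server Certificate Authority
        ```

        Paste the decoded PEM text, including its line breaks, into the Server Certificate Authority field.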
  8. On the Management Network screen, configure the parameters for the network that will be used for Kubernetes control plane VMs.
    1. Select a Network Mode.
      • DHCP Network. In this mode, all the IP addresses for the management network, such as the control plane VM IP addresses, the floating IP, DNS servers, DNS search domains, and NTP server, are acquired automatically from a DHCP server. To obtain floating IP addresses, the DHCP server must be configured to support client identifiers.
        Note: In DHCP mode, all control plane VMs use stable DHCP client identifiers to acquire IP addresses. You can use these client identifiers to set up static IP assignments for the control plane VMs on the DHCP server so that their IP addresses do not change. Changing the IP addresses of control plane VMs and floating IP addresses is not supported. Failure to reserve the floating IP address by its DHCP Unique Identifier might result in unrecoverable loss of connectivity to the Supervisor control plane.
      • Static. Manually enter all networking settings for the management network.

    2. Configure the settings for the management network.

      If you have selected the DHCP network mode, but you want to override the settings acquired from DHCP, click Additional Settings and enter new values. If you have selected the static network mode, fill in the values for the management network settings manually.

      Option Description


      Network

      Select a network that has a VMkernel adapter configured for the management traffic.

      Starting Control Plane IP Address

      Enter an IP address that determines the starting point for reserving five consecutive IP addresses for the Kubernetes control plane VMs as follows:

      • An IP address for each of the Kubernetes control plane VMs.

      • A floating IP address for one of the Kubernetes control plane VMs to serve as an interface to the management network. The control plane VM that has the floating IP address assigned acts as a leading VM for all three Kubernetes control plane VMs. The floating IP moves to the control plane node that is the etcd leader in this Kubernetes cluster. This improves availability in the case of a network partition event.

      • An IP address to serve as a buffer in case a Kubernetes control plane VM fails and a new control plane VM is being brought up to replace it.

      Subnet Mask

      Only applicable for static IP configuration. Enter the subnet mask for the management network.


      DNS Servers

      Enter the addresses of the DNS servers that you use in your environment. If the vCenter Server system is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor Cluster.

      DNS Search Domains

      Enter domain names that DNS searches inside the Kubernetes control plane nodes, such as corp.local, so that the DNS server can resolve them.


      NTP Servers

      Enter the addresses of the NTP servers that you use in your environment, if any.
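  As a quick sanity check, the five consecutive management IP addresses reserved from the starting address can be computed with Python's ipaddress module. The starting address below is an example value, not one from your environment:

  ```python
  import ipaddress

  def control_plane_ips(starting_ip: str, count: int = 5) -> list[str]:
      """Return the consecutive management IPs reserved from the starting
      control plane IP address: three control plane VM addresses, one
      floating IP, and one buffer address for failover."""
      start = ipaddress.ip_address(starting_ip)
      return [str(start + i) for i in range(count)]

  print(control_plane_ips("10.10.10.160"))
  # → ['10.10.10.160', '10.10.10.161', '10.10.10.162', '10.10.10.163', '10.10.10.164']
  ```

  Make sure the entire computed block is free on the management network before you submit the wizard.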

  9. On the Workload Network page, enter the settings for the network that will handle the networking traffic for Kubernetes workloads running on the Supervisor Cluster.

    If you use a DHCP server to provide the networking settings for Workload Networks, you cannot create any new Workload Networks after you complete the Supervisor Cluster configuration.

    1. Select a network mode.
      • DHCP Network. In this network mode, all networking settings for Workload Networks are acquired through DHCP.

      • Static. Manually configure Workload Network settings.

    2. Select the port group that will serve as the Primary Workload Network to the Supervisor Cluster.

      The primary network handles the traffic for the Kubernetes control plane VMs and Kubernetes workload traffic.

      Depending on your networking topology, you can later assign a different port group to serve as the network for each namespace. This provides layer 2 isolation between the namespaces in the Supervisor Cluster. Namespaces that do not have a different port group assigned as their network use the primary network. Tanzu Kubernetes clusters use only the network that is assigned to the namespace where they are deployed, or the primary network if no network is explicitly assigned to that namespace.

    3. Configure the settings for workload networks.

      If you have selected the DHCP network mode, all the values under the Additional Settings section are automatically filled in from the DHCP server. If you want to override these values, click Additional Settings and enter new values. If you have selected the Static network mode, fill in all the settings manually.

    Option Description

    Internal Network for Kubernetes Services

    Enter a CIDR notation that determines the range of IP addresses for Tanzu Kubernetes clusters and services that run inside the clusters.

    Network Name

    Enter the network name.

    DNS Servers

    Enter the IP addresses of the DNS servers that you use with your environment, if any.

    When you enter the IP address of a DNS server, a static route is added on each control plane VM. This ensures that traffic to the DNS servers goes through the workload network.

    If the DNS servers that you specify are shared between the management network and workload network, the DNS lookups on the control plane VMs are routed through the workload network after initial setup.


    Gateway

    Enter the gateway for the primary network.

    Subnet Mask

    Enter the subnet mask.

    IP Address Ranges

    Enter an IP range for allocating IP addresses to Kubernetes control plane VMs and workloads.

    This address range connects the Supervisor Cluster nodes and, in the case of a single Workload Network, also connects the Tanzu Kubernetes cluster nodes. This IP range must not overlap with the load balancer VIP range when using the Default configuration for HAProxy.
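    The non-overlap requirement above can be checked before you submit the wizard. A minimal Python sketch, using hypothetical example ranges for the Workload Network and the load balancer virtual servers:

    ```python
    import ipaddress

    def ranges_overlap(start_a: str, end_a: str, start_b: str, end_b: str) -> bool:
        """Return True if two inclusive IP address ranges overlap."""
        a1, a2 = ipaddress.ip_address(start_a), ipaddress.ip_address(end_a)
        b1, b2 = ipaddress.ip_address(start_b), ipaddress.ip_address(end_b)
        return a1 <= b2 and b1 <= a2

    # Hypothetical ranges: workload network allocation vs. load balancer VIPs.
    workload_range = ("192.168.100.10", "192.168.100.99")
    vip_range = ("192.168.100.128", "192.168.100.159")

    print(ranges_overlap(*workload_range, *vip_range))  # → False (no conflict)
    ```

    Run the same check against any DHCP scope that exists on the Workload Network; every pair of ranges must come back False.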

  10. On the Storage page, configure storage and file volume support.
    1. Select a storage policy for the Supervisor Cluster.
      The storage policy you select ensures that the control plane VMs of the Supervisor Cluster are placed on the datastore referenced in the storage policy.
      Option Description

      Control Plane Node

      Select the storage policy for placement of the control plane VMs.

    2. (Optional) Activate file volume support.
      This option is required if you plan to deploy ReadWriteMany persistent volumes on a cluster. See Creating ReadWriteMany Persistent Volumes in vSphere with Tanzu.
  11. On the Tanzu Kubernetes Grid page, click Add and select the subscribed content library that contains the VM images for deploying the nodes of Tanzu Kubernetes clusters.
  12. Review your settings and click Finish.


A task runs on vCenter Server that creates the Supervisor Cluster. Once the task completes, three Kubernetes control plane VMs are created on the hosts that are part of the vSphere cluster.

What to do next

Create and configure vSphere Namespaces on the Supervisor Cluster. See Create and Configure a vSphere Namespace.