Learn how to deploy a Supervisor with NSX networking on one vSphere cluster that maps to a vSphere Zone. The resulting Supervisor has host-level high availability provided by vSphere HA. A one-zone Supervisor supports all Tanzu Kubernetes clusters, VMs, and vSphere Pods.

If you have configured NSX version 4.1.1 or later, and have installed, configured, and registered NSX Advanced Load Balancer version 22.1.4 or later with an Enterprise license on NSX, NSX Advanced Load Balancer is used as the load balancer with NSX. If you have configured an NSX version earlier than 4.1.1, the NSX load balancer is used. For information, see Verify the Load Balancer used with NSX Networking.
Note: Once you deploy a Supervisor on a single vSphere cluster, which results in creating one vSphere Zone, you cannot expand the Supervisor to a three-zone deployment. You can either deploy a Supervisor on one vSphere Zone (single-cluster deployment) or on three vSphere Zones.


Verify that your environment meets the prerequisites for configuring a vSphere cluster as a Supervisor. For information about requirements, see Prerequisites for Configuring vSphere with Tanzu on vSphere Clusters.


  1. From the home menu, select Workload Management.
  2. Select a licensing option for the Supervisor.
    • If you have a valid Tanzu Edition license, click Add License to add the license key to the license inventory of vSphere.

    • If you do not have a Tanzu edition license yet, enter the contact details so that you can receive communication from VMware and click Get Started.

    The evaluation period of a Supervisor lasts for 60 days. Within that period, you must assign a valid Tanzu Edition license to the cluster. If you added a Tanzu Edition license key, you can assign that key within the 60-day evaluation period once you complete the Supervisor setup.

  3. On the Workload Management screen, click Get Started again.
  4. On the vCenter Server and Network page, select the vCenter Server system that is set up for Supervisor deployment and select NSX as the networking stack.
  5. On the Supervisor location page, select Cluster Deployment.
    1. Enter a name for the new Supervisor.
    2. Select a compatible vSphere Cluster.
    3. Enter a name for the vSphere Zone that will be automatically created for the cluster you select.
      If you don't provide a name for the zone, one will be automatically generated for it.
    4. Click Next.
  6. Select storage policies for the Supervisor.
    The storage policy you select for each of the following objects ensures that the object is placed on the datastore referenced in the storage policy. You can use the same or different storage policies for the objects.
    Option Description
    Control Plane Storage Policy Select the storage policy for placement of the control plane VMs.
    Ephemeral Disks Storage Policy Select the storage policy for placement of the vSphere Pods.
    Image Cache Storage Policy Select the storage policy for placement of the cache of container images.
  7. On the Management Network screen, configure the parameters for the network that will be used for Kubernetes control plane VMs.
    1. Select a Network Mode.
      • DHCP Network. In this mode, all the IP addresses for the management network, such as control plane VM IPs, DNS servers, DNS search domains, and NTP servers, are acquired automatically from a DHCP server.
      • Static. Manually enter all networking settings for the management network.
    2. Configure the settings for the management network.
      If you have selected the DHCP network mode, but you want to override the settings acquired from DHCP, click Additional Settings and enter new values. If you have selected the static network mode, fill in the values for the management network settings manually.
      Option Description
      Network Select a network that has a VMkernel adapter configured for the management traffic.
      Starting Control Plane IP Address Enter an IP address that determines the starting point for reserving five consecutive IP addresses for the Kubernetes control plane VMs as follows:
      • An IP address for each of the Kubernetes control plane VMs.
      • A floating IP address for one of the Kubernetes control plane VMs to serve as an interface to the management network. The control plane VM that has the floating IP address assigned acts as a leading VM for all three Kubernetes control plane VMs. The floating IP moves to the control plane node that is the etcd leader in this Kubernetes cluster, which is the Supervisor. This improves availability in the case of a network partition event.
      • An IP address to serve as a buffer in case a Kubernetes control plane VM fails and a new control plane VM is being brought up to replace it.
      Subnet Mask Only applicable for static IP configuration. Enter the subnet mask for the management network.
      DNS Servers Enter the addresses of the DNS servers that you use in your environment. If the vCenter Server system is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor.
      DNS Search Domains Enter domain names that DNS searches inside the Kubernetes control plane nodes, such as corp.local, so that the DNS server can resolve them.
      NTP Enter the addresses of the NTP servers that you use in your environment, if any.
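    The five-address reservation described above can be sketched with Python's `ipaddress` module. The starting address 10.0.0.10 below is a hypothetical example value, not one from this procedure:

    ```python
    import ipaddress

    # Hypothetical starting control plane IP entered in the wizard.
    start = ipaddress.ip_address("10.0.0.10")

    # The wizard reserves five consecutive addresses from the start:
    # three for the control plane VMs, one floating IP for the leading
    # VM, and one spare in case a control plane VM must be replaced.
    reserved = [str(start + i) for i in range(5)]
    print(reserved)
    # ['10.0.0.10', '10.0.0.11', '10.0.0.12', '10.0.0.13', '10.0.0.14']
    ```

    When choosing the starting address, make sure all five consecutive addresses fall inside the management network subnet and do not collide with DHCP-assigned or otherwise reserved addresses.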
  8. In the Workload Network pane, configure settings for the networks for namespaces.
    Option Description
    vSphere Distributed Switch Select the vSphere Distributed Switch that handles overlay networking for the Supervisor.

    For example, select DSwitch.

    DNS Server Enter the IP addresses of the DNS servers that you use with your environment, if any.
    NAT Mode The NAT mode is selected by default.

    If you deselect this option, the node IP addresses of all workloads, such as vSphere Pods, VMs, and Tanzu Kubernetes clusters, are directly accessible from outside the tier-0 gateway, and you do not have to configure the egress CIDRs.

    Note: If you deselect NAT mode, File Volume storage is not supported.
    Namespace Network Enter one or more IP CIDRs to create subnets/segments and assign IP addresses to workloads.
    Ingress CIDRs Enter a CIDR annotation that determines the ingress IP range for the Kubernetes services. This range is used for services of type load balancer and ingress.
    Edge Cluster Select the NSX Edge cluster that has the tier-0 gateway that you want to use for namespace networking.

    For example, select EDGE-CLUSTER.

    Tier-0 Gateway Select the tier-0 gateway to associate with the cluster tier-1 gateway.
    Subnet Prefix Enter the subnet prefix that specifies the size of the subnet reserved for namespaces segments. Default is 28.
    Service CIDRs Enter a CIDR annotation to determine the IP range for Kubernetes services. You can use the default value.
    Egress CIDRs Enter a CIDR annotation that determines the egress IP for Kubernetes services. Only one egress IP address is assigned for each namespace in the Supervisor. The egress IP is the IP address that the Kubernetes workloads in the particular namespace use to communicate outside of NSX.
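    To see how the Subnet Prefix setting carves the Namespace Network into per-namespace segments, here is a minimal sketch using Python's `ipaddress` module. The 10.244.0.0/20 CIDR is a hypothetical example value:

    ```python
    import ipaddress

    # Hypothetical Namespace Network CIDR entered in the wizard.
    namespace_network = ipaddress.ip_network("10.244.0.0/20")

    # With the default Subnet Prefix of 28, each namespace segment is a /28.
    segments = list(namespace_network.subnets(new_prefix=28))
    print(len(segments))              # 256 segments available
    print(segments[0])                # 10.244.0.0/28
    print(segments[0].num_addresses)  # 16 addresses per segment
    ```

    A larger Namespace Network or a bigger subnet prefix yields more segments, so size the CIDR according to how many namespaces you expect to create.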
  9. On the Review and Confirm page, scroll up and review all the settings that you configured so far, and set advanced settings for the Supervisor deployment.
    Option Description
    Supervisor Control Plane Size Select the sizing for the control plane VMs. The size of the control plane VMs determines the amount of workloads that you can run on the Supervisor. You can select from:
    • Tiny - 2 CPUs, 8 GB Memory, 32 GB Storage
    • Small - 4 CPUs, 16 GB Memory, 32 GB Storage
    • Medium - 8 CPUs, 16 GB Memory, 32 GB Storage
    • Large - 16 CPUs, 32 GB Memory, 32 GB Storage
    Note: Once you select a control plane size, you can only scale up. You cannot scale down to a smaller size.
    API Server DNS Names Optionally, enter the FQDNs that will be used to access the Supervisor control plane, instead of using the Supervisor control plane IP address. The FQDNs that you enter will be embedded into an automatically generated certificate. By using FQDNs for the Supervisor, you can omit specifying an IP SAN in the load balancer certificate.
    Export Configuration Export a JSON file containing the values of the Supervisor configuration that you have entered.

    You can later modify and import the file if you want to redeploy the Supervisor or if you want to deploy a new Supervisor with similar configuration.

    Exporting the Supervisor configuration saves you from entering all the configuration values in this wizard anew in case of a Supervisor redeployment.

  10. Click Finish when you are done reviewing the settings.
    The deployment of the Supervisor initiates the creation and configuration of the control plane VMs and other components.
  11. In the Supervisors tab, track the deployment process of the Supervisor.
    1. In the Config Status column, click view next to the status of the Supervisor.
    2. View the configuration status for each object and track any potential issues that you must troubleshoot.

What to do next

Once you complete the wizard for enabling a Supervisor, you can track the activation process and observe potential issues that require troubleshooting. In the Config Status column, click view next to the status of the Supervisor.
Figure 1. Supervisor activation view

The Supervisor enablement view displays the conditions in the enablement process and the respective status.

For the deployment process to complete, the Supervisor must reach the desired state, which means that all conditions are reached. When a Supervisor is successfully enabled, its status changes from Configuring to Running. While the Supervisor is in Configuring state, reaching each of the conditions is retried continuously. If a condition is not reached, the operation is retried until it succeeds. For this reason, the number of conditions that are reached can change back and forth, for example 10 out of 16 conditions reached and then 4 out of 16 conditions reached, and so on. In rare cases, the status can change to Error if there are errors that prevent reaching the desired state.

For more information on the deployment errors and how to troubleshoot them, see Resolving Errors Health Statuses on Supervisor Cluster During Initial Configuration Or Upgrade.

If you want to redeploy the Supervisor with altered configuration values that you entered in the wizard, see Deploy a Supervisor by Importing a JSON Configuration File.