Check out how to deploy a Supervisor with NSX on three vSphere Zones. Each vSphere Zone maps to one vSphere cluster. By deploying the Supervisor on three vSphere Zones, you provide high availability to your workloads at the cluster level. A three-zone Supervisor configured with NSX supports TKG clusters and VMs created by using the VM Service, but does not support vSphere Pods.

If you have configured NSX version 4.1.1 or later, and have installed, configured, and registered NSX Advanced Load Balancer version 22.1.4 or later with an Enterprise license on NSX, the Supervisor uses NSX Advanced Load Balancer. If you have configured a version of NSX earlier than 4.1.1, the NSX load balancer is used. For more information, see Verify the Load Balancer Used with NSX Networking.
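
The selection can be read as a simple rule. The following Python sketch illustrates that rule only; the version tuples and the license flag stand in for values from your own NSX and NSX Advanced Load Balancer deployments, and the wizard makes this determination for you:

    def supervisor_load_balancer(nsx_version, alb_version, alb_enterprise_licensed):
        """Return the load balancer that a Supervisor with NSX networking uses."""
        # NSX 4.1.1 or later with a registered NSX Advanced Load Balancer
        # 22.1.4 or later under an Enterprise license selects the ALB.
        if (nsx_version >= (4, 1, 1)
                and alb_version is not None
                and alb_version >= (22, 1, 4)
                and alb_enterprise_licensed):
            return "NSX Advanced Load Balancer"
        # Earlier NSX versions, or no qualifying ALB, use the NSX load balancer.
        return "NSX load balancer"

    print(supervisor_load_balancer((4, 1, 1), (22, 1, 4), True))  # NSX Advanced Load Balancer
    print(supervisor_load_balancer((4, 0, 0), None, False))       # NSX load balancer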

Prerequisites

Procedure

  1. From the home menu, select Workload Management.
  2. Select a licensing option for the Supervisor.
    • If you have a valid Tanzu Edition license, click Add License to add the license key to the license inventory of vSphere.

    • If you do not have a Tanzu Edition license yet, enter the contact details so that you can receive communication from VMware and click Get Started.

    The evaluation period of a Supervisor lasts for 60 days. Within that period, you must assign a valid Tanzu Edition license to the cluster. If you added a Tanzu Edition license key, you can assign that key within the 60-day evaluation period once you complete the Supervisor setup.

  3. On the Workload Management screen, click Get Started again.
  4. On the vCenter Server and Network page, select the vCenter Server system that is set up for Supervisor deployment and select NSX as the networking stack.
  5. Click Next.
  6. On the Supervisor location page, select vSphere Zone Deployment to deploy a Supervisor on three vSphere Zones.
    1. Enter a name for the new Supervisor.
    2. Select the data center where you have created the vSphere Zones for deploying the Supervisor.
    3. From the list of compatible vSphere Zones, select three zones.
    4. Click Next.
  7. Select storage policies for the Supervisor.
    Option Description
    Control Plane Storage Policy Select the storage policy for placement of the control plane VMs.
    Ephemeral Disks Storage Policy This option is disabled because vSphere Pods aren't supported with a three-zone Supervisor.
    Image Cache Storage Policy This option is disabled because vSphere Pods aren't supported with a three-zone Supervisor.
  8. Click Next.
  9. On the Management Network screen, configure the parameters for the network that will be used for Kubernetes control plane VMs.
    1. Select a Network Mode.
      • DHCP Network. In this mode, all the IP addresses for the management network, such as the control plane VM IPs, a floating IP, DNS servers, DNS search domains, and NTP servers, are acquired automatically from a DHCP server. To obtain floating IPs, the DHCP server must be configured to support client identifiers. In DHCP mode, all control plane VMs use stable DHCP client identifiers to acquire IP addresses. You can use these client identifiers to set up static IP assignments for the control plane VM IPs on the DHCP server and ensure that they do not change. Changing the IPs of control plane VMs and floating IPs is not supported.
        You can override some of the settings inherited from DHCP by entering values in the text fields for these settings.
        Option Description
        Network Select the network that will handle the management traffic for the Supervisor.
        Floating IP

        Enter an IP address that determines the starting point for reserving five consecutive IP addresses for the Kubernetes control plane VMs as follows:

        • An IP address for each of the Kubernetes control plane VMs.

        • A floating IP address for one of the Kubernetes control plane VMs to serve as an interface to the management network. The control plane VM that has the floating IP address assigned acts as a leading VM for all three Kubernetes control plane VMs. The floating IP moves to the control plane node that is the etcd leader in the Kubernetes cluster. This improves availability in the case of a network partition event.

        • An IP address to serve as a buffer in case a Kubernetes control plane VM fails and a new control plane VM is being brought up to replace it.

        DNS Servers Enter the addresses of the DNS servers that you use in your environment. If the vCenter Server system is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor.
        DNS Search Domains Enter domain names that DNS searches inside the Kubernetes control plane nodes, such as corp.local, so that the DNS server can resolve them.
        NTP Servers Enter the addresses of the NTP servers that you use in your environment, if any.
      • Static. Manually enter all networking settings for the management network.
        Option Description
        Network Select the network that will handle the management traffic for the Supervisor.
        Starting IP Address

        Enter an IP address that determines the starting point for reserving five consecutive IP addresses for the Kubernetes control plane VMs, as follows (see the sketch after this step for an example layout):

        • An IP address for each of the Kubernetes control plane VMs.

        • A floating IP address for one of the Kubernetes control plane VMs to serve as an interface to the management network. The control plane VM that has the floating IP address assigned acts as a leading VM for all three Kubernetes control plane VMs. The floating IP moves to the control plane node that is the etcd leader in the Kubernetes cluster. This improves availability in the case of a network partition event.

        • An IP address to serve as a buffer in case a Kubernetes control plane VM fails and a new control plane VM is being brought up to replace it.

        Subnet Mask Enter the subnet mask for the management network.

        For example, 255.255.255.0.

        Gateway Enter a gateway for the management network.
        DNS Servers Enter the addresses of the DNS servers that you use in your environment. If the vCenter Server system is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor.
        DNS Search Domains Enter domain names that DNS searches inside the Kubernetes control plane nodes, such as corp.local, so that the DNS server can resolve them.
        NTP Servers Enter the addresses of the NTP servers that you use in your environment, if any.
    2. Click Next.
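
    The following Python sketch illustrates the block of five consecutive management IP addresses reserved from the starting address. The starting address is an example value, and the role assignment shown is illustrative; the wizard assigns the roles within the reserved block:

        from ipaddress import IPv4Address

        start = IPv4Address("10.10.120.70")    # example Starting IP Address
        block = [start + i for i in range(5)]  # five consecutive addresses

        vm_ips = block[:3]       # one address per control plane VM
        floating_ip = block[3]   # follows the etcd leader control plane VM
        buffer_ip = block[4]     # spare for replacing a failed control plane VM

        print("Control plane VMs:", [str(ip) for ip in vm_ips])
        print("Floating IP:", floating_ip)
        print("Buffer IP:", buffer_ip)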
  10. In the Workload Network pane, configure settings for the networks for namespaces.
    Option Description
    vSphere Distributed Switch Select the vSphere Distributed Switch that handles overlay networking for the Supervisor.

    For example, select DSwitch.

    DNS Servers Enter the IP addresses of the DNS servers that you use with your environment, if any.

    For example, 10.142.7.1.

    NAT Mode The NAT mode is selected by default.

    If you deselect the option, the IP addresses of all workloads, such as VMs and TKG cluster nodes, are directly accessible from outside the tier-0 gateway, and you do not have to configure the egress CIDRs.

    Note: If you deselect NAT mode, File Volume storage is not supported.
    Namespace Network Enter one or more IP CIDRs to create subnets/segments and assign IP addresses to workloads.
    Ingress CIDRs Enter a CIDR annotation that determines the ingress IP range for the Kubernetes services. This range is used for services of type load balancer and ingress.
    Edge Cluster Select the NSX Edge cluster that has the tier-0 gateway that you want to use for namespace networking.

    For example, select EDGE-CLUSTER.

    Tier-0 Gateway Select the tier-0 gateway to associate with the cluster tier-1 gateway.
    Subnet Prefix Enter the subnet prefix that specifies the size of the subnet reserved for namespace segments. The default is /28. See the sketch after this table for an example.
    Service CIDRs Enter a CIDR annotation to determine the IP range for Kubernetes services. You can use the default value.
    Egress CIDRs Enter a CIDR annotation that determines the egress IP for Kubernetes services. Only one egress IP address is assigned for each namespace in the Supervisor. The egress IP is the IP address that the Kubernetes workloads in the particular namespace use to communicate outside of NSX.
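
    To see how the Subnet Prefix works, the following Python sketch divides an example namespace network into /28 segments of 16 addresses each. The CIDR value is an assumption for illustration, not a recommended setting:

        from ipaddress import ip_network
        from itertools import islice

        namespace_network = ip_network("10.244.0.0/20")  # example Namespace Network
        subnet_prefix = 28                               # Subnet Prefix field

        # Each namespace segment is carved from the namespace network.
        for segment in islice(namespace_network.subnets(new_prefix=subnet_prefix), 3):
            print(segment, "->", segment.num_addresses, "addresses")
        # 10.244.0.0/28 -> 16 addresses
        # 10.244.0.16/28 -> 16 addresses
        # 10.244.0.32/28 -> 16 addresses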
  11. Click Next.
  12. On the Review and Confirm page, scroll up and review all the settings that you configured so far, and set advanced settings for the Supervisor deployment.
    Option Description
    Supervisor Control Plane Size Select the sizing for the control plane VMs. The size of the control plane VMs determines the amount of workloads that you can run on the Supervisor. You can select from:
    • Tiny - 2 CPUs, 8 GB Memory, 32 GB Storage
    • Small - 4 CPUs, 16 GB Memory, 32 GB Storage
    • Medium - 8 CPUs, 16 GB Memory, 32 GB Storage
    • Large - 16 CPUs, 32 GB Memory, 32 GB Storage
    Note: Once you select a control plane size, you can only scale up. You cannot scale down to a smaller size.
    API Server DNS Names Optionally, enter the FQDNs that will be used to access the Supervisor control plane instead of the Supervisor control plane IP address. The FQDNs that you enter are embedded into an automatically generated certificate. By using FQDNs for the Supervisor, you can omit specifying an IP address in the load balancer certificate.
    Export Configuration Export a JSON file containing the values of the Supervisor configuration that you have entered.

    You can later modify and import the file if you want to redeploy the Supervisor or if you want to deploy a new Supervisor with a similar configuration.

    Exporting the Supervisor configuration saves you from entering all the configuration values in this wizard anew in case of Supervisor redeployment. A sketch of modifying the exported file follows.
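
    As a minimal sketch of reusing an exported configuration, the following Python snippet loads the file, changes one value, and writes the result back. The file name and the "supervisorName" key are hypothetical placeholders; the real exported JSON uses the schema that the wizard produces:

        import json

        with open("supervisor-config.json") as f:  # file exported from the wizard
            config = json.load(f)

        # "supervisorName" is a hypothetical key, used only for illustration.
        config["supervisorName"] = "supervisor-02"

        with open("supervisor-config-new.json", "w") as f:
            json.dump(config, f, indent=2)
        # Import the modified file when you rerun the enablement wizard.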

  13. Click Finish when you are done reviewing the settings.
    The enablement of the Supervisor initiates the creation and configuration of the control plane VMs and other components.

What to do next

Once you complete the wizard for enabling a Supervisor, you can track the activation process and observe potential issues that require troubleshooting. In the Config Status column, click view next to the status of the Supervisor.
Figure 1. Supervisor activation view

The Supervisor enablement view displays the conditions in the enablement process and their respective statuses.

For the deployment process to complete, the Supervisor must reach the desired state, which means that all conditions are reached. When a Supervisor is successfully enabled, its status changes from Configuring to Running. While the Supervisor is in the Configuring state, reaching each of the conditions is retried continuously until it succeeds. For this reason, the number of conditions that are reached can change back and forth, for example, 10 out of 16 conditions reached and then 4 out of 16 conditions reached, and so on. In rare cases, the status can change to Error if errors prevent the Supervisor from reaching the desired state.
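
The fluctuation described above can be pictured with a conceptual Python simulation. This is not an actual vSphere API; it only models conditions that are retried until reached and that can regress along the way:

    import random

    conditions = {f"condition-{i}": False for i in range(16)}

    while not all(conditions.values()):
        for name in conditions:
            # A retry may reach a condition, and an already-reached
            # condition may regress, so the count moves back and forth.
            conditions[name] = random.random() < 0.9
        print(f"{sum(conditions.values())} out of {len(conditions)} conditions reached")

    print("Supervisor status: Running")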

For more information on the deployment errors and how to troubleshoot them, see Resolving Errors Health Statuses on Supervisor Cluster During Initial Configuration Or Upgrade.

If you want to redeploy the Supervisor by altering the configuration values that you entered in the wizard, check out Deploy a Supervisor by Importing a JSON Configuration File.