Check out how to deploy a one-zone Supervisor with the VDS networking stack and with the HAProxy or NSX Advanced Load Balancer. A one-zone Supervisor that is configured with VDS networking supports the deployment of Tanzu Kubernetes clusters created by using Tanzu Kubernetes Grid, VMs created by using the VM Service, and vSphere Pods.

Note: Once you deploy a Supervisor on a single vSphere cluster, which results in creating one vSphere Zone, you cannot expand the Supervisor to a three-zone deployment. You can deploy a Supervisor either on one vSphere Zone (single-cluster deployment) or on three vSphere Zones.

Prerequisites

Procedure

  1. From the home menu, select Workload Management.
  2. Select a licensing option for the Supervisor.
    • If you have a valid Tanzu Edition license, click Add License to add the license key to the license inventory of vSphere.

    • If you do not have a Tanzu Edition license yet, enter the contact details so that you can receive communication from VMware and click Get Started.

    The evaluation period of a Supervisor lasts for 60 days. Within that period, you must assign a valid Tanzu Edition license to the cluster. If you added a Tanzu Edition license key, you can assign that key within the 60-day evaluation period once you complete the Supervisor setup.

  3. On the Workload Management screen, click Get Started again.
  4. On the vCenter Server and Network page, select the vCenter Server system that is set up for Supervisor deployment, select vSphere Distributed Switch (VDS) as the networking stack, and click Next.
  5. To enable a one-zone Supervisor, select CLUSTER DEPLOYMENT on the Supervisor location page.
    Enabling workload management on a one-zone Supervisor automatically creates a vSphere Zone and assigns the cluster to the zone.
  6. Select a cluster from the list of compatible clusters.
  7. Enter a name for the Supervisor.
  8. (Optional) Enter a name for the vSphere Zone and click NEXT.
    If you do not enter a name for the vSphere Zone, a name is automatically assigned and you cannot change the name later.
  9. On the Storage page, configure storage for placement of control plane VMs.
    Control Plane Node: Select the storage policy for placement of the control plane VMs.
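
    If you are unsure which storage policies are available, you can list them with PowerCLI before you make a selection. This is a minimal sketch; it assumes that the VMware.VimAutomation.Storage module is installed and that you are already connected to the vCenter Server with Connect-VIServer.

      # List the storage policies defined in the connected vCenter Server.
      Get-SpbmStoragePolicy | Select-Object Name, Description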

  10. On the Load Balancer screen, configure the settings for a load balancer.
    1. Enter a name for the load balancer.
    2. Select the load balancer type.
      You can select from NSX Advanced Load Balancer and HAProxy.
    3. Configure the settings for the load balancer.
      • Enter the following settings for NSX Advanced Load Balancer:

        Name: Enter a name for the NSX Advanced Load Balancer.

        NSX Advanced Load Balancer Controller Endpoint: The IP address of the NSX Advanced Load Balancer Controller. The default port is 443.

        User name: The user name that is configured with the NSX Advanced Load Balancer. You use this user name to access the Controller.

        Password: The password for the user name.

        Server Certificate: The certificate used by the Controller. You can provide the certificate that you assigned during the configuration. For more information, see Assign a Certificate to the Controller.

        Cloud Name: Enter the name of the custom cloud that you set up. Note that the cloud name is case sensitive. To use Default-Cloud, leave this field empty. For more information, see Configure the Controller.

      • Enter the following settings for HAProxy:

        HAProxy Load Balancer Controller Endpoint: The IP address and port of the HAProxy Data Plane API, which is the management IP address of the HAProxy appliance. This component controls the HAProxy server and runs inside the HAProxy VM.

        Username: The user name that is configured with the HAProxy OVA file. You use this name to authenticate with the HAProxy Data Plane API.

        Password: The password for the user name.

        Virtual IP Ranges: The range of IP addresses that is used in the Workload Network by Tanzu Kubernetes clusters. This IP range comes from the list of IPs that were defined in the CIDR you configured during the HAProxy appliance deployment. You can set the entire range configured in the HAProxy deployment, but you can also set a subset of that CIDR if you want to create multiple Supervisors and use IPs from that CIDR range. This range must not overlap with the IP range defined for the Workload Network in this wizard, and it must not overlap with any DHCP scope on the Workload Network.

        HAProxy Management TLS Certificate: The certificate in PEM format that is signed or is a trusted root of the server certificate that the Data Plane API presents. You can obtain the certificate in one of the following ways:

        • Option 1: If root access is enabled, SSH to the HAProxy VM as root and copy /etc/haproxy/ca.crt to the Server Certificate Authority field. Do not use escaped line breaks in the \n format.

        • Option 2: Right-click the HAProxy VM and select Edit Settings. Copy the CA certificate from the appropriate field and convert it from Base64 by using a conversion tool such as https://www.base64decode.org/.

        • Option 3: Run the following PowerCLI script. Replace the values of the $vc, $vc_user, and $vc_password variables with values for your environment, and set $VMname to the name of your HAProxy VM.

          # Connect to the vCenter Server. Replace these values with your own.
          $vc = "10.21.32.43"
          $vc_user = "administrator@vsphere.local"
          $vc_password = "PASSWORD"
          Connect-VIServer -User $vc_user -Password $vc_password -Server $vc

          # The HAProxy VM and the advanced setting that stores the CA certificate.
          $VMname = "haproxy-demo"
          $AdvancedSettingName = "guestinfo.dataplaneapi.cacert"

          # Poll until the VM has booted and generated the CA certificate
          # (if you have not provided one already). This can take 5-10 minutes.
          $Base64cert = Get-VM $VMname | Get-AdvancedSetting -Name $AdvancedSettingName
          while ([string]::IsNullOrEmpty($Base64cert.Value)) {
              Write-Host "Waiting for CA certificate generation..."
              Start-Sleep -Seconds 2
              $Base64cert = Get-VM $VMname | Get-AdvancedSetting -Name $AdvancedSettingName
          }

          # Decode the certificate from Base64 and print it in PEM format.
          Write-Host "CA certificate found. Converting from Base64..."
          $cert = [Text.Encoding]::Utf8.GetString([Convert]::FromBase64String($Base64cert.Value))
          Write-Host $cert
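
        If the script succeeds, you can optionally save the decoded certificate to a file so that it is easy to paste into the wizard. This one-liner continues from the script above; the file name is only an example.

          # Save the decoded PEM certificate (the file name is an example).
          Set-Content -Path .\haproxy-ca.crt -Value $cert -Encoding Ascii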
  11. On the Management Network screen, configure the parameters for the network that will be used for Kubernetes control plane VMs.
    1. Select a Network Mode.
      • DHCP Network. In this mode, all the IP addresses for the management network, such as the control plane VM IPs and a floating IP, as well as the DNS servers, DNS search domains, and NTP server, are acquired automatically from a DHCP server. To obtain floating IPs, the DHCP server must be configured to support client identifiers. In DHCP mode, all control plane VMs use stable DHCP client identifiers to acquire IP addresses. You can use these client identifiers to set up static IP assignments for the control plane VM IPs on the DHCP server and ensure that the IPs do not change. Changing the IPs of control plane VMs or floating IPs is not supported.
        You can override some of the settings inherited from DHCP by entering values in the text fields for these settings.
        Network: Select the network that will handle the management traffic for the Supervisor.

        Floating IP: Enter an IP address that determines the starting point for reserving five consecutive IP addresses for the Kubernetes control plane VMs, as follows:

        • An IP address for each of the Kubernetes control plane VMs.

        • A floating IP address for one of the Kubernetes control plane VMs to serve as an interface to the management network. The control plane VM that has the floating IP address assigned acts as the leading VM for all three Kubernetes control plane VMs. The floating IP moves to the control plane node that is the etcd leader in the Kubernetes cluster. This improves availability in the case of a network partition event.

        • An IP address to serve as a buffer in case a Kubernetes control plane VM fails and a new control plane VM is being brought up to replace it.

        For a sketch that lists the five reserved addresses for a given starting IP, see the example after this step.

        DNS Servers: Enter the addresses of the DNS servers that you use in your environment. If the vCenter Server system is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor.

        DNS Search Domains: Enter domain names that DNS searches inside the Kubernetes control plane nodes, such as corp.local, so that the DNS server can resolve them.

        NTP Servers: Enter the addresses of the NTP servers that you use in your environment, if any.
      • Static. Manually enter all networking settings for the management network.
        Network: Select the network that will handle the management traffic for the Supervisor.

        Starting IP Address: Enter an IP address that determines the starting point for reserving five consecutive IP addresses for the Kubernetes control plane VMs, as follows:

        • An IP address for each of the Kubernetes control plane VMs.

        • A floating IP address for one of the Kubernetes control plane VMs to serve as an interface to the management network. The control plane VM that has the floating IP address assigned acts as the leading VM for all three Kubernetes control plane VMs. The floating IP moves to the control plane node that is the etcd leader in the Kubernetes cluster. This improves availability in the case of a network partition event.

        • An IP address to serve as a buffer in case a Kubernetes control plane VM fails and a new control plane VM is being brought up to replace it.

        For a sketch that lists the five reserved addresses, see the example after this step.

        Subnet Mask: Enter the subnet mask for the management network. For example, 255.255.255.0.

        Gateway: Enter a gateway for the management network.

        DNS Servers: Enter the addresses of the DNS servers that you use in your environment. If the vCenter Server system is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor.

        DNS Search Domains: Enter domain names that DNS searches inside the Kubernetes control plane nodes, such as corp.local, so that the DNS server can resolve them.

        NTP Servers: Enter the addresses of the NTP servers that you use in your environment, if any.
    2. Click Next.
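
    The five consecutive addresses are reserved starting at the IP address that you enter. The following minimal sketch, which uses a hypothetical starting IP, prints the implied block so you can check it against your IP plan:

      # Print the five consecutive IPs reserved from a starting address.
      $startingIp = "10.10.0.50"   # hypothetical value; use your own
      $bytes = ([System.Net.IPAddress]::Parse($startingIp)).GetAddressBytes()
      [Array]::Reverse($bytes)     # network byte order to little-endian
      $base = [System.BitConverter]::ToUInt32($bytes, 0)
      0..4 | ForEach-Object {
          $b = [System.BitConverter]::GetBytes([uint32]($base + $_))
          [Array]::Reverse($b)     # back to network byte order
          [System.Net.IPAddress]::new($b).ToString()
      }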
  12. On the Workload Network page, enter the settings for the network that will handle the networking traffic for Kubernetes workloads running on the Supervisor.
    Note:

    If you select DHCP to provide the networking settings for Workload Networks, you cannot create any new Workload Networks after you complete the Supervisor configuration.

    1. Select a network mode.
      • DHCP Network. In this network mode, all networking settings for Workload Networks are acquired through DHCP. You can also override some of the settings inherited from DHCP by entering values in the text fields for these settings:
        Internal Network for Kubernetes Services: Enter a CIDR notation that determines the range of IP addresses for Tanzu Kubernetes clusters and services that run inside the clusters.

        Port Group: Select the port group that will serve as the Primary Workload Network to the Supervisor. The primary network handles the traffic for the Kubernetes control plane VMs and the Kubernetes workload traffic. Depending on your networking topology, you can later assign a different port group to serve as the network for each namespace. This way, you can provide layer 2 isolation between the namespaces in the Supervisor. Namespaces that do not have a different port group assigned as their network use the primary network. Tanzu Kubernetes clusters use only the network that is assigned to the namespace where they are deployed, or the primary network if no explicit network is assigned to that namespace.

        Network Name: Enter the network name.

        DNS Servers: Enter the IP addresses of the DNS servers that you use with your environment, if any. For example, 10.142.7.1. When you enter the IP address of a DNS server, a static route is added on each control plane VM so that the traffic to the DNS servers goes through the workload network. If the DNS servers that you specify are shared between the management network and the workload network, the DNS lookups on the control plane VMs are routed through the workload network after the initial setup.

        NTP Servers: Enter the address of the NTP server that you use with your environment, if any.
      • Static. Manually configure the Workload Network settings.
        Internal Network for Kubernetes Services: Enter a CIDR notation that determines the range of IP addresses for Tanzu Kubernetes clusters and services that run inside the clusters.

        Port Group: Select the port group that will serve as the Primary Workload Network to the Supervisor. The primary network handles the traffic for the Kubernetes control plane VMs and the Kubernetes workload traffic. Depending on your networking topology, you can later assign a different port group to serve as the network for each namespace. This way, you can provide layer 2 isolation between the namespaces in the Supervisor. Namespaces that do not have a different port group assigned as their network use the primary network. Tanzu Kubernetes clusters use only the network that is assigned to the namespace where they are deployed, or the primary network if no explicit network is assigned to that namespace.

        Network Name: Enter the network name.

        IP Address Ranges: Enter an IP range for allocating IP addresses to the Kubernetes control plane VMs and workloads. This address range connects the Supervisor nodes and, in the case of a single Workload Network, also connects the Tanzu Kubernetes cluster nodes. This IP range must not overlap with the load balancer VIP range when using the Default configuration for HAProxy. For a sketch that checks two ranges for overlap, see the example after this step.

        Subnet Mask: Enter the subnet mask for the network.

        Gateway: Enter the gateway for the primary network.

        NTP Servers: Enter the address of the NTP server that you use with your environment, if any.

        DNS Servers: Enter the IP addresses of the DNS servers that you use with your environment, if any. For example, 10.142.7.1.

    2. Click Next.
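
    The Workload Network IP range must not overlap with the load balancer VIP range. The following minimal sketch, which uses hypothetical ranges, checks two ranges for overlap; two ranges overlap when each one starts at or before the other one ends:

      # Convert a dotted-quad IP address to an unsigned 32-bit integer.
      function ConvertTo-IpUInt32([string]$Ip) {
          $b = ([System.Net.IPAddress]::Parse($Ip)).GetAddressBytes()
          [Array]::Reverse($b)   # network byte order to little-endian
          [System.BitConverter]::ToUInt32($b, 0)
      }

      # Hypothetical ranges; replace them with your own values.
      $vipStart = ConvertTo-IpUInt32 "192.168.1.64"
      $vipEnd   = ConvertTo-IpUInt32 "192.168.1.127"
      $wklStart = ConvertTo-IpUInt32 "192.168.1.128"
      $wklEnd   = ConvertTo-IpUInt32 "192.168.1.200"

      if (($vipStart -le $wklEnd) -and ($wklStart -le $vipEnd)) {
          Write-Host "The ranges overlap. Choose a different Workload Network range."
      } else {
          Write-Host "The ranges do not overlap."
      }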
  13. On the Review and Confirm page, scroll up and review all the settings that you configured so far, and set advanced settings for the Supervisor deployment.
    Supervisor Control Plane Size: Select the sizing for the control plane VMs. The size of the control plane VMs determines the amount of workloads that you can run on the Supervisor. You can select from:
    • Tiny - 2 CPUs, 8 GB Memory, 32 GB Storage
    • Small - 4 CPUs, 16 GB Memory, 32 GB Storage
    • Medium - 8 CPUs, 16 GB Memory, 32 GB Storage
    • Large - 16 CPUs, 32 GB Memory, 32 GB Storage
    Note: Once you select a control plane size, you can only scale it up. You cannot scale it down to a smaller size.
    API Server DNS Names: Optionally, enter the FQDNs that will be used to access the Supervisor control plane instead of the Supervisor control plane IP address. The FQDNs that you enter are embedded into an automatically generated certificate. By using FQDNs for the Supervisor, you can omit specifying an IP SAN in the load balancer certificate.
    Export Configuration: Export a JSON file containing the values of the Supervisor configuration that you have entered. You can later modify and import the file if you want to redeploy the Supervisor or to deploy a new Supervisor with a similar configuration. Exporting the Supervisor configuration saves you from entering all the configuration values in this wizard anew in case of a Supervisor redeployment. For a sketch of editing the exported file, see the example after this step.
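
    The following minimal sketch shows one way to modify the exported file before importing it for a redeployment. The file name and the key are hypothetical; inspect your own export for the actual field names.

      # Load the exported Supervisor configuration (the file name is an example).
      $path = ".\supervisor-config.json"
      $config = Get-Content -Path $path -Raw | ConvertFrom-Json

      # Change a value before re-importing (the key shown is hypothetical).
      # $config.supervisorName = "supervisor-02"

      # Write the updated configuration back to disk.
      $config | ConvertTo-Json -Depth 16 | Set-Content -Path $path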

  14. When you are done reviewing the settings, click Finish.
    The deployment of the Supervisor initiates the creation and configuration of the control plane VMs and other components.

What to do next

Once you complete the wizard for enabling a Supervisor, you can track the activation process and observe potential issues that require troubleshooting. In the Config Status column, click view next to the status of the Supervisor.
Figure 1. Supervisor activation view

The Supervisor enablement view displays the conditions in the enablement process and the respective status.

For the deployment process to complete, the Supervisor must reach the desired state, which means that all conditions are reached. When a Supervisor is successfully enabled, its status changes from Configuring to Running. While the Supervisor is in the Configuring state, reaching each of the conditions is retried continuously. If a condition is not reached, the operation is retried until it succeeds. For this reason, the number of conditions that are reached can change back and forth, for example, 10 out of 16 conditions reached and then 4 out of 16 conditions reached, and so on. In very rare cases, the status can change to Error, if errors prevent reaching the desired state.

For more information on the deployment errors and how to troubleshoot them, see Resolving Errors Health Statuses on Supervisor Cluster During Initial Configuration Or Upgrade.

If you want to redeploy the Supervisor by altering the configuration values that you entered in the wizard, see Deploy a Supervisor by Importing a JSON Configuration File.