As a vSphere administrator, you can enable the Workload Management platform on a vSphere cluster by configuring the vSphere networking stack to provide connectivity to workloads. A Supervisor Cluster that is configured with vSphere networking supports the deployment of Tanzu Kubernetes clusters created by using the Tanzu Kubernetes Grid Service. It does not support running vSphere Pods.

Prerequisites

Select the topology that you want to implement for the Supervisor Cluster and verify that your vSphere environment meets the system requirements for supporting that topology. See System Requirements and Topologies for Setting Up a Supervisor Cluster with vSphere Networking.

  • Verify that DRS and HA are enabled on the vSphere cluster, and that DRS is in fully automated mode.
  • Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.
  • Create a storage policy for the placement of Kubernetes control plane VMs. See Create Storage Policies for vSphere with Tanzu.
  • Create a subscribed content library on the vCenter Server system to accommodate the VM image that is used for creating nodes of Tanzu Kubernetes clusters. The library will contain the latest distributions of Kubernetes. See Create a Subscribed Content Library for Tanzu Kubernetes Clusters.
  • Add all hosts from the cluster to a vSphere Distributed Switch and create port groups for Workload Networks. See Create a vSphere Distributed Switch for a Supervisor Cluster that Uses the vSphere Networking Stack.
  • Configure an HAProxy load balancer instance that is routable to the vSphere Distributed Switch that is connected to the hosts from the vSphere cluster.
  • Verify that you have the Modify cluster-wide configuration privilege on the cluster.

Procedure

  1. From the home menu, select Workload Management.
  2. Select a licensing option for the Supervisor Cluster.
    • If you have a valid Tanzu Edition license, click Add License to add the license key to the license inventory of vSphere.
    • If you do not have a Tanzu Edition license yet, fill in the contact details so that you can receive communication from VMware later on and click Get Started.
    The evaluation period of a Supervisor Cluster lasts for 60 days. Within that period, you must assign a valid Tanzu Edition license to the cluster. If you added a Tanzu Edition license key, you can assign that key within the 60-day evaluation period once you complete the Supervisor Cluster setup.
  3. On the Workload Management screen, click Get Started again.
  4. Select a vCenter Server system, select vCenter Server Network, and click Next.
  5. Select a cluster from the list of compatible clusters.
  6. On the Control Plane Size page, select the t-shirt size for the Kubernetes control plane VMs that will be created on each host from the cluster.
    The amount of resources that you allocate to control plane VMs determines the amount of Kubernetes workloads that you can run on the Supervisor Cluster.
  7. On the Load Balancer screen, enter the settings for the HAProxy instance.
    • Name: A user-friendly name for the load balancer.
    • Type: The load balancer type.
    • Data Plane API Address(es): The IP address and port of the HAProxy Data Plane API. This component controls the HAProxy server and runs inside the HAProxy VM.
    • User name: The user name that is configured with the HAProxy OVA file. You use this name to authenticate with the HAProxy Data Plane API.
    • Password: The password for the user name.
    • IP Address Ranges for Virtual Servers: An IP range from which HAProxy allocates virtual IPs. HAProxy reserves this range for virtual IPs, and the addresses in it cannot be used for other connections. Make sure that the range you configure is located on a separate subnet.
    • Server Certificate Authority: The certificate in PEM format that signed, or is a trusted root of, the server certificate that the Data Plane API presents. To obtain it, SSH to the HAProxy VM as root and copy the contents of /etc/haproxy/ca.crt into this field. Do not use escaped line breaks in the \n format.
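The Server Certificate Authority field expects real line breaks, not literal \n escape sequences. A minimal Python sketch (the function name and sample string are illustrative, not part of any VMware tooling) that normalizes a PEM string copied with escaped newlines:

```python
def normalize_pem(pem: str) -> str:
    """Convert literal '\\n' escape sequences to real newlines and
    trim surrounding whitespace, so the PEM pastes cleanly."""
    return pem.replace("\\n", "\n").strip() + "\n"

# Example of a certificate copied with escaped newlines (body truncated)
escaped = "-----BEGIN CERTIFICATE-----\\nMIIB...\\n-----END CERTIFICATE-----"
print(normalize_pem(escaped))
```

If the pasted certificate still contains `\n` sequences after normalization, it was likely double-escaped at the source and should be re-copied from /etc/haproxy/ca.crt.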
  8. On the Management Network screen, configure the parameters for the network that will be used for Kubernetes control plane VMs.
    • Network: Select a network that has a VMkernel adapter configured for management traffic.
    • Starting Control Plane IP Address: Enter an IP address that determines the starting point for reserving five consecutive IP addresses for the Kubernetes control plane VMs, as follows:
      • An IP address for each of the three Kubernetes control plane VMs.
      • A floating IP address for one of the Kubernetes control plane VMs to serve as an interface to the management network. The control plane VM that holds the floating IP address acts as the leading VM for all three Kubernetes control plane VMs.
      • An IP address to serve as a buffer in case a Kubernetes control plane VM fails and a new control plane VM is brought up to replace it.
    • Subnet Mask: Enter the subnet mask for the management network.
    • DNS Servers: Enter the addresses of the DNS servers that you use in your environment. If the vCenter Server system is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor Cluster.
    • DNS Search Domains: Enter the domain names that DNS searches inside the Kubernetes control plane nodes, such as corp.local, so that the DNS server can resolve them.
    • NTP: Enter the addresses of the NTP servers that you use in your environment, if any.
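The five consecutive management addresses reserved from the starting IP can be enumerated with a short Python sketch using the standard ipaddress module (the starting address shown is only an example):

```python
import ipaddress

def control_plane_ips(start: str, count: int = 5) -> list:
    """Return the block of consecutive management IPs reserved by the
    Supervisor Cluster, beginning at the starting address."""
    first = ipaddress.ip_address(start)
    return [str(first + i) for i in range(count)]

# 3 control plane VMs + 1 floating IP + 1 buffer for VM replacement
print(control_plane_ips("10.10.100.10"))
# → ['10.10.100.10', '10.10.100.11', '10.10.100.12', '10.10.100.13', '10.10.100.14']
```

Running the check before you fill in the form confirms that all five addresses fall inside the management subnet and are not already assigned elsewhere.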
  9. In the Workload Network page, enter the settings for the network that will handle the networking traffic for Kubernetes workloads running on the Supervisor Cluster.
    1. In the IP addresses for Services field, enter the CIDR range of IP addresses for Tanzu Kubernetes clusters and services that run inside the clusters.
    2. In the Workload Network pane, click Add and enter the parameters for the network.
      • Name: The name of the vSphere Distributed Switch that is associated with the hosts in the cluster.
      • Port Group: Select the port group that will serve as the primary network to the Supervisor Cluster. The primary network handles the traffic for the Kubernetes control plane VMs and Kubernetes workload traffic. Depending on your networking topology, you can later assign a different port group to serve as the network to each namespace. This way, you can provide layer 2 isolation between the namespaces in the Supervisor Cluster. Namespaces that do not have a different port group assigned as their network use the primary network. Tanzu Kubernetes clusters use only the network that is assigned to the namespace where they are deployed, or they use the primary network if no explicit network is assigned to that namespace.
      • Gateway: Enter the gateway for the primary network.
      • Subnet Mask: Enter the subnet mask IP address.
      • IP Address Ranges: Enter an IP range for allocating IP addresses of Kubernetes control plane VMs and workloads.
      Note: You must use a unique IP address range for each Workload Network. Do not configure the same IP address range for multiple networks.
    3. Add more Workload Networks according to the topology that you implement for the Supervisor Cluster.
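Because each Workload Network must use a unique, non-overlapping IP range, a quick pre-flight check can catch conflicts before you submit the form. A minimal Python sketch (the ranges are placeholders, expressed as CIDR blocks for simplicity):

```python
import ipaddress

def ranges_overlap(a: str, b: str) -> bool:
    """True if two CIDR ranges share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Placeholder values: the services CIDR plus two Workload Network ranges
cidrs = ["10.96.0.0/23", "192.168.120.0/24", "192.168.130.0/24"]
conflicts = [(x, y) for i, x in enumerate(cidrs) for y in cidrs[i + 1:]
             if ranges_overlap(x, y)]
print(conflicts)  # an empty list means all ranges are disjoint
```

An empty conflict list means the services range and every Workload Network range are mutually disjoint, which satisfies the uniqueness note above.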
  10. On the Tanzu Kubernetes Grid page, click Add and select the subscribed content library that contains the VM images for deploying the nodes of Tanzu Kubernetes clusters.
  11. Review your settings and click Finish.

Results

A task runs on vCenter Server that turns the cluster into a Supervisor Cluster. Once the task completes, three Kubernetes control plane VMs are created on the hosts that are part of the cluster.

What to do next

Create and configure your first namespaces on the Supervisor Cluster.