As a vSphere administrator, you can configure a vSphere cluster as a Supervisor Cluster that uses the NSX networking stack to provide connectivity to Kubernetes workloads.

Prerequisites

Caution: Do not disable vSphere DRS after you configure the Supervisor Cluster. Having DRS enabled at all times is a mandatory prerequisite for running workloads on the Supervisor Cluster. Disabling DRS breaks your Tanzu Kubernetes clusters.
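If you want to confirm the DRS state programmatically before you begin, a minimal pyVmomi sketch such as the following can report it. This is not part of the procedure; the host name, credentials, and certificate handling are placeholders for your environment.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder credentials; adjust certificate handling for production use.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.corp.local", user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk all clusters and report whether DRS is enabled on each.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        print(f"{cluster.name}: DRS enabled = {cluster.configuration.drsConfig.enabled}")
    view.DestroyView()
    Disconnect(si)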

Procedure

  1. From the vSphere Client home menu, select Workload Management.
  2. Click Get Started.
  3. Select the vCenter Server system that you want to configure.
  4. Select the NSX networking stack.
  5. Click Next.
  6. On the Select a Cluster screen, select a datacenter.
  7. Select a cluster from the list of compatible clusters and click Next.
  8. On the Control Plane Size page, select the sizing for the control plane VMs.
    The size of the control plane VMs determines the number of workloads that you can run on the Supervisor Cluster.
    Refer to the VMware Configuration Maximums site for guidance.
  9. Click Next.
  10. On the Management Network screen, configure the parameters for the network that will be used for Kubernetes control plane VMs.
    1. Select a Network Mode.
      • DHCP Network. In this mode, all the IP addresses for the management network, such as the control plane VM IPs, DNS servers, DNS search domains, and NTP server, are acquired automatically from a DHCP server.
      • Static. Manually enter all networking settings for the management network.
    2. Configure the settings for the management network.
      If you have selected the DHCP network mode, but you want to override the settings acquired from DHCP, click Additional Settings and enter new values. If you have selected the static network mode, fill in the values for the management network settings manually.
      • Network: Select a network that has a VMkernel adapter configured for the management traffic.
      • Starting Control Plane IP Address: Enter an IP address that determines the starting point for reserving five consecutive IP addresses for the Kubernetes control plane VMs, as follows (see the sketch after this step):
        • An IP address for each of the three Kubernetes control plane VMs.
        • A floating IP address for one of the Kubernetes control plane VMs to serve as an interface to the management network. The control plane VM that holds the floating IP address acts as the leading VM for all three Kubernetes control plane VMs. The floating IP moves to the control plane node that is the etcd leader in this Kubernetes cluster, which is the Supervisor Cluster. This improves availability in the case of a network partition event.
        • An IP address to serve as a buffer in case a Kubernetes control plane VM fails and a new control plane VM is brought up to replace it.
      • Subnet Mask: Applicable only for static IP configuration. Enter the subnet mask for the management network, for example, 255.255.255.0.
      • DNS Servers: Enter the addresses of the DNS servers that you use in your environment. If the vCenter Server system is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor Cluster.
      • DNS Search Domains: Enter the domain names that DNS searches inside the Kubernetes control plane nodes, such as corp.local, so that the DNS server can resolve them.
      • NTP: Enter the addresses of the NTP servers that you use in your environment, if any.
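    To illustrate the five-address reservation described above, the following sketch uses Python's standard ipaddress module to list the block that a given starting IP implies and to check that it fits inside the management subnet. The IP and subnet values are placeholders, not recommendations.

      import ipaddress

      start_ip = ipaddress.ip_address("10.10.10.10")   # placeholder starting control plane IP
      subnet = ipaddress.ip_network("10.10.10.0/24")   # placeholder subnet (mask 255.255.255.0)

      # Five consecutive addresses: three control plane VMs, one floating IP, one buffer.
      reserved = [start_ip + i for i in range(5)]
      for ip in reserved:
          assert ip in subnet, f"{ip} falls outside {subnet}"
          print(ip)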
  11. In the Workload Network pane, configure the settings for the namespace networks.
    The namespace network settings provide connectivity to vSphere Pods and namespaces running in the Supervisor Cluster. By default, namespaces use the cluster-level network configuration.
    • vSphere Distributed Switch: Select the vSphere Distributed Switch that handles overlay networking for the Supervisor Cluster. For example, select DSwitch.
    • DNS Server: Enter the IP addresses of the DNS servers that you use with your environment, if any. For example, 10.142.7.1.
    • API Server Endpoint FQDN: Optionally, enter the FQDN of the API server endpoint.
    • Edge Cluster: Select the NSX Edge cluster that has the tier-0 gateway that you want to use for namespace networking. For example, select EDGE-CLUSTER.
    • Tier-0 Gateway: Select the tier-0 gateway to associate with the cluster tier-1 gateway.
    • NAT Mode: NAT mode is selected by default. If you deselect the option, the IP addresses of all workloads, such as vSphere Pods, VMs, and Tanzu Kubernetes cluster nodes, are directly accessible from outside the tier-0 gateway, and you do not have to configure the egress CIDRs.
      Note: If you deselect NAT mode, File Volume storage is not supported.
    • Namespace Network: Enter one or more IP CIDRs from which subnets, or segments, are created and IP addresses are assigned to workloads (see the sketch after this step).
    • Namespace Subnet Prefix: Enter the subnet prefix that specifies the size of the subnet reserved for namespace segments. The default is 28.
    • Services CIDRs: Enter a CIDR annotation that determines the IP range for Kubernetes services. You can use the default value.
    • Ingress CIDRs: Enter a CIDR annotation that determines the ingress IP range for Kubernetes services. This range is used for services of type load balancer and for ingress.
    • Egress CIDRs: Enter a CIDR annotation that determines the egress IP for Kubernetes services. Only one egress IP address is assigned to each namespace in the Supervisor Cluster. The egress IP is the address that the vSphere Pods in a particular namespace use to communicate outside of NSX.
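    The following sketch, again using Python's standard ipaddress module, shows how a Namespace Network CIDR is carved into segments of the Namespace Subnet Prefix size, and how many addresses an ingress range provides. The CIDR values are placeholders for illustration only.

      import ipaddress

      namespace_net = ipaddress.ip_network("10.244.0.0/20")   # placeholder Namespace Network CIDR
      segment_prefix = 28                                     # default Namespace Subnet Prefix

      # Each namespace segment is one /28 subnet cut from the namespace network.
      segments = list(namespace_net.subnets(new_prefix=segment_prefix))
      print(f"{namespace_net} yields {len(segments)} /{segment_prefix} segments "
            f"of {segments[0].num_addresses} addresses each")

      ingress = ipaddress.ip_network("192.168.100.0/24")      # placeholder Ingress CIDR
      print(f"{ingress} provides {ingress.num_addresses} IPs for load balancer and ingress services")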
  12. Click Next.
  13. On the Storage page, configure storage and file volume support.
    1. Select storage policies for the Supervisor Cluster.
      The storage policy you select for each of the following objects ensures that the object is placed on the datastore referenced in the storage policy. You can use the same or different storage policies for the objects.
      • Control Plane Node: Select the storage policy for placement of the control plane VMs.
      • Pod Ephemeral Disks: Select the storage policy for placement of the vSphere Pods.
      • Container Image Cache: Select the storage policy for placement of the cache of container images.
    2. (Optional) Activate file volume support.
      This option is required if you plan to deploy ReadWriteMany persistent volumes on a cluster. See Creating ReadWriteMany Persistent Volumes in vSphere with Tanzu.
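    As a hedged illustration of what file volume support enables, a DevOps engineer could later request a ReadWriteMany volume from the Supervisor Cluster with the Kubernetes Python client, as sketched below. The namespace and storage class names are placeholders for your environment.

      from kubernetes import client, config

      # Assumes a kubeconfig context that already points at the Supervisor Cluster.
      config.load_kube_config()

      pvc = {
          "apiVersion": "v1",
          "kind": "PersistentVolumeClaim",
          "metadata": {"name": "shared-data"},
          "spec": {
              "accessModes": ["ReadWriteMany"],          # requires file volume support
              "storageClassName": "my-storage-policy",   # placeholder storage class
              "resources": {"requests": {"storage": "2Gi"}},
          },
      }
      client.CoreV1Api().create_namespaced_persistent_volume_claim("my-namespace", pvc)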
  14. In the Ready to Complete section, review the settings and click Finish.
    The cluster is enabled with vSphere with Tanzu and you can create vSphere Namespaces to provide to DevOps engineers. Kubernetes control plane nodes are created on the hosts that are part of the cluster, and the Spherelet process, a Kubernetes agent, is created on each host.
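    If you want to verify the result, one option is a short check with the Kubernetes Python client, assuming you have already logged in to the Supervisor Cluster (for example, with the vSphere Plugin for kubectl) so that a valid kubeconfig context exists:

      from kubernetes import client, config

      config.load_kube_config()  # context must point at the Supervisor Cluster

      # List the nodes and report readiness; the control plane VMs appear here.
      for node in client.CoreV1Api().list_node().items:
          ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
          print(f"{node.metadata.name}: Ready={ready}")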

What to do next

Create and configure a vSphere Namespace on the Supervisor Cluster. See Create and Configure a vSphere Namespace.