As a vSphere administrator, you can configure a vSphere cluster as a Supervisor Cluster that uses the NSX-T Data Center networking stack to provide connectivity to Kubernetes workloads.

Prerequisites

  • Verify that your environment meets the system requirements for configuring a vSphere cluster as a Supervisor Cluster. For information about requirements, see System Requirements and Topologies for Setting Up a Supervisor Cluster with NSX-T Data Center.
  • Verify that NSX-T Data Center is installed and configured. For information about installing and configuring NSX-T Data Center, see Install and Configure NSX-T Data Center for vSphere with Tanzu.
  • Create storage policies for the placement of control plane VMs, pod ephemeral disks, and container images.
  • Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.
  • Verify that vSphere DRS and HA are enabled on the vSphere cluster and that DRS is in fully automated mode. An optional scripted check appears after this list.
  • Verify that you have the Modify cluster-wide configuration privilege on the cluster.
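The following optional pre-check is a minimal sketch, assuming pyVmomi and placeholder values for the vCenter Server host name, credentials, and cluster name; it reads the DRS and HA configuration that the prerequisites call for. It is not part of the official procedure.

    # Minimal pyVmomi sketch: confirm DRS (fully automated) and HA on the target cluster.
    # vcenter.example.com, the credentials, and "Cluster-1" are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        for cluster in view.view:
            if cluster.name == "Cluster-1":  # placeholder cluster name
                drs = cluster.configurationEx.drsConfig
                ha = cluster.configurationEx.dasConfig
                print("DRS enabled:", drs.enabled)
                print("DRS automation level:", drs.defaultVmBehavior)  # expect fullyAutomated
                print("HA enabled:", ha.enabled)
    finally:
        Disconnect(si)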

Procedure

  1. From the vSphere Client home menu, select Workload Management.
  2. Click Get Started.
  3. Select the vCenter Server system that you want to configure.
  4. Select the NSX networking stack.
  5. Click Next.
  6. Under Select a Cluster, select a datacenter.
  7. Select a cluster from the list of compatible clusters and click Next.
  8. On the Control Plane Size page, select the sizing for the control plane VMs.
    The size of the control plane VMs determines the number of workloads that you can run on the Supervisor Cluster.
    Refer to the VMware Configuration Maximums site for guidance.
  9. Click Next.
  10. In Network settings, configure networking settings for the control plane and worker nodes.
    1. In the Management Network pane, configure the following management traffic settings:
      Network: Select the network that you configured for the management traffic. For example, DPortGroup-MGMT.
      Starting Control Plane IP: Enter an IP address that determines the starting point for reserving five consecutive IP addresses for the Kubernetes control plane VMs, as follows:
      • An IP address for each of the three Kubernetes control plane VMs.
      • A floating IP address for one of the Kubernetes control plane VMs to serve as an interface to the management network. The control plane VM that has the floating IP address assigned acts as the leading VM for all three Kubernetes control plane VMs.
      • An IP address to serve as a buffer in case a Kubernetes control plane VM fails and a new control plane VM is brought up to replace it.
      For example, 10.197.79.152. A sketch that computes the reserved address block appears after the procedure.
      Subnet Mask: Enter the subnet mask for the management network. For example, 255.255.255.0.
      Gateway: Enter the gateway IP address for the management network. For example, 10.197.79.253.
      DNS Servers: Enter the addresses of the DNS servers that you use in your environment. If the vCenter Server system is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor Cluster. For example, 10.142.7.1.
      NTP Servers: Enter the addresses of the NTP servers that you use in your environment, if any.
      DNS Search Domains: Enter the domain names that DNS searches inside the Kubernetes control plane nodes, such as corp.local, so that the DNS server can resolve them.
    2. In the Workload Network pane, configure settings for the networks for namespaces.
      The namespace network settings provide connectivity to vSphere Pods and namespaces running in the Supervisor Cluster.
      vSphere Distributed Switch: Select the vSphere Distributed Switch that handles overlay networking for the Supervisor Cluster. For example, select DSwitch.
      Edge Cluster: Select the NSX Edge cluster that has the tier-0 gateway that you want to use for namespace networking. For example, select EDGE-CLUSTER.
      API Server Endpoint FQDN: Optionally, enter the FQDN of the API server endpoint.
      DNS Servers: Enter the IP addresses of the DNS servers that you use with your environment, if any. For example, 10.142.7.1.
      Pod CIDRs: Enter a CIDR annotation that determines the IP range for vSphere Native Pods. You can use the default value.
      Services CIDRs: Enter a CIDR annotation that determines the IP range for Kubernetes services. You can use the default value.
      Ingress CIDRs: Enter a CIDR annotation that determines the ingress IP range for the Kubernetes services. This range is used for services of type LoadBalancer and for ingresses.
      Egress CIDRs: Enter a CIDR annotation that determines the egress IP range for Kubernetes services. Only one egress IP address is assigned to each namespace in the Supervisor Cluster. The egress IP is the address that the vSphere Pods in a particular namespace use to communicate outside of NSX-T Data Center. A sketch that checks these ranges for overlaps appears after the procedure.
  11. Click Next.
  12. In the Storage settings, configure storage for the Supervisor Cluster.
    The storage policy you select for each of the following objects ensures that the object is placed on the datastore referenced in the storage policy. You can use the same or different storage policies for the objects.
    Control Plane Node: Select the storage policy for placement of the control plane VMs.
    Pod Ephemeral Disks: Select the storage policy for placement of the vSphere Pods.
    Container Image Cache: Select the storage policy for placement of the cache of container images.
  13. In the Ready to Complete section, review the settings and click Finish.
    The cluster is enabled with vSphere with Tanzu and you can create Supervisor Namespaces to provide to DevOps engineers. Kubernetes control plane nodes are created on the hosts that are part of the cluster, and the Spherelet process is started on each host so that the hosts can act as Kubernetes worker nodes.
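The Starting Control Plane IP in step 10 anchors a block of five consecutive management addresses: three for the control plane VMs, one floating IP, and one buffer. The following minimal sketch computes that block from the example values in step 10; all addresses are placeholders for your environment.

    # Sketch: list the five consecutive management IPs reserved from the
    # Starting Control Plane IP and confirm they fit the management subnet.
    import ipaddress

    start = ipaddress.ip_address("10.197.79.152")    # Starting Control Plane IP
    reserved = [start + i for i in range(5)]         # 3 node IPs + 1 floating + 1 buffer
    subnet = ipaddress.ip_network("10.197.79.0/24")  # from subnet mask 255.255.255.0
    gateway = ipaddress.ip_address("10.197.79.253")  # gateway from step 10
    for ip in reserved:
        print(ip, "in subnet:", ip in subnet, "clashes with gateway:", ip == gateway)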
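The workload network ranges from step 10 must not overlap one another or the management subnet. This is a minimal overlap check; the example ranges are placeholders, so substitute the CIDRs you actually enter in the wizard.

    # Sketch: flag any overlap among the management subnet and the Pod,
    # Services, Ingress, and Egress CIDRs entered in the wizard.
    import ipaddress
    from itertools import combinations

    cidrs = {
        "Management": ipaddress.ip_network("10.197.79.0/24"),
        "Pod":        ipaddress.ip_network("10.244.0.0/21"),
        "Services":   ipaddress.ip_network("10.96.0.0/24"),
        "Ingress":    ipaddress.ip_network("192.168.210.0/24"),
        "Egress":     ipaddress.ip_network("192.168.220.0/24"),
    }
    for (name_a, net_a), (name_b, net_b) in combinations(cidrs.items(), 2):
        if net_a.overlaps(net_b):
            print(f"Overlap between {name_a} ({net_a}) and {name_b} ({net_b})")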

What to do next

Create and configure a Supervisor Namespace on the Supervisor Cluster. See Create and Configure a Supervisor Namespace.
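If you prefer to script namespace creation instead of using the vSphere Client, the vCenter REST API exposes it. The following is a minimal sketch, assuming the /api/vcenter/namespaces/instances endpoint available in vSphere 7.0 and later; the host name, credentials, cluster managed object ID (domain-c8), and namespace name are placeholders. Verify the endpoint and payload against the API reference for your vCenter version.

    # Sketch: create a Supervisor Namespace through the vCenter REST API.
    # All identifiers below are placeholders for your environment.
    import requests

    VCENTER = "vcenter.example.com"
    s = requests.Session()
    s.verify = False  # lab only; use CA-signed certificates in production

    # A POST to /api/session returns a token for the vmware-api-session-id header.
    token = s.post(f"https://{VCENTER}/api/session",
                   auth=("administrator@vsphere.local", "password")).json()
    s.headers["vmware-api-session-id"] = token

    # cluster is the managed object ID of the Supervisor Cluster, for example domain-c8.
    resp = s.post(f"https://{VCENTER}/api/vcenter/namespaces/instances",
                  json={"cluster": "domain-c8", "namespace": "dev-namespace"})
    print(resp.status_code)  # 204 indicates success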