When vSphere with Tanzu is activated on a vSphere cluster, a Kubernetes control plane is instantiated by using Photon OS virtual machines. This layer contains multiple objects that provide the capability to run Kubernetes workloads natively on the ESXi hosts.

Deployment Model for vSphere with Tanzu for Developer Ready Infrastructure for VMware Cloud Foundation

You determine the use of the different services, the sizing of those services, and how they are deployed and managed, based on the design objectives for the Developer Ready Infrastructure for VMware Cloud Foundation validated solution.

vSphere Storage Policy Based Management Configuration

Before you activate a Supervisor Cluster, you must configure a datastore that meets the activation requirements. The Supervisor Cluster configuration requires vSphere Storage Policy Based Management (SPBM) policies for the control plane nodes, ephemeral disks, and image cache. These policies correlate to Kubernetes storage classes that can be assigned to vSphere Namespaces, and they are consumed in a Supervisor Cluster or in a Tanzu Kubernetes cluster deployed by the Tanzu Kubernetes Grid Service in the Supervisor Cluster.

Table 1. Design Decisions on vSphere Storage Policy Based Management for Developer Ready Infrastructure for VMware Cloud Foundation

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| DRI-TZU-CFG-001 | Create a vSphere tag and tag category, and apply the vSphere tag to the vSAN datastore in the shared edge and workload vSphere cluster in the VI workload domain. | Supervisor Cluster activation requires the use of vSphere Storage Policy Based Management (SPBM). To assign the vSAN datastore to the Supervisor Cluster, you need a vSphere tag and tag category on which to base an SPBM rule. | This must be done manually or via PowerCLI (see the sketch after this table). |
| DRI-TZU-CFG-002 | Create a vSphere Storage Policy Based Management (SPBM) policy that specifies the vSphere tag you created for the Supervisor Cluster. | When you create the SPBM policy and define the vSphere tag for the Supervisor Cluster, you can assign that SPBM policy during Supervisor Cluster activation. | This must be done manually or via PowerCLI (see the sketch after this table). |
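
The following PowerCLI sketch shows one way to implement DRI-TZU-CFG-001 and DRI-TZU-CFG-002. It assumes an existing Connect-VIServer session; the datastore, tag, category, and policy names are placeholders for your environment.

```powershell
# Create a tag category for datastores and a tag to mark the vSAN datastore.
# All names below are placeholders for your environment.
$category = New-TagCategory -Name "tkgs-storage" -Cardinality Single -EntityType Datastore
$tag = New-Tag -Name "tkgs" -Category $category

# Apply the tag to the vSAN datastore in the shared edge and workload cluster.
New-TagAssignment -Tag $tag -Entity (Get-Datastore -Name "vsanDatastore")

# Create an SPBM policy with a tag-based placement rule that matches the tag.
New-SpbmStoragePolicy -Name "tkgs-storage-policy" -AnyOfTags $tag
```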

Supervisor Cluster

A vSphere cluster that is activated for vSphere with Tanzu is called a Supervisor Cluster. After a Supervisor Cluster is instantiated, a vSphere administrator can create vSphere Namespaces. Developers can run modern applications that consist of containers running inside vSphere Pods, and can create Tanzu Kubernetes clusters when upstream-compliant Kubernetes clusters are required.

The Supervisor Cluster uses ESXi hosts as worker nodes. This is achieved by using an additional process, called Spherelet, that runs on each host. Spherelet is a kubelet that is ported natively to ESXi and allows the host to become part of the Kubernetes cluster.
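
After activation, you can verify this behavior by logging in with the vSphere Plugin for kubectl and listing the nodes; the Supervisor Cluster control plane VMs and each ESXi host running Spherelet are reported as Kubernetes nodes. The server address and user name below are placeholders for your environment.

```powershell
# Log in to the Supervisor Cluster by using the vSphere Plugin for kubectl.
kubectl vsphere login --server=192.168.100.2 --vsphere-username administrator@vsphere.local

# List the cluster nodes: the control plane VMs and the ESXi hosts (running
# the Spherelet process) appear as nodes.
kubectl get nodes -o wide
```
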
Table 2. Design Decisions on the Supervisor Cluster for Developer Ready Infrastructure for VMware Cloud Foundation

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| DRI-TZU-CFG-003 | Activate vSphere with Tanzu on the shared edge and workload vSphere cluster in the VI workload domain. | The Supervisor Cluster is required to run Kubernetes workloads natively and to deploy Tanzu Kubernetes clusters by using the Tanzu Kubernetes Grid Service. | Ensure the shared edge and workload vSphere cluster is sized to support the Supervisor Cluster control plane, any additional integrated management workloads, and any tenant workloads. |
| DRI-TZU-CFG-004 | Deploy the Supervisor Cluster with small-size control plane nodes. | Deploying the control plane nodes as small-size appliances gives you the ability to run up to 2000 pods within your Supervisor Cluster. | If your pod count is higher than 2000 for the Supervisor Cluster, you must deploy control plane nodes that can handle that level of scale. You must account for the size of the control plane nodes. |
| DRI-TZU-CFG-005 | Use NSX-T Data Center as the provider of the software-defined networking for the Supervisor Cluster. | You can deploy a Supervisor Cluster by using either NSX-T Data Center or vSphere networking. VMware Cloud Foundation uses NSX-T Data Center for software-defined networking across the SDDC, and deviating from it for vSphere with Tanzu would increase the operational overhead. | None. |
| DRI-TZU-CFG-006 | Deploy the NSX Edge cluster with large-size nodes. | Large is the smallest NSX Edge node size that is supported for Supervisor Cluster activation. | You must account for the size of the NSX Edge nodes. |

Registry Service

You use a private image registry on a Supervisor Cluster through the Registry Service. The Registry Service is a locally managed deployment of the Harbor registry embedded in the Supervisor Cluster. A Supervisor Cluster can host only one instance of the embedded Harbor registry. All vSphere Namespaces and Tanzu Kubernetes clusters running within a Supervisor Cluster appear as projects in the private registry.
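
For example, after the Registry Service is activated, developers can push images to the embedded Harbor registry with standard Docker commands. In this sketch, the registry address (10.10.10.10), the user name, and the vSphere Namespace project (tkg-ns-01) are placeholders for your environment.

```powershell
# Authenticate to the embedded Harbor registry (placeholder address and user).
docker login 10.10.10.10 --username devops@vsphere.local

# Tag a local image into the project that corresponds to the vSphere
# Namespace, then push it.
docker tag myapp:1.0 10.10.10.10/tkg-ns-01/myapp:1.0
docker push 10.10.10.10/tkg-ns-01/myapp:1.0
```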

Table 3. Design Decisions on the vSphere Registry Service for Developer Ready Infrastructure for VMware Cloud Foundation

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| DRI-TZU-CFG-007 | Activate the Registry Service for the Supervisor Cluster. | Activating the Registry Service provides an integrated Harbor image registry to the Supervisor Cluster, adding federated authentication for end users, native life cycle management of the image registry, and native health monitoring of the image registry. | None. |

Tanzu Kubernetes Cluster

A Tanzu Kubernetes cluster is a full distribution of the open-source Kubernetes container orchestration software that is packaged, signed, and supported by VMware. Tanzu Kubernetes clusters are provisioned by the VMware Tanzu™ Kubernetes Grid™ Service in the Supervisor Cluster. A cluster consists of at least one control plane node and at least one worker node. The Tanzu Kubernetes Grid Service deploys the clusters as Photon OS appliances on top of the Supervisor Cluster. You determine the deployment parameters, such as the size and number of control plane and worker nodes and the Kubernetes distribution version, in a YAML definition that you apply through kubectl, as shown in the sketch below.
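
A minimal sketch of such a definition follows, applied with kubectl while logged in to a vSphere Namespace on the Supervisor Cluster. It assumes the v1alpha1 TanzuKubernetesCluster API; the cluster name, namespace, virtual machine class, storage class, and distribution version are placeholders for your environment.

```powershell
# Apply a TanzuKubernetesCluster definition through kubectl. All names and
# versions below are placeholders for your environment.
@"
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-01
  namespace: tkg-ns-01
spec:
  distribution:
    version: v1.20
  topology:
    controlPlane:
      count: 3
      class: best-effort-small
      storageClass: tkgs-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: tkgs-storage-policy
"@ | kubectl apply -f -
```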

Table 4. Design Decisions on the Tanzu Kubernetes Cluster for Developer Ready Infrastructure for VMware Cloud Foundation

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| DRI-TZU-CFG-008 | Deploy a Tanzu Kubernetes cluster in the Supervisor Cluster. | A Tanzu Kubernetes cluster is required for applications that need upstream Kubernetes compliance. | None. |
| DRI-TZU-CFG-009 | Configure a subscribed content library for Tanzu Kubernetes cluster use in the shared edge and workload vSphere cluster. | To deploy a Tanzu Kubernetes cluster on a Supervisor Cluster, you must configure a content library in the shared edge and workload vSphere cluster to pull the required node images from VMware. | You must manually configure the content library (see the PowerCLI sketch after this table). |
| DRI-TZU-CFG-010 | Use Antrea as the container network interface (CNI) for your Tanzu Kubernetes clusters. | Antrea is the default CNI for Tanzu Kubernetes clusters. | New Tanzu Kubernetes clusters are deployed with Antrea as the CNI unless you specify Calico. |
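
The following PowerCLI sketch shows one way to implement DRI-TZU-CFG-009. It assumes PowerCLI 12.0 or later and an existing Connect-VIServer session; the library and datastore names are placeholders, and the subscription URL shown is the VMware-published library for Tanzu Kubernetes release images.

```powershell
# Create a subscribed content library that pulls Tanzu Kubernetes release
# images from VMware. Names are placeholders for your environment.
New-ContentLibrary -Name "tkg-content-library" `
    -Datastore (Get-Datastore -Name "vsanDatastore") `
    -SubscriptionUrl "https://wp-content.vmware.com/v2/latest/lib.json" `
    -DownloadContentOnDemand
```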

Sizing Compute and Storage Resources for Developer Ready Infrastructure for VMware Cloud Foundation

When sizing the necessary resources for the solution, compute and storage requirements are key considerations.

You size the compute and storage requirements for the vSphere with Tanzu management workloads, Tanzu Kubernetes cluster management workloads, NSX Edge nodes, and tenant workloads deployed in either the Supervisor Cluster or a Tanzu Kubernetes cluster.

Table 5. Compute and Storage Resource Requirements for vSphere with Tanzu

| Virtual Machine | Nodes | Total vCPUs | Total Memory | Total Storage |
|---|---|---|---|---|
| Supervisor Cluster control plane (small nodes; up to 2000 pods per Supervisor Cluster) | 3 | 12 | 48 GB | 200 GB |
| Registry Service | N/A | 7 | 7 GB | 200 GB |
| Tanzu Kubernetes Cluster control plane (small nodes) | 3 (per cluster) | 6 | 12 GB | 48 GB |
| Tanzu Kubernetes Cluster worker nodes (small nodes) | 3 (per cluster) | 6 | 12 GB | 48 GB |
| VMware NSX Edge node (large nodes) | Minimum of 2 | 16 | 64 GB | 400 GB |
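
As a worked example based on this table, a minimal footprint with one Tanzu Kubernetes cluster and two NSX Edge nodes requires approximately 47 vCPUs (12 + 7 + 6 + 6 + 16), 143 GB of memory (48 + 7 + 12 + 12 + 64), and 896 GB of storage (200 + 200 + 48 + 48 + 400), before accounting for any tenant workloads.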

Table 6. Design Decisions on Sizing the Tanzu Kubernetes Cluster for Developer Ready Infrastructure for VMware Cloud Foundation

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| DRI-TZU-CFG-011 | Deploy the Tanzu Kubernetes cluster with a minimum of three control plane nodes. | Deploying three control plane nodes ensures that the Tanzu Kubernetes cluster control plane remains healthy in the event of a node failure. | None. |
| DRI-TZU-CFG-012 | Deploy the Tanzu Kubernetes cluster with a minimum of three worker nodes. | Deploying three worker nodes provides a higher potential level of availability for the workloads deployed to the Tanzu Kubernetes cluster. | You must configure your tenant workloads to effectively use the additional worker nodes so that they provide high availability at the application level (see the sketch after this table). |
| DRI-TZU-CFG-013 | Deploy the Tanzu Kubernetes cluster with small-size nodes for both the control plane and the workers. | Deploying the control plane and worker nodes as small-size appliances meets the scale requirements of most deployments. | If your scale requirements are higher, you must deploy appliances of the appropriate size or split the workload across multiple Tanzu Kubernetes clusters. The node size impacts the scale of a given cluster; if you must add nodes to a cluster, consider using larger nodes. |
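
The following sketch shows one way to address the implication of DRI-TZU-CFG-012: a Kubernetes Deployment whose three replicas are spread across worker nodes by a pod anti-affinity rule, so that a single node failure does not take down every replica. The Deployment name, labels, and image are placeholders for your environment.

```powershell
# Apply a Deployment with a pod anti-affinity rule through kubectl while
# logged in to the Tanzu Kubernetes cluster. All names are placeholders.
@"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: myapp
            topologyKey: kubernetes.io/hostname
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0
"@ | kubectl apply -f -
```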