When vSphere with Tanzu is activated on a vSphere cluster, a Kubernetes control plane is instantiated by using Photon OS virtual machines. This layer contains multiple objects that enable the capability to run Kubernetes workloads natively on the ESXi hosts.
Deployment Model for vSphere with Tanzu for Developer Ready Infrastructure for VMware Cloud Foundation
You determine the use of the different services, the sizing of those resources, and how they are deployed and managed based on the design objectives for the Developer Ready Infrastructure for VMware Cloud Foundation validated solution.
vSphere Storage Policy Based Management Configuration
Before activating a Supervisor, you must configure a datastore that meets the activation requirements. The Supervisor configuration requires vSphere Storage Policy Based Management (SPBM) policies for the control plane nodes, ephemeral disks, and image cache. These policies map to Kubernetes storage classes that can be assigned to vSphere Namespaces and are consumed in a Supervisor or in a Tanzu Kubernetes cluster deployed by using the Tanzu Kubernetes Grid Service in the Supervisor.
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| DRI-TZU-CFG-001 | Create a vSphere tag and tag category, and apply the vSphere tag to the vSAN datastore in the shared edge and workload vSphere cluster in the VI workload domain. | Supervisor activation requires the use of vSphere Storage Policy Based Management (SPBM). To assign the vSAN datastore to the Supervisor, you must create a vSphere tag and tag category to create an SPBM rule. | This must be done manually or by using PowerCLI. |
| DRI-TZU-CFG-002 | Create a vSphere Storage Policy Based Management (SPBM) policy that specifies the vSphere tag you created for the Supervisor. | When you create the SPBM policy and define the vSphere tag for the Supervisor, you can assign that SPBM policy during Supervisor activation. | This must be done manually or by using PowerCLI (see the sketch after this table). |
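The following PowerCLI sketch illustrates one way to implement DRI-TZU-CFG-001 and DRI-TZU-CFG-002. The vCenter Server FQDN, datastore name, tag names, and policy name are placeholders; adjust them to your environment.

```powershell
# Connect to the VI workload domain vCenter Server (placeholder FQDN).
Connect-VIServer -Server "wld01-vc01.example.com"

# DRI-TZU-CFG-001: create a tag category and tag, and assign the tag to the vSAN datastore.
$category  = New-TagCategory -Name "tkg-storage" -Cardinality Single -EntityType Datastore
$tag       = New-Tag -Name "tkg-supervisor" -Category $category
$datastore = Get-Datastore -Name "wld01-vsan01"          # placeholder datastore name
New-TagAssignment -Tag $tag -Entity $datastore

# DRI-TZU-CFG-002: create an SPBM policy with a tag-based placement rule.
$rule    = New-SpbmRule -AnyOfTags $tag
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "tkg-supervisor-storage-policy" -AnyOfRuleSets $ruleSet
```

You can then select the resulting storage policy for control plane nodes, ephemeral disks, and image cache during Supervisor activation.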
Supervisor
A vSphere cluster that is activated for vSphere with Tanzu is called a Supervisor. After a Supervisor is instantiated, a vSphere administrator can create vSphere Namespaces. Developers can run modern applications that consist of containers running inside vSphere Pods, and can create Tanzu Kubernetes clusters when upstream-compliant Kubernetes clusters are required.
The Supervisor uses the ESXi hosts as worker nodes. This is achieved through an additional process, Spherelet, that runs on each host. Spherelet is a version of kubelet ported natively to ESXi and allows the host to become part of the Kubernetes cluster.
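As a quick verification that the ESXi hosts have joined as worker nodes, you can log in to the Supervisor with the vSphere Plugin for kubectl and list the nodes. The server address and user name below are placeholders for your environment.

```powershell
# Log in to the Supervisor by using the vSphere Plugin for kubectl (placeholder values).
kubectl vsphere login --server 192.168.10.2 --vsphere-username "administrator@vsphere.local"

# List the cluster nodes; the Supervisor control plane VMs and the ESXi hosts (via Spherelet) are shown.
kubectl get nodes -o wide
```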
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| DRI-TZU-CFG-003 | Activate vSphere with Tanzu on the shared edge and workload vSphere cluster in the VI workload domain. | The Supervisor is required to run Kubernetes workloads natively and to deploy Tanzu Kubernetes clusters by using the Tanzu Kubernetes Grid Service. | Ensure the shared edge and workload vSphere cluster is sized to support the Supervisor control plane, any additional integrated management workloads, and any tenant workloads. |
| DRI-TZU-CFG-004 | Deploy the Supervisor with small-size control plane nodes. | Deploying the control plane nodes as small-size appliances supports up to 2,000 pods in the Supervisor. If the pod count for the Supervisor exceeds 2,000, you must deploy control plane nodes that can handle that level of scale. | You must account for the size of the control plane nodes. |
| DRI-TZU-CFG-005 | Use NSX as the provider of software-defined networking for the Supervisor. | You can deploy a Supervisor by using either NSX or vSphere networking. VMware Cloud Foundation uses NSX for software-defined networking across the SDDC. Deviating from NSX for vSphere with Tanzu would increase operational overhead. | None. |
| DRI-TZU-CFG-006 | Deploy the NSX Edge cluster with large-size nodes. | Large-size NSX Edge nodes are the smallest size supported for activating a Supervisor. | You must account for the size of the NSX Edge nodes. |
| DRI-TZU-CFG-007 | Deploy a single-zone Supervisor. | A three-zone Supervisor requires three separate vSphere clusters. | No change to the existing design or procedures with a single-zone Supervisor. |
Harbor Supervisor Service
To use Harbor with vSphere with Tanzu, you deploy it as a Supervisor Service. Before you install Harbor as a service, you must install Contour.
All Tanzu Kubernetes Grid clusters running on the host Supervisor trust the Harbor Registry running as a Supervisor Service by default. Tanzu Kubernetes Grid clusters running on a Supervisor other than the one where Harbor is installed must have network connectivity to Harbor, must be able to resolve the Harbor FQDN, and must establish trust with the Harbor Registry.
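For clusters running on a different Supervisor, a simple way to validate name resolution and network reachability of the registry is shown below; harbor.example.com is a placeholder for your Harbor FQDN.

```powershell
# Verify that the Harbor FQDN resolves (it should resolve to the Contour ingress IP address).
Resolve-DnsName -Name "harbor.example.com"

# Verify HTTPS reachability of the Harbor Registry from the network where the clusters run.
Test-NetConnection -ComputerName "harbor.example.com" -Port 443
```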
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| DRI-TZU-CFG-008 | Deploy Contour as an Ingress Supervisor Service. | Harbor requires Contour on the target Supervisor to provide the Ingress Service. The Ingress IP address provided by Contour must resolve to the Harbor FQDN. | None. |
| DRI-TZU-CFG-009 | Deploy the Harbor Registry as a Supervisor Service. | Harbor as a Supervisor Service replaces the integrated registry available in previous vSphere versions. | You must provide the required configuration when you install the service. |
Tanzu Kubernetes Cluster
A Tanzu Kubernetes cluster is a full distribution of the open-source Kubernetes container orchestration software that is packaged, signed, and supported by VMware. Tanzu Kubernetes clusters are provisioned by the VMware Tanzu™ Kubernetes Grid™ Service in the Supervisor. Each cluster consists of at least one control plane node and at least one worker node. The Tanzu Kubernetes Grid Service deploys the clusters as Photon OS appliances on top of the Supervisor. You define the deployment parameters, such as the number and size of control plane and worker nodes and the Kubernetes distribution version, in a YAML definition that you apply by using kubectl.
You can provide high availability to Tanzu Kubernetes Grid clusters by deploying them on a three-zone Supervisor. A vSphere Zone maps to a vSphere cluster, so a Supervisor deployed across three vSphere Zones uses the resources of all three underlying vSphere clusters. This protects the Kubernetes workloads running inside Tanzu Kubernetes Grid clusters against a failure at the vSphere cluster level. In a single-zone deployment, high availability for Tanzu Kubernetes Grid clusters is provided at the ESXi host level by vSphere HA.
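A minimal sketch of such a YAML definition follows, assuming the v1alpha3 TanzuKubernetesCluster API and placeholder names for the vSphere Namespace, VM class, storage class, and Tanzu Kubernetes release. It aligns with decisions DRI-TZU-CFG-013 through DRI-TZU-CFG-015 below (three small control plane nodes and three small worker nodes).

```powershell
# Write a minimal TanzuKubernetesCluster definition to a file and apply it with kubectl.
# All names (namespace, VM class, storage class, TKR version) are placeholders.
$tkcYaml = @"
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkc-01
  namespace: dev-namespace
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-small
      storageClass: tkg-supervisor-storage-policy
      tkr:
        reference:
          name: v1.26.5---vmware.2-fips.1-tkg.1
    nodePools:
      - name: worker-pool-01
        replicas: 3
        vmClass: best-effort-small
        storageClass: tkg-supervisor-storage-policy
"@
$tkcYaml | Set-Content -Path .\tkc-01.yaml
kubectl apply -f .\tkc-01.yaml
```

The VM class and storage class must be assigned to the vSphere Namespace, and the Tanzu Kubernetes release name must match a release available in your content library.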
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| DRI-TZU-CFG-010 | Deploy a Tanzu Kubernetes Cluster in the Supervisor. | For applications that require upstream Kubernetes compliance, a Tanzu Kubernetes Cluster is required. | None. |
| DRI-TZU-CFG-011 | For VMware Cloud Foundation 4.5.2 or earlier, configure a subscribed content library for Tanzu Kubernetes Cluster use in the shared edge and workload vSphere cluster. | In VMware Cloud Foundation 4.5.2 or earlier, to deploy a Tanzu Kubernetes Cluster on a Supervisor, you must configure a content library in the shared edge and workload vSphere cluster to pull the required images from VMware. In VMware Cloud Foundation 5.0 or later, this is done automatically as part of the Supervisor instantiation process. | You must manually configure the content library in VMware Cloud Foundation 4.5.2 or earlier (see the sketch after this table). |
| DRI-TZU-CFG-012 | Use Antrea as the container network interface (CNI) for your Tanzu Kubernetes Clusters. | Antrea is the default CNI for Tanzu Kubernetes Clusters. | New Tanzu Kubernetes Clusters are deployed with Antrea as the CNI unless you specify Calico. |
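For VMware Cloud Foundation 4.5.2 or earlier (DRI-TZU-CFG-011), the following sketch shows one possible way to create the subscribed content library with the PowerCLI New-ContentLibrary cmdlet. The library name and datastore are placeholders, and you should verify the subscription URL against the VMware documentation for your release.

```powershell
# Create a subscribed content library for Tanzu Kubernetes releases (VCF 4.5.2 or earlier only).
# Library name and datastore are placeholders; verify the subscription URL for your release.
New-ContentLibrary -Name "Kubernetes" `
  -Datastore (Get-Datastore -Name "wld01-vsan01") `
  -SubscriptionUrl "https://wp-content.vmware.com/v2/latest/lib.json" `
  -AutomaticSync
```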
Sizing Compute and Storage Resources for Developer Ready Infrastructure for VMware Cloud Foundation
When sizing the necessary resources for the solution, compute and storage requirements are key considerations.
You size the compute and storage requirements for the vSphere with Tanzu management workloads, Tanzu Kubernetes Cluster management workloads, NSX Edge nodes, and tenant workloads deployed in either the Supervisor or a Tanzu Kubernetes Grid cluster.
| Virtual Machine | Nodes | Total vCPUs | Total Memory | Total Storage |
| --- | --- | --- | --- | --- |
| Supervisor control plane (small nodes, up to 2,000 pods per Supervisor) | 3 | 12 | 48 GB | 200 GB |
| Registry Service | N/A | 7 | 7 GB | 200 GB |
| Tanzu Kubernetes Cluster control plane (small nodes) | 3 (per cluster) | 6 | 12 GB | 48 GB |
| Tanzu Kubernetes Cluster worker nodes (small nodes) | 3 (per cluster) | 6 | 12 GB | 48 GB |
| VMware NSX Edge node (large nodes) | Minimum of 2 | 16 | 64 GB | 400 GB |
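As a worked example, the following sketch totals the management overhead from the table above for a baseline deployment with one Supervisor, the Registry Service, one Tanzu Kubernetes Cluster (three control plane and three worker nodes), and two large NSX Edge nodes. Tenant workload capacity must be added on top of these totals.

```powershell
# Baseline management overhead from the sizing table: vCPU, memory (GB), and storage (GB) per row.
$rows = @(
    [pscustomobject]@{ Name = "Supervisor control plane";   vCPU = 12; MemGB = 48; StorGB = 200 },
    [pscustomobject]@{ Name = "Registry Service";           vCPU = 7;  MemGB = 7;  StorGB = 200 },
    [pscustomobject]@{ Name = "TKC control plane (small)";  vCPU = 6;  MemGB = 12; StorGB = 48  },
    [pscustomobject]@{ Name = "TKC worker nodes (small)";   vCPU = 6;  MemGB = 12; StorGB = 48  },
    [pscustomobject]@{ Name = "NSX Edge nodes (2 x large)"; vCPU = 16; MemGB = 64; StorGB = 400 }
)

# Sum each column: 47 vCPUs, 143 GB of memory, and 896 GB of storage before tenant workloads.
$totalCpu  = ($rows | Measure-Object -Property vCPU   -Sum).Sum
$totalMem  = ($rows | Measure-Object -Property MemGB  -Sum).Sum
$totalStor = ($rows | Measure-Object -Property StorGB -Sum).Sum
"{0} vCPUs, {1} GB memory, {2} GB storage" -f $totalCpu, $totalMem, $totalStor
```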
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| DRI-TZU-CFG-013 | Deploy the Tanzu Kubernetes Cluster with a minimum of three control plane nodes. | Deploying three control plane nodes keeps the Tanzu Kubernetes Cluster control plane healthy in the event of a node failure. | None. |
| DRI-TZU-CFG-014 | Deploy the Tanzu Kubernetes Cluster with a minimum of three worker nodes. | Deploying three worker nodes provides a higher potential level of availability for workloads deployed to the Tanzu Kubernetes Cluster. | You must configure your tenant workloads to use the additional worker nodes in the Tanzu Kubernetes Cluster effectively to provide high availability at the application level. |
| DRI-TZU-CFG-015 | Deploy the Tanzu Kubernetes Cluster with small-size nodes for both the control plane and workers. | Deploying the Tanzu Kubernetes Cluster control plane and worker nodes as small-size appliances meets the scale requirements of most deployments. If your scale requirements are higher, deploy appliances of the appropriate size or split the workload across multiple Tanzu Kubernetes Clusters. | The size of the Tanzu Kubernetes Cluster nodes affects the scale of a given cluster. If you must add nodes to a cluster, consider using larger nodes. |