You can use vSphere with Kubernetes to transform vSphere into a platform for running Kubernetes workloads natively on the hypervisor layer. When enabled on a vSphere cluster, vSphere with Kubernetes provides the capability to run Kubernetes workloads directly on ESXi hosts and to create upstream Kubernetes clusters within dedicated resource pools.
The Challenges of Today's Application Stack
Today's distributed systems are constructed of multiple microservices, usually running as a large number of Kubernetes pods and VMs. A typical stack that is not based on vSphere with Kubernetes consists of an underlying virtual environment, with the Kubernetes infrastructure deployed inside VMs and the Kubernetes pods, in turn, running in those VMs. Three separate roles operate the parts of the stack: application developers, Kubernetes cluster administrators, and vSphere administrators.
- As an application developer, you can only run Kubernetes pods. You do not have visibility over the entire stack that is running hundreds of applications.
- As a Kubernetes cluster administrator, you only have control over the Kubernetes infrastructure, without the tools to manage or monitor the virtual environment or to resolve resource-related and other problems.
- As a vSphere administrator, you have full control over the underlying virtual environment, but you do not have visibility over the Kubernetes infrastructure, the placement of the different Kubernetes objects in the virtual environment, and how they consume resources.
Operations on the full stack can be challenging because they require communication among all three roles. The lack of integration between the different layers of the stack can also introduce challenges. For example, the Kubernetes scheduler does not have visibility over the vCenter Server inventory, so it cannot place pods intelligently.
How Does vSphere with Kubernetes Help?
vSphere with Kubernetes creates a Kubernetes control plane directly on the hypervisor layer. As a vSphere administrator, you enable existing vSphere clusters for vSphere with Kubernetes, thus creating a Kubernetes layer within the ESXi hosts that are part of the cluster. A cluster enabled with vSphere with Kubernetes is called a Supervisor Cluster.
- As a vSphere administrator, you can create namespaces on the Supervisor Cluster, called Supervisor Namespaces, and configure them with dedicated memory, CPU, and storage. You provide Supervisor Namespaces to DevOps engineers.
- As a DevOps engineer, you can run workloads consisting of Kubernetes containers on the same platform with shared resource pools within a Supervisor Namespace. In vSphere with Kubernetes, containers run inside a special type of VM called a vSphere Pod.
- As a DevOps engineer, you can create and manage multiple Kubernetes clusters inside a namespace and manage their lifecycle by using the Tanzu Kubernetes Grid Service. Kubernetes clusters created by using the Tanzu Kubernetes Grid Service are called Tanzu Kubernetes clusters.
- As a vSphere administrator, you can manage and monitor vSphere Pods and Tanzu Kubernetes clusters by using the same tools as with regular VMs.
- As a vSphere administrator, you have full visibility over vSphere Pods and Tanzu Kubernetes clusters running within different namespaces, their placement in the environment, and how they consume resources.
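As an illustration of the DevOps workflow above, a Tanzu Kubernetes cluster is provisioned by applying a TanzuKubernetesCluster manifest to the Supervisor Cluster with kubectl. The following is a minimal sketch: the cluster name, namespace, distribution version, VM class, and storage class are placeholders, and the exact fields depend on the API version in your release.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01          # hypothetical cluster name
  namespace: dev-namespace      # an existing Supervisor Namespace
spec:
  distribution:
    version: v1.18              # a Tanzu Kubernetes release available in your environment
  topology:
    controlPlane:
      count: 3                  # number of control plane nodes
      class: best-effort-small  # a VM class defined by the vSphere administrator
      storageClass: vsan-default-storage-policy  # a storage class assigned to the namespace
    workers:
      count: 3                  # number of worker nodes
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```

The vSphere administrator controls which VM classes and storage classes are available in the namespace, so the DevOps engineer self-services cluster creation only within the resource boundaries that the administrator defines.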
Having Kubernetes running on the hypervisor layer also eases the collaboration between vSphere administrators and DevOps teams, because both roles are working with the same objects.
What Is a Workload?
In vSphere with Kubernetes, workloads are applications deployed in one of the following ways:
- Applications that consist of containers running inside vSphere Pods, regular VMs, or both.
- Tanzu Kubernetes clusters deployed by using the VMware Tanzu™ Kubernetes Grid™ Service.
- Applications that run inside the Tanzu Kubernetes clusters that are deployed by using the VMware Tanzu™ Kubernetes Grid™ Service.
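For example, a workload of the first kind is deployed with a standard Kubernetes Deployment against a Supervisor Namespace, and the Supervisor Cluster runs each replica as a vSphere Pod on an ESXi host. This is a sketch with placeholder names for the namespace, workload, and container image.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo              # hypothetical workload name
  namespace: dev-namespace      # an existing Supervisor Namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.19       # each replica runs as a vSphere Pod on an ESXi host
```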