You can use vSphere IaaS control plane to transform vSphere into a platform for running Kubernetes workloads natively on the hypervisor layer. When enabled on vSphere clusters, vSphere IaaS control plane provides the capability to run Kubernetes workloads directly on ESXi hosts and to create upstream Kubernetes clusters within dedicated namespaces called vSphere Namespaces.

The Challenges of Today's Application Stack

Today's distributed systems are composed of multiple microservices, usually running across a large number of Kubernetes pods and VMs. A typical stack that is not based on vSphere IaaS control plane consists of an underlying virtual environment, with the Kubernetes infrastructure deployed inside VMs and the Kubernetes pods running inside those same VMs. Three separate roles operate the different parts of the stack: application developers, Kubernetes cluster administrators, and vSphere administrators.

Figure 1. Today's Application Stack
A stack with 3 layers - Kubernetes Workload, Kubernetes Cluster, Virtual Environment. 3 roles manage them - Developer, Cluster Admin, vSphere Admin.
The different roles do not have visibility or control over each other's environments:
  • As an application developer, you can run Kubernetes pods, and deploy and manage Kubernetes based applications. However, you do not have visibility into the entire stack, which might be running hundreds of applications.
  • As a DevOps engineer or cluster administrator, you only have control over the Kubernetes infrastructure, without the tools to manage or monitor the virtual environment or to resolve resource-related and other problems.
  • As a vSphere administrator, you have full control over the underlying virtual environment, but you do not have visibility into the Kubernetes infrastructure, the placement of the different Kubernetes objects in the virtual environment, or how they consume resources.

Operations on the full stack can be challenging, because they require communication between all three roles. The lack of integration between the different layers of the stack can also introduce difficulties. For example, the Kubernetes scheduler has no visibility into the vCenter Server inventory and cannot place pods intelligently.

How Does vSphere IaaS Control Plane Help?

vSphere IaaS control plane creates a Kubernetes control plane directly on the hypervisor layer. As a vSphere administrator, you activate existing vSphere clusters for vSphere IaaS control plane, thus creating a Kubernetes layer within the ESXi hosts that are part of the clusters. vSphere clusters activated for vSphere IaaS control plane are called Supervisors.

Figure 2. vSphere IaaS Control Plane

IaaS Platform stack with workloads is at the top, Virtual Environment stack is at the bottom. Two roles manage them, Developer and vSphere Admin.
Having a Kubernetes control plane on the hypervisor layer enables the following capabilities in vSphere:
  • As a vSphere administrator, you can create namespaces on the Supervisor, called vSphere Namespaces, and configure them with a specified amount of memory, CPU, and storage. You then provide the vSphere Namespaces to DevOps engineers.
  • As a DevOps engineer, you can run Kubernetes workloads on the same platform with shared resource pools within a vSphere Namespace. You can deploy and manage multiple upstream Kubernetes clusters created by using TKG, deploy Kubernetes containers directly on the Supervisor inside a special type of VM called vSphere Pod, and also deploy regular VMs.
  • As a vSphere administrator, you can manage and monitor vSphere Pods, VMs, and TKG clusters by using the vSphere Client.
  • As a vSphere administrator, you have full visibility over vSphere Pods, VMs, and TKG clusters running within different namespaces, their placement in the environment, and how they consume resources.

Having Kubernetes running on the hypervisor layer also eases collaboration between vSphere administrators and DevOps teams, because both roles work with the same objects.
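
For example, after a vSphere administrator provides a vSphere Namespace, a DevOps engineer typically authenticates with the vSphere Plugin for kubectl and switches to that namespace before deploying workloads. The following is a minimal sketch, where the server address, user name, and namespace name are placeholder values:

  # Authenticate against the Supervisor (placeholder address and user)
  kubectl vsphere login --server=192.0.2.10 --vsphere-username devops@example.com

  # Switch to the context of the provided vSphere Namespace (placeholder name)
  kubectl config use-context my-namespace

  # Verify access by listing workloads in the namespace
  kubectl get pods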

What Is a Workload?

In vSphere IaaS control plane, workloads are applications deployed in one of the following ways:

  • Applications that consist of containers running inside vSphere Pods.
  • Workloads provisioned through the VM service.
  • TKG clusters deployed by using TKG.
  • Applications that run inside the TKG clusters.

What Are vSphere Zones?

vSphere Zones provide high availability against cluster-level failure to workloads deployed on vSphere IaaS control plane. As a vSphere administrator, you create vSphere Zones in the vSphere Client and then map vSphere clusters to the zones. You use the zones to deploy Supervisors in your vSphere IaaS control plane environment.

You can deploy a Supervisor on three vSphere Zones for cluster-level high availability. Alternatively, you can deploy a Supervisor on a single vSphere cluster, which either creates a vSphere Zone automatically and maps it to the cluster, or uses a cluster that is already mapped to a zone. For more information, see Supervisor Architecture and Supervisor Zonal and Cluster Deployments.

What Is a TKG Cluster?

A TKG cluster is a full distribution of Kubernetes that is built, signed, and supported by VMware. You can provision and operate upstream TKG clusters on Supervisors by using TKG.

A TKG cluster that is provisioned by TKG has the following characteristics:
  • Opinionated Installation of Kubernetes. TKG provides well-thought-out defaults that are optimized for vSphere to provision TKG clusters. By using TKG, you can reduce the time and effort that you typically spend on deploying and running an enterprise-grade Kubernetes cluster.
  • Integrated with the vSphere Infrastructure. A TKG cluster is integrated with the vSphere SDDC stack, including storage, networking, and authentication. In addition, a TKG cluster is built on a Supervisor that maps to vSphere clusters. Because of the tight integration, running a TKG cluster is a unified product experience.
  • Production Ready. TKG provisions production-ready TKG clusters. You can run production workloads without performing any additional configuration. In addition, you can ensure availability, allow for rolling Kubernetes software upgrades, and run different versions of Kubernetes in separate clusters.
  • High Availability for Kubernetes Workloads. TKG clusters deployed on a three-zone Supervisor are protected against failure at the vSphere cluster level. The worker and control plane nodes of the TKG clusters are distributed across all three vSphere Zones, which makes the Kubernetes workloads running inside them highly available. TKG clusters running on a one-zone Supervisor are protected against failure at the ESXi host level through vSphere HA.
  • Fully Supported by VMware. TKG clusters use the open source, Linux-based Photon OS from VMware, are deployed on vSphere infrastructure, and run on ESXi hosts. If you experience problems with any layer of the stack, from the hypervisor to the Kubernetes cluster, VMware is the only vendor you need to contact.
  • Managed by Kubernetes. TKG clusters are built on top of the Supervisor, which is itself a Kubernetes cluster. A TKG cluster is defined in the vSphere Namespace by using a custom resource. You provision TKG clusters in a self-service way by using familiar kubectl commands and the Tanzu CLI, as shown in the sketch after this list. There is consistency across the toolchain: whether you are provisioning a cluster or deploying workloads, you use the same commands, familiar YAML, and common workflows.
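
The following is a minimal sketch of such a custom resource, using the TanzuKubernetesCluster kind. The exact API version and available fields depend on your release, and the namespace, VM class, storage class, and Tanzu Kubernetes release names are placeholders that must match what is associated with your vSphere Namespace:

  apiVersion: run.tanzu.vmware.com/v1alpha3
  kind: TanzuKubernetesCluster
  metadata:
    name: tkg-cluster-1
    namespace: my-namespace          # a vSphere Namespace (placeholder)
  spec:
    topology:
      controlPlane:
        replicas: 3                  # three control plane nodes for availability
        vmClass: best-effort-small   # placeholder VM class
        storageClass: my-storage-policy
        tkr:
          reference:
            name: v1.26.5---vmware.2-tkg.1   # placeholder Tanzu Kubernetes release
      nodePools:
      - name: worker-pool-1
        replicas: 3
        vmClass: best-effort-medium  # placeholder VM class
        storageClass: my-storage-policy

You create the cluster with kubectl apply -f and can monitor its state with kubectl get tanzukubernetesclusters.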

For more information, see Tanzu Kubernetes Grid Architecture and Components and Using TKG Service with vSphere IaaS Control Plane.

What Is a vSphere Pod?

vSphere IaaS control plane introduces a construct that is called vSphere Pod, which is the equivalent of a Kubernetes pod. A vSphere Pod is a VM with a small footprint that runs one or more Linux containers. Each vSphere Pod is sized precisely for the workload that it accommodates and has explicit resource reservations for that workload. It allocates the exact amount of storage, memory, and CPU resources required for the workload to run. vSphere Pods are only supported with Supervisors that are configured with NSX as the networking stack.

Figure 3. vSphere Pods
ESXi host containing two vSphere Pod boxes. Each vSphere Pod has containers running inside of it, a Linux kernel, memory, CPU, and storage resources.
vSphere Pods are objects in vCenter Server and enable the following capabilities for workloads:
  • Strong isolation. A vSphere Pod is isolated in the same manner as a virtual machine. Each vSphere Pod has its own unique Linux kernel that is based on the kernel used in Photon OS. Rather than many containers sharing one kernel, as in a bare metal configuration, each vSphere Pod runs its containers on its own kernel.
  • Resource Management. vSphere DRS handles the placement of vSphere Pods on the Supervisor.
  • High performance. vSphere Pods get the same level of resource isolation as VMs, eliminating noisy neighbor problems while maintaining the fast start-up time and low overhead of containers.
  • Diagnostics. As a vSphere administrator, you can use all the monitoring and introspection tools that are available with vSphere on your workloads.
vSphere Pods are Open Container Initiative (OCI) compatible and can run containers from any operating system as long as these containers are also OCI compatible.
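
Because a vSphere Pod is created from a standard Kubernetes pod specification, you deploy one with familiar kubectl workflows. The following is a minimal sketch, assuming you are logged in to a vSphere Namespace on a Supervisor configured with NSX; the image name is a placeholder, and the resource requests inform how the vSphere Pod is sized:

  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-example
  spec:
    containers:
    - name: nginx
      image: nginx:1.25        # any OCI-compatible container image
      resources:
        requests:
          cpu: "250m"          # the Supervisor sizes the vSphere Pod from these requests
          memory: "128Mi"

You apply the specification with kubectl apply -f, and the Supervisor schedules the workload as a vSphere Pod on an ESXi host.
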
Figure 4. vSphere Pod Networking and Storage
vSphere Pod with containers, container engine, and pod engine inside. The pod connects to container image, storage, NSX switch, spherelet, and hostd.

vSphere Pods use three types of storage depending on the objects that are stored: ephemeral VMDKs, persistent volume VMDKs, and container image VMDKs. As a vSphere administrator, you configure storage policies for the placement of the container image cache and ephemeral VMDKs at the Supervisor level. At the vSphere Namespace level, you configure storage policies for the placement of persistent volumes. See Persistent Storage for Workloads for details about the storage requirements and concepts in vSphere IaaS control plane.
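
Storage policies that are assigned to a vSphere Namespace surface inside the namespace as Kubernetes storage classes. As a sketch, a DevOps engineer might request a persistent volume as follows, where the storage class name is a placeholder that corresponds to a policy assigned to the namespace:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: app-data
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: my-storage-policy  # placeholder; derived from a vSphere storage policy
    resources:
      requests:
        storage: 5Gi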

For networking, vSphere Pods and the VMs of the TKG clusters use the topology provided by NSX. For details, see Supervisor Networking.

Spherelet is an additional process that is created on each host. It is a kubelet that is ported natively to ESXi and allows the ESXi host to become part of the Kubernetes cluster.

For information about using vSphere Pods on Supervisors, see Deploying Workloads to vSphere Pods in the vSphere IaaS Control Plane Services and Workloads documentation.

Using Virtual Machines in vSphere IaaS Control Plane

vSphere IaaS control plane offers a VM Service functionality that enables DevOps engineers to deploy and run VMs, in addition to containers, in a common, shared Kubernetes environment. Both containers and VMs share the same vSphere Namespace resources and can be managed through a single vSphere IaaS control plane interface.

The VM Service addresses the needs of DevOps teams that use Kubernetes, but have existing VM-based workloads that cannot be easily containerized. It also helps users reduce the overhead of managing a non-Kubernetes platform alongside a container platform. When running containers and VMs on a Kubernetes platform, DevOps teams can consolidate their workload footprint to just one platform.

Note: In addition to stand-alone VMs, the VM Service manages the VMs that make up TKG clusters. For information about the clusters, see the Using TKG Service with vSphere IaaS Control Plane documentation.
The illustration shows VM Service as a Supervisor component that manages stand-alone VMs and VMs that make up Tanzu Kubernetes Grid clusters.

Each VM deployed through the VM Service functions as a complete machine running all the components, including its own operating system, on top of the vSphere IaaS control plane infrastructure. The VM has access to the networking and storage that a Supervisor provides, and is managed by using standard kubectl commands. The VM runs as a fully isolated system that is immune to interference from other VMs or workloads in the Kubernetes environment.
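
As an illustrative sketch only, a stand-alone VM is described through the VirtualMachine custom resource that the VM Service exposes. The API version and fields vary between releases, and the image, VM class, and storage class names below are placeholders for items made available in your vSphere Namespace:

  apiVersion: vmoperator.vmware.com/v1alpha1
  kind: VirtualMachine
  metadata:
    name: my-vm
    namespace: my-namespace          # a vSphere Namespace (placeholder)
  spec:
    imageName: ubuntu-20.04-server   # placeholder VM image available in the namespace
    className: best-effort-small     # placeholder VM class
    storageClass: my-storage-policy  # placeholder storage class
    powerState: poweredOn

You apply the file with kubectl apply -f and can track the VM with kubectl get virtualmachines.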

When to Use Virtual Machines on a Kubernetes Platform?

Generally, the decision to run workloads in a container or in a VM depends on your business needs and goals. Reasons to use VMs include the following:

  • Your applications cannot be containerized.
  • Applications are designed for a custom kernel or custom operating system.
  • Applications are better suited to running in a VM.
  • You want to have a consistent Kubernetes experience and avoid overhead. Rather than running separate sets of infrastructure for your non-Kubernetes and container platforms, you can consolidate these stacks and manage them with a familiar kubectl command.

For information about deploying and managing stand-alone virtual machines in a Supervisor, see Deploying and Managing Virtual Machines in the vSphere IaaS Control Plane Services and Workloads documentation.

Supervisor Services in vSphere IaaS Control Plane

Supervisor Services are vSphere certified Kubernetes operators that deliver Infrastructure-as-a-Service components and tightly integrated Independent Software Vendor services to developers. You can install and manage Supervisor Services in the vSphere IaaS control plane environment to make them available for use with Kubernetes workloads. When Supervisor Services are installed on Supervisors, DevOps engineers can use the service APIs to create instances on Supervisors in their user namespaces. These instances can then be consumed in vSphere Pods and TKG clusters.

Learn more about the supported Supervisor Services and how to download their service YAML files at http://vmware.com/go/supervisor-service.

For information about how to use Supervisor Services, see Managing Supervisor Services in the vSphere IaaS Control Plane Services and Workloads documentation.