As a DevOps engineer, you can deploy and manage the life cycle of vSphere Pods within the resource boundaries of a vSphere Namespace that is running on a Supervisor.

Note: You can deploy vSphere Pods only on Supervisors that are configured with the NSX networking stack. You cannot deploy vSphere Pods on Supervisors configured with the VDS stack. Supervisor Services are supported on Supervisors configured with either NSX or VDS, and they deploy vSphere Pods for their own use. However, you cannot deploy vSphere Pods for general use on a Supervisor configured with VDS.

What Is a vSphere Pod?

vSphere IaaS control plane introduces a construct called a vSphere Pod, which is the equivalent of a Kubernetes pod. A vSphere Pod is a VM with a small footprint that runs one or more Linux containers. Each vSphere Pod is sized precisely for the workload that it accommodates and has explicit resource reservations for that workload. It allocates the exact amount of storage, memory, and CPU resources required for the workload to run. vSphere Pods are supported only with Supervisors that are configured with NSX as the networking stack.

Figure 1. vSphere Pods
ESXi host containing two vSphere Pod boxes. Each vSphere Pod has containers running inside of it, a Linux kernel, memory, CPU, and storage resources.
vSphere Pods are objects in vCenter Server and enable the following capabilities for workloads:
  • Strong isolation. A vSphere Pod is isolated in the same manner as a virtual machine. Each vSphere Pod has its own unique Linux kernel that is based on the kernel used in Photon OS. Rather than many containers sharing a kernel, as in a bare-metal configuration, the containers in a vSphere Pod run on that pod's dedicated Linux kernel.
  • Resource Management. vSphere DRS handles the placement of vSphere Pods on the Supervisor.
  • High performance. vSphere Pods get the same level of resource isolation as VMs, eliminating noisy neighbor problems while maintaining the fast start-up time and low overhead of containers.
  • Diagnostics. As a vSphere administrator, you can use all the monitoring and introspection tools that vSphere provides on the workloads.
vSphere Pods are Open Container Initiative (OCI) compatible and can run containers from any operating system as long as these containers are also OCI compatible.
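
The following is a minimal sketch of deploying a vSphere Pod from the DevOps side by creating a standard Kubernetes pod in a vSphere Namespace with the official Kubernetes Python client. The kubeconfig context name, namespace, image, and resource values are placeholders, not values from this documentation. The requests and limits illustrate the explicit CPU and memory reservations described above, and the image can be any OCI-compatible container image.

    # Sketch only: create a pod in a vSphere Namespace with explicit
    # CPU and memory reservations. The context, namespace, image, and
    # sizes are placeholder values for illustration.
    from kubernetes import client, config

    # Load the kubeconfig context that targets your vSphere Namespace.
    config.load_kube_config(context="my-supervisor-namespace")

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="demo",
                    image="registry.example.com/demo:1.0",  # any OCI-compatible image
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "500m", "memory": "256Mi"},
                        limits={"cpu": "1", "memory": "512Mi"},
                    ),
                )
            ]
        ),
    )

    client.CoreV1Api().create_namespaced_pod(
        namespace="my-supervisor-namespace", body=pod
    )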

Guidelines for Deploying vSphere Pods

Before deploying vSphere Pods, make sure that your environment meets the following requirements.
Figure 2. vSphere Pod Networking and Storage
vSphere Pod with containers, container engine, and pod engine inside. The pod connects to container image, storage, NSX switch, spherelet, and hostd.
Namespace
Your Supervisor must have a configured vSphere Namespace on which you have edit or owner permissions. Only one-zone Supervisors with NSX networking support vSphere Pods.

To create a Supervisor, see Deploy a One-Zone Supervisor with NSX Networking.

For information about creating a namespace, see Create and Configure a vSphere Namespace on the Supervisor.

For information about assigning permissions, see Identity and Access Management.
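
Before deploying workloads, you can also check from the Kubernetes side whether your account has the required permissions in the namespace. The following is a minimal sketch using a SelfSubjectAccessReview with the Kubernetes Python client; the context and namespace names are placeholders.

    # Sketch only: verify that the logged-in account can create pods in the
    # vSphere Namespace. Names are placeholder values.
    from kubernetes import client, config

    config.load_kube_config(context="my-supervisor-namespace")

    review = client.V1SelfSubjectAccessReview(
        spec=client.V1SelfSubjectAccessReviewSpec(
            resource_attributes=client.V1ResourceAttributes(
                namespace="my-supervisor-namespace",
                verb="create",
                resource="pods",
            )
        )
    )
    result = client.AuthorizationV1Api().create_self_subject_access_review(review)
    print("Can create pods:", result.status.allowed)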

Networking
For networking, vSphere Pods use the topology provided by NSX. For details, see Supervisor Networking.

Spherelet is an additional process that is created on each host. It is a kubelet that is ported natively to ESXi and allows the ESXi host to become part of the Kubernetes cluster.
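
Because Spherelet joins each ESXi host to the Kubernetes control plane, the hosts appear as nodes of the cluster. The following sketch lists them with the Kubernetes Python client; listing nodes is a cluster-scoped read, so it assumes an account with sufficient privileges, and the context name is a placeholder.

    # Sketch only: list the cluster nodes, which include the ESXi hosts that
    # Spherelet has joined to the Kubernetes control plane. Requires
    # cluster-scoped read access; the context name is a placeholder.
    from kubernetes import client, config

    config.load_kube_config(context="my-supervisor-namespace")

    for node in client.CoreV1Api().list_node().items:
        print(node.metadata.name, node.status.node_info.kubelet_version)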

Storage

vSphere Pods use three types of storage depending on the objects that are stored: ephemeral VMDKs, persistent volume VMDKs, and container image VMDKs.

As a vSphere administrator, you configure storage policies for the placement of the container image cache and ephemeral VMDKs when you enable the Supervisor.

At the vSphere Namespace level, you configure storage policies for the placement of persistent volumes. For details about the storage requirements and concepts in vSphere IaaS control plane, see Using Persistent Storage with Supervisor Workloads in vSphere IaaS Control Plane.
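
As an illustration of the namespace-level storage workflow, the following is a minimal sketch of requesting a persistent volume with the Kubernetes Python client. It assumes that a storage policy assigned to the vSphere Namespace is available as a Kubernetes storage class; the storage class name, context, and size are placeholders, not values from this documentation.

    # Sketch only: request a persistent volume in a vSphere Namespace by
    # creating a PVC against a storage class that corresponds to an assumed
    # assigned storage policy. Names and sizes are placeholder values.
    from kubernetes import client, config

    config.load_kube_config(context="my-supervisor-namespace")

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="demo-storage-policy",
            resources=client.V1ResourceRequirements(requests={"storage": "2Gi"}),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="my-supervisor-namespace", body=pvc
    )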