A vSphere Pod and a pod that runs in a Tanzu Kubernetes cluster require different storage characteristics for the different types of storage objects that are available in Kubernetes.
vSphere with Kubernetes supports three types of storage: ephemeral virtual disks, container image virtual disks, and persistent volume virtual disks. A vSphere Pod and a pod that runs in a Tanzu Kubernetes cluster can mount any of the three types of virtual disks.
Ephemeral Virtual Disks
A pod requires ephemeral storage to store such Kubernetes objects as logs, emptyDir volumes, and ConfigMaps during its operations. This ephemeral, or transient, storage lasts as long as the pod continues to exist. Ephemeral data persists across container restarts, but once the pod reaches the end of its life, the ephemeral virtual disk disappears.
Each pod has one ephemeral virtual disk. A vSphere administrator uses a storage policy to define the datastore location for all ephemeral virtual disks when configuring storage for the Supervisor Cluster. See Enable vSphere with Kubernetes on a Cluster with NSX-T Data Center.
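As an illustration of how a workload consumes this ephemeral storage, a standard Kubernetes pod specification can request scratch space with an emptyDir volume; the data survives container restarts but disappears with the pod. The names below (scratch-pod, and so on) are hypothetical, not taken from this documentation.

```yaml
# Illustrative pod spec: the emptyDir volume provides ephemeral
# scratch space that is discarded when the pod is deleted.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod                # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /scratch/out.txt && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}                   # backed by the pod's ephemeral virtual disk
```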
Container Image Virtual Disks
Containers inside the pod use images that contain the software to be run. The pod mounts images used by its containers as image virtual disks. When the pod completes its life cycle, the image virtual disks are detached from the pod.
Image Service, an ESXi component, is responsible for pulling container images from the image registry and transforming them into image virtual disks that the pod mounts to run its containers.
ESXi can cache images that are downloaded for the containers running in the pod. Subsequent pods that use the same image pull it from the local cache rather than the external container registry.
The vSphere administrator specifies the datastore location for the image cache when configuring storage for the Supervisor Cluster. See Enable vSphere with Kubernetes on a Cluster with NSX-T Data Center.
For information about working with the container images, see Using the Registry Service to Provide a Private Image Registry.
Persistent Storage Virtual Disks
To provision persistent storage for Kubernetes workloads, vSphere with Kubernetes integrates with Cloud Native Storage (CNS), a vCenter Server component that manages persistent volumes. Persistent storage is used by vSphere Pods and pods inside Tanzu Kubernetes clusters.
For more information and for specifics on how persistent storage is used by the Tanzu Kubernetes clusters, see How vSphere with Kubernetes Integrates with vSphere Storage and Running Tanzu Kubernetes Clusters in vSphere with Kubernetes.
To understand how vSphere with Kubernetes works with persistent storage, be familiar with the following essential concepts.
- Persistent Volume
Certain Kubernetes workloads require persistent storage to store data independently of the pod. To provide persistent storage, Kubernetes uses persistent volumes that can retain their state and data. They continue to exist even when the pod is deleted or reconfigured. In the vSphere with Kubernetes environment, the persistent volume objects are backed by First Class Disks on a datastore.
vSphere with Kubernetes supports dynamic provisioning of volumes in ReadWriteOnce mode, that is, volumes that can be mounted by a single node at a time.
- First Class Disk
vSphere with Kubernetes uses the First Class Disk (FCD) type of virtual disks to back persistent volumes. First Class Disk, also known as Improved Virtual Disk, is a named virtual disk not associated with a VM.
First Class Disks are identified by UUID. This UUID is globally unique and is the primary identifier for the FCD. The UUID remains valid even if its FCD is relocated or snapshotted.
- Persistent Volume Claim
DevOps engineers create persistent volume claims to request persistent storage resources. The request dynamically provisions a persistent volume object and a matching virtual disk. In the vSphere Client, the persistent volume claim manifests as an FCD virtual disk that vSphere administrators can monitor.
The claim is bound to the persistent volume. Workloads can use the claim to mount the persistent volume and access storage.
When the DevOps engineers delete the claim, the corresponding persistent volume object and the provisioned virtual disk are also deleted.
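The claim lifecycle described above can be sketched as a standard Kubernetes PVC manifest. The claim name and storage class name below are hypothetical; in practice you reference one of the storage classes published to your Supervisor Namespace.

```yaml
# Illustrative persistent volume claim. Creating it dynamically
# provisions a persistent volume backed by an FCD virtual disk;
# deleting it removes both the volume object and the disk.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc                     # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce                  # the access mode supported for dynamic provisioning
  resources:
    requests:
      storage: 5Gi
  storageClassName: gold-policy    # hypothetical class matching a VM storage policy
```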
- Storage Class
Kubernetes uses storage classes to describe the requirements for storage backing the persistent volumes. DevOps engineers can include a specific storage class in their persistent volume claim specification to request the type of storage that the class describes.
Persistent Storage Workflow
The workflow for provisioning persistent storage in vSphere with Kubernetes includes the following sequential actions.
1. vSphere administrators deliver persistent storage resources to the DevOps team. They create VM storage policies that describe different storage requirements and classes of service, and then assign the storage policies to a Supervisor Namespace.
2. vSphere with Kubernetes creates storage classes that match the storage policies assigned to the Supervisor Namespace. The storage classes automatically appear in the Kubernetes environment and can be used by the DevOps team. If a vSphere administrator assigns multiple storage policies to the Supervisor Namespace, a separate storage class is created for each storage policy. If you use the Tanzu Kubernetes Grid Service to provision Tanzu Kubernetes clusters, each Tanzu Kubernetes cluster inherits storage classes from the Supervisor Namespace in which the cluster is provisioned.
3. DevOps engineers use the storage classes to request persistent storage resources for a workload. The request comes in the form of a persistent volume claim that references a specific storage class.
4. vSphere with Kubernetes creates a persistent volume object and a matching persistent virtual disk for a pod. It places the virtual disk in a datastore that meets the requirements specified in the original storage policy and its matching storage class. After the pod starts, the virtual disk is mounted into the pod.
5. vSphere administrators monitor persistent volumes in the vSphere with Kubernetes environment. Using the vSphere Client, they monitor the persistent volumes and their backing virtual disks, and can also monitor storage compliance and the health status of the persistent volumes.
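The provisioning and mounting steps of this workflow can be sketched with a pod that mounts the persistent volume bound to an existing claim. The pod and claim names below are hypothetical; the claim is assumed to have been created beforehand with a storage class from the Supervisor Namespace.

```yaml
# Illustrative pod that mounts the persistent volume bound to a
# previously created claim. The backing FCD virtual disk is
# attached when the pod starts.
apiVersion: v1
kind: Pod
metadata:
  name: stateful-pod               # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data-claim    # hypothetical, pre-existing claim
```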
Storage Management Tasks of a vSphere Administrator
- Perform lifecycle operations for the VM storage policies.
Before enabling a Supervisor Cluster and configuring namespaces, create storage policies for all three types of storage that your Kubernetes environment needs: ephemeral storage, container image storage, and persistent storage. Base the storage policies on the storage requirements that the DevOps engineers communicate to you. See Create Storage Policies for vSphere with Kubernetes.
Note: Do not delete a storage policy from vCenter Server or from a Supervisor Namespace while a persistent volume claim with the corresponding storage class exists in the namespace. This recommendation also applies to Tanzu Kubernetes clusters: do not delete a storage policy while a Tanzu Kubernetes cluster pod is using its storage class.
- Provide storage resources to the DevOps engineers by assigning the storage policies to the Supervisor Cluster and namespace, and by setting storage limits. For information about changing storage policy assignments, see Change Storage Settings on the Supervisor Cluster and Change Storage Settings on a Namespace. For information on setting limits, see Configure Limitations on Kubernetes Objects on a Namespace.
- Monitor Kubernetes objects and their storage policy compliance in the vSphere Client. See Monitor Persistent Volumes.
Storage Management Tasks of a DevOps Engineer
- Manage storage classes. See Display Storage Classes in a Supervisor Namespace or Tanzu Kubernetes Cluster.
- Deploy and manage stateful applications. See Deploy a Stateful Application.
- Perform lifecycle operations for persistent volumes. See Tanzu Kubernetes Persistent Volume Claim Example.