TKG clusters, like other components and workloads that run in Supervisor namespaces, require persistent storage.
Storage Policies for TKG Cluster
To provide persistent storage resources to the TKG clusters, a vSphere administrator configures storage policies that describe different storage requirements. The administrator then adds the storage policies to the namespace where the TKG cluster is deployed. Storage policies visible to the namespace determine which datastores the namespace can access and use for persistent storage. They dictate how the cluster nodes and workloads are placed in the vSphere storage environment.
Based on the storage policies assigned to the namespace, vSphere IaaS control plane creates matching Kubernetes storage classes that automatically appear in the namespace. They are also propagated to the TKG cluster deployed in this namespace.
In the TKG cluster, each storage class appears in two variants: one with the Immediate volume binding mode and another with the WaitForFirstConsumer binding mode. Which variant the DevOps team chooses depends on their requirements.
For more information about storage classes in TKG clusters, see Using Storage Classes for Persistent Volumes.
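For illustration, the following is a minimal sketch of how the two variants might look when listed inside a TKG cluster. The class names, the -latebinding suffix, and the provisioner value are examples only; the actual names derive from the storage policies assigned to the Supervisor namespace and can differ between product versions.

```yaml
# Minimal sketch of the two storage class variants inside a TKG cluster.
# Names and the -latebinding suffix are illustrative; actual names derive from
# the storage policies assigned to the Supervisor namespace.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-storage-policy                # hypothetical name, Immediate variant
provisioner: csi.vsphere.vmware.com        # vSphere CSI provisioner name (assumed)
volumeBindingMode: Immediate
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-storage-policy-latebinding    # hypothetical name, late-binding variant
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer
```

Immediate provisions the volume as soon as the claim is created, while WaitForFirstConsumer delays provisioning until a pod that uses the claim is scheduled, which helps place the volume together with the node in topology-aware environments.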
How TKG Clusters Integrate with vSphere Storage
To integrate with the Supervisor and vSphere storage, TKG clusters use Paravirtual CSI (pvCSI).
The pvCSI is the version of the vSphere CNS-CSI driver modified for TKG clusters. The pvCSI resides in the TKG cluster and is responsible for all storage-related requests originating from the TKG cluster. The requests are delivered to the CNS-CSI in the Supervisor, which then propagates them to CNS in vCenter Server. As a result, the pvCSI does not communicate with the CNS component directly, but relies on the CNS-CSI for all storage provisioning operations. Unlike the CNS-CSI, the pvCSI does not require infrastructure credentials. It is configured with a service account in the namespace.
To learn about Supervisor components used to integrate with vSphere storage, see Persistent Storage for Workloads.
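To make this division of labor concrete, the following rough sketch shows how a dynamically provisioned PersistentVolume might look inside the TKG cluster. Every value is a placeholder, and the driver name is assumed to be the vSphere CSI driver name that the pvCSI registers; the opaque volume handle is what associates the in-cluster volume with the object the Supervisor tracks.

```yaml
# Rough sketch of a dynamically provisioned PersistentVolume as it might appear
# inside the TKG cluster. All values are placeholders; the driver name is the
# vSphere CSI driver name assumed to be registered by the pvCSI.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0a1b2c3d-placeholder             # generated name (placeholder)
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gold-storage-policy      # hypothetical class name
  csi:
    driver: csi.vsphere.vmware.com           # the pvCSI serves the volume in the TKG cluster
    volumeHandle: <opaque-volume-id>         # placeholder; identifies the volume tracked through the Supervisor
```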
How a Persistent Volume is Created
The following illustrates how different components interact when a DevOps engineer performs a storage-related operation within the TKG cluster, for example, creating a persistent volume claim (PVC).
The DevOps engineer creates a PVC using the command line on the TKG cluster. This action generates a matching PVC on the Supervisor and triggers the CNS-CSI. The CNS-CSI invokes the CNS create volume API.
After the volume is successfully created, the operation propagates back through the Supervisor to the TKG cluster. As a result, users can see the persistent volume and the persistent volume claim in the bound state in both the Supervisor and the TKG cluster.
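As a concrete example, the following is a minimal claim a DevOps engineer might apply with kubectl inside the TKG cluster. The claim name, the requested size, and the storage class name are placeholders; the storage class must be one propagated from the Supervisor namespace, such as the hypothetical class sketched earlier.

```yaml
# Hypothetical PVC applied inside the TKG cluster, for example:
#   kubectl apply -f demo-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                             # placeholder name
spec:
  accessModes:
    - ReadWriteOnce                          # dynamic block volume
  storageClassName: gold-storage-policy      # must match a class propagated from the Supervisor namespace
  resources:
    requests:
      storage: 5Gi
```

After CNS reports the volume as created, running kubectl get pvc demo-pvc in the TKG cluster shows the claim as Bound, and the matching claim in the Supervisor namespace reaches the Bound state as well.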
Functionality Supported by pvCSI
The pvCSI component that runs in the TKG cluster supports a number of vSphere and Kubernetes storage features.
| Supported Functionality | pvCSI with TKG Cluster |
|---|---|
| CNS Support in the vSphere Client | Yes |
| Enhanced Object Health in the vSphere Client | Yes (vSAN only) |
| Dynamic Block Persistent Volume (ReadWriteOnce Access Mode) | Yes |
| Dynamic File Persistent Volume (ReadWriteMany Access Mode) | Yes (with vSAN File Services) |
| vSphere Datastore | VMFS/NFS/vSAN/vVols |
| Static Persistent Volume | Yes |
| Encryption | No |
| Offline Volume Expansion | Yes |
| Online Volume Expansion | Yes |
| Volume Topology and Zones | Yes |
| Kubernetes Multiple Control Plane Instances | Yes |
| WaitForFirstConsumer | Yes |
| VolumeHealth | Yes |
| Storage vMotion with Persistent Volumes | No |
| Volume Snapshots | Yes. For information, see Creating Snapshots in a TKG Service Cluster. |
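Because both offline and online volume expansion appear as supported in the table above, growing a volume from the TKG cluster typically only requires raising the requested size on the claim and re-applying it. A minimal sketch, assuming the hypothetical claim from the earlier example and a storage class that has allowVolumeExpansion set to true:

```yaml
# Hypothetical resize of an existing claim: only spec.resources.requests.storage changes.
# Expansion requires the storage class to allow it (allowVolumeExpansion: true);
# Kubernetes does not support shrinking a volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gold-storage-policy
  resources:
    requests:
      storage: 10Gi        # raised from 5Gi
```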