This topic describes how to provision static and dynamic PersistentVolumes (PVs) to run stateful apps using VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).

For static PV provisioning, the PersistentVolumeClaim (PVC) does not need to reference a StorageClass. For dynamic PV provisioning, you must specify a StorageClass and define the PVC using a reference to that StorageClass.

For more information about storage management in Kubernetes, see Persistent Volumes in the Kubernetes Concepts documentation.

For more information about the supported vSphere topologies for PV storage, see PersistentVolume Storage Options on vSphere.

Provision a Static PV

To provision a static PV, you manually create a Virtual Machine Disk (VMDK) file to use as a storage backend for the PV. When the PV is created, Kubernetes knows which volume instance is ready for use. When a PVC or volumeClaimTemplate is requested, Kubernetes chooses an available PV in the system and allocates it to the Deployment or StatefulSets workload.

Provision a Static PV for a Deployment Workload

To provision a static PV for a Deployment workload, the procedure is as follows:

Note: The examples in this section use the vSphere volume plugin. Refer to the Kubernetes documentation for information about volume plugins for other cloud providers.

  1. SSH into an ESXi host in your vCenter cluster that has access to the datastore where you will host the static PV.

  2. Create VMDK files, replacing DATASTORE with your datastore directory name:

    [root@ESXi-1:~] cd /vmfs
    [root@ESXi-1:/vmfs] cd volumes/
    [root@ESXi-1:/vmfs/volumes] cd DATASTORE/
    [root@ESXi-1:/vmfs/volumes/7e6c0ca3-8c4873ed] cd kubevols/
    [root@ESXi-1:/vmfs/volumes/7e6c0ca3-8c4873ed/kubevols] vmkfstools -c 2G redis-master.vmdk
    
  3. Define a PV using a YAML manifest file that contains a reference to the VMDK file. For example, on vSphere, create a file named redis-master-pv.yaml with the following contents:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: redis-master-pv
    spec:
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume:
        volumePath: "[NFS-LAB-DATASTORE] kubevols/redis-master"
        fsType: ext4
    
  4. Define a PVC using a YAML manifest file. For example, create a file named redis-master-claim.yaml with the following contents:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: redis-master-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
    
  5. Define a deployment using a YAML manifest file that references the PVC. For example, create a file named redis-master.yaml with the following contents:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: redis-master
    …
    spec:
      template:
        spec:
          volumes:
          - name: redis-master-data
            persistentVolumeClaim:
              claimName: redis-master-claim
    
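    After you create the manifest files, you can apply them with kubectl and confirm that the PVC binds to the static PV. The following commands are a minimal sketch that uses the example file names from this procedure:

    kubectl create -f redis-master-pv.yaml
    kubectl create -f redis-master-claim.yaml
    kubectl create -f redis-master.yaml

    # Verify that the static PV and the PVC report a status of Bound.
    kubectl get pv redis-master-pv
    kubectl get pvc redis-master-claim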

Provision a Static PV for a StatefulSets Workload

To provision a static PV for a StatefulSets workload with three replicas, the procedure is as follows:

Note: The examples in this section use the vSphere volume plugin. Refer to the Kubernetes documentation for information about volume plugins for other cloud providers.

  1. Create VMDK files, replacing DATASTORE with your datastore directory name:

    [root@ESXi-1:~] cd /vmfs
    [root@ESXi-1:/vmfs] cd volumes/
    [root@ESXi-1:/vmfs/volumes] cd DATASTORE/
    [root@ESXi-1:/vmfs/volumes/7e6c0ca3-8c4873ed] cd kubevols/
    [root@ESXi-1:/vmfs/volumes/7e6c0ca3-8c4873ed/kubevols] vmkfstools -c 10G mysql-pv-1.vmdk
    [root@ESXi-1:/vmfs/volumes/7e6c0ca3-8c4873ed/kubevols] vmkfstools -c 10G mysql-pv-2.vmdk
    [root@ESXi-1:/vmfs/volumes/7e6c0ca3-8c4873ed/kubevols] vmkfstools -c 10G mysql-pv-3.vmdk
    
  2. Define a PV for the first replica using a YAML manifest file that contains a reference to the VMDK file. For example, on vSphere, create a file named mysql-pv-1.yaml with the following contents:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-pv-1
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume:
        volumePath: "[NFS-LAB-DATASTORE] kubevols/mysql-pv-1"
        fsType: ext4
    
  3. Define a PV for the second replica using a YAML manifest file that contains a reference to the VMDK file. For example, on vSphere, create a file named mysql-pv-2.yaml with the following contents:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-pv-2
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume:
        volumePath: "[NFS-LAB-DATASTORE] kubevols/mysql-pv-2"
        fsType: ext4
    
  4. Define a PV for the third replica using a YAML manifest file that contains a reference to the VMDK file. For example, on vSphere, create a file named mysql-pv-3.yaml with the following contents:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-pv-3
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume:
        volumePath: "[NFS-LAB-DATASTORE] kubevols/mysql-pv-3"
        fsType: ext4
    
  5. Define a StatefulSets object using a YAML manifest file. For example, create a file named mysql-statefulsets.yaml with the following contents:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      serviceName: mysql
      replicas: 3
    ...
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    

    Note: In previous steps you created a total of three PVs. The spec.replicas: 3 field defines three replicas. Each replica is attached to one PV.

    Note: In the volumeClaimTemplates section, you must specify the required storage size for each replica. Do not refer to a StorageClass.
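
    After you create the manifest files, you can apply them and confirm that each replica's PVC binds to one of the static PVs. The following commands are a minimal sketch that uses the example file names from this procedure:

    kubectl create -f mysql-pv-1.yaml
    kubectl create -f mysql-pv-2.yaml
    kubectl create -f mysql-pv-3.yaml
    kubectl create -f mysql-statefulsets.yaml

    # Each replica creates a PVC named data-mysql-N from the volumeClaimTemplates
    # section, and each PVC binds to one of the available PVs.
    kubectl get pv
    kubectl get pvc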

Provision a Dynamic PV

Dynamic PV provisioning gives developers the freedom to provision storage when they need it without manual intervention from a Kubernetes cluster administrator. To enable dynamic PV provisioning, the Kubernetes cluster administrator defines one or more StorageClasses.

For dynamic PV provisioning, you define and create a PVC that references a StorageClass. When the PVC or volumeClaimTemplate is created, the StorageClass provisioner automatically creates the PV and its backing VMDK file, and Kubernetes binds the new PV to the claim for use by the Deployment or StatefulSets workload.

Tanzu Kubernetes Grid Integrated Edition supports dynamic PV provisioning by providing StorageClasses for all supported cloud providers, as well as an example PVC.

Note: For dynamic PVs on vSphere, you must create or map the VMDK file for the StorageClass on a shared file system datastore. This shared file system datastore must be accessible to each vSphere cluster where Kubernetes cluster nodes run. For more information, see PersistentVolume Storage Options on vSphere.

Provision a Dynamic PV for Deployment Workloads

Note: The examples in this section use the vSphere provisioner. Refer to the Kubernetes documentation for information about provisioners for other cloud providers.

To provision a dynamic PV for a Deployment workload, the procedure is as follows:

  1. Define a StorageClass using a YAML manifest file. For example, on vSphere, create a file named redis-sc.yaml with the following contents:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: thin-disk
    provisioner: kubernetes.io/vsphere-volume
    
  2. Define a PVC using a YAML manifest file that references the StorageClass. For example, create a file named redis-master-claim.yaml with the following contents:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: redis-master-claim
      annotations:
        volume.beta.kubernetes.io/storage-class: thin-disk
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
    

    Note: When you deploy the PVC on vSphere, the vSphere Cloud Provider plugin automatically creates the PV and associated VMDK file.

  3. Define a Deployment using a YAML manifest file that references the PVC. For example, create a file named redis-master.yaml with the following contents:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: redis-master
    …
    spec:
      template:
        spec:
          volumes:
          - name: redis-master-data
            persistentVolumeClaim:
              claimName: redis-master-claim
    
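    After you create the manifest files, you can apply them and confirm that the PVC triggers dynamic provisioning. The following commands are a minimal sketch that uses the example file names from this procedure:

    kubectl create -f redis-sc.yaml
    kubectl create -f redis-master-claim.yaml
    kubectl create -f redis-master.yaml

    # The PVC reports a status of Bound after the vSphere Cloud Provider plugin
    # creates the backing PV and VMDK file.
    kubectl get pvc redis-master-claim
    kubectl get pv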

Provision a Dynamic PV for StatefulSets Workloads

Note: The examples in this section use the vSphere provisioner. Refer to the Kubernetes documentation for information about provisioners for other cloud providers.

To provision a dynamic PV for a StatefulSets workload with three replicas, the procedure is as follows:

  1. Define a StorageClass using a YAML manifest file. For example, on vSphere, create a file named mysql-sc.yaml with the following contents:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: my-storage-class
    provisioner: kubernetes.io/vsphere-volume
    
  2. Define a StatefulSets object using a YAML manifest file that references the StorageClass. For example, create a file named mysql-statefulsets.yaml with the following contents:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
    ...
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: "my-storage-class"
          resources:
            requests:
              storage: 10Gi
    

    Note: In the volumeClaimTemplates section, specify the required storage size for each replica. Unlike with static provisioning, you must explicitly refer to the desired StorageClass when you use dynamic PV provisioning.
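
    After you create the manifest files, you can apply them and watch the PVC for each replica being provisioned dynamically. The following commands are a minimal sketch that uses the example file names from this procedure:

    kubectl create -f mysql-sc.yaml
    kubectl create -f mysql-statefulsets.yaml

    # Each replica creates a PVC named data-mysql-N from the volumeClaimTemplates
    # section, and the StorageClass provisions a matching PV automatically.
    kubectl get pvc
    kubectl get pv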

Specify a Default StorageClass

If you have or anticipate having more than one StorageClass for use with dynamic PVs in a Kubernetes cluster, you might want to designate a particular StorageClass as the default. With a default StorageClass, a PVC that does not specify a StorageClass is provisioned from the default automatically, so developers do not need to reference a specific StorageClass in every claim.

If necessary, a developer can override the default by specifying a different StorageClass in the PVC definition. See the Kubernetes documentation for more information.

To specify a StorageClass as the default for a Kubernetes cluster, use the annotation storageclass.kubernetes.io/is-default-class: "true".

For example:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-disk
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/vsphere-volume

Note: The above example uses the vSphere provisioner. Refer to the Kubernetes documentation for information about provisioners for other cloud providers.
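
To confirm which StorageClass is the default for the cluster, list the StorageClasses; the default is marked with (default) next to its name. For example:

kubectl get storageclass
kubectl describe storageclass thin-disk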

Provision Dynamic PVs for Use with Tanzu Kubernetes Grid Integrated Edition

Perform the steps in this section to register one or more StorageClasses and define a PVC that can be applied to newly-created pods.

  1. Download the StorageClass spec for your cloud provider by running the command for your cloud provider:

    • AWS: wget https://raw.githubusercontent.com/cloudfoundry-incubator/kubo-ci/master/specs/storage-class-aws.yml
    • Azure:
      • For Azure disk storage: wget https://raw.githubusercontent.com/cloudfoundry-incubator/kubo-ci/master/specs/storage-class-azure.yml
      • For Azure file storage: wget https://raw.githubusercontent.com/cloudfoundry-incubator/kubo-ci/master/specs/storage-class-azure-file.yml
    • GCP: wget https://raw.githubusercontent.com/cloudfoundry-incubator/kubo-ci/master/specs/storage-class-gcp.yml
    • vSphere: wget https://raw.githubusercontent.com/cloudfoundry-incubator/kubo-ci/master/specs/storage-class-vsphere.yml

      After downloading the vSphere StorageClass spec, replace the contents of the file with the following YAML to create the correct StorageClass for vSphere:

      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: thin
        annotations:
          storageclass.kubernetes.io/is-default-class: "true"
      provisioner: kubernetes.io/vsphere-volume
      
  2. Apply the spec by running the following command:

    kubectl create -f STORAGE-CLASS-SPEC.yml
    

    Where STORAGE-CLASS-SPEC is the name of the file that you downloaded in the previous step.

    For example:

    $ kubectl create -f storage-class-gcp.yml
    
  3. Download the example PVC by running the following command:

    wget https://raw.githubusercontent.com/cloudfoundry-incubator/kubo-ci/master/specs/persistent-volume-claim.yml
    
  4. Apply the PVC by running the following command:

    kubectl create -f persistent-volume-claim.yml
    
  5. Confirm that you applied the PVC by running the following command:

    kubectl get pvc -o wide
    
  6. To use the dynamic PV, create a pod that uses the PVC. For an example, see the pv-guestbook.yml configuration file in the kubo-ci repository in GitHub.
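
    For example, assuming you have downloaded pv-guestbook.yml from the kubo-ci repository, you can create the pod and then confirm that the claim is bound and mounted. Substitute the file and pod names from the spec you actually use:

    kubectl create -f pv-guestbook.yml
    kubectl get pvc -o wide
    kubectl get pods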
