vSphere Container Storage Plug-in supports static provisioning for block volumes in native Kubernetes clusters.

Static provisioning is a feature native to Kubernetes. With static provisioning, cluster administrators can make existing storage devices available to a cluster. If you have an existing persistent storage device in your vCenter Server, you can use static provisioning to make the storage instance available to your cluster.

As a cluster administrator, you must know the details of the storage device, its supported configurations, and mount options.

To make existing storage available to a cluster user, you must manually create the storage device, a PersistentVolume, and a PersistentVolumeClaim. Because the PV and the storage device already exist, you do not need to specify a storage class name in the PVC specification. You can create the static PV and PVC binding in different ways, for example, by label matching or volume size matching.
Note: Creating multiple PVs for the same volume backed by a vSphere virtual disk (First Class Disk) in the Kubernetes cluster is not supported.

The common use cases of static volume provisioning include the following.

Use an existing storage device
You have provisioned persistent storage, a First Class Disk (FCD), directly in your vCenter Server and want to use this FCD in your cluster.
Make retained data available to the cluster
You have provisioned a volume by using dynamic provisioning with a storage class that specifies the reclaimPolicy: Retain parameter. You have removed the PVC, but the PV, the physical storage in vCenter Server, and the data still exist. You want to access the retained data from an application in your cluster.
Share persistent storage across namespaces in the same cluster
You have provisioned a PV in a namespace of your cluster. You want to use the same storage instance for an application pod that is deployed to a different namespace in your cluster.
Share persistent storage across clusters in the same zone
You have provisioned a PV for your cluster. To share the same persistent storage instance with other clusters in the same zone, you must manually create the PV and matching PVC in the other cluster.
Note: Sharing persistent storage across clusters is available only if the cluster and the storage instance are located in the same zone.
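For the first use case, you can create an FCD directly in vCenter Server, for example with the govc CLI. The following commands are a sketch; the datastore name, disk name, and size are placeholders for your environment.

```
# Create a 2 GiB First Class Disk on the datastore "vsanDatastore".
# The command prints the ID of the new disk, which you later use as
# the volumeHandle of the static PV. Names and sizes are illustrative.
govc disk.create -ds vsanDatastore -size 2G static-disk-1

# List existing FCDs on the datastore to look up a disk ID.
govc disk.ls -ds vsanDatastore
```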
Statically provision a Single Access (RWO) Volume backed by vSphere Virtual Disk (First Class Disk)
This procedure provides instructions to statically provision a persistent volume on a Vanilla Kubernetes cluster. Make sure to include the pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com annotation in the PV specification.
Note: Do not specify the storage.kubernetes.io/csiProvisionerIdentity key in csi.volumeAttributes in the PV specification. This key indicates a dynamically provisioned PV.


  1. Define a PVC and a PV.
    Use the following YAML file as an example.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: static-pv-name
      annotations:
        pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
      labels:
        fcd-id: 0c75d40e-7576-4fe7-8aaa-a92946e2805d # This label is used as a selector to bind with the volume claim.
                                                     # This can be any unique key-value pair that identifies the PV.
    spec:
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      csi:
        driver: "csi.vsphere.vmware.com"
        volumeAttributes:
          type: "vSphere CNS Block Volume"
        volumeHandle: "0c75d40e-7576-4fe7-8aaa-a92946e2805d" # First Class Disk (Improved Virtual Disk) ID
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: static-pvc-name
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      selector:
        matchLabels:
          fcd-id: 0c75d40e-7576-4fe7-8aaa-a92946e2805d # This label is used as a selector to find the matching PV with the specified key and value.
      storageClassName: ""
  2. Import the PV and PVC into a native Kubernetes cluster.
      kubectl create -f static.yaml
  3. Verify that the PVC you imported has been created and that the PersistentVolume is bound to it.
    $ kubectl describe pvc static-pvc-name
          Name:          static-pvc-name
          Namespace:     default
          Status:        Bound
          Volume:        static-pv-name
          Labels:        <none>
          Annotations:   pv.kubernetes.io/bind-completed: yes
                         pv.kubernetes.io/bound-by-controller: yes
          Finalizers:    [kubernetes.io/pvc-protection]
          Capacity:      2Gi
          Access Modes:  RWO
          VolumeMode:    Filesystem
          Mounted By:    <none>
          Events:        <none>
    If the operation is successful, the Status section displays Bound and the Volume field is populated.
  4. Verify that the PV was successfully bound to the PVC.
    $ kubectl describe pv static-pv-name
          Name:            static-pv-name
          Labels:          fcd-id=0c75d40e-7576-4fe7-8aaa-a92946e2805d
          Annotations:     pv.kubernetes.io/bound-by-controller: yes
                           pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
          Finalizers:      [kubernetes.io/pv-protection]
          Status:          Bound
          Claim:           default/static-pvc-name
          Reclaim Policy:  Delete
          Access Modes:    RWO
          VolumeMode:      Filesystem
          Capacity:        2Gi
          Node Affinity:   <none>
          Source:
              Type:              CSI (a Container Storage Interface (CSI) volume source)
              Driver:            csi.vsphere.vmware.com
              VolumeHandle:      0c75d40e-7576-4fe7-8aaa-a92946e2805d
              ReadOnly:          false
              VolumeAttributes:      type=vSphere CNS Block Volume
          Events:                <none>
    If the operation is successful, the PV appears in the output with the VolumeHandle key populated, Status shows Bound, and Claim is set to default/static-pvc-name.
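After the PVC is bound, a workload can consume the statically provisioned volume by referencing the PVC by name in its volumes section. The following Pod specification is a minimal sketch; the container image and mount path are illustrative and not part of the procedure above.

```
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  containers:
    - name: app
      image: busybox          # illustrative image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: static-vol
          mountPath: /data    # illustrative mount path
  volumes:
    - name: static-vol
      persistentVolumeClaim:
        claimName: static-pvc-name # the PVC created in step 1
```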