VMware vSphere Container Storage Plug-in 2.0 | 05 NOV 2021

Check for additions and updates to these release notes.

About 2.0 Release Notes

These Release Notes cover versions 2.0.0, 2.0.1, and 2.0.2 of VMware vSphere Container Storage Plug-in, previously called vSphere CSI Driver.

What's New

Version 2.0.2

No new features are released in version 2.0.2. This is a patch release to fix a critical issue observed in version 2.0.1.

Version 2.0.1

No new features are released in version 2.0.1. This is a patch release to fix critical issues observed in version 2.0.0:

  • Fixed a backward compatibility issue with the vSphere 6.7 Update 3 release.

  • Fixed a race condition between detach volume and delete volume operations caused by a bug in the external-provisioner.

Version 2.0.0

  • Offline persistent volume expansion for block volumes.

  • ReadWriteMany volumes using vSAN file services.

  • Support for enabling leader election to ensure High Availability for vSphere Container Storage Plug-in.

  • Other changes include:

    • Enhanced driver to ensure volume operations are idempotent.

    • Enhanced driver logging and debuggability with contextual log tracing.

Deployment Files

Important:

To ensure proper functionality, do not update the internal-feature-states.csi.vsphere.vmware.com configmap available in the deployment YAML file. VMware does not recommend activating or deactivating features in this configmap.
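
If you need to verify which features are enabled, you can inspect the configmap without editing it. A minimal sketch, assuming the plug-in is deployed in the kube-system namespace:

    kubectl get configmap internal-feature-states.csi.vsphere.vmware.com -n kube-system -o yaml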

Version 2.0.2: https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/release-2.0/manifests/v2.0.2

Version 2.0.1: https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/release-2.0/manifests/v2.0.1

Version 2.0.0: https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/release-2.0/manifests/v2.0.0

Kubernetes Releases

Version 2.0.2

  • Minimum: 1.17

  • Maximum: 1.19

Version 2.0.1

  • Minimum: 1.17

  • Maximum: 1.19

Version 2.0.0

  • Minimum: 1.16

  • Maximum: 1.18

Sidecar Container Versions

Version 2.0.2

  • csi-provisioner - v2.0.0

  • csi-attacher - v2.0.0

  • csi-resizer - v0.3.0

  • livenessprobe - v1.1.0

  • csi-node-driver-registrar - v1.2.0

Version 2.0.1

  • csi-provisioner - v2.0.0

  • csi-attacher - v2.0.0

  • csi-resizer - v0.3.0

  • livenessprobe - v1.1.0

  • csi-node-driver-registrar - v1.2.0

Version 2.0.0

  • csi-provisioner - v1.4.0

  • csi-attacher - v2.0.0

  • csi-resizer - v0.3.0

  • livenessprobe - v1.1.0

  • csi-node-driver-registrar - v1.2.0

Resolved Issues

Version 2.0.2

  • vSAN file share volumes become inaccessible after Node DaemonSet Pod of vSphere Container Storage Plug-in restarts.

Known Issues

  • Volume provisioning using the datastoreURL parameter in StorageClass does not work correctly when this datastoreURL points to a shared datastore mounted across datacenters

    If nodes span multiple datacenters, the vSphere Container Storage Plug-in fails to find shared datastores accessible to all nodes. This issue occurs because datastores shared across datacenters have different datastore MoRefs but use the same datastore URL, and the current logic in the vSphere Container Storage Plug-in does not account for this.

    Workaround: Do not spread node VMs across multiple datacenters.

  • Persistent volume fails to be detached from a node

    This problem might occur when the Kubernetes node object or the node VM on vCenter Server has been deleted without draining the Kubernetes node.

    Workaround:

    1. Detach the volume manually from the node VM associated with the deleted node object.

    2. Delete the VolumeAttachment object associated with the deleted node object.

      kubectl patch volumeattachments.storage.k8s.io csi-<uuid> -p '{"metadata":{"finalizers":[]}}' --type=merge 
      kubectl delete volumeattachments.storage.k8s.io csi-<uuid>
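
    To identify the VolumeAttachment objects that still reference the deleted node (step 2 above), you can list them together with their node and PV names. A minimal sketch:

      kubectl get volumeattachments.storage.k8s.io \
        -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,PV:.spec.source.persistentVolumeName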

  • When a pod that uses a PVC is rescheduled to another node while the metadatasyncer is down, fullsync might fail with an error

    You might observe the following error: Duplicated entity for each entity type in one cluster is found.

    This issue might be due to CNS holding stale volume metadata.

    Workaround: This issue is fixed in version 2.2.0. Upgrade the plug-in to 2.2.0.

  • CreateVolume request fails with an error after you change the hostname or IP of vCenter Server in the vsphere-config-secret

    After you change the hostname or IP of vCenter Server in the vsphere-config-secret and then try to create a persistent volume claim, the action fails with the following error:

    failed to get Nodes from nodeManager with err virtual center wasn't found in registry

    You can observe this issue in the following releases of vSphere Container Storage Plug-in: v2.0.2, v2.1.2, v2.2.2, v2.3.0, and v2.4.0.

    Workaround: Restart the vsphere-csi-controller pod by running the following command:

    kubectl rollout restart deployment vsphere-csi-controller -n kube-system

  • Unused volumes not deleted during a full synchronization when vSphere Container Storage Plug-in version 2.0.0 is used with vSphere 6.7 Update 3

    Full synchronization in vSphere Container Storage Plug-in version 2.0.0 does not delete unused volumes in vSphere 6.7 Update 3.

    Workaround: This issue is resolved in v2.0.1. If you are using vSphere 6.7 Update 3, upgrade the driver to the 2.0.1 release.

  • If a volume is deleted before it is detached from a Node VM, the volume does not get detached from the node

    The reasons that the volume gets into this state are the following:

    • vSphere Container Storage Plug-in issues the delete call before the volume is detached.

    • The vCenter Server API in vSphere 6.7 Update 3 does not mark the volume back as a container volume when the delete call fails while the volume is attached to the Node VM.

      This issue is fixed in vSphere 7.0.

    When the Pod and PVC are deleted together upon deletion of a namespace, there is a race between the delete and detach volume operations. As a result, the Pod remains in the Terminating state and the PV remains in the Released state.

    The upstream issue is tracked at: https://github.com/kubernetes/kubernetes/issues/84226.

    Workaround:

    1. Delete the Pod with force: kubectl delete pods <pod-name> --grace-period=0 --force.

    2. Find the VolumeAttachment for the volume that remains undeleted. Get the Node from this VolumeAttachment.

    3. Manually detach the disk from the Node VM.

    4. Edit this VolumeAttachment and remove the finalizer. It will get deleted.

    5. Use govc to manually delete the FCD (see the sketch after these steps).

    6. Edit PV and remove the finalizer. It will get deleted.
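
    The following is a minimal sketch of steps 4 through 6. The VolumeAttachment name, PV name, and FCD ID are placeholders, and govc is assumed to be configured with GOVC_URL and credentials for the vCenter Server system:

      # Step 4: remove the finalizer so that the VolumeAttachment gets deleted
      kubectl patch volumeattachments.storage.k8s.io csi-<uuid> -p '{"metadata":{"finalizers":[]}}' --type=merge
      # Step 5: delete the orphaned First Class Disk (FCD)
      govc disk.rm <fcd-id>
      # Step 6: remove the finalizer so that the PV gets deleted
      kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":[]}}' --type=merge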

    The issue is resolved in v2.0.1. If you use vSphere 6.7 Update 3, upgrade the driver to the 2.0.1 release.

  • When a static persistent volume is re-created with the same PV name, the volume is not registered as a container volume with vSphere

    As a result of this problem, attach or delete operations cannot be performed on this persistent volume.

    Workaround: Wait for one hour before re-creating the static persistent volume using the same name.

  • Metadata syncer container physically deletes the volume from the datastore when you delete a Persistent Volume in Bound status with reclaim policy Delete while StorageObjectInUseProtection is disabled on the Kubernetes cluster

    As a result, the Persistent Volume Claim goes into the Lost status and the volume cannot be recovered.

    Workaround: Do not disable StorageObjectInUseProtection, and do not delete the Persistent Volume directly without first deleting the PVC.

  • The deployment YAML uses a hostPath volume for the Unix domain socket path in the vSphere Container Storage Plug-in deployment

    When the controller Pod does not have access to the file system on the Node VM, the driver fails to create the socket file and does not come up.

    Workaround: Use an emptyDir volume instead of a hostPath volume.

  • Volume expansion might fail when it is called simultaneously with pod creation

    This issue occurs when you resize the PVC and create a pod that uses that PVC at the same time. In this case, pod creation might complete first using the PVC with the original size. Volume expansion fails because online resize is not supported in vSphere 7.0 Update 1.

    Workaround: Wait for the PVC to reach the FileSystemResizePending condition before attaching a pod to it.
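
    To check whether the PVC has reached this condition, you can inspect its status conditions. A minimal sketch with a placeholder PVC name:

      kubectl get pvc <pvc-name> -o jsonpath='{.status.conditions[*].type}'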

  • Deleting a PV before deleting a PVC leaves orphan volume on the datastore

    Orphan volumes remain on the datastore. An administrator needs to delete those volumes manually using the govc command.

    The upstream issue is tracked at: https://github.com/kubernetes-csi/external-provisioner/issues/546.

    Workaround: Do not attempt to delete a PV bound to a PVC. Only delete a PV if you know that the underlying volume in the storage system is gone.

    If orphan volumes were accidentally left on the datastore and you know the volume handles or First Class Disk (FCD) IDs of the deleted PVs, the storage administrator can delete the volumes using the govc disk.rm <volume handle/FCD ID> command.
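
    For example, a minimal cleanup sketch, assuming govc is configured with GOVC_URL, credentials, and GOVC_DATASTORE for the datastore that holds the orphan volumes; the FCD ID is a placeholder:

      # List the First Class Disks (FCDs) on the configured datastore
      govc disk.ls
      # Delete the orphan volume by its volume handle / FCD ID
      govc disk.rm <fcd-id>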

  • When multiple PVCs and Pods with the same name are present on the cluster and the volume gets de-registered or lost from the vCenter Server CNS database, the syncer does not re-register the volume

    As a result, the volume does not reappear in the CNS view of the vSphere Client. Attempts to detach the volume and attach it to a new node fail.

    Workaround: This issue is fixed in 2.1.0 release of vSphere Container Storage Plug-in. Upgrade the plug-in to version 2.1.0.

  • vSAN file share volumes become inaccessible after Node DaemonSet Pod of vSphere Container Storage Plug-in restarts

    Pods cannot read or write data on vSAN file share volumes, that is, ReadWriteMany and ReadOnlyMany vSphere Container Storage Plug-in volumes.

    Workaround: Remount the vSAN file services volumes used by the application pods at the same location directly from the Node VM guest OS. This issue is resolved in v2.0.2.
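
    The following is an illustrative sketch only; the file share address, export path, pod UID, and PV name are placeholders that must be taken from your environment, and the kubelet volume mount path can differ depending on your configuration:

      # On the Node VM guest OS, remount the vSAN file share at the path kubelet used for the pod volume
      mount -t nfs4 <file-share-ip>:/<share-path> \
        /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount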

  • Changes to user, ca-file, and VirtualCenter in the vsphere-config-secret are not honored until the vSphere Container Storage Plug-in Pod restarts

    Volume lifecycle operations fail until the controller pod restarts.

    Workaround: Restart the vsphere-csi-controller deployment Pod.

  • A restart of vCenter Server might leave Kubernetes persistent volume claims in pending state

    Persistent volume claims that are being created at the time vCenter Server is restarting might remain in pending state for one hour.

    After vSphere Container Storage Plug-in clears up pending cached tasks, new tasks to create volumes will be issued to the vCenter Server system, and then persistent volume claims can go into Bound State.

    Restarting vCenter Server while volumes are getting created might leave orphan volumes on the datastores.

    Workaround: If a one-hour wait is too long for your SLA, restart the vSphere Container Storage Plug-in Pod to clean up pending vCenter Server cached task objects for which the session is already destroyed.

    Note: This action will leave an orphan volume on the datastore.

  • Attempts to create statically provisioned vSAN file share ReadWriteMany volumes fail when fsType is not specified in volume spec

    During static provisioning of ReadWriteMany volumes without fsType, volume registration does not happen, and a pod cannot be created using such a PV.

    Workaround: Set fsType to nfs4 or nfs in the volume spec during static provisioning of ReadWriteMany volumes.
