VMware vSphere Container Storage Plug-in 2.2 | 05 NOV 2021

Check for additions and updates to these release notes.

About 2.2 Release Notes

These Release Notes cover versions 2.2.x of VMware vSphere Container Storage Plug-in, previously called vSphere CSI Driver.

What's New

Version 2.2.4
  • Added protection for in-tree vSphere volumes that are migrated to vSphere Container Storage Plug-in: a migrated volume is no longer deleted when its node VM is deleted.
  • Fixed unmounting of in-tree in-line vSphere volumes when vSphere Container Storage Plug-in migration is enabled.
Version 2.2.3
  • No new features are released in version 2.2.3. This is a patch release to fix critical issues.
  • If you use the in-tree volume plug-in, upgrade to version 2.2.3.
Version 2.2.2
  • No new features are released in version 2.2.2. This is a patch release; see the Resolved Issues section for release 2.2.2.
Version 2.2.1
  • No new features are released in version 2.2.1. This is a patch release to fix a critical detach volume error observed in version 2.2.0. See https://github.com/kubernetes-sigs/vsphere-csi-driver/pull/840.
Version 2.2.0
  • Added support for online volume expansion.
  • Added support for running vSphere Container Storage Plug-in on VMware Cloud™ on AWS (VMC). VMC currently supports block volumes only. The minimum SDDC version required to run vSphere Container Storage Plug-in is 1.12. For more details, refer to the VMware Cloud™ on AWS Release Notes.
  • Added a datastore privilege check to filter out shared datastores that do not have the Datastore.FileManagement privilege.
  • Added telemetry collection support using the cluster-distribution field in the vSphere Config Secret; see the example configuration after this list.
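
  The following is a minimal sketch of a csi-vsphere.conf file that sets the cluster-distribution field; all values are placeholders, and the remaining fields follow your existing configuration:

      [Global]
      cluster-id = "<unique-cluster-id>"
      cluster-distribution = "<cluster-distribution-name>"

      [VirtualCenter "<vcenter-ip-or-fqdn>"]
      user = "<vcenter-username>"
      password = "<vcenter-password>"
      datacenters = "<datacenter-path>"

  The vSphere Config Secret can then be re-created from this file, for example:

      kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf -n kube-system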

Deployment Files

Kubernetes Releases

  • Minimum: 1.18
  • Maximum: 1.20

Sidecar Container Versions

  • csi-provisioner - v2.1.0
  • csi-attacher - v3.1.0
  • csi-resizer - v1.1.0
  • livenessprobe - v2.2.0
  • csi-node-driver-registrar - v2.1.0

Resolved Issues

Release 2.2.4

Release 2.2.3

Release 2.2.2

  • vSAN file share volumes become inaccessible after Node DaemonSet Pod of vSphere Container Storage Plug-in restarts. For more information, see https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/1216.

Release 2.2.0

  • When a static persistent volume is re-created with the same PV name, the volume is not registered as a container volume with vSphere.
  • The metadata syncer container physically deletes the volume from the datastore when you delete a Persistent Volume in Bound status with reclaim policy Delete while StorageObjectInUseProtection is disabled on the Kubernetes cluster.
  • When the in-tree vSphere plug-in is configured to use the default datastore in the /datacenter-name/datastore/default-datastore format, migration of the volume fails.
  • When a pod that uses a PVC is rescheduled to another node while the metadata syncer is down, full sync might fail with an error.
  • If no datastores are present in any of the data centers in your vSphere environment, file volume provisioning fails with the error failed to get all the datastores.

Known Issues

  • After you delete a Pod with an in-tree inline vSphere volume, the Pod permanently remains in the terminating state

    Workaround: Forcefully delete the pod and manually unmount the associated volumes from the node VM, as illustrated in the example after this note.

    This issue is fixed in vSphere Container Storage Plug-in version 2.5.0. For more information, see https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/1466.
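
    The following is a rough illustration of the workaround; the pod name, namespace, and mount path are placeholders, and the actual mount location on the node VM depends on your environment:

      kubectl delete pod <pod-name> -n <namespace> --force --grace-period=0
      # On the node VM, locate and unmount the leftover in-line volume mount:
      mount | grep <volume-name>
      umount <mount-path-from-previous-output>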

  • Persistent volume fails to be detached from a node

    This problem might occur when the Kubernetes node object or the node VM on vCenter Server has been deleted without draining the Kubernetes node.

    Workaround:

    1. Detach the volume manually from the node VM associated with the deleted node object.
    2. Delete the VolumeAttachment object associated with the deleted node object.
      kubectl patch volumeattachments.storage.k8s.io csi-<uuid> -p '{"metadata":{"finalizers":[]}}' --type=merge 
      kubectl delete volumeattachments.storage.k8s.io csi-<uuid>

  • A PV provisioned before vSphere Container Storage Plug-in migration was enabled fails to be attached after the migration is enabled

    This happens only when the datastore name contains characters that cause trailing characters of the vmdk path to be trimmed. When the vmdk path is trimmed, the volume cannot be registered with CNS. As a result, the volume cannot be attached to the node by vSphere Container Storage Plug-in.

    For more information, see https://github.com/kubernetes-sigs/vsphere-csi-driver/pull/1451.

    Workaround: This issue is resolved in version 2.4.1. Upgrade vSphere Container Storage Plug-in to version 2.4.1.

  • CreateVolume request fails with an error after you change the hostname or IP of vCenter Server in the vsphere-config-secret

    After you change the hostname or IP of vCenter Server in the vsphere-config-secret and then try to create a persistent volume claim, the action fails with the following error:

    failed to get Nodes from nodeManager with err virtual center wasn't found in registry

    You can observe this issue in the following releases of vSphere Container Storage Plug-in: v2.0.2, v2.1.2, v2.2.2, v2.3.0, and v2.4.0.

    Workaround: Restart the vsphere-csi-controller pod by running the following command:

    kubectl rollout restart deployment vsphere-csi-controller -n kube-system

  • A migrated in-tree vSphere volume deleted by the in-tree vSphere plug-in remains on the CNS view of the vSphere Client

    Workaround: As a vSphere administrator, reconcile discrepancies in the Managed Virtual Disk Catalog. Follow this KB article: https://kb.vmware.com/s/article/2147750.

  • Volume expansion might fail when it is called simultaneously with pod creation

    This issue occurs when you resize the PVC and create a pod that uses the PVC at the same time. In this case, pod creation might complete first using the PVC with the original size. Volume expansion fails because online resize is not supported in vSphere 7.0 Update 1.

    Workaround: Wait for the PVC to reach the FileSystemResizePending condition before attaching a pod to it.
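
    For example, one way to check the PVC conditions before attaching a pod (the PVC name example-pvc is a placeholder):

      kubectl get pvc example-pvc -o jsonpath='{.status.conditions[*].type}'

    Attach the pod only after the output includes FileSystemResizePending.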

  • Deleting a PV before deleting a PVC leaves orphan volume on the datastore

    Orphan volumes remain on the datastore, and an administrator needs to delete those volumes manually using the govc command.

    The upstream issue is tracked at: https://github.com/kubernetes-csi/external-provisioner/issues/546.

    Workaround: Do not attempt to delete a PV bound to a PVC. Only delete a PV if you know that the underlying volume in the storage system is gone.

    If you accidentally leave orphan volumes on the datastore and know the volume handles or First Class Disk (FCD) IDs of the deleted PVs, the storage administrator can delete those volumes with the govc disk.rm command, as shown in the example below.
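
    For example, assuming govc is already configured to connect to the vCenter Server system, the following commands list First Class Disks on a datastore and then delete the orphan volume; the datastore name and disk ID are placeholders:

      govc disk.ls -ds <datastore-name>
      govc disk.rm <volume-handle-or-fcd-id>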

  • vSAN file share volumes become inaccessible after Node DaemonSet Pod of vSphere Container Storage Plug-in restarts

    Pods cannot read or write data on vSAN file share volumes, that is, ReadWriteMany and ReadOnlyMany vSphere Container Storage Plug-in volumes.

    Workaround: Remount the vSAN file share volumes used by the application pods at the same location directly from the node VM guest OS, as outlined in the example after this item.

    This issue is resolved in version 2.2.2.
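
    The following is a rough outline of the remount, assuming the vSAN file share is an NFS export and the volume is mounted at the standard kubelet CSI path; the NFS version, share address, and paths are placeholders and must match how the share was originally mounted:

      # On the node VM guest OS, identify the stale file share mount:
      mount | grep nfs
      # Unmount and remount the share at the same location:
      umount /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount
      mount -t nfs -o vers=4.1 <file-share-ip>:/<share-path> /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount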

  • Changes to user, ca-file, and VirtualCenter in the vsphere-config-secret are not honored until the vSphere Container Storage Plug-in Pod restarts

    Volume lifecycle operations fail until the controller pod restarts.

    Workaround: Restart the vsphere-csi-controller deployment pod, for example by using the kubectl rollout restart command shown in the CreateVolume issue above.

  • A restart of vCenter Server might leave Kubernetes persistent volume claims in pending state

    Persistent volume claims that are being created at the time vCenter Server restarts might remain in the Pending state for one hour.

    After vSphere Container Storage Plug-in clears the pending cached tasks, new tasks to create the volumes are issued to the vCenter Server system, and the persistent volume claims can then go into the Bound state.

    Restarting vCenter Server while volumes are getting created might leave orphan volumes on the datastores.

    Workaround: If a one-hour wait is too long for your SLA, restart the vSphere Container Storage Plug-in Pod to clean up pending vCenter Server cached task objects for which the session is already destroyed.

    Note: This action will leave an orphan volume on the datastore.
