VMware vSphere Container Storage Plug-in 2.3 | 05 NOV 2021

Check for additions and updates to these release notes.

About 2.3 Release Notes

These Release Notes cover 2.3.x versions of VMware vSphere Container Storage Plug-in, previously called vSphere CSI Driver.

What's New

Version 2.3.2

  • Added protection for in-tree vSphere volumes that are migrated to vSphere Container Storage Plug-in after deleting node VMs.

  • Fixed unmounting of in-tree in-line vSphere volumes when vSphere Container Storage Plug-in migration is enabled.

Version 2.3.1

  • No new features are released in version 2.3.1. This is a patch release to fix critical issues.

  • If you use the in-tree vSphere volume plug-in, upgrade to version 2.3.1.

Version 2.3.0

Deployment Files

Important:

To ensure proper functionality, do not update the internal-feature-states.csi.vsphere.vmware.com configmap available in the deployment YAML file. VMware does not recommend activating or deactivating features in this configmap.
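
If you want to review the current feature states without editing them, a read-only query such as the following is sufficient. This is a minimal sketch that assumes the plug-in runs in the vmware-system-csi namespace used by the default deployment manifests.

    # View the feature-state configmap without modifying it (the namespace is
    # an assumption based on the default deployment manifests).
    kubectl get configmap internal-feature-states.csi.vsphere.vmware.com -n vmware-system-csi -o yaml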

  • Version 2.3.2: https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v2.3.2/manifests/vanilla

  • Version 2.3.1: https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v2.3.1/manifests/vanilla

  • Version 2.3.0: https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v2.3.0/manifests/vanilla

Kubernetes Release

  • Minimum: 1.19

  • Maximum: 1.21

Supported Sidecar Container Versions

  • csi-provisioner - v2.2.0

  • csi-attacher - v3.2.0

  • csi-resizer - v1.1.0

  • livenessprobe - v2.2.0

  • csi-node-driver-registrar - v2.1.0

Resolved Issues

Version 2.3.2

Version 2.3.1

Version 2.3.0

Known Issues

  • Volume provisioning using the datastoreURL parameter in StorageClass does not work correctly when the datastoreURL points to a shared datastore mounted across datacenters

    If nodes span multiple datacenters, vSphere Container Storage Plug-in fails to find shared datastores accessible to all nodes. This issue occurs because datastores shared across datacenters have different datastore MoRefs but use the same datastore URL. The current logic in vSphere Container Storage Plug-in does not account for this.

    Workaround: Do not spread node VMs across multiple datacenters.
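
    As a quick check, you can compare how the same datastore appears under each datacenter. The following is a minimal sketch that assumes govc is installed and configured against vCenter Server; the datacenter and datastore names are placeholders.

    # The same datastore URL reported under different datacenters (and
    # therefore different MoRefs) indicates a datastore shared across
    # datacenters.
    govc datastore.info -dc <datacenter-1> <datastore-name>
    govc datastore.info -dc <datacenter-2> <datastore-name>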

  • VCP does not detach the volume of a migrated static VCP PV on vSAN when the volume path contains a folder name instead of a folder ID

    This issue occurs when VCP tries to detach a migrated static VCP PV on vSAN whose volume path contains a folder name instead of a folder ID. The detach operation does not detach the VMDK. As a result, subsequent volume deletion or re-attach operations fail with the error message "The resource volume is in use".

    Workaround:

    Manually detach the VMDK from the Kubernetes worker VM in vSphere before deleting the PV.

    Example:

    VolumePath: [vsanDatastore] e2e/test-1655224652745708493-540.vmdk does not work.

    VolumePath: [vsanDatastore] c358a862-7072-2e03-6f3f-0200663dfb1b/test-1655224652745708493-540.vmdk works.
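
    If you prefer the command line to the vSphere Client, the following is a minimal sketch of the manual detach with govc. It assumes govc is configured against vCenter Server; the worker VM name and disk device label are placeholders that you must verify first.

    # List the virtual disks attached to the worker VM and identify the one
    # backed by the migrated VCP volume path.
    govc device.ls -vm <k8s-worker-vm-name>
    # Detach the disk from the VM. The -keep option preserves the backing VMDK
    # on the datastore so that only the attachment is removed.
    govc device.remove -vm <k8s-worker-vm-name> -keep <disk-device-label>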

  • Persistent volume fails to be detached from a node

    This problem might occur when the Kubernetes node object or the node VM on vCenter Server has been deleted without draining the Kubernetes node.

    Workaround:

    1. Detach the volume manually from the node VM associated with the deleted node object.

    2. Delete the VolumeAttachment object associated with the deleted node object.

      kubectl patch volumeattachments.storage.k8s.io csi-<uuid> -p '{"metadata":{"finalizers":[]}}' --type=merge 
      kubectl delete volumeattachments.storage.k8s.io csi-<uuid>
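
    To find the VolumeAttachment objects that still reference the deleted node, a query such as the following can help; the node name is a placeholder.

      # List VolumeAttachments with their node and PV, then filter by the
      # deleted node's name.
      kubectl get volumeattachments.storage.k8s.io -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,PV:.spec.source.persistentVolumeName | grep <deleted-node-name>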

  • A PV provisioned before vSphere Container Storage Plug-in migration was enabled fails to be attached after the migration is enabled

    This happens only when the datastore name contains special characters that cause the trailing characters of the VMDK path to be trimmed. When the VMDK path is trimmed, the volume cannot be registered with CNS. As a result, the volume cannot be attached to the node by vSphere Container Storage Plug-in.

    For more information, see https://github.com/kubernetes-sigs/vsphere-csi-driver/pull/1451.

    Workaround: This issue has been resolved in version 2.4.1. Upgrade vSphere Container Storage Plug-in to version 2.4.1.
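
    Before you upgrade, you can confirm which plug-in version is currently running by inspecting the controller images. This is a minimal sketch that assumes the default deployment name and namespace from the manifests above.

    kubectl get deployment vsphere-csi-controller -n vmware-system-csi -o jsonpath='{.spec.template.spec.containers[*].image}'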

  • CreateVolume request fails with an error after you change the hostname or IP of vCenter Server in the vsphere-config-secret

    After you change the hostname or IP of vCenter Server in the vsphere-config-secret and then try to create a persistent volume claim, the action fails with the following error:

    failed to get Nodes from nodeManager with err virtual center wasn't found in registry

    You can observe this issue in the following releases of vSphere Container Storage Plug-in: v2.0.2, v2.1.2, v2.2.2, v2.3.0, and v2.4.0.

    Workaround: Restart the vsphere-csi-controller pod by running the following command:

    kubectl rollout restart deployment vsphere-csi-controller -n vmware-system-csi

  • A migrated in-tree vSphere volume deleted by the in-tree vSphere plug-in remains on the CNS view of the vSphere Client

    Workaround: As a vSphere administrator, reconcile discrepancies in the Managed Virtual Disk Catalog. Follow this KB article: https://kb.vmware.com/s/article/2147750.

  • Volume expansion might fail when it is called simultaneously with pod creation

    This issue occurs when you resize the PVC and create a pod that uses that PVC simultaneously. In this case, pod creation might complete first, using the PVC at its original size. The volume expansion then fails because online resize is not supported in vSphere 7.0 Update 1.

    Workaround: Wait for the PVC to reach the FileSystemResizePending condition before attaching a pod to it.
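
    The following is a minimal sketch for checking the PVC conditions; the claim name is a placeholder.

    # Create the pod only after FileSystemResizePending appears among the
    # condition types.
    kubectl get pvc <pvc-name> -o jsonpath='{.status.conditions[*].type}'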

  • Deleting a PV before deleting a PVC leaves an orphan volume on the datastore

    Orphan volumes remain on the datastore, and an administrator needs to delete them manually by using the govc command.

    The upstream issue is tracked at: https://github.com/kubernetes-csi/external-provisioner/issues/546.

    Workaround: Do not attempt to delete a PV bound to a PVC. Only delete a PV if you know that the underlying volume in the storage system is gone.

    If orphan volumes were accidentally left on the datastore and you know the volume handles or First Class Disk (FCD) IDs of the deleted PVs, the storage administrator can delete those volumes by running the govc disk.rm <volume handle or FCD ID> command, as shown in the sketch below.
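
    The following is a minimal sketch of the cleanup, assuming govc is configured against vCenter Server; the datastore name and FCD ID are placeholders.

    # List the First Class Disks on the datastore to confirm the orphan volume
    # still exists.
    govc disk.ls -ds <datastore-name>
    # Delete the orphan volume by its volume handle / FCD ID.
    govc disk.rm -ds <datastore-name> <fcd-id>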

  • After deleting a standalone pod with an inline volume, the migrated vSphere volume is not cleaned up immediately

    This problem is related to the following Kubernetes issue: https://github.com/kubernetes/kubernetes/issues/103745.

    As a result of this issue, the migrated vSphere volume remains in the Terminating state in the cluster for a long time.

    Workaround: Wait for the pod to be garbage collected, or force delete the pod.
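
    The following is a minimal sketch of the force-delete option; the pod name and namespace are placeholders.

    kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force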

  • VirtualCenter change in the vsphere-config-secret is not honored until the vSphere Container Storage Plug-in Pod restarts

    Volume lifecycle operations fail until the controller pod restarts.

    Workaround: Restart the vsphere-csi-controller deployment pod.
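
    For example, the same rollout restart shown earlier applies here, assuming the default deployment name and namespace:

    kubectl rollout restart deployment vsphere-csi-controller -n vmware-system-csi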

  • A restart of vCenter Server might leave Kubernetes persistent volume claims in pending state

    Persistent volume claims that are being created while vCenter Server restarts might remain in the Pending state for one hour.

    After vSphere Container Storage Plug-in clears the pending cached tasks, new tasks to create the volumes are issued to the vCenter Server system, and the persistent volume claims can then go into the Bound state.

    Restarting vCenter Server while volumes are getting created might leave orphan volumes on the datastores.

    Workaround: If waiting one hour is too long for your SLA, restart the vSphere Container Storage Plug-in pod to clean up pending vCenter Server cached task objects for which the session has already been destroyed.

    Note: This action will leave an orphan volume on the datastore.
