VMware vSphere Container Storage Plug-in 2.1 | 05 NOV 2021

Check for additions and updates to these release notes.

About 2.1 Release Notes

These Release Notes cover versions 2.1.0, 2.1.1, and 2.1.2 of VMware vSphere Container Storage Plug-in.

What's New

  • Version 2.1.2: No new features are released in version 2.1.2. This is a patch release that fixes an issue observed in version 2.1.1.
  • Version 2.1.1: No new features are released in version 2.1.1. This is a patch release that fixes a lock contention issue observed in version 2.1.0.
  • Version 2.1.0: CSI migration for in-tree vSphere volumes on vSphere 7.0 Update 1. This feature requires that you upgrade to the Kubernetes 1.19 release.

Deployment Files

Kubernetes Releases

  • Minimum: 1.17
  • Maximum: 1.19

Note: For vSphere Container Storage Plug-in Migration, the minimum Kubernetes version requirement is 1.19.0.

Sidecar Container Versions

  • csi-provisioner - v2.0.0
  • csi-attacher - v3.0.0
  • csi-resizer - v1.0.0
  • livenessprobe - v2.1.0
  • csi-node-driver-registrar - v2.0.1

Resolved Issues

  • Version 2.1.2: vSAN file share volumes become inaccessible after the Node DaemonSet Pod of vSphere Container Storage Plug-in restarts.
  • Version 2.1.1: When a Pod is rescheduled to a new node, a lock contention might occur, which delays the volume getting detached from the old node and attached to the new node.
  • Version 2.1.0: When multiple PVCs and Pods with the same name are present on the cluster and a volume gets de-registered or lost from the vCenter Server CNS database, the Syncer does not re-register the volume.

Known Issues

  • CreateVolume request fails with an error after you change the hostname or IP of vCenter Server in the vsphere-config-secret

    After you change the hostname or IP of vCenter Server in the vsphere-config-secret and then try to create a persistent volume claim, the action fails with the following error:

    failed to get Nodes from nodeManager with err virtual center wasn't found in registry

    You can observe this issue in the following releases of vSphere Container Storage Plug-in: v2.0.2, v2.1.2, v2.2.2, v2.3.0, and v2.4.0.

    Workaround: Restart the vsphere-csi-controller pod by running the following command:

    kubectl rollout restart deployment vsphere-csi-controller -n kube-system

  • When a static persistent volume is re-created with the same PV name, the volume is not registered as a container volume with vSphere

    As a result of this problem, attach or delete operations cannot be performed on this persistent volume.

    Workaround: Wait for one hour before re-creating the static persistent volume using the same name.

  • Persistent volume fails to be detached from a node

    This problem might occur when the Kubernetes node object or the node VM on vCenter Server has been deleted without draining the Kubernetes node.

    Workaround:

    1. Detach the volume manually from the node VM associated with the deleted node object.
    2. Delete the VolumeAttachment object associated with the deleted node object.
      kubectl patch volumeattachments.storage.k8s.io csi-<uuid> -p '{"metadata":{"finalizers":[]}}' --type=merge 
      kubectl delete volumeattachments.storage.k8s.io csi-<uuid>

  • Metadata syncer container deletes the volume physically from the datastore when you delete a Persistent Volume with Bound status and reclaim policy Delete while StorageObjectInUseProtection is disabled on Kubernetes Cluster

    As a result, the persistent volume claim goes into the Lost status. The volume cannot be recovered.

    Workaround: Do not disable StorageObjectInUseProtection, and do not delete the Persistent Volume directly without first deleting the PVC.

  • A migrated in-tree vSphere volume deleted by the in-tree vSphere plug-in remains on the CNS view of the vSphere Client

    Workaround: As a vSphere administrator, reconcile discrepancies in the Managed Virtual Disk Catalog. Follow this KB article: https://kb.vmware.com/s/article/2147750.

  • Volume expansion might fail when it is called simultaneously with pod creation

    This issue occurs when you resize the PVC and create a pod using that PVC simultaneously. In this case, pod creation might complete first, using the PVC with its original size. Volume expansion then fails because online resize is not supported in vSphere 7.0 Update 1.

    Workaround: Wait for the PVC to reach the FileVolumeResizePending condition before attaching a pod to it.
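    To confirm that the PVC has reached the resize-pending state before creating the Pod, you can inspect the PVC's status conditions; the PVC name below is a placeholder:

```shell
# Print the PVC's status condition types (PVC name is a placeholder).
kubectl get pvc my-pvc -o jsonpath='{.status.conditions[*].type}'

# Create the Pod only after the output includes FileVolumeResizePending,
# since online resize is not supported in vSphere 7.0 Update 1.
```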

  • Deleting a PV before deleting a PVC leaves orphan volume on the datastore

    Orphan volumes remain on the datastore. An administrator needs to delete those volumes manually using the govc command.

    The upstream issue is tracked at: https://github.com/kubernetes-csi/external-provisioner/issues/546.

    Workaround: Do not attempt to delete a PV bound to a PVC. Only delete a PV if you know that the underlying volume in the storage system is gone.

    If you accidentally left orphan volumes on the datastore and know the volume handles or First Class Disk (FCD) IDs of the deleted PVs, the storage administrator can delete the volumes using the govc disk.rm command with the volume handle or FCD ID.
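    As a sketch, assuming the govc CLI is already configured against your vCenter Server, the orphan can be located and removed as follows; the datastore name and FCD ID below are placeholders:

```shell
# List First Class Disks (FCDs) on the datastore to locate the orphan
# (datastore name is a placeholder; use your own).
govc disk.ls -ds vsanDatastore

# Delete the orphan volume by its volume handle / FCD ID (placeholder).
govc disk.rm -ds vsanDatastore <fcd-id>
```

    Cross-check the FCD ID against the volumeHandle of the deleted PV before removing, because disk.rm permanently destroys the backing disk.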

  • When the in-tree vSphere plug-in is configured to use default datastore in /datacenter-name/datastore/default-datastore format, migration of the volume fails

    The issue is mentioned here: https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/628.

    When the issue occurs, the in-tree vSphere volumes fail to migrate successfully.

    The issue is not observed when the default datastore is configured in default-datastore-name format.

    Workaround: This issue is fixed in version 2.2.0. Upgrade the plug-in to 2.2.0.

  • When a Pod is rescheduled to a new node, a lock contention might occur, which causes a delay in the volume getting detached from the old node and attached to the new node

    When this issue occurs, rescheduled Pods remain in Pending state for a prolonged time.

    Workaround: This issue is fixed in version 2.1.1. Upgrade the plug-in to 2.1.1.

  • When a pod that uses a PVC is rescheduled to another node while the metadatasyncer is down, fullsync might fail with an error

    You might observe the following error: Duplicated entity for each entity type in one cluster is found.

    This issue might be due to CNS holding stale volume metadata.

    Workaround: This issue is fixed in version 2.2.0. Upgrade the plug-in to 2.2.0.

  • If no datastores are present in any of the data centers in your vSphere environment, file volume provisioning fails with the failed to get all the datastores error

    File volume provisioning by the vSphere Container Storage plug-in keeps failing.

    Workaround: Either remove the ReadOnly privilege on this data center for the user listed in the vsphere-config-secret secret or add a datastore to this data center. This issue is fixed in version 2.2.0.

  • vSAN file share volumes become inaccessible after Node DaemonSet Pod of vSphere Container Storage Plug-in restarts

    Pods cannot read or write data on vSAN file share volumes, that is, ReadWriteMany and ReadOnlyMany volumes provisioned by vSphere Container Storage Plug-in.

    Workaround: Remount the vSAN file share volumes used by the application pods at the same location, directly from the node VM guest OS.
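    A minimal sketch of the remount from the node VM guest OS, assuming the file shares are NFSv4 mounts under the kubelet pods directory; the server address, share, and path components are placeholders you take from your own mount table:

```shell
# On the affected node VM, list NFS mounts backing vSAN file share volumes.
mount -t nfs4

# For each stale mount, unmount and remount at the same location
# (server, share, and path values are placeholders from the output above).
umount -f /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount
mount -t nfs4 <file-service-ip>:/<share-path> /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount
```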

    This issue is resolved in version 2.1.2.

  • Changes to user, ca-file, and VirtualCenter in the vsphere-config-secret are not honored until the vSphere Container Storage Plug-in Pod restarts

    Volume lifecycle operations fail until the controller pod restarts.

    Workaround: Restart the vsphere-csi-controller deployment Pod.
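    Assuming the plug-in is deployed in the kube-system namespace, the restart can be performed with the same command used for the vCenter hostname change issue earlier in these notes:

```shell
# Recreate the controller Pod so it re-reads vsphere-config-secret.
kubectl rollout restart deployment vsphere-csi-controller -n kube-system
```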

  • A restart of vCenter Server might leave Kubernetes persistent volume claims in pending state

    Persistent volume claims that are being created at the time vCenter Server is restarting might remain in pending state for one hour.

    After vSphere Container Storage Plug-in clears the pending cached tasks, new tasks to create volumes are issued to the vCenter Server system, and the persistent volume claims can then go into the Bound state.

    Restarting vCenter Server while volumes are getting created might leave orphan volumes on the datastores.

    Workaround: If waiting one hour is too long for your SLA, restart the vSphere Container Storage Plug-in Pod to clean up pending vCenter Server cached task objects whose session is already destroyed.

    Note: This action will leave an orphan volume on the datastore.
