VMware vSphere Container Storage Plug-in 2.2 | 05 NOV 2021

Check for additions and updates to these release notes.
These Release Notes cover versions 2.2.x of VMware vSphere Container Storage Plug-in, previously called vSphere CSI Driver.
Version | What's New |
---|---|
Version 2.2.4 | |
Version 2.2.3 | |
Version 2.2.2 | |
Version 2.2.1 | No new features are released in version 2.2.1. This is a patch release to fix a critical detach volume error observed in version 2.2.0. See https://github.com/kubernetes-sigs/vsphere-csi-driver/pull/840. |
Version 2.2.0 | |
Release 2.2.4
Release 2.2.3
Release 2.2.2
vSAN file share volumes become inaccessible after the Node DaemonSet Pod of vSphere Container Storage Plug-in restarts. For more information, see https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/1216.
Release 2.2.0
Deleting a PV while the PVC is in Bound status with reclaim policy Delete leaves an orphan volume on the datastore when StorageObjectInUseProtection is disabled on the Kubernetes cluster. For details, see the known issue below.
When an in-tree vSphere volume refers to a datastore in the /datacenter-name/datastore/default-datastore format, migration of the volume fails with a failed to get all the datastores error.
After you delete a Pod with an in-tree inline vSphere volume, the Pod permanently remains in the terminating state
Workaround: Forcefully delete the pod and manually unmount the associated volumes from the node VM.
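A minimal sketch of this workaround; the Pod name, namespace, Pod UID, and volume name below are illustrative placeholders, and the umount command must run on the node VM where the Pod was scheduled:
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force
sudo umount /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~vsphere-volume/<volume-name>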
This issue is fixed in vSphere Container Storage Plug-in version 2.5.0. For more information, see https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/1466.
Persistent volume fails to be detached from a node
This problem might occur when the Kubernetes node object or the node VM in vCenter Server has been deleted without draining the Kubernetes node.
Workaround: Delete the VolumeAttachment object associated with the deleted node object. First remove its finalizers, then delete it:
kubectl patch volumeattachments.storage.k8s.io csi-<uuid> -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl delete volumeattachments.storage.k8s.io csi-<uuid>
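To identify the VolumeAttachment objects that still reference the deleted node, you can filter the list by node name; the node name below is a placeholder:
kubectl get volumeattachments.storage.k8s.io | grep <deleted-node-name>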
A PV provisioned before vSphere Container Storage Plug-in migration was enabled fails to be attached after the migration is enabled
This happens only when the datastore name contains characters that cause trailing characters of the vmdk path to be trimmed. When the vmdk path is trimmed, the volume cannot be registered with CNS. As a result, the volume cannot be attached to the node by vSphere Container Storage Plug-in.
For more information, see https://github.com/kubernetes-sigs/vsphere-csi-driver/pull/1451.
Workaround: This issue is resolved in version 2.4.1. Upgrade vSphere Container Storage Plug-in to version 2.4.1.
CreateVolume request fails with an error after you change the hostname or IP of vCenter Server in the vsphere-config-secret
After you change the hostname or IP of vCenter Server in the vsphere-config-secret and then try to create a persistent volume claim, the action fails with the following error:
failed to get Nodes from nodeManager with err virtual center wasn't found in registry
You can observe this issue in the following releases of vSphere Container Storage Plug-in: v2.0.2, v2.1.2, v2.2.2, v2.3.0, and v2.4.0.
Workaround: Restart the vsphere-csi-controller pod by running the following command:
kubectl rollout restart deployment vsphere-csi-controller -n kube-system
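To confirm that the restart completed, you can watch the rollout status. This assumes the deployment runs in the kube-system namespace, as in the command above:
kubectl rollout status deployment vsphere-csi-controller -n kube-system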
A migrated in-tree vSphere volume deleted by the in-tree vSphere plug-in remains on the CNS view of the vSphere Client
Migrated in-tree vSphere volumes deleted by the in-tree vSphere plug-in remain on the CNS view of the vSphere Client
Workaround: As a vSphere administrator, reconcile discrepancies in the Managed Virtual Disk Catalog. Follow this KB article: https://kb.vmware.com/s/article/2147750.
Volume expansion might fail when it is called simultaneously with pod creation
This issue occurs when you resize the PVC and create a pod using that PVC simultaneously. In this case, pod creation might complete first, using the PVC with the original size. Volume expansion then fails because online resize is not supported in vSphere 7.0 Update 1.
Workaround: Wait for the PVC to reach the FileSystemResizePending condition before attaching a pod to it.
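To check whether the PVC has reached that condition, you can inspect its status conditions and look for the FileSystemResizePending type; the PVC name below is a placeholder:
kubectl get pvc <pvc-name> -o jsonpath='{.status.conditions}'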
Deleting a PV before deleting a PVC leaves orphan volume on the datastore
Orphan volumes remain on the datastore. An administrator needs to delete those volumes manually using the govc command.
The upstream issue is tracked at: https://github.com/kubernetes-csi/external-provisioner/issues/546.
Workaround: Do not attempt to delete a PV bound to a PVC. Only delete a PV if you know that the underlying volume in the storage system is gone.
If you accidentally left orphan volumes on the datastore and know the volume handles or First Class Disk (FCD) IDs of the deleted PVs, the storage administrator can delete the volumes using the govc disk.rm <volume-handle or FCD ID> command.
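A minimal sketch of this cleanup, assuming govc is already configured to reach your vCenter Server; the datastore name and FCD ID below are illustrative placeholders:
govc disk.ls -ds <datastore-name>
govc disk.rm <fcd-id>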
vSAN file share volumes become inaccessible after the Node DaemonSet Pod of vSphere Container Storage Plug-in restarts
Pods cannot read or write data on vSAN file share volumes, that is, ReadWriteMany and ReadOnlyMany vSphere Container Storage Plug-in volumes.
Workaround: Remount the vSAN file service volumes used by the application pods at the same location, directly from the node VM guest OS.
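A minimal sketch of the remount, assuming the file share is exported over NFS; the share endpoint and mount point below are illustrative placeholders, not values from this document:
mount | grep <file-share-volume-id>
sudo umount <stale-mount-point>
sudo mount -t nfs <vsan-file-service-ip>:/<share-path> <stale-mount-point>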
This issue is resolved in version 2.2.2.
Changes to user, ca-file, and VirtualCenter in the vsphere-config-secret are not honored until the vSphere Container Storage Plug-in Pod restarts
Volume lifecycle operations fail until the controller pod restarts.
Workaround: Restart the vsphere-csi-controller deployment Pod.
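The same rollout restart shown earlier applies here; this assumes the deployment runs in the kube-system namespace:
kubectl rollout restart deployment vsphere-csi-controller -n kube-system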
A restart of vCenter Server might leave Kubernetes persistent volume claims in pending state
Persistent volume claims that are being created at the time vCenter Server is restarting might remain in pending state for one hour.
After vSphere Container Storage Plug-in clears up the pending cached tasks, new tasks to create volumes are issued to the vCenter Server system, and the persistent volume claims can then go into Bound state.
Restarting vCenter Server while volumes are getting created might leave orphan volumes on the datastores.
Workaround: If a one-hour wait is too long for your SLA, restart the vSphere Container Storage Plug-in Pod to clean up the pending vCenter Server cached task objects for which the session is already destroyed.
Note: This action will leave an orphan volume on the datastore.