
VMware vSphere Container Storage Plug-in 2.6 | 22 NOV 2023

Check for additions and updates to these release notes.

About 2.6 Release Notes

These Release Notes cover 2.6.x versions of VMware vSphere Container Storage Plug-in, previously called vSphere CSI Driver.

For known issues observed in Kubernetes, see vSphere CSI Driver - Known Issues.

What's New


Version 2.6.4

  • Telemetry enhancements. The cluster distribution type is now determined using the Kubernetes API server version.

  • Fixed an issue that occurred during new volume creation when a deleted node was added back to the cache.

  • Added validation of the vCenter user name; user names without a domain name are disallowed.

  • Fixed the "device not found" error during the volume detach operation.

Version 2.6.3

Includes fixes for CVE-2022-32149, CVE-2022-27664, and CVE-2022-21698.

Version 2.6.2

  • Support for node VM discovery when the CSI controller is upgraded to v2.5.0 or later, but some of the node DaemonSet Pods have not yet been upgraded to v2.5.0 or later because of the upgrade process on platforms such as VMware Tanzu Kubernetes Grid Integrated Edition. For details, see items 1971 and 1966 in GitHub.

  • Ability to suspend a specific datastore for volume provisioning using Cloud Native Storage Manager.

  • Fixes to CSI Unix Domain Socket endpoint URL parsing for Windows Node Daemonset Pods. Refer to 1998 for more details.

  • Fixes to ensure that correct Node UUID for Windows Node is published. See 1996 for more details.

Version 2.6.1

  • Improvements to the idempotency feature that allow the vSphere Container Storage Plug-in to run in a namespace other than vmware-system-csi. For more details, see 1946 in GitHub.

  • Changes that prevent re-discovering of the node topology after a syncer container restart. With this fix, you can use the Preferential Datastore feature. The node's topology is retained even when a node VM moves from one availability zone to another availability zone. For more details, see 1906 in GitHub.

  • A volume detach operation is marked as success when a node VM is deleted from vCenter Server. For more details, see 1879 in GitHub.

Version 2.6.0

Deployment Files

Important:

To ensure proper functionality, do not update the internal-feature-states.csi.vsphere.vmware.com configmap available in the deployment YAML file. VMware does not recommend activating or deactivating features in this configmap.


Version 2.6.4

https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v2.6.4/manifests/vanilla

Version 2.6.3

https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v2.6.3/manifests/vanilla

Version 2.6.2

https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v2.6.2/manifests/vanilla

Version 2.6.1

https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v2.6.1/manifests/vanilla

Version 2.6.0

https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v2.6.0/manifests/vanilla

Kubernetes Release

  • Minimum: 1.22

  • Maximum: 1.24

  • Note: If you upgrade the Kubernetes version to 1.24, drain one node at a time, upgrade the kubelet, and then uncordon the node. You must perform these steps for all nodes in the cluster. For more information, see Kubernetes PR 107065.
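The per-node upgrade sequence above can be sketched as a loop. The following is a dry-run sketch that only prints the commands it would run; the node names and the kubelet upgrade step are placeholders, not values from this document:

```shell
# Dry-run sketch of the one-node-at-a-time upgrade described above.
# NODES is a placeholder list; on a live cluster you could obtain it with:
#   kubectl get nodes -o jsonpath='{.items[*].metadata.name}'
NODES="worker-1 worker-2 worker-3"
for node in $NODES; do
  # 1. Drain the node so workloads move elsewhere.
  echo "kubectl drain ${node} --ignore-daemonsets --delete-emptydir-data"
  # 2. Upgrade the kubelet on the node (distribution-specific; placeholder step).
  echo "# upgrade kubelet to 1.24 on ${node}"
  # 3. Allow scheduling on the node again.
  echo "kubectl uncordon ${node}"
done
```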

Supported Sidecar Containers Versions

  • csi-provisioner: v3.2.1

  • csi-attacher: v3.4.0

  • csi-resizer: v1.4.0

  • livenessprobe: v2.7.0

  • csi-node-driver-registrar: v2.5.1

  • csi-snapshotter: v5.0.1

Resolved Issues


Version 2.6.2

  • Persistent volume fails to be detached from a node.

  • When a Kubernetes node VM is removed from vCenter Server, PVC delete operations fail.

Version 2.6.1

  • Persistent volume fails to be detached from a node.

  • When a Kubernetes node VM is removed from vCenter Server, PVC delete operations fail.

Known Issues

  • The health status for container file volumes may remain in the Unknown state

    In the vCenter UI, the health status for container file volumes may remain in the Unknown state for up to an hour.

    Workaround: Click Retest in the vCenter UI to view the latest health check results.

  • Delete volume tasks for an already deleted volume continue to fail in the vCenter UI

    Persistent Volumes remain in the Supervisor cluster in the Terminating state even after all PVCs are deleted. The vCenter tasks view fills with failed delete volume tasks.

    Workaround:

    1. Authenticate with the Supervisor cluster using the vSphere plug-in for kubectl:

      kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME

    2. Get the Persistent Volume name that is in the Terminating state:

      kubectl get pv

    3. Note down the volume handle from the persistent volume:

      kubectl describe pv <pv-name>

    4. Delete the CnsVolumeOperationRequest Custom resource in the Supervisor cluster:

      kubectl delete cnsvolumeoperationrequest delete-<volume-handle>
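    The steps above can be combined into a short script. This dry-run sketch only prints the cleanup command; PV_NAME and the volume handle are placeholder values, and on a live cluster the handle would come from kubectl describe pv, or the jsonpath query shown in the comment:

```shell
# Sketch: build the name of the stale CnsVolumeOperationRequest for a PV
# stuck in Terminating, and print the delete command (dry run).
# PV_NAME and VOLUME_HANDLE are placeholders; on a live cluster:
#   VOLUME_HANDLE=$(kubectl get pv "$PV_NAME" -o jsonpath='{.spec.csi.volumeHandle}')
PV_NAME="example-pv"
VOLUME_HANDLE="4d1a1c5e-0000-0000-0000-000000000000"
CR_NAME="delete-${VOLUME_HANDLE}"
echo "kubectl delete cnsvolumeoperationrequest ${CR_NAME}"
```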

  • Pod is stuck in the Error state when there is a site failure

    During the mount operation, when there is a site failure, the Pod is stuck in the Error state because the volume cannot be found, with the error "disk: <disk-uuid> not attached to node".

    Workaround:

    Delete and recreate the pod when the nodes from the failed site are restarted on another site or a node is recovered on the existing site.

  • Volume provisioning using the datastoreURL parameter in StorageClass does not work correctly when this datastoreURL points to a shared datastore mounted across datacenters

    In a non-topology setup, if nodes span across multiple datacenters, the vSphere Container Storage plug-in fails to find shared datastores accessible to all nodes. This issue occurs because datastores shared across datacenters have different datastore MoRefs, but use the same datastore URL. Current logic in the vSphere Container Storage plug-in does not consider this fact.

    In a topology setup, you can have multiple availability zones across datacenters with datastores shared across availability zones. If the volume is provisioned on that shared datastore, the vSphere Container Storage plug-in might publish node affinity rules for an availability zone different from the one you requested in the StorageClass.

    Workaround:

    • In a non-topology setup, do not spread node VMs across multiple datacenters.

    • In a topology setup spanning across multiple datacenters, do not keep datastores shared across multiple datacenters.

  • After a vSphere upgrade, vSphere Container Storage Plug-in might not pick up new vSphere features

    After a vSphere upgrade is performed in the background, the vSphere Container Storage Plug-in controller deployment needs to be restarted. This action is required to make vSphere Container Storage Plug-in pick up the new features compatible with the vSphere version.

    Workaround:

    Run the following command: kubectl rollout restart deployment vsphere-csi-controller -n vmware-system-csi

  • Persistent volume fails to be detached from a node (resolved in v2.6.1 and v2.6.2)

    This problem might occur when the Kubernetes node object along with the node VM on vCenter Server have been deleted without draining the Kubernetes node.

    When only the node object is deleted from Kubernetes, volumes get detached from the node VM present on vCenter Server.

    Workaround:

    Delete the VolumeAttachment object associated with the deleted node object.

    kubectl patch volumeattachments.storage.k8s.io csi-<uuid> -p '{"metadata":{"finalizers":[]}}' --type=merge 
    kubectl delete volumeattachments.storage.k8s.io csi-<uuid>

    The best practice is to drain the Kubernetes node object before removing it from the Kubernetes cluster or deleting from the vCenter Server inventory.
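    When several volumes were attached to the deleted node, the two commands above have to be repeated for each VolumeAttachment. The following dry-run sketch prints those commands for a list of attachment names; the csi-... names are placeholders, and on a live cluster you could list the attachments for a node with the jsonpath query shown in the comment:

```shell
# Dry-run sketch: print the patch/delete commands for each leftover
# VolumeAttachment of a deleted node. VA_NAMES is a placeholder list;
# on a live cluster, for a node named NODE you could use:
#   kubectl get volumeattachments.storage.k8s.io \
#     -o jsonpath="{range .items[?(@.spec.nodeName=='NODE')]}{.metadata.name}{'\n'}{end}"
VA_NAMES="csi-1111 csi-2222"
for va in $VA_NAMES; do
  echo "kubectl patch volumeattachments.storage.k8s.io ${va} -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge"
  echo "kubectl delete volumeattachments.storage.k8s.io ${va}"
done
```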

  • Under certain conditions, you might be able to provision more than three snapshots per volume despite default limitations

    By default, vSphere Container Storage Plug-in allows a maximum of three snapshots per volume. This limitation is applicable only when snapshot requests are at different time intervals. If you send multiple and parallel requests to create a snapshot for the volume at the same time, the system allows you to provision more than three snapshots per volume. Although no volume or snapshot operations are impacted, exceeding the maximum number of snapshots per volume is not recommended.

    Workaround: Avoid creating more than three snapshots on a single volume.

  • When a Kubernetes node VM is removed from vCenter Server, PVC delete operations fail (resolved in v2.6.1 and v2.6.2)

    When a Kubernetes node VM is removed from vCenter Server while volumes are attached to it, the persistent volume claims associated with those volumes cannot be deleted until you remove the VolumeAttachment objects associated with them.

    Workaround:

    1. Delete volume attachments associated with the problematic node.

      kubectl patch volumeattachments.storage.k8s.io csi-<uuid> -p '{"metadata":{"finalizers":[]}}' --type=merge
      kubectl delete volumeattachments.storage.k8s.io csi-<uuid>
      
    2. Remove the node from the Kubernetes cluster.

    The best practice is to drain the Kubernetes node object before removing it from the Kubernetes cluster or deleting from the vCenter Server inventory.

  • Mounting a PVC on a Windows node fails with Size Not Supported error

    If the PVC was expanded before a filesystem was created on it, the Pod that uses this PVC fails while mounting the volume and remains in the Pending state. You can observe an error of the following type:

    MountVolume.MountDevice failed while expanding volume for volume "pvc-021b5d9c-ba3a-4c9e-ae22-df2cbfce9e67" : Expander.NodeExpand failed to expand the volume : rpc error: code = Internal desc = error when resizing filesystem on volume "6d5caf4c-36ef-49f1-a804-0269c785b40d" on node: error when resizing filesystem on devicePath \\?\Volume{3df70ab1-1cfe-4047-bd7f-a5d6e44b754e}\ and volumePath \var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-021b5d9c-ba3a-4c9e-ae22-df2cbfce9e67\globalmount, err: rpc error: code = Unknown desc = error resizing volume. cmd: Get-Volume -UniqueId "\\?\Volume{3df70ab1-1cfe-4047-bd7f-a5d6e44b754e}\" | Get-Partition | Resize-Partition -Size 3204431360, output: Resize-Partition : Size Not Supported

    Workaround: Delete and re-create the PVC with the expanded size to use it in applications.

  • When a site failure occurs, pods that were running on the worker nodes in that site remain in Terminating state

    When a site failure causes all Kubernetes nodes and ESXi hosts in the cluster on that site to fail, the pods that were running on the worker nodes in that site will be stuck in Terminating state.

    Workaround: Start some of the ESXi hosts in the site as soon as possible, so that vSphere HA can restart the failed Kubernetes nodes. This action ensures that the replacement pods begin to come up.

  • After a recovery from network partition or host failure, some nodes in the cluster do not have INTERNAL-IP or EXTERNAL-IP

    After a recovery from a network partition or host failure, CPI is unable to assign INTERNAL-IP or EXTERNAL-IP to the node when it is added back to the cluster.

    Workaround:

    1. De-register the affected node.

      # kubectl delete node node-name

    2. Re-register the affected node by restarting the kubelet service within the affected node.

      # systemctl restart kubelet

    3. Wait for the node to register with the cluster.

    4. Taint the affected nodes.

      # kubectl taint node node-name node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule

    5. Wait for CPI to initialize the node. Make sure ProviderID is set and IP address is present for the node.
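    The five steps above can be sketched end to end. This dry-run sketch only prints the commands; node-name is a placeholder, and the kubelet restart has to happen on the affected node itself:

```shell
# Dry-run sketch of the node re-registration sequence described above.
# NODE is a placeholder for the affected node's name.
NODE="node-name"
# 1. De-register the node.
echo "kubectl delete node ${NODE}"
# 2. On the affected node itself, restart kubelet to re-register it.
echo "# on ${NODE}: systemctl restart kubelet"
# 3-4. After the node re-registers, taint it so CPI initializes it.
echo "kubectl taint node ${NODE} node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule"
# 5. Verify that CPI has set the ProviderID for the node.
echo "kubectl get node ${NODE} -o jsonpath='{.spec.providerID}'"
```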

  • After recovering from a network partition or host failure, the control plane node becomes a worker node

    During a network partition or host failure, the CPI might delete the node from the Kubernetes cluster if a VM is not found. After the recovery, the control plane node might become a worker node. As a result, pods might get scheduled on it unexpectedly.

    You can fix this issue in two ways:

    Workaround 1

    1. Taint the affected node.

      # kubectl taint node <node name> node-role.kubernetes.io/control-plane:NoSchedule

    2. Delete the node from cluster.

      # kubectl delete node <node name>

    3. Restart the kubelet service within the affected node.

      # systemctl restart kubelet

    4. Wait for the node to register with the cluster and add labels to the affected node.

      # kubectl label node <node name> node-role.kubernetes.io/control-plane=

    5. Delete the application pods that are already scheduled on the control plane node so that they are rescheduled on worker nodes.

      # kubectl delete pod <pod name>

    Workaround 2

    1. Add the SKIP_NODE_DELETION=true environment variable to the daemonset.

      # kubectl set env daemonset vsphere-cloud-controller-manager -n kube-system SKIP_NODE_DELETION=true

    2. Verify whether the environment variable has been applied correctly.

      # kubectl describe daemonset vsphere-cloud-controller-manager -n kube-system

    3. Terminate the running pods. The replacement pods pick up the environment variable.

      # kubectl delete pod <pod name>

    4. Wait for the pod to start.

    5. View logs with `kubectl logs [POD_NAME] -n kube-system`, and confirm that everything is healthy.

    Note: The second method might result in leftover node objects and might introduce unexpected behaviors.

  • When a Kubernetes worker node shuts down non-gracefully, pods on that node remain in Terminating state

    Pods are not rescheduled to other healthy worker nodes. As a result, the application might face downtime or run in degraded mode, depending on the number of replicas present on the worker node that experienced the non-graceful shutdown.

    Workaround: Forcefully delete the pods that remain in terminating state.

  • After recovery from a network partition or host failure, pods might remain in the ContainerCreating state

    During a network partition or host failure, CPI might delete the node from the Kubernetes cluster if a VM is not found. After recovery, the nodes might not be automatically added back to the cluster. This results in pods remaining in the ContainerCreating state with the error message "Volume not attached according to node status for volume".

    Workaround:

    If the issue occurs on a control plane node, perform the following steps.

    1. Restart kubelet service within the affected node.

      # systemctl restart kubelet

    2. Wait for the node to register with the cluster. Add labels and taints to the affected node.

      # kubectl taint node <node name> node-role.kubernetes.io/control-plane:NoSchedule

      # kubectl label node <node name> node-role.kubernetes.io/control-plane=

    If the issue affects a worker node, perform the following steps.

    1. Restart kubelet service within the affected node.

      # systemctl restart kubelet

    2. Taint the affected node(s).

      # kubectl taint node <node-name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule

  • Changes to Datacenter and Port entries in the vsphere-config-secret are not applied until you restart the vSphere Container Storage Plug-in pod

    After you make changes to the Datacenter and Port entries in the vsphere-config-secret, volume life cycle operations fail.

    Workaround: Restart the vsphere-csi-controller deployment pod.

  • Volume life cycle operations might be delayed during vSAN network partitioning

    You can observe some delays in Pod creation and Pod deletion during network partitioning on a vSAN cluster. After vSphere Container Storage Plug-in retries all failed operations, the operations succeed.

    This issue might occur because vCenter Server cannot reach the correct host during network partitioning. The volume fails to be created if the request reaches a host that cannot create the volume. However, during a Kubernetes retry, the volume can be created if it reaches the right host.

    Workaround: None.

  • CreateVolume request fails with an error after you change the hostname or IP of vCenter Server in the vsphere-config-secret

    After you change the hostname or IP of vCenter Server in the vsphere-config-secret and then try to create a persistent volume claim, the action fails with the following error:

    failed to get Nodes from nodeManager with err virtual center wasn't found in registry

    You can observe this issue in the following releases of vSphere Container Storage Plug-in: v2.0.2, v2.1.2, v2.2.2, v2.3.0, and v2.4.0.

    Workaround: Restart the vsphere-csi-controller pod by running the following command:

    kubectl rollout restart deployment vsphere-csi-controller -n vmware-system-csi

  • When you perform various operations on a volume or a node VM, you might observe error messages that appear in vCenter Server

    vCenter Server might display the following error messages:

    • When attaching a volume: com.vmware.vc.InvalidController : "The device '0' is referring to a nonexisting controller '1,001'."

    • When detaching a volume: com.vmware.vc.NotFound : "The object or item referred to could not be found."

    • When resizing a volume: com.vmware.vc.InvalidArgument : "A specified parameter was not correct: spec.deviceChange.device"

    • When updating: com.vmware.vc.Timedout : "Operation timed out."

    • When reconfiguring a VM: com.vmware.vc.InsufficientMemoryResourcesFault : "The available Memory resources in the parent resource pool are insufficient for the operation."

    In addition, you can observe a few less frequent errors specific to the CSI migration feature in vSphere 7.0 Update 2.

    For update:

    • Cannot find the device '2,0xx', which is referenced in the edit or remove device operation.

    • A general system error occurred: Failed to lock the file: api = DiskLib_Open, _diskPath->CValue() = /vmfs/volumes/vsan:52c77e7d8115ccfa-3ec2df6cffce6713/782c2560-d5e7-0e1d-858a-ecf4bbdbf874/kubernetes-dynamic-pvc-f077b8cd-dbfb-4ba6-a9e8-d7d8c9f4c578.vmdk

    • The operation is not allowed in the current state.

    For reconfigure: Invalid configuration for device '0'.

    Workaround: Most of these errors are resolved after a retry from CSI.

  • A StatefulSet replica pod remains in the Terminating state after you delete the StatefulSet

    Typically, the problem occurs after you perform the following steps:

    1. Create volumes in the Kubernetes cluster using vSphere Cloud Provider (VCP).

    2. Enable the CSIMigration feature flags on kube-controller-manager and kubelet, and install vSphere Container Storage Plug-in.

    3. Enable the csi-migration feature state to migrate the volumes that you previously created using VCP.

    4. Create a statefulset using the migrated volumes and continue to use them in the replica set pods.

    5. When you no longer need the application pods to run in the Kubernetes cluster, perform the delete operation on the statefulset.

    This action might occasionally result in replica set pods remaining in the Terminating state.

    Workaround: Force delete the replica set pods in terminating state:

    kubectl delete pod replica-pod-name --force --grace-period=0
