After you install vSphere Container Storage Plug-in in a native Kubernetes cluster, you can upgrade the plug-in to a newer version.

Procedures in this section apply only to native, also called vanilla, Kubernetes clusters deployed in a vSphere environment. To upgrade vSphere with Tanzu, see vSphere with Tanzu Configuration and Management.

vSphere Container Storage Plug-in Upgrade Considerations and Guidelines

When you perform an upgrade, follow these guidelines:
  • Be familiar with installation prerequisites and procedures for vSphere Container Storage Plug-in. See Preparing for Installation of vSphere Container Storage Plug-in and Deploy the vSphere Container Storage Plug-in on a Native Kubernetes Cluster.
  • Ensure that roles and privileges in your vSphere environment are updated. For more information, see vSphere Roles and Privileges.
  • To upgrade to vSphere Container Storage Plug-in 2.3.0, you need a DNS forwarding configuration in the CoreDNS ConfigMap to help resolve vSAN file share hostnames (see the sample ConfigMap after this list). For more information, see Configure CoreDNS for vSAN File Share Volumes.
  • If you have ReadWriteMany (RWX) volumes backed by vSAN file service and deployed using vSphere Container Storage Plug-in, remount the volumes before you upgrade vSphere Container Storage Plug-in. See Remount ReadWriteMany Volumes Backed by vSAN File Service.
  • When upgrading from the Beta topology feature to the GA version in vSphere Container Storage Plug-in, follow these recommendations. For information about deployments with topology, see Deploy vSphere Container Storage Plug-in with Topology.
    • If you have used the topology feature in its Beta version on vSphere Container Storage Plug-in version 2.3 or earlier, upgrade vSphere Container Storage Plug-in to version 2.4.1 or later to be able to use the GA version of the topology feature.
    • If you have used the Beta topology feature and plan to upgrade vSphere Container Storage Plug-in to version 2.4.1 or later, continue using only the zone and region parameters.
    • If you do not specify a Label for a particular topology category while using the zone and region parameters in the configuration file, vSphere Container Storage Plug-in assumes the default Beta topology behavior and applies failure-domain.beta.kubernetes.io/XYZ labels to the nodes. You do not need to make any mandatory configuration changes before upgrading the driver from the Beta to the GA topology feature.
      Earlier vSphere Secret Configuration:
        [Labels]
        region = k8s-region
        zone = k8s-zone
      vSphere Secret Configuration Before the Upgrade:
        [Labels]
        region = k8s-region
        zone = k8s-zone
      Sample Labels on a Node After the Upgrade:
        Name:               k8s-node-0179
        Roles:              <none>
        Labels:             failure-domain.beta.kubernetes.io/region=region-1
                            failure-domain.beta.kubernetes.io/zone=zone-a
        Annotations:        ....
    • If you intend to use the topology GA labels after upgrading to vSphere Container Storage Plug-in 2.4.1 or later, make sure that no pre-existing StorageClasses or PV node affinity rules in your environment point to the Beta topology labels, and then make the following change in the vSphere configuration secret.
      Earlier vSphere Secret Configuration:
        [Labels]
        region = k8s-region
        zone = k8s-zone
      vSphere Secret Configuration Before the Upgrade:
        [Labels]
        region = k8s-region
        zone = k8s-zone
        [TopologyCategory "k8s-region"]
        Label = "topology.kubernetes.io/region"
        [TopologyCategory "k8s-zone"]
        Label = "topology.kubernetes.io/zone"
      Sample Labels on a Node After the Upgrade:
        Name:               k8s-node-0179
        Roles:              <none>
        Labels:             topology.kubernetes.io/region=region-1
                            topology.kubernetes.io/zone=zone-a
        Annotations:        ....
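
The following sample shows a minimal sketch of a CoreDNS ConfigMap with a conditional forwarder for the vSAN file share domain, assuming the default coredns ConfigMap in the kube-system namespace. The domain name vsanfs.example.com and the DNS server address 10.20.30.40 are placeholders; replace them with the file share domain and DNS server used in your environment, and keep the rest of your existing Corefile unchanged.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: coredns
    namespace: kube-system
  data:
    Corefile: |
      # Typical default server block; keep your cluster's existing settings here.
      .:53 {
          errors
          health
          ready
          kubernetes cluster.local in-addr.arpa ip6.arpa {
              pods insecure
              fallthrough in-addr.arpa ip6.arpa
          }
          prometheus :9153
          forward . /etc/resolv.conf
          cache 30
          loop
          reload
          loadbalance
      }
      # Conditional forwarder for vSAN file share hostnames.
      # vsanfs.example.com and 10.20.30.40 are placeholders.
      vsanfs.example.com:53 {
          errors
          cache 30
          forward . 10.20.30.40
      }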

Remount ReadWriteMany Volumes Backed by vSAN File Service

If you have ReadWriteMany (RWX) volumes backed by vSAN file service, use this procedure to remount the volumes before you upgrade vSphere Container Storage Plug-in.

This procedure is required if you upgrade from v2.0.1 to v2.3.0.
Note: If you upgrade from 2.3.0 to 2.4.0 or later, you do not need to perform these steps. In addition, upgrades from v2.2.2, v2.1.2, and v2.0.2 to version v2.3.0 or later do not require this procedure.

Perform the following steps during a maintenance window because the process might disrupt active I/O on the file share volumes used by application pods. If multiple replicas of a pod access the same file share volume, perform the steps on each mount point serially to minimize downtime and disruption.

Note: Use this task only when the vSphere Container Storage Plug-in node DaemonSet runs as a container. When it runs as a process on the TKGi platform, the task does not apply. However, you must perform these steps when TKGi is upgraded from the pod-based driver to the process-based driver. For more information, see the documentation at https://docs.pivotal.io/tkgi/1-12/vsphere-cns.html#uninstall-csi.

Procedure

  1. Find all ReadWriteMany (RWX) and ReadOnlyMany (ROX) volumes on the cluster.
    # kubectl get pv -o wide | grep 'RWX\|ROX'
     pvc-7e43d1d3-2675-438d-958d-41315f97f42e   1Gi        RWX            Delete           Bound    default/www-web-0   file-sc                 107m   Filesystem
  2. Find all nodes where the volume is attached.
    In the following example, the volume is attached and mounted on the k8s-worker3 node.
    # kubectl get volumeattachment | grep pvc-7e43d1d3-2675-438d-958d-41315f97f42e
     csi-3afe670705e55e0679cba3f013c78ff9603333fdae6566745ea5f0cb9d621b20   csi.vsphere.vmware.com   pvc-7e43d1d3-2675-438d-958d-41315f97f42e   k8s-worker3   true       22s
  3. To discover where the volume is mounted, log in to the k8s-worker3 node VM and use the following command.
    root@k8s-worker3:~# mount | grep pvc-7e43d1d3-2675-438d-958d-41315f97f42e
     10.83.28.38:/52d7e15c-d282-3bae-f64d-8851ad9d352c on /var/lib/kubelet/pods/43686ba4-d765-4378-807a-74049fca39ee/volumes/kubernetes.io~csi/pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.244.3.134,local_lock=none,addr=10.83.28.38)
  4. Unmount and remount the volume at the same location, using the same mount options that were originally used to mount the volume.
    For this step, the nfs-common package must be pre-installed on the worker node VMs (a sample install command appears after the last step of this procedure).
    1. Use the umount -fl command to unmount the volume.
      root@k8s-worker3:~# umount -fl /var/lib/kubelet/pods/43686ba4-d765-4378-807a-74049fca39ee/volumes/kubernetes.io~csi/pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount
    2. Remount the volume with the same mount options used originally.
      root@k8s-worker3:~# mount -t nfs4 -o sec=sys,minorversion=1  10.83.28.38:/52d7e15c-d282-3bae-f64d-8851ad9d352c /var/lib/kubelet/pods/43686ba4-d765-4378-807a-74049fca39ee/volumes/kubernetes.io~csi/pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount
  5. Confirm the mount point is accessible from the node VM.
    root@k8s-worker3:~# mount | grep pvc-7e43d1d3-2675-438d-958d-41315f97f42e
     10.83.28.38:/52d7e15c-d282-3bae-f64d-8851ad9d352c on /var/lib/kubelet/pods/43686ba4-d765-4378-807a-74049fca39ee/volumes/kubernetes.io~csi/pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.83.26.244,local_lock=none,addr=10.83.28.38)
    
     root@k8s-worker3:~# ls -la /var/lib/kubelet/pods/43686ba4-d765-4378-807a-74049fca39ee/volumes/kubernetes.io~csi/pvc-7e43d1d3-2675-438d-958d-41315f97f42e/mount
     total 4
     drwxrwxrwx 3 root root    0 Aug  9 16:48 .
     drwxr-x--- 3 root root 4096 Aug  9 18:40 ..
     -rw-r--r-- 1 root root    6 Aug  9 16:48 test
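
If the nfs-common package mentioned in step 4 is not installed on a worker node VM, you can add it before remounting. The following sketch assumes Debian or Ubuntu based worker nodes; on RHEL or CentOS based nodes the equivalent package is nfs-utils.

  root@k8s-worker3:~# apt-get update
  root@k8s-worker3:~# apt-get install -y nfs-common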

What to do next

After you have remounted all the vSAN file share volumes on the worker node VMs, upgrade vSphere Container Storage Plug-in by reinstalling it from its YAML manifests.

Upgrade vSphere Container Storage Plug-in of a Version Earlier than 2.3.0

If you use a vSphere Container Storage Plug-in version earlier than 2.3.0, you must first uninstall the earlier version and then install the version of your choice to perform an upgrade.

The following example illustrates an upgrade of vSphere Container Storage Plug-in from v2.2.0 to v2.3.0.

Procedure

  1. Uninstall the existing version of vSphere Container Storage Plug-in by deleting the manifests for your installed version. Released versions are listed at https://github.com/kubernetes-sigs/vsphere-csi-driver/tags.
    kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/deploy/vsphere-csi-controller-deployment.yaml
    kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/deploy/vsphere-csi-node-ds.yaml
    kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/rbac/vsphere-csi-controller-rbac.yaml
    kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/rbac/vsphere-csi-node-rbac.yaml
    After you run the above commands, wait for the vSphere Container Storage Plug-in controller pod and node pods to be deleted completely (a sample verification follows this procedure).
  2. Install vSphere Container Storage Plug-in of your choice, for example, v2.3.0.
    1. Create a new namespace for vSphere Container Storage Plug-in.
      kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.3.0/manifests/vanilla/namespace.yaml
    2. Copy vsphere-config-secret secret from kube-system namespace to the new namespace vmware-system-csi.
      kubectl get secret vsphere-config-secret --namespace=kube-system -o yaml | sed 's/namespace: .*/namespace: vmware-system-csi/' | kubectl apply -f -
    3. Delete vsphere-config-secret secret from kube-system namespace.
      kubectl delete secret vsphere-config-secret --namespace=kube-system
    4. Install vSphere Container Storage Plug-in.
      kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.3.0/manifests/vanilla/vsphere-csi-driver.yaml
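
As a quick check after the reinstall, you can confirm that the old pods are gone and that the new release is running. The following commands are a sketch that assumes the default object names from the vanilla manifests.

  # The pre-2.3.0 driver ran in the kube-system namespace; its pods must be gone.
  kubectl get pods --namespace=kube-system | grep vsphere-csi

  # The reinstalled driver runs in the vmware-system-csi namespace.
  kubectl get pods --namespace=vmware-system-csi
  kubectl get csidriver csi.vsphere.vmware.com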

Upgrade vSphere Container Storage Plug-in of a Version 2.3.0 or Later

If you use vSphere Container Storage Plug-in version 2.3.0 or later, you can perform a rolling upgrade.

Procedure

  1. If needed, apply changes to vsphere-config-secret in the vmware-system-csi namespace.
  2. Apply any necessary changes to the manifest pertaining to the release that you wish to use.
    For example, adjust the replica count in the vsphere-csi-controller deployment to match the number of control plane nodes in the cluster (see the sketch after this procedure).
  3. To upgrade vSphere Container Storage Plug-in 2.3.0 or later, run the following command.
    The following example uses 2.7.0 as a target version, but you can substitute it with any other version later than 2.3.0.
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.7.0/manifests/vanilla/vsphere-csi-driver.yaml
    Note: To take advantage of the latest bug fixes and feature updates, make sure to use the most recent version of vSphere Container Storage Plug-in. For versions and updates, check VMware vSphere Container Storage Plug-in 2.7 Release Notes.
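
The following sketch combines the previous steps, assuming version 2.7.0 as the target and a cluster with three control plane nodes. The vsphere-csi-node DaemonSet name is taken from the vanilla manifest; adjust names, version, and replica count to match your environment.

  # Download the manifest for the target release (v2.7.0 used as an example).
  curl -O https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.7.0/manifests/vanilla/vsphere-csi-driver.yaml

  # Edit the vsphere-csi-controller Deployment in the downloaded file, for example
  # to set replicas: 3 for a three-node control plane, and then apply the manifest.
  kubectl apply -f vsphere-csi-driver.yaml

  # Watch the rolling upgrade of the controller Deployment and the node DaemonSet.
  kubectl rollout status deployment vsphere-csi-controller --namespace=vmware-system-csi
  kubectl rollout status daemonset vsphere-csi-node --namespace=vmware-system-csi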

Enable Volume Snapshot and Restore After an Upgrade to Version 2.5.x or Later

If you have upgraded vSphere Container Storage Plug-in from version 2.4.x to version 2.5.x or later, you can enable the volume snapshot and restore feature.

Procedure

  1. If you haven't previously enabled the snapshot feature and installed snapshot components, perform the following steps:
    1. Enable the volume snapshot and restore functionality.
    2. Configure the maximum number of snapshots per volume.
  2. If you have previously enabled the snapshot feature or if any snapshot components already exist in the setup, follow these steps:
    1. Manually upgrade snapshot-controller and snapshot-validation-deployment to version 5.0.1.
    2. Enable the volume snapshot and restore functionality.
    3. Configure the maximum number of snapshots per volume (a sample sketch follows this procedure).
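
As a rough sketch, enabling the feature can look like the following. The ConfigMap name internal-feature-states.csi.vsphere.vmware.com and the global-max-snapshots-per-block-volume parameter are taken from the vanilla manifests and driver configuration, but treat them as assumptions and follow the corresponding topics for the authoritative steps and supported values.

  # Turn on the block volume snapshot feature switch.
  kubectl patch configmap internal-feature-states.csi.vsphere.vmware.com \
    --namespace=vmware-system-csi --type=merge \
    --patch '{"data":{"block-volume-snapshot":"true"}}'

  # Optionally limit snapshots per volume by adding a [Snapshot] section to the
  # csi-vsphere.conf file used to create the vsphere-config-secret, for example:
  #
  #   [Snapshot]
  #   global-max-snapshots-per-block-volume = 3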