You can migrate in-tree vSphere volumes to vSphere Container Storage Plug-in. After you migrate the in-tree vSphere volumes, vSphere Container Storage Plug-in performs all subsequent operations on migrated volumes.

vSphere Container Storage Plug-in and CNS provide functionality that is not available with the in-tree vSphere volume plug-in. For information, see Supported Kubernetes Functionality and vSphere Functionality Supported by vSphere Container Storage Plug-in.

Note:
  • Migration of in-tree vSphere volumes to CSI does not work with Kubernetes version 1.29.0. See https://github.com/kubernetes/kubernetes/issues/122340.

    Use Kubernetes version 1.29.1 or later.

  • Kubernetes will deprecate the in-tree vSphere volume plug-in, and it will be removed in a future Kubernetes release. Volumes provisioned using the vSphere in-tree plug-in will not get new features supported by the vSphere Container Storage Plug-in.
  • Kubernetes provides a seamless procedure to help migrate in-tree vSphere volumes to vSphere Container Storage Plug-in. After you migrate the in-tree vSphere volumes to vSphere Container Storage Plug-in, all subsequent operations on migrated volumes are performed by the vSphere Container Storage Plug-in. The migrated vSphere volume will not get additional capabilities supported by vSphere Container Storage Plug-in.

Considerations for Migration of In-Tree vSphere Volumes

When you prepare to use the vSphere Container Storage Plug-in migration, consider the following items.

  • vSphere version 7.0 P07, or vSphere version 8.0 Update 2 or later, is recommended for migrating in-tree vSphere volumes to vSphere Container Storage Plug-in.
  • vSphere Container Storage Plug-in migration is released as a Beta feature in Kubernetes 1.19. For more information, see Release note announcement.
  • If you plan to use vSphere Container Storage Plug-in version 3.0 or 3.1 for migrated volumes, use the latest patch version: 3.0.3 or 3.1.1, respectively. These patch versions include the fix for the issue https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/2534. This issue occurs when both CSI migration and list-volumes functionality are enabled.
  • Kubernetes 1.19 release deprecates vSAN raw policy parameters for the in-tree vSphere volume plug-in. These parameters will be removed in a future release. For more information, see Deprecation Announcement.
  • The following vSphere in-tree StorageClass parameters are not supported after you enable migration:
    • hostfailurestotolerate
    • forceprovisioning
    • cachereservation
    • diskstripes
    • objectspacereservation
    • iopslimit
    • diskformat
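
    For example, an in-tree StorageClass that uses these raw vSAN policy parameters, such as the illustrative one below, stops working after you enable migration; reference a vSAN storage policy by name instead. The StorageClass name is a placeholder.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: vsan-raw-params-example   # hypothetical name
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      hostfailurestotolerate: "1"
      diskstripes: "2"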
  • You cannot rename or delete the storage policy consumed by an in-tree vSphere volume. Volume migration requires the original storage policy used to provision the volume to be present on vCenter Server so that the volume can be registered as a container volume in vSphere.
  • Do not rename the datastore consumed by an in-tree vSphere volume. Volume migration relies on the original datastore name present in the volume source to register the volume as a container volume in vSphere.
  • Make sure to add the following annotations before enabling migration for statically created vSphere in-tree Persistent Volume Claims and Persistent Volumes. Statically provisioned in-tree vSphere volumes cannot be migrated to vSphere Container Storage Plug-in without these annotations. This also applies to new static in-tree PVs and PVCs created after you enable migration.

    Annotation on PV:

    annotations:
      pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume

    Annotation on PVC:

    annotations:
      volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume
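
    For example, assuming a statically provisioned PV named static-vsphere-pv bound to a PVC named static-vsphere-pvc in the default namespace (both names are placeholders), you can add the annotations with kubectl:

    kubectl annotate pv static-vsphere-pv \
      pv.kubernetes.io/provisioned-by=kubernetes.io/vsphere-volume
    kubectl annotate pvc static-vsphere-pvc -n default \
      volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume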
  • After migration, if you use vSphere releases earlier than 8.0 Update 1, the only supported value for the diskformat parameter is thin. Volumes created before the migration with the disk format eagerzeroedthick or zeroedthick are still migrated to CSI.

    Starting with vSphere 8.0 Update 1, you can use storage policies with a thick volume requirement to migrate eagerzeroedthick or zeroedthick volumes. For more information, see Create a VM Storage Policy for VMFS Datastore in the vSphere Storage documentation.

  • vSphere Container Storage Plug-in does not support raw vSAN policy parameters. After you enable the migration, vSphere Container Storage Plug-in fails the volume creation when you request a new volume using the in-tree provisioner with vSAN raw policy parameters.
  • The vSphere Container Storage Plug-in migration requires a compatible version of vSphere. For information, see Supported Kubernetes Functionality.
  • vSphere Container Storage Plug-in does not support formatting volumes with a Windows file system. In-tree vSphere volumes formatted with a Windows file system cannot be used with vSphere Container Storage Plug-in after migration.
  • The in-tree vSphere volume plug-in relies on the datastore name set in the PV source. After you enable migration, do not enable Storage DRS or vMotion. If Storage DRS moves a disk from one datastore to another, subsequent volume operations might break.
  • If you use zone- and region-aware in-tree deployments, upgrade to vSphere Container Storage Plug-in version 2.4.1 or later.

    Before installing vSphere Container Storage Plug-in, add the following section to the vSphere secret configuration.

    Kubernetes versions 1.21.x and below (the earliest supported version is 1.19):

    In-tree vSphere secret configuration:

    [Labels]
    region = k8s-region
    zone = k8s-zone

    vSphere Container Storage Plug-in secret configuration:

    [Labels]
    region = k8s-region
    zone = k8s-zone

    Sample labels on vSphere Container Storage Plug-in nodes after installation:

    Name:               k8s-node-0179
    Roles:              <none>
    Labels:             failure-domain.beta.kubernetes.io/region=region-1
                        failure-domain.beta.kubernetes.io/zone=zone-a
    Annotations: ....

    Kubernetes versions 1.22.x and 1.23.x (if all of your existing PVs have the GA label, use this approach):

    In-tree vSphere secret configuration:

    [Labels]
    region = k8s-region
    zone = k8s-zone

    vSphere Container Storage Plug-in secret configuration:

    [Labels]
    region = k8s-region
    zone = k8s-zone
    [TopologyCategory "k8s-region"]
    Label = "topology.kubernetes.io/region"
    [TopologyCategory "k8s-zone"]
    Label = "topology.kubernetes.io/zone"

    Sample labels on vSphere Container Storage Plug-in nodes after installation:

    Name:               k8s-node-0179
    Roles:              <none>
    Labels:             topology.kubernetes.io/region=region-1
                        topology.kubernetes.io/zone=zone-a
    Annotations: ....

    Kubernetes version 1.24.x (if the Kubernetes cluster has PVs with either beta or GA labels, you can migrate to vSphere Container Storage Plug-in using this configuration):

    In-tree vSphere secret configuration:

    [Labels]
    region = k8s-region
    zone = k8s-zone

    vSphere Container Storage Plug-in secret configuration:

    [Labels]
    region = k8s-region
    zone = k8s-zone
    [TopologyCategory "k8s-region"]
    Label = "topology.csi.vmware.com/region"
    [TopologyCategory "k8s-zone"]
    Label = "topology.csi.vmware.com/zone"

    Sample labels on vSphere Container Storage Plug-in nodes after installation:

    Name:               k8s-node-0179
    Roles:              <none>
    Labels:             topology.kubernetes.io/region=region-1
                        topology.kubernetes.io/zone=zone-a
    Annotations: ....

    For information about deployments with topology, see Deploy vSphere Container Storage Plug-in with Topology.

  • vSphere Container Storage Plug-in version 3.0 provides a new migration-datastore-url parameter in the vSphere configuration secret. The parameter allows the plug-in to honor the default datastore behavior of the in-tree vSphere plug-in.
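
    A minimal sketch of how the parameter might appear in the vSphere configuration secret; the vCenter address and datastore URL are placeholders, and the exact section may differ in your deployment:

    [VirtualCenter "vc01.example.com"]
    # ... existing configuration ...
    migration-datastore-url = "ds:///vmfs/volumes/<datastore-uuid>/"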

Enable Migration of In-Tree vSphere Volumes to vSphere Container Storage Plug-in

Migrate the in-tree vSphere volumes to vSphere Container Storage Plug-in.

Prerequisites

Make sure to use compatible versions of vSphere and Kubernetes. See Supported Kubernetes Functionality.

Procedure

  1. Install vSphere Cloud Provider Interface (CPI).
    For more information, see Install vSphere Cloud Provider Interface.
  2. Install vSphere Container Storage Plug-in with csi-migration set to true.
    The following sample deployment YAML file uses version 2.4, but you can substitute a later version of your choice: https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/release-2.4/manifests/vanilla/vsphere-csi-driver.yaml.
     apiVersion: v1
     data:
       "csi-migration": "true"
       .........
     kind: ConfigMap
     metadata:
       name: internal-feature-states.csi.vsphere.vmware.com
       namespace: vmware-system-csi
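
    After you deploy the plug-in, you can confirm that the flag is set; for example:

    kubectl get configmap internal-feature-states.csi.vsphere.vmware.com \
      -n vmware-system-csi -o jsonpath='{.data.csi-migration}'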
  3. Install the admission webhook.
    vSphere Container Storage Plug-in does not support provisioning of a volume by specifying migration-specific parameters in the StorageClass. These parameters are added by the vSphere Container Storage Plug-in translation library and should not be used in the storage class directly.
    The validating admission controller prevents you from creating or updating a StorageClass that uses csi.vsphere.vmware.com as the provisioner with these parameters:
    • csimigration
    • datastore-migrationparam
    • diskformat-migrationparam
    • hostfailurestotolerate-migrationparam
    • forceprovisioning-migrationparam
    • cachereservation-migrationparam
    • diskstripes-migrationparam
    • objectspacereservation-migrationparam
    • iopslimit-migrationparam

    In addition, the validating admission controller prevents you from creating or updating a StorageClass that uses kubernetes.io/vsphere-volume as the provisioner with allowVolumeExpansion set to true.
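
    For example, the webhook rejects a StorageClass definition such as the following; the name is a placeholder, and csimigration is one of the blocked migration-specific parameters:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: example-invalid-sc   # hypothetical name
    provisioner: csi.vsphere.vmware.com
    parameters:
      csimigration: "true"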

    As a prerequisite, you must install the kubectl, openssl, and base64 commands on the system from which you invoke the admission webhook installation script.

    To deploy the admission webhook, download and execute the following script. If needed, substitute the version number with a version of your choice.

    https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/release-2.4/manifests/vanilla/deploy-vsphere-csi-validation-webhook.sh

    ./deploy-vsphere-csi-validation-webhook.sh
    creating certs in tmpdir /var/folders/vy/_6dvxx7j5db9sq9n38qjymwr002gzv/T/tmp.ogj5ioIk 
    Generating a 2048 bit RSA private key
    ..........................................................................................................................................................+++
    ...........................................+++
    writing new private key to '/var/folders/vy/_6dvxx7j5db9sq9n38qjymwr002gzv/T/tmp.ogj5ioIk/ca.key'
    -----
    Generating RSA private key, 2048 bit long modulus
    ..............................................................+++
    ...........+++
    e is 65537 (0x10001)
    Signature ok
    subject=/CN=vsphere-webhook-svc.vmware-system-csi.svc
    Getting CA Private Key
    secret "vsphere-webhook-certs" deleted
    secret/vsphere-webhook-certs created
    service/vsphere-webhook-svc created
    validatingwebhookconfiguration.admissionregistration.k8s.io/validation.csi.vsphere.vmware.com created
    serviceaccount/vsphere-csi-webhook created
    role.rbac.authorization.k8s.io/vsphere-csi-webhook-role created
    rolebinding.rbac.authorization.k8s.io/vsphere-csi-webhook-role-binding created
    deployment.apps/vsphere-csi-webhook created
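
    You can then verify that the webhook configuration is registered; for example:

    kubectl get validatingwebhookconfiguration validation.csi.vsphere.vmware.com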
  4. On all control plane nodes, enable the CSIMigration and CSIMigrationvSphere parameters on the kube-controller-manager and the kubelet.
    Note: You do not need to perform Steps 4 through 6 if you use Kubernetes v1.21 or later. The Kubernetes v1.21 release deprecated the CSIMigration and CSIMigrationvSphere feature flags. For more information, search the Kubernetes documentation site at https://kubernetes.io/docs/home/.
    The CSIMigrationvSphere flag enables shims and translation logic to route volume operations from the vSphere in-tree volume plug-in to vSphere Container Storage Plug-in. It also supports falling back to the in-tree vSphere plug-in if a node does not have vSphere Container Storage Plug-in installed and configured.

    CSIMigrationvSphere requires the CSIMigration feature flag to be enabled. The CSIMigration flag activates the vSphere Container Storage Plug-in migration on the Kubernetes cluster.

    1. Update kube-controller-manager manifest file and add the following arguments.
      The file is available at /etc/kubernetes/manifests/kube-controller-manager.yaml.
      - --feature-gates=CSIMigration=true,CSIMigrationvSphere=true
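
      In context, the argument goes into the command list of the kube-controller-manager container in that static pod manifest; an abbreviated sketch with other arguments omitted:

      spec:
        containers:
        - command:
          - kube-controller-manager
          # ... existing arguments ...
          - --feature-gates=CSIMigration=true,CSIMigrationvSphere=true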
    2. Update kubelet configuration file and add the following flags.
      The file is available at /var/lib/kubelet/config.yaml.
      featureGates:
        CSIMigration: true
        CSIMigrationvSphere: true
    3. Restart the kubelet on the control plane nodes using the following command.
      systemctl restart kubelet
    4. Verify that the kubelet is functioning correctly using the following command.
      systemctl status kubelet
    5. For any issues with the kubelet, check the logs on the control plane node using the following command.
      journalctl -xe
  5. Enable the CSIMigration and CSIMigrationvSphere feature flags on the kubelet on all workload nodes.
    1. Before you change the configuration on the kubelet on each workload node, drain the nodes by removing running application workloads.
      The following is a node drain example.
      $ kubectl drain k8s-node1 --force --ignore-daemonsets
       node/k8s-node1 cordoned
       WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: default/vcppod; ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-gs7fr, kube-system/kube-proxy-rbjx4, kube-system/vsphere-csi-node-fh9f6
       evicting pod default/vcppod
         pod/vcppod evicted
         node/k8s-node1 evicted
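
      You can confirm that the node is cordoned before you proceed; the output below is illustrative:

      $ kubectl get node k8s-node1
      NAME        STATUS                     ROLES    AGE   VERSION
      k8s-node1   Ready,SchedulingDisabled   <none>   50d   v1.21.1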
    2. To enable migration on the workload nodes, update the kubelet configuration file and add the following flags. The file is available at /var/lib/kubelet/config.yaml.
      featureGates:
        CSIMigration: true
        CSIMigrationvSphere: true
    3. Restart the kubelet on the workload nodes using the following command.
      systemctl restart kubelet
    4. Verify that the kubelet is functioning correctly using the following command.
      systemctl status kubelet
    5. For any issues with the kubelet, check the logs on the workload node using the following command.
      journalctl -xe
    6. After you enable the migration, ensure that the CSINode object for the node is updated with the storage.alpha.kubernetes.io/migrated-plugins annotation.
      $ kubectl describe csinodes k8s-node1
        Name:               k8s-node1
        Labels:             <none>
        Annotations:        storage.alpha.kubernetes.io/migrated-plugins: kubernetes.io/vsphere-volume
        CreationTimestamp:  Wed, 29 Apr 2020 17:51:35 -0700
        Spec:
          Drivers:
            csi.vsphere.vmware.com:
              Node ID:  k8s-node1
        Events:         <none>
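
      Alternatively, you can read just the annotation; the jsonpath expression escapes the dots in the annotation key, and the expected output is kubernetes.io/vsphere-volume:

      kubectl get csinode k8s-node1 -o jsonpath='{.metadata.annotations.storage\.alpha\.kubernetes\.io/migrated-plugins}'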
      Note:
      • Do not uncordon the workload node until the migrated-plugins annotation on the CSINode object for the node lists kubernetes.io/vsphere-volume.
      • If a node is uncordoned before it is migrated to vSphere Container Storage Plug-in, volumes can still be attached using the in-tree plug-in. A later request to attach such a volume through vSphere Container Storage Plug-in fails because the volume is already attached to the workload VM by the in-tree plug-in.
    7. Uncordon the node after the CSINode object for the node lists kubernetes.io/vsphere-volume as migrated-plugins.
      kubectl uncordon k8s-node1
      
    8. Repeat the above steps for all workload nodes in the Kubernetes cluster.
  6. (Optional) You can enable the CSIMigrationvSphereComplete flag if you enabled the vSphere Container Storage Plug-in migration on all nodes.
    CSIMigrationvSphereComplete stops the registration of the vSphere in-tree plug-in in the kubelet and volume controllers. It also enables shims and translation logic to route volume operations from the vSphere in-tree plug-in to vSphere Container Storage Plug-in. The CSIMigrationvSphereComplete flag requires the CSIMigration and CSIMigrationvSphere feature flags to be enabled, and vSphere Container Storage Plug-in must be installed and configured on all nodes in the cluster.
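
    With all three gates enabled, the kubelet configuration file on each node would then look like this minimal sketch:

    featureGates:
      CSIMigration: true
      CSIMigrationvSphere: true
      CSIMigrationvSphereComplete: true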
  7. Verify that the vSphere in-tree PVCs and PVs are migrated to vSphere Container Storage Plug-in and that the pv.kubernetes.io/migrated-to: csi.vsphere.vmware.com annotation is present on the PVCs and PVs.
    Annotations on PVCs:
    Annotations:   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
                   pv.kubernetes.io/migrated-to: csi.vsphere.vmware.com
                   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume
    Annotations on PVs:
    Annotations:   kubernetes.io/createdby: vsphere-volume-dynamic-provisioner
                   pv.kubernetes.io/bound-by-controller: yes
                   pv.kubernetes.io/migrated-to: csi.vsphere.vmware.com
                   pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume
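
    To list each PV together with its migrated-to annotation, a command like the following works; the jsonpath expression escapes the dots in the annotation key:

    kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.pv\.kubernetes\.io/migrated-to}{"\n"}{end}'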

    After you enable the migration, new volumes requested through the in-tree provisioner are created by vSphere Container Storage Plug-in. You can identify them by the following annotations. The PV specification continues to hold the vSphere volume path, so if you later deactivate the migration, the in-tree vSphere plug-in can use the provisioned volume.

    Annotations on PVCs:
    Annotations:   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
                   volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
    Annotations on PVs:
    Annotations:   pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com