This topic describes how to use the vSphere Container Storage Interface (CSI) Driver that is automatically installed to clusters by VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere.



Overview

vSphere Cloud Native Storage (CNS) provides comprehensive data management for stateful, containerized apps, enabling apps to survive restarts and outages. Stateful containers can use vSphere storage primitives such as standard volume, persistent volume, and dynamic provisioning, independent of VM and container lifecycle.

You can install vSphere CNS on TKGI-provisioned clusters by configuring TKGI to automatically install a vSphere CSI Driver. To enable automatic CSI driver installation on your clusters, see Storage in Installing TKGI on vSphere.

When automatic vSphere CSI Driver installation is enabled, your clusters use your tile Kubernetes Cloud Provider storage settings as the default vSphere CNS configuration.

The automatically deployed vSphere CSI Driver supports high availability (HA) configurations. HA support is automatically enabled on clusters with multiple control plane nodes and uses only one active CSI Controller.

Use the vSphere client to review your cluster storage volumes and their backing virtual disks, to set a storage policy on your storage volumes, and to monitor policy compliance. Your cluster volumes are backed by vSphere storage.

For more information about VMware CNS, see Getting Started with VMware Cloud Native Storage.

For more information about using the Kubernetes CSI Driver, see Persistent Volumes in the Kubernetes documentation.

The sections below describe how to configure and use the vSphere CSI Driver in TKGI.


Note: If you have an existing cluster with a manually installed vSphere CSI driver and your administrator has enabled automatic vSphere CSI Driver installation, you must uninstall the manually installed vSphere CSI Driver from your cluster. For more information, see Uninstall a Manually Installed vSphere CSI Driver below.




Requirements and Limitations of the vSphere CSI Driver

For information about the supported features and the known limitations of the vSphere CSI Driver, see the following sections.


vSphere CSI Driver Supported Features and Requirements

The vSphere CSI Driver supports different features depending on the driver version, environment, and storage type.

TKGI supports only the following vSphere CSI Driver features:

  • Dynamic Block PV support*
  • Dynamic File PV support*
  • Dynamic Virtual Volume (vVol) PV support
  • Encryption support via VMcrypt*
  • Enhanced Object Health in UI for vSAN Datastores
  • Kubernetes Multi-node Control Plane support
  • Static PV Provisioning
  • Topology-aware volume provisioning

    *For information on the usage limitations and environment and version requirements of these vSphere CSI Driver features, see Supported Kubernetes Functionality in Compatibility Matrices for vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.


For information on the vCenter, datastore, and cluster types supported by the vSphere CSI Driver, see vSphere Functionality Supported by vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.

For information on the scaling limitations of the vSphere CSI Driver, see Configuration Maximums for vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.


Unsupported Features and Limitations

vSphere Storage DRS, Manual Storage vMotion, and other VMware vSphere features are not supported by the vSphere Container Storage Plug-in and cannot be used by TKGI clusters that use or migrate to the vSphere CSI Driver.

For more information on the limitations of the VMware vSphere Container Storage Plug-in, see vSphere Functionality Supported by vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.


Customize vSphere File Volumes

To create, modify, or remove a customized vSphere file volume, use the procedures below.

Prerequisites

To use file volumes, you must enable vSAN File Services in the vSphere vCenter. For information about enabling vSAN File Services, see Configure File Services in the VMware vSphere documentation.


Create a Cluster with Customized File Volume Parameters

To create a new cluster with a vSphere file volume:

  1. Create a JSON or YAML formatted volume configuration file containing the following:

    {
      "target_vsan_fileshare_datastore_urls": "DS-URLS",
      "net_permissions": [
        {
          "name": "PERMISSION-NAME",
          "ips": "IP-ADDRESS",
          "permissions": "PERMISSION",
          "rootsquash": "ACCESS-LEVEL"
        },
        {
          "name": "PERMISSION-NAME",
          "ips": "IP-ADDRESS",
          "permissions": "PERMISSION",
          "rootsquash": "ACCESS-LEVEL"
        }
      ]
    }
    

    Where:

    • DS-URLS is a comma-separated list of datastores for deploying file share volumes. For example: "ds:///vmfs/volumes/vsan:52635b9067079319-95a7473222c4c9cd/".
    • PERMISSION-NAME is your name for a NetPermission.
    • IP-ADDRESS is the IP range or IP subnet affected by a NetPermission restriction.
    • PERMISSION is the access permission to the file share volume for a NetPermission restriction.
    • ACCESS-LEVEL is the security access level for the file share volume for a NetPermission restriction.

    For information, see File Volume Configuration below.

  2. To create a cluster with attached file volumes:

    tkgi create-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of your configuration file.

    For example:

    $ tkgi create-cluster demo -e demo.cluster --plan Small --config-file ./conf1.json
    


Modify a Cluster with Customized File Volume Parameters

To modify an existing cluster with a vSphere file volume:

  1. Create a file volume configuration file. For information, see File Volume Configuration below.
  2. To update your cluster with file volumes:

    tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of your configuration file.

WARNING: Update the configuration file only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.


Remove File Volume Parameters from a Cluster

To remove a vSphere file volume configuration from a cluster:

  1. Create a file volume configuration file containing the disable_target_vsan_fileshare_datastore_urls or disable_net_permissions parameter set to true to deactivate an existing file volume parameter.

    For more information, see File Volume Configuration below.

  2. To remove the configured file volume parameter from your cluster:

    tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of your configuration file.

WARNING: Update the configuration file only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.


File Volume Configuration

Create a JSON or YAML formatted File Volume configuration file to enable or deactivate vSphere file volume support.

For example:

  • The following configuration enables all File Volume features:

    {
      "target_vsan_fileshare_datastore_urls": "ds:///vmfs/volumes/vsan:52635b9067079319-95a7473222c4c9cd/",
      "net_permissions": [
        {
          "name": "demo1",
          "ips": "192.168.0.0/16",
          "permissions": "READ_WRITE",
          "rootsquash": false
        },
        {
          "name": "demo2",
          "ips": "10.0.0.0/8",
          "permissions": "READ_ONLY",
          "rootsquash": false
        }
      ]
    }
    
  • The following configuration deactivates File Volume features:

    {
      "disable_target_vsan_fileshare_datastore_urls": true,
      "disable_net_permissions": true
    }
    


File Volume DataStores Configuration

The following are the accepted datastore URL parameters:

  • disable_target_vsan_fileshare_datastore_urls (Boolean): Deactivates target_vsan_fileshare_datastore_urls. Values: true, false. Default value: false.
  • target_vsan_fileshare_datastore_urls (string): A comma-separated list of datastores for deploying file share volumes.

File Volume NetPermissions Object Configuration

The following are the accepted NetPermissions parameters:

  • net_permissions (Array): Properties defining a NetPermissions object.
  • disable_net_permissions (Boolean): Deactivates net_permissions. Values: true, false. Default value: false.

The following are the supported NetPermissions object parameters:

  • name (string): Name of the NetPermission object.
  • ips (string): IP range or IP subnet affected by the NetPermission restrictions. Default value: "*".
  • permissions (string): Access permission to the file share volume. Values: "READ_WRITE", "READ_ONLY", "NO_ACCESS". Default value: "READ_WRITE".
  • rootsquash (Boolean): Security access level for the file share volume. Values: true, false. Default value: false.

For more information on NetPermissions object parameters, see Procedure in Create a Kubernetes Secret for vSphere Container Storage Plug-in.
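
As a sanity check before running tkgi update-cluster or tkgi create-cluster, you can validate a net_permissions entry against the parameter tables above. The following is a minimal Python sketch; the validate_net_permission helper is illustrative and not part of any TKGI API, but the allowed values and defaults come from the tables in this section:

```python
import json

# Allowed values and defaults are taken from the NetPermissions tables above.
ALLOWED_PERMISSIONS = {"READ_WRITE", "READ_ONLY", "NO_ACCESS"}

def validate_net_permission(entry):
    """Return the entry with documented defaults filled in, or raise ValueError."""
    validated = {
        "name": entry["name"],
        "ips": entry.get("ips", "*"),                          # default "*"
        "permissions": entry.get("permissions", "READ_WRITE"), # default READ_WRITE
        "rootsquash": entry.get("rootsquash", False),          # default false
    }
    if validated["permissions"] not in ALLOWED_PERMISSIONS:
        raise ValueError(f"invalid permissions: {validated['permissions']}")
    if not isinstance(validated["rootsquash"], bool):
        raise ValueError("rootsquash must be a Boolean")
    return validated

# Build a configuration matching the example earlier in this section:
config = {
    "target_vsan_fileshare_datastore_urls":
        "ds:///vmfs/volumes/vsan:52635b9067079319-95a7473222c4c9cd/",
    "net_permissions": [
        validate_net_permission(
            {"name": "demo1", "ips": "192.168.0.0/16", "permissions": "READ_WRITE"})
    ],
}
print(json.dumps(config, indent=2))
```
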


Create or Use CNS Block Volumes

To dynamically provision a block volume using the vSphere CSI Driver:

  1. Create a vSphere Storage Class
  2. Create a PersistentVolumeClaim
  3. Create Workloads Using Persistent Volumes

For more information on vSphere CSI Driver configuration, see the example/vanilla-k8s-block-driver configuration for the CSI driver version you are using in vsphere-csi-driver in the VMware kubernetes-sigs GitHub repo.


Create a vSphere Storage Class

To create a vSphere Storage Class:

  1. Open vCenter.
  2. Open the vSAN Datastore Summary pane.

    vSAN Datastore Summary pane in vCenter

  3. Determine the datastoreurl value for your Datastore.

  4. Create the following YAML:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: demo-sts-storageclass
      annotations:
          storageclass.kubernetes.io/is-default-class: "true"
    provisioner: csi.vsphere.vmware.com
    allowVolumeExpansion: ALLOW-EXPANSION
    parameters:
      datastoreurl: "DATASTORE-URL"
    

    Where:

    • ALLOW-EXPANSION defines whether persistent volumes created with this StorageClass can be resized. Set to true to allow volume expansion or false for a fixed volume size.
    • DATASTORE-URL is the URL to your Datastore. For a non-vSAN datastore, the datastoreurl value looks like ds:///vmfs/volumes/5e66e525-8e46bd39-c184-005056ae28de/.

      For example:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: demo-sts-storageclass
      annotations:
          storageclass.kubernetes.io/is-default-class: "true"
    provisioner: csi.vsphere.vmware.com
    allowVolumeExpansion: true
    parameters:
      datastoreurl: "ds:///vmfs/volumes/vsan:52d8eb4842dbf493-41523be9cd4ff7b7/"
    

For more information about StorageClass, see Storage Classes in the Kubernetes documentation.
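
A malformed datastoreurl value typically surfaces only when provisioning fails, so a quick format check can catch copy-paste mistakes early. The following is a minimal sketch that assumes only the ds:///vmfs/volumes/<volume-id>/ shape shown in this section; the helper is illustrative, not a vSphere API:

```python
def is_valid_datastore_url(url: str) -> bool:
    """Accept URLs of the form ds:///vmfs/volumes/<volume-id>/."""
    prefix = "ds:///vmfs/volumes/"
    if not (url.startswith(prefix) and url.endswith("/")):
        return False
    volume_id = url[len(prefix):-1]
    return len(volume_id) > 0 and "/" not in volume_id

# vSAN and non-vSAN examples from this section:
assert is_valid_datastore_url(
    "ds:///vmfs/volumes/vsan:52d8eb4842dbf493-41523be9cd4ff7b7/")
assert is_valid_datastore_url(
    "ds:///vmfs/volumes/5e66e525-8e46bd39-c184-005056ae28de/")
```
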


Create a PersistentVolumeClaim

To create a Persistent Volume using the vSphere CSI Driver:

  1. Create a Storage Class. For more information, see Create a vSphere Storage Class above.
  2. To apply the StorageClass configuration:
    kubectl apply -f CONFIG-FILE
    

    Where CONFIG-FILE is the name of your StorageClass configuration file.

  3. Create the PersistentVolumeClaim configuration for the file volume. For information about configuring a PVC, see Persistent Volumes in the Kubernetes documentation.

    For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-vanilla-block-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: example-vanilla-block-sc
    
  4. To apply the PVC configuration:

    kubectl apply -f CONFIG-FILE 
    

    Where CONFIG-FILE is the name of your PVC configuration file.


Create Workloads Using Persistent Volumes

  1. Create a Pod configuration file containing volumeMounts and volumes parameters.

    For example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-vanilla-block-pod
    spec:
      containers:
        - name: test-container
          image: gcr.io/google_containers/busybox:1.24
          command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html  && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
          volumeMounts:
            - name: test-volume
              mountPath: /mnt/volume1
      restartPolicy: Never
      volumes:
        - name: test-volume
          persistentVolumeClaim:
            claimName: example-vanilla-block-pvc
    

  2. To apply the Pod configuration to your workload:

    kubectl apply -f CONFIG-FILE 
    

    Where CONFIG-FILE is the name of your configuration file.

For more information and examples of Pod configurations, see the example configurations for the CSI driver version you are using in vsphere-csi-driver in the VMware kubernetes-sigs GitHub repo.


Customize a Cluster with vSphere Topology-Aware Volume Provisioning

TKGI supports the vSphere Container Storage Plug-in’s topology-aware volume provisioning features.

For more information on volume provisioning features, see Allowed Topologies in the Kubernetes documentation and Topology-Aware Volume Provisioning in the VMware vSphere Container Storage Plug-in documentation.


Topology Overview

TKGI supports clusters with topology-aware volume provisioning.

To create a cluster with topology-aware volume provisioning:

  1. Prepare for Topology
  2. See Guidelines and Best Practices for Deployment with Topology in Deploying vSphere Container Storage Plug-in with Topology in the VMware vSphere Container Storage Plug-in documentation.
  3. Create a Cluster with Topology


To manage a cluster configured with topology-aware volume provisioning:

  1. Prepare for Topology
  2. See Guidelines and Best Practices for Deployment with Topology in Deploying vSphere Container Storage Plug-in with Topology in the VMware vSphere Container Storage Plug-in documentation.
  3. Manage Clusters with Topology-Aware Volumes

Note: You cannot add topology-aware volume provisioning to an existing cluster within TKGI.


Prepare for Topology

Before creating a new cluster with Topology-aware volume provisioning:

  1. Verify your environment meets the requirements listed in Topology Limitations and Prerequisites below.
  2. Review the vSphere CSI Topology deployment recommendations. For more information, see Guidelines and Best Practices for Deployment with Topology in Deploying vSphere Container Storage Plug-in with Topology in the VMware vSphere Container Storage Plug-in documentation.
  3. Create vCenter categories and tags as described in Procedures in Deploying vSphere Container Storage Plug-in with Topology in the VMware vSphere Container Storage Plug-in documentation.

For more information on creating vCenter tags and categories, see Create, Edit, or Delete a Tag Category in the VMware vSphere documentation.


Topology Limitations and Prerequisites

In TKGI you can create a new cluster with topology-aware volume provisioning enabled. You cannot add topology-aware volume provisioning to an existing cluster.

TKGI support for Topology-aware volume provisioning requires:

  • The vSphere CSI Driver Integration option must be enabled on the TKGI tile. For more information, see Storage in Installing TKGI on vSphere.

  • You have created vSphere CSI topology categories and tags in your vSphere environment. For more information, see Prepare for Topology below.

  • You have prepared your environment as described in the vSphere CSI Topology deployment recommendations. For more information, see Guidelines and Best Practices for Deployment with Topology in Deploying vSphere Container Storage Plug-in with Topology in the VMware vSphere Container Storage Plug-in documentation.

  • The topology zone tags you create in your vSphere Client must be consistent with the existing AZs created in BOSH. Create topology zone tags using only AZ names that exist in BOSH.

  • The topology feature does not support clusters with a Compute Profile that includes AZ settings.
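
The AZ-name consistency requirement above can be checked mechanically before you create zone tags. The following is a minimal sketch in which both name lists are placeholders for your environment:

```python
# AZ names defined in BOSH (placeholder values):
bosh_azs = {"az1", "az2", "az3"}

# Zone tags you plan to create in the vSphere Client (placeholder values):
vsphere_zone_tags = {"az1", "az2", "az4"}

# Every zone tag must match an existing BOSH AZ name:
unmatched = vsphere_zone_tags - bosh_azs
if unmatched:
    print(f"Zone tags with no matching BOSH AZ: {sorted(unmatched)}")
```
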


Create a Cluster with Topology

To create a new cluster with a vSphere Topology configuration:

  1. Create a JSON or YAML configuration file containing the following:

    {
      "csi_topology_labels": {
        "topology_categories": "REGION-TAG,ZONE-TAG"
      }
    }
    

    Where REGION-TAG and ZONE-TAG are the vSphere tag categories you created for your topology regions and zones.

    For example:

    {
      "csi_topology_labels": {
        "topology_categories": "k8s-region,k8s-zone"
      }
    }
    

    For more information, see Guidelines and Best Practices for Deployment with Topology in Deploying vSphere Container Storage Plug-in with Topology in the VMware vSphere Container Storage Plug-in documentation.

  2. To create a cluster with Topology-aware volume provisioning:

    tkgi create-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of your configuration file.

    For example:

    $ tkgi create-cluster demo -e demo.cluster --plan Small --config-file ./conf1.json
    


Manage Clusters with Topology-Aware Volumes

As you manage your clusters with topology-aware volume provisioning enabled, note the following limitations on existing clusters.

When running tkgi update-cluster on a cluster created with a topology-aware volume:

  • You must use the same csi_topology_labels configuration that was used during cluster creation.

  • You cannot add or remove topology-aware volume provisioning from the cluster.



Customize and Manage vSphere CNS

To configure or manage your vSphere CSI Driver, use the procedures below.


Configure CNS Data Centers

If your clusters are in a multi-data center environment, configure the data centers that must mount CNS storage for the clusters.

Note: You must configure CNS data centers when the Topology feature is enabled in a multi-data center environment.

To configure CNS data centers for a multi-data center environment:

  1. Create a JSON or YAML formatted configuration file containing the following:

    {
      "csi_datacenters": "DATA-CENTER-LIST"
    }
    

    Where:

    • DATA-CENTER-LIST is a comma-separated list of vCenter data centers that must mount your CNS storage. The default data center for a cluster is the data center defined on the TKGI tile in Kubernetes Cloud Provider > Datacenter Name.

    For example:

    {
      "csi_datacenters": "kubo-dc1,kubo-dc2"
    }
    

    For more information on the csi_datacenters parameter, see the description of datacenters in Procedure in Create a Kubernetes Secret for vSphere Container Storage Plug-in.

  2. To create a new cluster or update an existing cluster with your vCenter data centers:

    • To create a cluster:

      tkgi create-cluster CLUSTER-NAME --config-file CONFIG-FILE  
      

      Where:

      • CLUSTER-NAME is the name of your cluster.
      • CONFIG-FILE is the name of your configuration file.

      For example:

      $ tkgi create-cluster demo -e demo.cluster --plan Small --config-file ./conf1.json
      
    • To update an existing cluster:

      tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE 
      

      Where:

      • CLUSTER-NAME is the name of your cluster.
      • CONFIG-FILE is the name of your configuration file.

      WARNING: Update the configuration file only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
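
The configuration file in step 1 above can also be generated programmatically. The following is a minimal sketch that builds the csi_datacenters configuration from a list of vCenter data center names; the names are placeholders:

```python
import json

# Placeholder vCenter data center names that must mount CNS storage:
datacenters = ["kubo-dc1", "kubo-dc2"]

# csi_datacenters takes a single comma-separated string, not a JSON array:
config = {"csi_datacenters": ",".join(datacenters)}

# Write the result to a file to pass to tkgi with --config-file:
print(json.dumps(config, indent=2))
```
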



Switch From the Manually Installed vSphere CSI Driver to the Automatic CSI Driver

TKGI v1.14 does not support a manually installed vSphere CSI driver. If you have a cluster that uses a manually installed vSphere CSI driver, you must switch the cluster to the CSI driver deployed by TKGI.

Warning: If your environment includes Topology-Aware clusters, TKGI must remain configured for the manually installed driver from before you start your TKGI upgrade until your cluster upgrades are complete.

To switch your clusters from the manually installed vSphere CSI driver to the auto-deployed driver:

  1. Prepare Before Upgrading TKGI
  2. Upgrade Your TKGI Tile
  3. Upgrade Your Clusters
  4. Remove a Manually Installed vSphere CSI Driver
  5. Manage Topology After Switching to the Automatically Deployed vSphere CSI Driver

Prepare Before Upgrading TKGI

Prepare your clusters and configurations before upgrading TKGI to v1.14.0:

  1. To upgrade the manually deployed vSphere CSI driver on a cluster to v2.5.1:

    1. Download the vSphere CSI Driver v2.5.1 YAML configuration file vsphere-csi-driver.yaml from the vSphere CSI Driver GitHub repository.
    2. Configure the configuration file for TKGI. For more information, see Customize Your CSI Driver Configuration in Manually Installing the vSphere CSI Driver in the TKGI v1.13 documentation.
    3. Change the "use-csinode-id" parameter value in the configuration file from "true" to "false":

        "use-csinode-id": "false"
      
    4. To apply your customized configuration:

      kubectl apply -f vsphere-csi-driver.yaml
      

    For more information, see Upgrade vSphere Container Storage Plug-in of a Version 2.3.0 or Later in Upgrading vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.

  2. If topology-aware volume provisioning is enabled on a cluster, create a TKGI configuration file for the cluster. For more information, see Manage Topology After Switching to the Automatically Deployed vSphere CSI Driver below.

Upgrade Your TKGI Tile

To upgrade your TKGI tile:

  1. Download and import the Tanzu Kubernetes Grid Integrated Edition v1.14.0 tile. For more information, see Download and Import Tanzu Kubernetes Grid Integrated Edition in Upgrading Tanzu Kubernetes Grid Integrated Edition.
  2. Download and import Stemcells. For more information, see Download and Import Stemcells in Upgrading Tanzu Kubernetes Grid Integrated Edition.
  3. Deselect the Upgrade All Clusters errand. For more information, see Verify Errand Configuration in Upgrading Tanzu Kubernetes Grid Integrated Edition.
  4. If your environment does not include Topology-Aware clusters, enable vSphere CSI Driver Integration on the TKGI tile. For more information, see Verify Other Configurations in Upgrading Tanzu Kubernetes Grid Integrated Edition.

    Warning: If your environment includes Topology-Aware clusters, TKGI must remain configured for the manually installed driver from before you start your TKGI upgrade until your cluster upgrades are complete.

  5. Select Apply Changes. For more information, see Apply Changes to the Tanzu Kubernetes Grid Integrated Edition Tile in Upgrading Tanzu Kubernetes Grid Integrated Edition.
  6. Complete the After the Upgrade steps as usual.

Upgrade Your Clusters

To upgrade your TKGI clusters to v1.14.0:

  1. Upgrade your TKGI clusters to v1.14.0.

  2. If your environment includes Topology-Aware clusters, enable vSphere CSI Driver Integration on the TKGI tile. For more information, see Verify Other Configurations in Upgrading Tanzu Kubernetes Grid Integrated Edition.

  3. Switch your clusters to the automatically deployed vSphere CSI Driver. For more information, see Switch a Cluster to the Automatically Deployed vSphere CSI Driver.

Switch a Cluster to the Automatically Deployed vSphere CSI Driver

To switch a cluster to the automatically deployed vSphere CSI Driver:

  1. If topology-aware volume provisioning is enabled on your cluster, update your cluster using the topology configuration file you created in Prepare Before Upgrading TKGI above.
  2. Remove the manually installed CSI driver from the cluster. For more information, see Remove a Manually Installed vSphere CSI Driver below.
  3. To restart CSI jobs on all worker nodes:

    bosh -d DEPLOYMENT ssh worker "sudo monit restart csi-node"
    bosh -d DEPLOYMENT ssh worker "sudo monit restart csi-node-registrar"
    

    Where:

    • DEPLOYMENT is the name of the deployment.
  4. To verify that the CSI jobs on all control plane nodes are in a running state:

    bosh -d DEPLOYMENT ssh master "sudo monit summary | grep csi"
    

    Where:

    • DEPLOYMENT is the name of the deployment.
  5. If a CSI job is not in a running state, start the CSI job:

    bosh -d DEPLOYMENT ssh NODE-VM "sudo monit start JOB-NAME"
    

    Where:

    • DEPLOYMENT is the name of the deployment.
    • NODE-VM is the control plane node VM.
    • JOB-NAME is the name of the CSI job to start.


Remove a Manually Installed vSphere CSI Driver

If you have a cluster that uses a manually installed vSphere CSI Driver, and you upgrade the cluster after your administrator has enabled automatic vSphere CSI Driver Integration, remove the manually installed driver. While automatic vSphere CSI Driver Integration is enabled, TKGI enables the integrated driver for a cluster after upgrading it, and the cluster’s manually installed driver no longer functions.

To remove a manually installed vSphere CSI driver:

  1. Run the following command:

    kubectl delete -f vsphere-csi-driver.yaml
    


Manage Topology After Switching to the Automatically Deployed vSphere CSI Driver

After switching from a manually installed vSphere CSI Driver to the TKGI automatically deployed CSI Driver, the topology configuration must not be changed.

Configure topology based on the manually installed vSphere CSI Driver configuration:

  • Region and Zone Topology Labels:

    You must continue to use region and zone labels if your manually deployed vSphere CSI Driver was configured using the legacy region and zone topology configuration labels.

    Your revised cluster configuration file must include a csi_topology_labels parameter that assigns region and zone values.

    For example, if your vSphere Secret configuration for the manually installed vSphere CSI driver included the following:

    [Labels]
    region = k8s-region
    zone = k8s-zone
    

    Your new cluster configuration must include the following instead:

    {
      "csi_topology_labels": {
        "region": "k8s-region",
        "zone": "k8s-zone"
      }
    }
    
  • topology_categories Topology Label:

    You must continue to use the topology_categories label if your manually deployed vSphere CSI Driver was configured using the topology_categories topology configuration label.

    Your revised cluster configuration file must include a csi_topology_labels parameter that assigns a topology_categories value.

    For example, if your vSphere Secret configuration for the manually installed vSphere CSI driver included the following:

    [Labels]
    topology-categories = "k8s-region, k8s-zone"
    
    

    Your new cluster configuration must include the following instead:

    { 
      "csi_topology_labels": { 
        "topology_categories": "k8s-region,k8s-zone" 
      } 
    }
    
  • Topology Deactivated:

    You must not activate topology if topology was not activated while using the manually deployed vSphere CSI Driver.
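
The region and zone label mapping described above can be sketched as a small conversion script. The following assumes the legacy Secret configuration uses an INI-style [Labels] section as shown earlier; the script itself is illustrative, not a TKGI tool:

```python
import configparser
import json

# Legacy [Labels] section from the manually installed driver's vSphere Secret:
legacy_secret = """
[Labels]
region = k8s-region
zone = k8s-zone
"""

parser = configparser.ConfigParser()
parser.read_string(legacy_secret)
labels = dict(parser["Labels"])

# Keep the same label style the manually installed driver used:
new_config = {
    "csi_topology_labels": {
        "region": labels["region"],
        "zone": labels["zone"],
    }
}
print(json.dumps(new_config, indent=2))
```
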


Migrate In-Tree vSphere Storage to the vSphere CSI Driver

Kubernetes’ support for in-tree vSphere storage volumes has been deprecated, and support will be removed in a future Kubernetes version.

The TKGI v1.17 upgrade process will automatically migrate your in-tree vSphere storage volumes to vSphere CSI. If you have existing clusters that use in-tree vSphere storage volumes, you can continue to use the volumes with your current version of TKGI, but VMware strongly recommends that you migrate your in-tree vSphere storage volumes to vSphere CSI volumes before upgrading to TKGI v1.17.

To manually migrate a cluster from an in-tree vSphere storage volume to a vSphere CSI Driver volume, see Migrate an In-Tree vSphere Storage Volume to the vSphere CSI Driver below.


Migrate an In-Tree vSphere Storage Volume to the vSphere CSI Driver

You can use tkgi update-cluster to migrate the PersistentVolume (PV) and PersistentVolumeClaim (PVC) on an existing TKGI cluster from the In-Tree vSphere Storage Driver to the automatically installed vSphere CSI Driver.

Warning: Due to Known Issues in the vSphere CSI Driver, VMware recommends that you migrate to the vSphere CSI Driver after upgrading to TKGI v1.14.1. For more information, see VMDKs Are Deleted during Migration from In-Tree Storage to CSI in the Release Notes.

Migrating a TKGI cluster from the In-Tree vSphere Storage Driver to the vSphere CSI Driver requires the following:

  • You must use TKGI CLI v1.12 or later.
  • TKGI automatic vSphere CSI Driver integration must be enabled. For information on enabling the vSphere CSI Driver Integration option on the TKGI tile, see Storage in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere.
  • TKGI must be installed on vSphere v7.0 U2 or later.
  • The cluster must be a Linux TKGI cluster.


To migrate a cluster from an In-Tree vSphere Storage Driver to the vSphere CSI Driver:

  1. Upgrade your Kubernetes cluster to the current TKGI version of the TKGI tile.
  2. Review and complete all relevant steps documented in the vSphere CSI Migration documentation:

    Warning: Before migrating to the vSphere CSI driver, confirm your cluster’s volume storage is configured as described in Considerations for Migration of In-Tree vSphere Volumes.

  3. Create a configuration file containing the following:

    {
        "enable_csi_migration": "true"
    }
    
  4. To migrate your cluster to the vSphere CSI Driver:

    tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of the config file you created in the preceding steps.

Note: You cannot migrate the PV or the PVC on a cluster from the vSphere CSI Driver to the In-Tree vSphere Storage Driver.
