This topic describes how to use the vSphere Container Storage Interface (CSI) Driver that is automatically installed to clusters by VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere.

If your TKGI environment is on vSphere and your administrator has not enabled automatic vSphere CSI Driver installation, you must manually install a vSphere CSI Driver on your clusters. For more information, see Manually Installing the vSphere CSI Driver.



Overview

vSphere Cloud Native Storage (CNS) provides comprehensive data management for stateful, containerized apps, enabling apps to survive restarts and outages. Stateful containers can use vSphere storage primitives such as standard volume, persistent volume, and dynamic provisioning, independent of VM and container lifecycle.

You can install vSphere CNS on TKGI-provisioned clusters by configuring TKGI to automatically install a vSphere CSI Driver. To enable automatic CSI driver installation on your clusters, see Storage in Installing TKGI on vSphere.

When automatic vSphere CSI Driver installation is enabled, your clusters use your tile Kubernetes Cloud Provider storage settings as the default vSphere CNS configuration.

You can customize, deploy, and manage vSphere CNS volumes using the vSphere CSI Driver:

  • The automatically deployed vSphere CSI Driver supports high availability (HA) configurations. HA support is automatically enabled on clusters with multiple control plane nodes and uses only one active CSI Controller.
  • Use the vSphere client to review your cluster storage volumes and their backing virtual disks, to set a storage policy on your storage volumes, and to monitor policy compliance. Your cluster volumes are backed by vSphere storage.
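
To confirm that the automatically installed driver is registered with a cluster, you can query its CSIDriver object. The following is a minimal check and assumes you have kubectl access to the cluster; csi.vsphere.vmware.com is the driver name used by the vSphere CSI Driver:

    # If the driver is installed, this lists the csi.vsphere.vmware.com CSIDriver object
    kubectl get csidriver csi.vsphere.vmware.com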

Note: If you have an existing cluster with a manually installed vSphere CSI driver and your administrator has enabled automatic vSphere CSI Driver installation, you must uninstall the manually installed vSphere CSI Driver from your cluster. For more information, see Uninstall a Manually Installed vSphere CSI Driver below.

For more information about VMware CNS, see Getting Started with VMware Cloud Native Storage.

For more information about using the Kubernetes CSI Driver, see Persistent Volumes in the Kubernetes documentation.



Requirements and Limitations of the vSphere CSI Driver

For information about the supported features and the known limitations of the vSphere CSI Driver, see:

  • vSphere CSI Driver Supported Features and Requirements below
  • Unsupported Features and Limitations below

vSphere CSI Driver Supported Features and Requirements

The vSphere CSI Driver supports different features depending on driver version, environment, and storage type.

TKGI supports only the following vSphere CSI Driver features:

  • Enhanced Object Health in UI for vSAN Datastores
  • Dynamic Block PV support*
  • Dynamic Virtual Volume (vVol) PV support
  • Static PV Provisioning
  • Kubernetes Multi-node Control Plane support
  • Encryption support via VMcrypt*
  • Dynamic File PV support*

    *For information on the usage limitations and environment and version requirements of these vSphere CSI Driver features, see Supported Kubernetes Functionality in Compatibility Matrices for vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.


For information on the vCenter, datastore, and cluster types supported by the vSphere CSI Driver, see vSphere Functionality Supported by vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.

For information on the scaling limitations of the vSphere CSI Driver, see Configuration Maximums for vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.


Unsupported Features and Limitations

vSphere Storage DRS, Manual Storage vMotion, and other VMware vSphere features are not supported by the vSphere Container Storage Plug-in and cannot be used by the TKGI clusters that use or migrate to the vSphere CSI Driver.

For more information on the limitations of the VMware vSphere Container Storage Plug-in, see vSphere Functionality Supported by vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.


Customize vSphere File Volumes

To create, modify, or remove a customized vSphere file volume, use the procedures below.

Prerequisites

To use file volumes, you must enable vSAN File Services in the vSphere vCenter. For information about enabling vSAN File Services, see Configure File Services in the VMware vSphere documentation.

Create a Cluster With Customized File Volume Parameters

To create a new cluster with a vSphere file volume:

  1. Create a file volume configuration file. For information, see File Volume Configuration below.
  2. To create a cluster with attached file volumes:

    tkgi create-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of your config file.

    For example:

     $ tkgi create-cluster demo -e demo.cluster --plan Small --config-file ./conf1.json 

Modify a Cluster With Customized File Volume Parameters

To modify an existing cluster with a vSphere file volume:

  1. Create a file volume configuration file. For information, see File Volume Configuration below.
  2. To update your cluster with file volumes:

    tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of your config file.

Remove File Volume Parameters from a Cluster

To remove a vSphere file volume configuration from a cluster:

  1. Create a file volume configuration file that sets the disable_target_vsan_fileshare_datastore_urls or disable_net_permissions parameter, or both, to true to deactivate the corresponding file volume parameter. For information, see File Volume Configuration below.

    For example:

    {
        "disable_target_vsan_fileshare_datastore_urls": true,
        "disable_net_permissions": true
    }
    
  2. To remove the configured file volume parameter from your cluster:

    tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of your config file.

File Volume Configuration

To customize a vSphere file volume, create a JSON- or YAML-formatted file volume configuration file using the supported file volume parameters below.

For example:

{
  "target_vsan_fileshare_datastore_urls": "ds:///vmfs/volumes/vsan:52635b9067079319-95a7473222c4c9cd/",
  "net_permissions": [
    {
      "name": "demo1",
      "ips": "192.168.0.0/16",
      "permissions": "READ_WRITE",
      "rootsquash": false
    },
    {
      "name": "demo2",
      "ips": "10.0.0.0/8",
      "permissions": "READ_ONLY",
      "rootsquash": false
    }
  ]
}
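
The configuration file can also be written in YAML. The following sketch is a YAML equivalent of the JSON example above, using the same parameter names and values:

target_vsan_fileshare_datastore_urls: "ds:///vmfs/volumes/vsan:52635b9067079319-95a7473222c4c9cd/"
net_permissions:
  - name: "demo1"
    ips: "192.168.0.0/16"
    permissions: "READ_WRITE"
    rootsquash: false
  - name: "demo2"
    ips: "10.0.0.0/8"
    permissions: "READ_ONLY"
    rootsquash: false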

The following are accepted File Volume configuration file parameters:

  • target_vsan_fileshare_datastore_urls (string): A comma-separated list of datastores for deploying file share volumes.
  • disable_target_vsan_fileshare_datastore_urls (Boolean): Deactivates target_vsan_fileshare_datastore_urls. Values: true, false. Default value: false.
  • net_permissions (Array): Properties defining a NetPermissions object.
  • disable_net_permissions (Boolean): Deactivates net_permissions. Values: true, false. Default value: false.

The following are supported NetPermission object configuration file parameters:

  • name (string): Name of the NetPermission object.
  • ips (string): IP range or IP subnet affected by the NetPermission restrictions. Default value: "*".
  • permissions (string): Access permission to the file share volume. Values: "READ_WRITE", "READ_ONLY", "NO_ACCESS". Default value: "READ_WRITE".
  • rootsquash (Boolean): Security access level for the file share volume. Values: true, false. Default value: false.

Create or Use CNS Block Volumes

To dynamically provision a block volume using the vSphere CSI Driver:

  1. Create a vSphere Storage Class
  2. Create a PersistentVolumeClaim
  3. Create Workloads Using Persistent Volumes

For more information on vSphere CSI Driver configuration, see the example/vanilla-k8s-block-driver configuration for the CSI driver version you are using in vsphere-csi-driver in the VMware kubernetes-sigs GitHub repo.

Create a vSphere Storage Class

To create a vSphere Storage Class:

  1. Open vCenter.
  2. Open the vSAN Datastore Summary pane.

    vSAN Datastore Summary pane in vCenter

  3. Determine the datastoreurl value for your Datastore. You can also retrieve this value from the command line, as shown in the sketch after this procedure.

  4. Create the following YAML:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: demo-sts-storageclass
      annotations:
          storageclass.kubernetes.io/is-default-class: "true"
    provisioner: csi.vsphere.vmware.com
    allowVolumeExpansion: ALLOW-EXPANSION
    parameters:
      datastoreurl: "DATASTORE-URL"
    

    Where:

    • ALLOW-EXPANSION defines whether persistent volumes created with this StorageClass can be resized. Set to true to allow volume expansion or false for a fixed size.
    • DATASTORE-URL is the URL to your Datastore. For a non-vSAN datastore, the datastoreurl value looks like ds:///vmfs/volumes/5e66e525-8e46bd39-c184-005056ae28de/.

    For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: demo-sts-storageclass
      annotations:
          storageclass.kubernetes.io/is-default-class: "true"
    provisioner: csi.vsphere.vmware.com
    allowVolumeExpansion: true
    parameters:
      datastoreurl: "ds:///vmfs/volumes/vsan:52d8eb4842dbf493-41523be9cd4ff7b7/"
    

For more information about StorageClass, see Storage Classes in the Kubernetes documentation.
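
If you prefer to retrieve the datastoreurl from the command line instead of the vSphere client, you can use the govc CLI. The following is a sketch only and assumes govc is installed and configured to point at your vCenter (for example, through the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables); DATASTORE-NAME is a placeholder for the name of your datastore:

    # Print datastore details and filter for the URL line, which contains the datastoreurl value
    govc datastore.info DATASTORE-NAME | grep -i url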

Create a PersistentVolumeClaim

To create a Persistent Volume using the vSphere CSI Driver:

  1. Create a Storage Class. For more information, see Create a vSphere Storage Class below.
  2. To apply the StorageClass configuration:

    kubectl apply -f CONFIG-FILE
    

    Where CONFIG-FILE is the name of your StorageClass configuration file.

  3. Create the PersistentVolumeClaim configuration for the file volume. For information about configuring a PVC, see Persistent Volumes in the Kubernetes documentation.

    For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-vanilla-block-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: example-vanilla-block-sc
    
  4. To apply the PVC configuration:

    kubectl apply -f CONFIG-FILE 
    

    Where CONFIG-FILE is the name of your PVC configuration file.
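
    To verify that the StorageClass and PersistentVolumeClaim were created and that the claim is bound to a dynamically provisioned volume, you can run the following checks. This is a sketch that assumes the object names used in the examples above:

    # Confirm the StorageClass exists and is marked as the default
    kubectl get storageclass demo-sts-storageclass

    # Confirm the PVC status is Bound
    kubectl get pvc example-vanilla-block-pvc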

Create Workloads Using Persistent Volumes

  1. Create a Pod configuration file containing volumeMounts and volumes parameters.

    For example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-vanilla-block-pod
    spec:
      containers:
        - name: test-container
          image: gcr.io/google_containers/busybox:1.24
          command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html  && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
          volumeMounts:
            - name: test-volume
              mountPath: /mnt/volume1
      restartPolicy: Never
      volumes:
        - name: test-volume
          persistentVolumeClaim:
            claimName: example-vanilla-block-pvc
    

  2. To apply the Pod configuration to your workload:

    kubectl apply -f CONFIG-FILE 
    

    Where CONFIG-FILE is the name of your configuration file.
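
    To confirm that the Pod started and that the persistent volume mounted correctly, you can check the Pod status and read back the file written by the container. This is a sketch that assumes the Pod and PVC names from the examples above:

    # Confirm the Pod is Running and its claim is Bound
    kubectl get pod example-vanilla-block-pod
    kubectl get pvc example-vanilla-block-pvc

    # Read the file the container wrote to the mounted volume
    kubectl exec example-vanilla-block-pod -- cat /mnt/volume1/index.html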

For more information and examples of Pod configurations, see the example configurations for the CSI driver version you are using in vsphere-csi-driver in the VMware kubernetes-sigs GitHub repo.

Uninstall a Manually Installed vSphere CSI Driver

If your administrator has enabled automatic vSphere CSI Driver Integration and you have a cluster that uses a manually installed vSphere CSI Driver, the manually installed driver will no longer function after upgrading the cluster.

To uninstall a manually installed CSI driver:

  1. Confirm your TKGI administrator has enabled automatic vSphere CSI Driver Integration on the TKGI tile.
  2. Upgrade your Kubernetes cluster to the current TKGI version of the TKGI tile.
  3. Remove the manually installed CSI driver from the cluster. For more information, see Remove a Manually Installed vSphere CSI Driver below.
  4. To restart CSI jobs on all worker nodes:

    bosh -d DEPLOYMENT ssh worker "sudo monit restart csi-node"
    bosh -d DEPLOYMENT ssh worker "sudo monit restart csi-node-registrar"
    

    Where:

    • DEPLOYMENT is the name of the deployment.
  5. To verify that the CSI jobs on all control plane nodes are in a running state:

    bosh -d DEPLOYMENT ssh master "sudo monit summary | grep csi"
    

    Where:

    • DEPLOYMENT is the name of the deployment.
  6. If a CSI job is not in a running state, start the CSI job:

    bosh -d DEPLOYMENT ssh NODE-VM "sudo monit start JOB-NAME"
    

    Where:

    • DEPLOYMENT is the name of the deployment.
    • NODE-VM is the control plane node VM.
    • JOB-NAME is the name of the CSI job to start.

Remove a Manually Installed vSphere CSI Driver

If you have a cluster that uses a manually installed vSphere CSI Driver, and you upgrade the cluster after your administrator has enabled automatic vSphere CSI Driver Integration, you should remove the manually installed driver. While automatic vSphere CSI Driver Integration is enabled, TKGI enables the integrated driver for a cluster after upgrading it, and the cluster’s manually installed driver no longer functions.

To remove a manually installed vSphere CSI driver:

  1. Run the following command:

    kubectl delete -f vsphere-csi-driver.yaml
    

Migrate an In-Tree vSphere Storage Volume to the vSphere CSI Driver

You can use tkgi update-cluster to migrate the PersistentVolume (PV) and PersistentVolumeClaim (PVC) on an existing TKGI cluster from the In-Tree vSphere Storage Driver to the automatically installed vSphere CSI Driver.

Warning: Due to Known Issues in the vSphere CSI Driver, VMware recommends that you migrate to the vSphere CSI Driver only after upgrading to TKGI v1.13.6 or later.
For more information, see VMDKs Are Deleted during Migration from In-Tree Storage to CSI in the Release Notes.

Migrating a TKGI cluster from the In-Tree vSphere Storage Driver to the vSphere CSI Driver requires the following:

  • You must use TKGI CLI v1.12 or later.
  • TKGI automatic vSphere CSI Driver integration must be enabled. For information on enabling the vSphere CSI Driver Integration option on the TKGI tile, see Storage in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere.
  • TKGI must be installed on vSphere v7.0 U2 or later.
  • The cluster must be a Linux TKGI cluster.


To migrate a cluster from an In-Tree vSphere Storage Driver to the vSphere CSI Driver:

  1. Upgrade your Kubernetes cluster to the current TKGI version of the TKGI tile.
  2. Review and complete all relevant steps documented in the vSphere CSI Migration documentation:

    • Prerequisites for Installing the vSphere Container Storage Plug-in
    • Migrating In-Tree vSphere Volumes to vSphere Container Storage Plug-in
    • vSphere Container Storage Plug-in Upgrade Considerations and Guidelines

    Warning: Before migrating to the vSphere CSI driver, confirm your cluster's volume storage is configured as described in Considerations for Migration of In-Tree vSphere Volumes in the VMware vSphere Container Storage Plug-in documentation.

  3. Create a configuration file containing the following:

    {
        "enable_csi_migration": "true"
    }
    
  4. To migrate your cluster to the vSphere CSI Driver:

    tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of the config file you created in the preceding steps.

Note: You cannot migrate the PV or the PVC on a cluster from the vSphere CSI Driver to the In-Tree vSphere Storage Driver.
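
After the update completes, you can spot-check that existing volumes were migrated. In upstream Kubernetes CSI migration, migrated in-tree PersistentVolumes typically carry the pv.kubernetes.io/migrated-to: csi.vsphere.vmware.com annotation; the following sketch assumes that behavior and uses PV-NAME as a placeholder for one of your volume names:

    # List PersistentVolumes in the cluster
    kubectl get pv

    # Inspect a volume for the CSI migration annotation
    kubectl describe pv PV-NAME | grep migrated-to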
