For file volumes, the provisioned PersistentVolume does not have node affinity rules published on it. This allows applications in one availability zone to access file volumes provisioned in other availability zones, even when you choose a zone for the file volumes.

To deploy workloads with Immediate binding mode in a topology-aware environment, you must specify zone parameters in the StorageClass.

Prerequisites

Enable topology in the native Kubernetes cluster in your vSphere environment. For more information, see Deploy vSphere Container Storage Plug-in with Topology.
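To confirm that topology is enabled, you can check that your nodes carry the topology labels that the plug-in applies. A sample check, assuming the label keys shown later in this procedure:

$ kubectl get nodes -L topology.csi.vmware.com/k8s-region -L topology.csi.vmware.com/k8s-zone

Each node should report a region and zone value in the extra columns.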

Procedure

  1. Create a StorageClass with Immediate volume binding mode.
    If you do not specify a volume binding mode, it defaults to Immediate.
    You can define network permissions in the VSPHERE_CSI_CONFIG secret to restrict volume provisioning to specific networks. To define network permissions, see Create a Kubernetes Secret for vSphere Container Storage Plug-in.
    You can also specify zone parameters. In the following example, the StorageClass can provision volumes in either zone-a or zone-b.
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: example-multi-zones-sc
    provisioner: csi.vsphere.vmware.com
    allowedTopologies:
      - matchLabelExpressions:
          - key: topology.csi.vmware.com/k8s-region
            values:
              - region-1
          - key: topology.csi.vmware.com/k8s-zone
            values:
              - zone-a
              - zone-b
    parameters:
      csi.storage.k8s.io/fstype: "nfs4"
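    As an illustration of the network permissions mentioned above, the vSphere configuration carried in the secret can include a NetPermissions section; a hedged sketch, with the section name and IP range chosen for this example:

    [NetPermissions "ExampleRestriction"]
    ips = "10.20.20.0/24"
    permissions = "READ_WRITE"
    rootsquash = false

    With such a section present, file volumes are exported only to the listed IP range with the stated permissions.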
  2. Create a PersistentVolumeClaim to use the StorageClass.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-multi-zones-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Mi
      storageClassName: example-multi-zones-sc
  3. Verify that the PVC is bound.
    $ kubectl get pvc example-multi-zones-pvc
    NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
    example-multi-zones-pvc   Bound    pvc-012cc523-03f0-45ea-9213-883362436591   100Mi      RWX            example-multi-zones-sc   3s
  4. Verify that the provisioned PV does not have any node affinity rules on it.
    $ kubectl describe pv pvc-012cc523-03f0-45ea-9213-883362436591
    Name:            pvc-012cc523-03f0-45ea-9213-883362436591
    Labels:          <none>
    Annotations:     pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:    example-multi-zones-sc
    Status:          Bound
    Claim:           default/example-multi-zones-pvc
    Reclaim Policy:  Delete
    Access Modes:    RWX
    VolumeMode:      Filesystem
    Capacity:        100Mi
    Node Affinity:   <none>
    Message:        
    Source:
        Type:              CSI (a Container Storage Interface (CSI) volume source)
        Driver:            csi.vsphere.vmware.com
        FSType:            nfs4
        VolumeHandle:      file:fd60964d-d956-42bf-8fe5-37534dc4861a
        ReadOnly:          false
        VolumeAttributes:      storage.kubernetes.io/csiProvisionerIdentity=1704350366618-902-csi.vsphere.vmware.com
                               type=vSphere CNS File Volume
    Events:                <none>
  5. Create an application to use the PVC.
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-multi-zones-pod
    spec:
      containers:
        - name: test-container
          image: gcr.io/google_containers/busybox:1.24
          command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html  && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
          volumeMounts:
            - name: test-volume
              mountPath: /mnt/volume1
      restartPolicy: Never
      volumes:
        - name: test-volume
          persistentVolumeClaim:
            claimName: example-multi-zones-pvc

Results

Because the PV carries no node affinity rules, the pod can be scheduled in any zone, not necessarily the zone where the volume has been provisioned. In this example, the pod lands on a node in zone-c.
$ kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
example-multi-zones-pod   1/1     Running   0          3m53s   10.244.6.34   k8s-node-3   <none>           <none>
  
$ kubectl get node k8s-node-3 --show-labels
NAME         STATUS   ROLES    AGE  VERSION   LABELS
k8s-node-3   Ready    <none>   2d   v1.21.1   topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-c
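If you want to keep the workload in a particular zone even though the volume itself is zone-agnostic, standard Kubernetes scheduling constraints still apply; a minimal sketch using a nodeSelector on the zone label (the pod name and chosen zone are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: example-zone-pinned-pod
spec:
  nodeSelector:
    topology.csi.vmware.com/k8s-zone: zone-a
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh", "-c", "while true ; do sleep 2 ; done"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt/volume1
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: example-multi-zones-pvc

The pod is then scheduled only on nodes labeled with the selected zone, while the file volume remains accessible from any zone.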