This section describes how to deploy workloads with the WaitForFirstConsumer volume binding mode in a topology-aware environment for file volumes.

Prerequisites

Enable topology in the native Kubernetes cluster in your vSphere environment. For more information, see Deploy vSphere Container Storage Plug-in with Topology.

Procedure

  1. Create a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer.
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: topology-aware-file-volume
    provisioner: csi.vsphere.vmware.com
    volumeBindingMode: WaitForFirstConsumer
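    To register the StorageClass, apply the manifest with kubectl and verify that it was created. The file name below is hypothetical; use the name you saved the manifest under:
    $ kubectl apply -f topology-aware-file-volume-sc.yaml
    $ kubectl get sc topology-aware-file-volume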
  2. Create an application that uses the StorageClass created in the previous step.
    Instead of provisioning a volume immediately, the WaitForFirstConsumer setting instructs the volume provisioner to wait until a pod that uses the associated PVC has been scheduled. In contrast with the Immediate volume binding mode, with WaitForFirstConsumer the Kubernetes scheduler drives the decision of which failure domain to use for volume provisioning, based on the pod's scheduling constraints, such as the node affinity and pod anti-affinity rules in the following example.
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      serviceName: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: topology.csi.vmware.com/k8s-zone
                    operator: In
                    values:
                    - zone-a
                    - zone-b
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: app
                    operator: In
                    values:
                    - nginx
                topologyKey: topology.csi.vmware.com/k8s-zone
          containers:
            - name: nginx
              image: gcr.io/google_containers/nginx-slim:0.8
              ports:
                - containerPort: 80
                  name: web
              volumeMounts:
                - name: www
                  mountPath: /usr/share/nginx/html
                - name: logs
                  mountPath: /logs
      volumeClaimTemplates:
        - metadata:
            name: www
          spec:
            accessModes: [ "ReadWriteMany" ]
            storageClassName: topology-aware-file-volume
            resources:
              requests:
                storage: 2Gi
        - metadata:
            name: logs
          spec:
            accessModes: [ "ReadWriteMany" ]
            storageClassName: topology-aware-file-volume
            resources:
              requests:
                storage: 1Gi
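    The StatefulSet references a governing Service through serviceName: nginx. If no such Service exists in the namespace yet, a minimal headless Service along these lines (a sketch, not part of the original example) satisfies that reference:
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      clusterIP: None   # headless Service, as required by a StatefulSet's serviceName
      selector:
        app: nginx
      ports:
        - port: 80
          name: web
    Apply the manifests (the file names below are hypothetical) and watch the claims. With WaitForFirstConsumer, each PVC, named <claim-template>-<pod> such as www-web-0, remains Pending until its pod is scheduled:
    $ kubectl apply -f nginx-service.yaml -f web-statefulset.yaml
    $ kubectl get pvc --watch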
  3. Verify that the StatefulSet is ready and that its pods are in the Running state, evenly distributed across zone-a and zone-b.
    $ kubectl get statefulset
     NAME   READY   AGE
     web    2/2     3m51s
      
    $ kubectl get pods -o wide
    NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
    web-0                     1/1     Running   0          4m40s   10.244.3.21   k8s-node-2   <none>           <none>
    web-1                     1/1     Running   0          4m12s   10.244.4.25   k8s-node-1   <none>           <none>
      
    $ kubectl get node k8s-node-1 k8s-node-2 --show-labels
    NAME         STATUS   ROLES    AGE  VERSION   LABELS
    k8s-node-1   Ready    <none>   2d   v1.21.1   topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-a
    k8s-node-2   Ready    <none>   2d   v1.21.1   topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-b
    Note that, for file volumes, node affinity rules are not published on the PV.
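    You can confirm this by inspecting the PVs directly. The jsonpath query below (one way to express the check) prints each PV name together with its nodeAffinity field, which remains empty for file volumes:
    $ kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeAffinity}{"\n"}{end}'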