This example demonstrates how to deploy a StatefulSet application to a TKG cluster on Supervisor.

Storage Class

The storage class is available in two editions. For this deployment we use the *-latebinding edition, whose WaitForFirstConsumer volume binding mode delays volume provisioning until a pod that uses the claim is scheduled.
kubectl get sc
NAME                              PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
zonal-ds-policy-105               csi.vsphere.vmware.com   Delete          Immediate              true                   17h
zonal-ds-policy-105-latebinding   csi.vsphere.vmware.com   Delete          WaitForFirstConsumer   true                   17h
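On Supervisor these storage classes are generated automatically from vSphere storage policies rather than authored by hand, but for reference, a late-binding class of this shape corresponds roughly to the following manifest (sketch only; the CSI parameters tying the class to a specific storage policy are omitted here):

```yaml
# Illustrative sketch: on Supervisor this object is created automatically
# from a vSphere storage policy and should not be applied manually.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zonal-ds-policy-105-latebinding
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
allowVolumeExpansion: true
# WaitForFirstConsumer defers PersistentVolume creation until a consuming
# pod is scheduled, so the volume is placed in that pod's zone.
volumeBindingMode: WaitForFirstConsumer
```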

Multi-Zoned Supervisor Topology

The TKG cluster is provisioned on the Supervisor across three vSphere Zones, with one control plane node and one worker node pool per zone.
kubectl get nodes -L topology.kubernetes.io/zone
NAME                                                            STATUS   ROLES                  AGE   VERSION            ZONE
test-cluster-e2e-script-105-m72sb-2dnsz                         Ready    control-plane,master   18h   v1.22.6+vmware.1   zone-1
test-cluster-e2e-script-105-m72sb-rmtjn                         Ready    control-plane,master   18h   v1.22.6+vmware.1   zone-2
test-cluster-e2e-script-105-m72sb-rvhb8                         Ready    control-plane,master   18h   v1.22.6+vmware.1   zone-3
test-cluster-e2e-script-105-nodepool-1-p86fm-6dfcdc77b7-fxm4s   Ready    <none>                 18h   v1.22.6+vmware.1   zone-1
test-cluster-e2e-script-105-nodepool-2-gx5gs-7cf4895b77-6wlb4   Ready    <none>                 18h   v1.22.6+vmware.1   zone-2
test-cluster-e2e-script-105-nodepool-3-fkkc9-856cd45985-d8nsl   Ready    <none>                 18h   v1.22.6+vmware.1   zone-3

StatefulSet

The StatefulSet (sts.yaml) deploys the application as three replicas, each with a persistent identifier that is maintained across any rescheduling. The node affinity rule restricts pods to the three zones, and the pod anti-affinity rule, keyed on topology.kubernetes.io/zone, ensures that no two replicas are scheduled in the same zone.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone 
                operator: In
                values:
                - zone-1
                - zone-2
                - zone-3
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: topology.kubernetes.io/zone
      containers:
        - name: nginx
          image: gcr.io/google_containers/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
            - name: logs
              mountPath: /logs
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: zonal-ds-policy-105-latebinding
        resources:
          requests:
            storage: 2Gi
    - metadata:
        name: logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: zonal-ds-policy-105-latebinding 
        resources:
          requests:
            storage: 1Gi
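Before creating the StatefulSet, you can optionally validate the manifest client-side without sending any changes to the cluster (assumes kubectl v1.18 or later, where the --dry-run=client flag is available):

```shell
# Validate sts.yaml locally; nothing is created on the cluster
kubectl apply --dry-run=client -f sts.yaml
```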

Deploy the Application

Deploy the StatefulSet application as follows. On successful deployment, the application pods are scheduled across the three vSphere Zones, and because the storage class uses the WaitForFirstConsumer (late binding) volume binding mode, each PersistentVolume is provisioned in the zone where its pod is scheduled.
  1. Deploy the StatefulSet.
    kubectl create -f sts.yaml
    statefulset.apps/web created
  2. Verify the StatefulSet.
    kubectl get statefulset
    NAME   READY   AGE
    web    3/3     112s
    
  3. Verify the pods.
    kubectl get pods -o wide
    NAME    READY   STATUS    RESTARTS   AGE    IP           NODE                                                            NOMINATED NODE   READINESS GATES
    web-0   1/1     Running   0          117s   172.16.1.2   test-cluster-e2e-script-105-nodepool-3-fkkc9-856cd45985-d8nsl   <none>           <none>
    web-1   1/1     Running   0          90s    172.16.2.2   test-cluster-e2e-script-105-nodepool-2-gx5gs-7cf4895b77-6wlb4   <none>           <none>
    web-2   1/1     Running   0          53s    172.16.3.2   test-cluster-e2e-script-105-nodepool-1-p86fm-6dfcdc77b7-fxm4s   <none>           <none>
  4. Verify pod scheduling across zones and late volume binding.
    kubectl get pv -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.claimRef.name}{"\t"}{.spec.nodeAffinity}{"\n"}{end}'
    pvc-7010597f-31cf-4ab1-bbd7-98ac04e0c603	www-web-2	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-1"]}]}]}}
    pvc-921fadfc-df89-456d-a341-00f4117035f8	logs-web-0	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-3"]}]}]}}
    pvc-bcb46a24-58cb-4ec7-a964-391fe80400fc	www-web-1	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-2"]}]}]}}
    pvc-f51a44e5-ec19-4bec-b67a-3e34512049b8	www-web-0	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-3"]}]}]}}
    pvc-fa68887a-31dd-4d9e-bb39-88653a9d80c9	logs-web-2	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-1"]}]}]}}
    pvc-fc2cd6f7-b033-48ee-892d-df5318ec6f3e	logs-web-1	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-2"]}]}]}}
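To cross-check that each pod landed in the zone its volumes were provisioned in, you can print the zone label of every node that hosts one of the pods and compare it against the PV node-affinity output above (run against the same cluster context; the output will vary per deployment):

```shell
# For each node hosting a StatefulSet pod, show the node's zone label;
# compare the zones against the PV node affinity printed above.
for node in $(kubectl get pods -o jsonpath='{.items[*].spec.nodeName}'); do
  kubectl get node "$node" -L topology.kubernetes.io/zone --no-headers
done
```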