This example shows how to deploy a StatefulSet application on a TKG cluster on Supervisor.

Storage Class

Two editions of the storage class are available. This deployment uses the *-latebinding version, whose WaitForFirstConsumer volume binding mode defers volume provisioning until a consuming pod is scheduled.
kubectl get sc
NAME                              PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
zonal-ds-policy-105               csi.vsphere.vmware.com   Delete          Immediate              true                   17h
zonal-ds-policy-105-latebinding   csi.vsphere.vmware.com   Delete          WaitForFirstConsumer   true                   17h
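
To check just the binding mode of a class, one option (a sketch using standard kubectl JSONPath output; the class name comes from the listing above) is:
kubectl get sc zonal-ds-policy-105-latebinding -o jsonpath='{.volumeBindingMode}{"\n"}'
For this class the command prints WaitForFirstConsumer, matching the VOLUMEBINDINGMODE column above.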

Multi-Zone Supervisor Topology

The TKG cluster on Supervisor is provisioned across vSphere Zones.
kubectl get nodes -L topology.kubernetes.io/zone
NAME                                                            STATUS   ROLES                  AGE   VERSION            ZONE
test-cluster-e2e-script-105-m72sb-2dnsz                         Ready    control-plane,master   18h   v1.22.6+vmware.1   zone-1
test-cluster-e2e-script-105-m72sb-rmtjn                         Ready    control-plane,master   18h   v1.22.6+vmware.1   zone-2
test-cluster-e2e-script-105-m72sb-rvhb8                         Ready    control-plane,master   18h   v1.22.6+vmware.1   zone-3
test-cluster-e2e-script-105-nodepool-1-p86fm-6dfcdc77b7-fxm4s   Ready    <none>                 18h   v1.22.6+vmware.1   zone-1
test-cluster-e2e-script-105-nodepool-2-gx5gs-7cf4895b77-6wlb4   Ready    <none>                 18h   v1.22.6+vmware.1   zone-2
test-cluster-e2e-script-105-nodepool-3-fkkc9-856cd45985-d8nsl   Ready    <none>                 18h   v1.22.6+vmware.1   zone-3
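
To list each node together with its zone label in script-friendly form, one option (a sketch using kubectl JSONPath; dots in the label key are escaped as JSONPath requires) is:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.topology\.kubernetes\.io/zone}{"\n"}{end}'
This prints the same node-to-zone mapping as the -L column above, one node per line.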

StatefulSet

The StatefulSet (sts.yaml) deploys the application as a set of pods, each with a persistent identifier that is retained across rescheduling.
 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        # Restrict scheduling to nodes labeled with one of the three vSphere Zones.
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - zone-1
                - zone-2
                - zone-3
        # Require one pod per zone: no two pods of this app may share a zone.
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: topology.kubernetes.io/zone
      containers:
        - name: nginx
          image: gcr.io/google_containers/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
            - name: logs
              mountPath: /logs
  # Each pod gets its own PVCs; the late-binding class defers provisioning
  # until the pod is scheduled, so each volume is created in the pod's zone.
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: zonal-ds-policy-105-latebinding
        resources:
          requests:
            storage: 2Gi
    - metadata:
        name: logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: zonal-ds-policy-105-latebinding
        resources:
          requests:
            storage: 1Gi
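
The manifest sets serviceName: nginx, so the StatefulSet expects a governing headless Service with that name. The Service definition is not part of the manifest above; a minimal sketch of what it would typically look like follows.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None  # headless: gives each pod a stable DNS identity
  selector:
    app: nginx
  ports:
  - port: 80
    name: web
Create the Service before or together with the StatefulSet so that the per-pod DNS records resolve.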

Deploy the Application

Deploy the StatefulSet application as follows. Once the deployment completes successfully, the application pods are scheduled across the 3 vSphere Zones, and because the storage class uses the WaitForFirstConsumer (late binding) volume binding mode, each volume is provisioned in the zone where its pod lands.
  1. Deploy the StatefulSet.
    kubectl create -f sts.yaml
    statefulset.apps/web created
  2. Verify the StatefulSet.
    kubectl get statefulset
    NAME   READY   AGE
    web    3/3     112s
    
  3. Verify the pods.
    kubectl get pods -o wide
    NAME    READY   STATUS    RESTARTS   AGE    IP           NODE                                                            NOMINATED NODE   READINESS GATES
    web-0   1/1     Running   0          117s   172.16.1.2   test-cluster-e2e-script-105-nodepool-3-fkkc9-856cd45985-d8nsl   <none>           <none>
    web-1   1/1     Running   0          90s    172.16.2.2   test-cluster-e2e-script-105-nodepool-2-gx5gs-7cf4895b77-6wlb4   <none>           <none>
    web-2   1/1     Running   0          53s    172.16.3.2   test-cluster-e2e-script-105-nodepool-1-p86fm-6dfcdc77b7-fxm4s   <none>           <none>
  4. Verify pod scheduling across zones and late volume binding (a cross-check sketch follows this list).
    kubectl get pv -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.claimRef.name}{"\t"}{.spec.nodeAffinity}{"\n"}{end}'
    pvc-7010597f-31cf-4ab1-bbd7-98ac04e0c603	www-web-2	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-1"]}]}]}}
    pvc-921fadfc-df89-456d-a341-00f4117035f8	logs-web-0	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-3"]}]}]}}
    pvc-bcb46a24-58cb-4ec7-a964-391fe80400fc	www-web-1	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-2"]}]}]}}
    pvc-f51a44e5-ec19-4bec-b67a-3e34512049b8	www-web-0	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-3"]}]}]}}
    pvc-fa68887a-31dd-4d9e-bb39-88653a9d80c9	logs-web-2	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-1"]}]}]}}
    pvc-fc2cd6f7-b033-48ee-892d-df5318ec6f3e	logs-web-1	{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"topology.kubernetes.io/zone","operator":"In","values":["zone-2"]}]}]}}
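
To cross-check that each pod landed in the zone its volumes were pinned to, one option (a sketch in plain bash over standard kubectl; the pod names match step 3) is:
# For each pod, print the node it runs on and that node's zone label.
for pod in web-0 web-1 web-2; do
  node=$(kubectl get pod "$pod" -o jsonpath='{.spec.nodeName}')
  zone=$(kubectl get node "$node" -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}')
  echo "$pod -> $node ($zone)"
done
Each pod's zone should match the zone in the nodeAffinity of the PVs bound to its claims, confirming that late binding placed each volume with its pod.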