Refer to the example YAML to provision a TanzuKubernetesCluster across vSphere Zones using the v1alpha3 API.
vSphere Zones and Failure Domains
vSphere Zones provide a way to create highly available TKG clusters on Supervisor. If you are provisioning a TKG cluster across vSphere Zones, you must provide the failure domain for each node pool.
Each failure domain maps to a vSphere Zone, which in turn is associated with one vSphere cluster. Failure domains, also known as vSphere Fault Domains, are defined and managed by the vSphere administrator when creating vSphere Zones. The storage policy you use for the TKG cluster must be configured as zonal. See Create a vSphere Storage Policy for TKG Service Clusters.
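Once the zonal storage policy is associated with the vSphere Namespace, it surfaces as a Kubernetes storage class. As a quick sanity check before provisioning, you can list the storage classes visible from your context; the class name tkg-storage-policy-zonal is an assumption matching the example below, so substitute your own zonal policy name:
kubectl get storageclass
kubectl describe storageclass tkg-storage-policy-zonal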
When you deploy pods with replicas to a TKG cluster on Supervisor, the pod instances are automatically spread across the vSphere Zones. You do not need to provide zone details when deploying pods on the TKG cluster.
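For example, a standard Deployment is sufficient: the scheduler spreads the replicas across the zone-pinned worker nodes without any zone-specific configuration. The following is a minimal sketch; the name and image are illustrative placeholders, not part of the product example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-spread-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-spread-demo
  template:
    metadata:
      labels:
        app: nginx-spread-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.25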
To list the vSphere Zones available on Supervisor, run either of the following commands:
kubectl get vspherezones
kubectl get availabilityzones
Both commands are available to system:authenticated users. vSphere Zones are Supervisor-scoped resources, so you do not need to specify a namespace.
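The zone names these commands return are the values to supply for the failureDomain field. To inspect an individual zone, kubectl describe works as with any other resource; the zone name az1 here is a placeholder matching the example that follows:
kubectl describe vspherezone az1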
v1alpha3 Example: TKC Across vSphere Zones
The example YAML provisions a TKG cluster across vSphere Zones using the failureDomain parameter for each nodePool. The value of the parameter is the name of the vSphere Zone.
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkc-zoned
  namespace: tkg-cluster-ns
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: tkg-storage-policy-zonal
      tkr:
        reference:
          name: v1.25.7---vmware.3-fips.1-tkg.1
    nodePools:
    - name: nodepool-a01
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: tkg-storage-policy-zonal
      failureDomain: az1
    - name: nodepool-a02
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: tkg-storage-policy-zonal
      failureDomain: az2
    - name: nodepool-a03
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: tkg-storage-policy-zonal
      failureDomain: az3
  settings:
    storage:
      defaultClass: tkg-storage-policy-zonal
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]
      serviceDomain: cluster.local
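To provision the cluster, apply the manifest against the Supervisor while your context is set to the target vSphere Namespace. A minimal sketch, assuming the manifest is saved as tkc-zoned.yaml:
kubectl apply -f tkc-zoned.yaml
kubectl get tanzukubernetescluster tkc-zoned -n tkg-cluster-ns
You can rerun the get command to watch the cluster status as the control plane and zone-pinned node pools come online.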