To enable multiple availability zones for workload clusters on vSphere, two new custom resource definitions (CRDs) have been introduced in Cluster API Provider vSphere (CAPV):

The VSphereFailureDomain CRD captures the region- and zone-specific tagging information and the topology definition, which includes the vSphere datacenter, cluster, host, and datastore information.
The VSphereDeploymentZone CRD captures the association of a VSphereFailureDomain with placement constraint information for the Kubernetes node.

Note: This feature is in the unsupported Technical Preview state; see TKG Feature States.
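To confirm that both CRDs are present on your management cluster before you begin, you can query them by their fully qualified names, which follow from the infrastructure.cluster.x-k8s.io API group used in the examples below:

kubectl get crd vspherefailuredomains.infrastructure.cluster.x-k8s.io vspheredeploymentzones.infrastructure.cluster.x-k8s.io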
The configurations in this topic spread the Kubernetes control plane and worker nodes across vSphere objects, namely hosts, compute clusters, and datacenters.
The example in this section shows how to achieve multiple availability zones by spreading nodes across multiple compute clusters.
Create the custom resources for defining the region and zones. All of the definitions must satisfy the following requirements:

The spec.region, spec.zone, and spec.topology values must match what you have configured in vCenter.
In the VSphereDeploymentZone objects, the spec.failureDomain value must match one of the metadata.name values of the VSphereFailureDomain definitions.
The spec.server value in the VSphereDeploymentZone objects must match the vCenter server address (IP or FQDN) entered for VCENTER SERVER in the installer interface IaaS Provider pane or the VSPHERE_SERVER setting in the management cluster configuration file.
metadata.name values must be all lowercase.

To spread the Kubernetes nodes for a workload cluster across multiple compute clusters within a datacenter, you must create custom resources. This example describes three deployment zones named us-west-1a, us-west-1b, and us-west-1c, each of which is a compute cluster with its own network and storage parameters.
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereFailureDomain
metadata:
  name: us-west-1a
spec:
  region:
    name: us-west-1
    type: Datacenter
    tagCategory: k8s-region
  zone:
    name: us-west-1a
    type: ComputeCluster
    tagCategory: k8s-zone
  topology:
    datacenter: dc0
    computeCluster: cluster1
    datastore: ds-c1
    networks:
    - net1
    - net2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereFailureDomain
metadata:
  name: us-west-1b
spec:
  region:
    name: us-west-1
    type: Datacenter
    tagCategory: k8s-region
  zone:
    name: us-west-1b
    type: ComputeCluster
    tagCategory: k8s-zone
  topology:
    datacenter: dc0
    computeCluster: cluster2
    datastore: ds-c2
    networks:
    - net3
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereFailureDomain
metadata:
  name: us-west-1c
spec:
  region:
    name: us-west-1
    type: Datacenter
    tagCategory: k8s-region
  zone:
    name: us-west-1c
    type: ComputeCluster
    tagCategory: k8s-zone
  topology:
    datacenter: dc0
    computeCluster: cluster3
    datastore: ds-c3
    networks:
    - net4
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereDeploymentZone
metadata:
  name: us-west-1a
spec:
  server: VSPHERE_SERVER
  failureDomain: us-west-1a
  placementConstraint:
    resourcePool: pool1
    folder: foo
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereDeploymentZone
metadata:
  name: us-west-1b
spec:
  server: VSPHERE_SERVER
  failureDomain: us-west-1b
  placementConstraint:
    resourcePool: pool2
    folder: bar
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereDeploymentZone
metadata:
  name: us-west-1c
spec:
  server: VSPHERE_SERVER
  failureDomain: us-west-1c
  placementConstraint:
    resourcePool: pool3
    folder: baz
Where VSPHERE_SERVER is the IP address or FQDN of your vCenter server.

If different compute clusters have identically named resource pools, set the VSphereDeploymentZone objects' spec.placementConstraint.resourcePool to a full resource path, not just the name.

Note: For VSphereDeploymentZone objects, spec.placementConstraint is optional.
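For example, with the dc0/cluster1 inventory above, a fully qualified resource pool path might look like the following; the exact path depends on your vCenter inventory layout:

placementConstraint:
  resourcePool: /dc0/host/cluster1/Resources/pool1
  folder: foo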
Tag the vSphere objects.
From the first VSphereFailureDomain CR, named us-west-1a, use govc to apply the following tags to the datacenter dc0 and the compute cluster cluster1.
govc tags.attach -c k8s-region us-west-1 /dc0
govc tags.attach -c k8s-zone us-west-1a /dc0/host/cluster1
Similarly, perform the following tagging operations for the other compute clusters.
govc tags.attach -c k8s-zone us-west-1b /dc0/host/cluster2
govc tags.attach -c k8s-zone us-west-1c /dc0/host/cluster3
You can skip this step if spec.region.autoConfigure and spec.zone.autoConfigure are set to true when creating the VSphereFailureDomain CRs.
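If you tag the objects manually and the tag categories and tags do not yet exist in vCenter, create them first with govc. A minimal sketch, assuming the category and tag names used above:

govc tags.category.create -t Datacenter k8s-region
govc tags.category.create -t ClusterComputeResource k8s-zone
govc tags.create -c k8s-region us-west-1
govc tags.create -c k8s-zone us-west-1a
govc tags.create -c k8s-zone us-west-1b
govc tags.create -c k8s-zone us-west-1c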
For the next steps to deploy the cluster, see Deploy a Workload Cluster with Nodes Spread Across Availability Zones.
The example in this section spreads workload cluster nodes across three host groups in a single compute cluster.
In vCenter Server, create Host groups, for example rack1, and VM groups, for example rack1-vm-group, for each failure domain.

Alternatively, you can use govc to create the host and VM groups by running commands similar to the following, without having to create a dummy VM:
govc cluster.group.create -cluster=RegionA01-MGMT -name=rack1 -host esx-01a.corp.tanzu esx-02a.corp.tanzu
govc cluster.group.create -cluster=RegionA01-MGMT -name=rack1-vm-group -vm
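Repeat for each additional rack. A sketch for rack2 and rack3 with hypothetical host names; substitute the hosts in your own inventory:

govc cluster.group.create -cluster=RegionA01-MGMT -name=rack2 -host esx-03a.corp.tanzu esx-04a.corp.tanzu
govc cluster.group.create -cluster=RegionA01-MGMT -name=rack2-vm-group -vm
govc cluster.group.create -cluster=RegionA01-MGMT -name=rack3 -host esx-05a.corp.tanzu esx-06a.corp.tanzu
govc cluster.group.create -cluster=RegionA01-MGMT -name=rack3-vm-group -vm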
Add VM-Host affinity rules between the created VM groups and host groups so that the VMs in each VM group must run on the hosts in the corresponding host group.
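You can also create these rules with govc. A minimal sketch for rack1, assuming the group names created above; -mandatory makes this a "must run on" rule rather than a "should run on" rule:

govc cluster.rule.create -cluster=RegionA01-MGMT -name=rack1-affinity -enable -mandatory -vm-host -vm-group rack1-vm-group -host-affine-group rack1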
Create the VSphereFailureDomain and VSphereDeploymentZone custom resources in a file vsphere-zones.yaml.
All of the definitions must satisfy the following requirements:

The spec.region, spec.zone, and spec.topology values must match what you have configured in vCenter.
In the VSphereDeploymentZone objects, the spec.failureDomain value must match one of the metadata.name values of the VSphereFailureDomain definitions.
The spec.server value in the VSphereDeploymentZone objects must match the vCenter server address (IP or FQDN) entered for VCENTER SERVER in the installer interface IaaS Provider pane or the VSPHERE_SERVER setting in the management cluster configuration file.
metadata.name values must be all lowercase.

For example, the following vsphere-zones.yaml file defines three zones within a region room1, where each zone is a rack of hosts within the same cluster.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereFailureDomain
metadata:
  name: rack1
spec:
  region:
    name: room1
    type: ComputeCluster
    tagCategory: k8s-region
  zone:
    name: rack1
    type: HostGroup
    tagCategory: k8s-zone
  topology:
    datacenter: dc0
    computeCluster: cluster1
    hosts:
      vmGroupName: rack1-vm-group
      hostGroupName: rack1
    datastore: ds-r1
    networks:
    - net1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereFailureDomain
metadata:
  name: rack2
spec:
  region:
    name: room1
    type: ComputeCluster
    tagCategory: k8s-region
  zone:
    name: rack2
    type: HostGroup
    tagCategory: k8s-zone
  topology:
    datacenter: dc0
    computeCluster: cluster1
    hosts:
      vmGroupName: rack2-vm-group
      hostGroupName: rack2
    datastore: ds-r2
    networks:
    - net2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereFailureDomain
metadata:
  name: rack3
spec:
  region:
    name: room1
    type: ComputeCluster
    tagCategory: k8s-region
  zone:
    name: rack3
    type: HostGroup
    tagCategory: k8s-zone
  topology:
    datacenter: dc0
    computeCluster: cluster1
    hosts:
      vmGroupName: rack3-vm-group
      hostGroupName: rack3
    datastore: ds-c3
    networks:
    - net3
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereDeploymentZone
metadata:
  name: rack1
spec:
  server: VSPHERE_SERVER
  failureDomain: rack1
  placementConstraint:
    resourcePool: pool1
    folder: foo
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereDeploymentZone
metadata:
  name: rack2
spec:
  server: VSPHERE_SERVER
  failureDomain: rack2
  placementConstraint:
    resourcePool: pool2
    folder: bar
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereDeploymentZone
metadata:
  name: rack3
spec:
  server: VSPHERE_SERVER
  failureDomain: rack3
  placementConstraint:
    resourcePool: pool3
    folder: baz
Where VSPHERE_SERVER is the IP address or FQDN of your vCenter server.
Apply the file to create the VSphereFailureDomain and VSphereDeploymentZone objects:
kubectl apply -f vsphere-zones.yaml
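To verify that the objects were created, you can list them on the management cluster. A quick check; the output columns vary by CAPV version:

kubectl get vspherefailuredomains,vspheredeploymentzones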
Use govc to create tag categories and tags for regions and zones, to apply to the compute clusters and hosts listed in your VSphereFailureDomain CRs.
govc tags.category.create -t ClusterComputeResource k8s-region
govc tags.create -c k8s-region room1
Repeat for all regions:
govc tags.create -c k8s-region REGION
govc tags.category.create -t HostSystem k8s-zone
govc tags.create -c k8s-zone rack1
Repeat for all zones:
govc tags.create -c k8s-zone ZONE
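For the example in this section, that means creating tags for the remaining racks:

govc tags.create -c k8s-zone rack2
govc tags.create -c k8s-zone rack3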
Alternatively, you can perform the tag operations in this and the following steps from the Tags & Custom Attributes pane in vCenter.
Attach the region tags to all the compute clusters listed in the CR definitions, for example:
govc tags.attach -c k8s-region room1 /dc1/host/room1-mgmt
Use the full path for each compute cluster.
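If you are not sure of a compute cluster's inventory path, you can list all clusters with govc find, which prints their full paths:

govc find / -type c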
Attach the zone tags to all the host objects listed in the CR definitions, for example:
govc tags.attach -c k8s-zone rack1 /dc1/host/room1-mgmt/esx-01a.corp.tanzu
Use the full path for each host.
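Repeat for every host listed in each host group. A sketch for rack2, using hypothetical host names; substitute the hosts that belong to your rack2 host group:

govc tags.attach -c k8s-zone rack2 /dc1/host/room1-mgmt/esx-03a.corp.tanzu
govc tags.attach -c k8s-zone rack2 /dc1/host/room1-mgmt/esx-04a.corp.tanzu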
For the next steps to deploy the cluster, see Deploy a Workload Cluster with Nodes Spread Across Availability Zones.
After you have performed the steps in Spread Nodes Across Multiple Compute Clusters in a Datacenter or Spread Nodes Across Multiple Hosts in a Single Compute Cluster, you can deploy a workload cluster with its nodes spread across multiple availability zones.
Create the cluster configuration file for the workload cluster you are deploying.
In the configuration file:

Set VSPHERE_REGION and VSPHERE_ZONE to the region and zone tag categories, k8s-region and k8s-zone in the example above.
Set VSPHERE_AZ_0, VSPHERE_AZ_1, and VSPHERE_AZ_2 to the names of the VSphereDeploymentZone objects where the machines need to be deployed. VSPHERE_AZ_0 is the failure domain in which the machine deployment ending with md-0 gets deployed; similarly, VSPHERE_AZ_1 is the failure domain for the machine deployment ending with md-1, and VSPHERE_AZ_2 for the machine deployment ending with md-2.
WORKER_MACHINE_COUNT sets the total number of workers for the cluster. The workers are distributed in round-robin fashion across the number of AZs specified.

For the full list of options that you must specify when deploying workload clusters to vSphere, see the Configuration File Variable Reference.
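For instance, a minimal sketch of the availability-zone settings in the cluster configuration file, using the rack zones from the host-group example; with six workers, round-robin distribution places two in each zone:

VSPHERE_REGION: k8s-region
VSPHERE_ZONE: k8s-zone
VSPHERE_AZ_0: rack1
VSPHERE_AZ_1: rack2
VSPHERE_AZ_2: rack3
WORKER_MACHINE_COUNT: 6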
Run tanzu cluster create to create the workload cluster. For more information, see Create Workload Clusters.
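For example, with a hypothetical cluster name and the configuration file saved as my-cluster-config.yaml:

tanzu cluster create my-cluster --file my-cluster-config.yaml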