Refer to this section for a conceptual description of the vSphere hosts and VM configuration for running the vSphere IaaS control plane on a vSAN stretched cluster topology in active/active deployment mode.

You can operate vSphere IaaS control plane components on a vSAN stretched cluster topology in active/active deployment mode. For details on VM Groups, Host Groups, and VM/Host Rules, see the vSphere Resource Management documentation.

Active/Active Deployment Mode

In active/active deployment mode, you balance Supervisor and TKG cluster node VMs across the two vSAN stretched cluster sites using vSphere Host Groups, VM Groups, and VM to Host Affinity Rules. Because both sites are active, VMs can be placed on either site as long as the grouping and balancing described here are maintained.

The following information provides an overview of the group and rule configuration for active/active deployment. For detailed instructions, see Active/Active Configuration for vSphere IaaS Control Plane on vSAN Stretched Cluster.
Host Groups
In an active/active deployment, create two Host Groups, one for each site. Add the ESXi hosts at each site to the corresponding Host Group.
For instructions, see Create Host Group for Site 1 and for Site 2.
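The following is a minimal pyVmomi sketch of what creating the two site Host Groups could look like through the vSphere API, as an alternative to the vSphere Client steps referenced above. The vCenter Server address, credentials, cluster name, host names, and group names are placeholder assumptions, not values from this guide. The later sketches in this section reuse the content and cluster objects and the two Host Group names defined here.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim
    import ssl

    # Connect to vCenter Server (certificate verification disabled for a lab setup only).
    ssl_ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", sslContext=ssl_ctx)
    content = si.RetrieveContent()

    # Locate the vSAN stretched cluster by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "vsan-stretched-cluster")
    view.Destroy()

    hosts_by_name = {h.name: h for h in cluster.host}

    def host_group_spec(name, host_names):
        # Build a GroupSpec that adds a Host Group containing the given ESXi hosts.
        group = vim.cluster.HostGroup(name=name,
                                      host=[hosts_by_name[n] for n in host_names])
        return vim.cluster.GroupSpec(info=group, operation="add")

    spec = vim.cluster.ConfigSpecEx(groupSpec=[
        host_group_spec("site-1-hosts",
                        ["esx01.example.com", "esx02.example.com", "esx03.example.com"]),
        host_group_spec("site-2-hosts",
                        ["esx04.example.com", "esx05.example.com", "esx06.example.com"]),
    ])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
    Disconnect(si)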
Supervisor Control Plane VMs
Supervisor control plane node VMs must be grouped. Use a VM to Host affinity rule to bind the Supervisor control plane VM Group to either the site 1 or the site 2 Host Group.
For instructions, see Create VM Group for the Supervisor Control Plane VMs and Create VM to Hosts Rule for Supervisor Control Plane VMs.
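As a companion to the Host Group sketch above, the following pyVmomi sketch shows one way to group the three Supervisor control plane VMs and bind the group to the site 1 Host Group with a soft (should-run) VM to Host affinity rule. It reuses the content and cluster objects from the previous sketch; the VM, group, and rule names are placeholder assumptions.

    from pyVmomi import vim

    # Names of the three Supervisor control plane VMs (placeholders; verify the
    # actual names in your inventory).
    cp_vm_names = {"SupervisorControlPlaneVM (1)", "SupervisorControlPlaneVM (2)",
                   "SupervisorControlPlaneVM (3)"}

    vm_view = content.viewManager.CreateContainerView(cluster, [vim.VirtualMachine], True)
    cp_vms = [v for v in vm_view.view if v.name in cp_vm_names]
    vm_view.Destroy()

    vm_group = vim.cluster.VmGroup(name="supervisor-cp-vms", vm=cp_vms)
    rule = vim.cluster.VmHostRuleInfo(
        name="supervisor-cp-to-site-1",
        enabled=True,
        mandatory=False,                        # "should run on", not "must run on"
        vmGroupName="supervisor-cp-vms",
        affineHostGroupName="site-1-hosts")

    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[vim.cluster.GroupSpec(info=vm_group, operation="add")],
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)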
TKG Service Cluster Control Plane VMs
TKG Service cluster control plane VMs must be grouped. For each cluster, use a VM to Host affinity rule to bind the VM Group to either the site 1 or the site 2 Host Group. If there are multiple clusters, create a VM Group for each cluster control plane and bind each VM Group to a site Host Group in a balanced manner.
For instructions, see Create VM Group for TKG Service Cluster Control Plane VMs and Create VM to Hosts Rule for TKG Service Cluster Control Plane VMs.
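Extending the same approach, the following sketch creates one VM Group per TKG Service cluster control plane and alternates the affinity rules between the two site Host Groups so that the control planes stay balanced. The TKG cluster and node VM names are placeholder assumptions; the content and cluster objects come from the Host Group sketch above.

    from pyVmomi import vim

    # Control plane node VM names per TKG Service cluster (placeholders).
    tkg_cp_vms = {
        "tkg-cluster-1": ["tkg-cluster-1-cp-1", "tkg-cluster-1-cp-2", "tkg-cluster-1-cp-3"],
        "tkg-cluster-2": ["tkg-cluster-2-cp-1", "tkg-cluster-2-cp-2", "tkg-cluster-2-cp-3"],
    }
    site_host_groups = ["site-1-hosts", "site-2-hosts"]

    vm_view = content.viewManager.CreateContainerView(cluster, [vim.VirtualMachine], True)
    vms_by_name = {v.name: v for v in vm_view.view}
    vm_view.Destroy()

    group_specs, rule_specs = [], []
    for i, (tkg_name, node_names) in enumerate(sorted(tkg_cp_vms.items())):
        vm_group_name = f"{tkg_name}-cp-vms"
        site_group = site_host_groups[i % 2]      # alternate clusters between the sites
        group_specs.append(vim.cluster.GroupSpec(
            info=vim.cluster.VmGroup(name=vm_group_name,
                                     vm=[vms_by_name[n] for n in node_names]),
            operation="add"))
        rule_specs.append(vim.cluster.RuleSpec(
            info=vim.cluster.VmHostRuleInfo(
                name=f"{vm_group_name}-to-{site_group}",
                enabled=True, mandatory=False,
                vmGroupName=vm_group_name,
                affineHostGroupName=site_group),
            operation="add"))

    spec = vim.cluster.ConfigSpecEx(groupSpec=group_specs, rulesSpec=rule_specs)
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)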
TKG Service Worker Node VMs
TKG Service cluster worker node VMs should be spread across the two sites. The recommended approach is to create two worker node VM Groups and use a VM to Host affinity rule to bind each VM Group to one of the site Host Groups. Use a round robin approach to add worker node VMs to each worker VM Group so that worker nodes are distributed across the two sites in a balanced fashion, and ensure that worker nodes in the same node pool are distributed across both sites.
For instructions, see Create VM Groups for TKG Service Cluster Worker VMs and Create VM to Hosts Rules for TKG Service Cluster Worker VMs.
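The round robin distribution itself can be expressed as a small, self-contained Python sketch. It only computes which worker node goes into which of the two worker VM Groups, keeping every node pool split across both sites; the pool and VM names are placeholders, and the resulting lists would feed VM Group and affinity rule specs like those in the earlier sketches.

    # Worker node VM names per node pool (placeholders).
    worker_pools = {
        "tkg-cluster-1-pool-1": ["c1-worker-1", "c1-worker-2", "c1-worker-3"],
        "tkg-cluster-2-pool-1": ["c2-worker-1", "c2-worker-2"],
    }

    worker_groups = {"site-1-workers": [], "site-2-workers": []}
    group_names = list(worker_groups)

    for pool_workers in worker_pools.values():
        for i, worker in enumerate(pool_workers):
            # Round robin within each pool keeps the pool itself spread across both sites.
            worker_groups[group_names[i % 2]].append(worker)

    print(worker_groups)
    # {'site-1-workers': ['c1-worker-1', 'c1-worker-3', 'c2-worker-1'],
    #  'site-2-workers': ['c1-worker-2', 'c2-worker-2']}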

Active/Active Deployment Example

Consider the following deployment example.
  • vSAN stretched cluster with 6 ESXi hosts
  • Supervisor is deployed on a single vSphere Zone
  • TKG cluster 1 is provisioned with 3 control plane nodes, 1 worker node pool, and 3 worker nodes
  • TKG cluster 2 is provisioned with 3 control plane nodes, 1 worker node pool, and 2 worker nodes
  • TKG cluster 3 is provisioned with 3 control plane nodes and 2 worker node pools: pool 1 has 3 worker nodes, pool 2 has 4 worker nodes
The table describes the Host Groups, VM Groups, and VM to Host Affinity Rules that you could configure for this deployment.
Table 1. Active/Active Deployment Example
Site 1
  • Host Group 1 with 3 ESXi hosts
  • Supervisor CP VM Group with 3 VMs; VM to Hosts affinity rule binds the group to the site 1 Host Group
  • TKG Cluster 2 CP VM Group with 3 VMs; VM to Hosts affinity rule binds the group to the site 1 Host Group
  • Worker 1 VM Group with 6 worker node VMs (2 from cluster 1, 1 from cluster 2, 1 from cluster 3 pool 1, and 2 from cluster 3 pool 2); VM to Hosts affinity rule binds the group to the site 1 Host Group
Site 2
  • Host Group 2 with 3 ESXi hosts
  • TKG Cluster 1 CP VM Group with 3 VMs; VM to Hosts affinity rule binds the group to the site 2 Host Group
  • TKG Cluster 3 CP VM Group with 3 VMs; VM to Hosts affinity rule binds the group to the site 2 Host Group
  • Worker 2 VM Group with 6 worker node VMs (1 from cluster 1, 1 from cluster 2, 2 from cluster 3 pool 1, and 2 from cluster 3 pool 2); VM to Hosts affinity rule binds the group to the site 2 Host Group

Default Host Affinity Rules for vSphere IaaS control plane Components

vSphere IaaS control plane includes default host affinity and anti-affinity rules that enforce key architectural aspects of the solution. You cannot change these rules, but it is important to understand them before you configure the vSphere IaaS control plane to run on a vSAN stretched cluster.
Supervisor Control Plane VMs
Supervisor control plane VMs have an anti-affinity relationship with each other and are placed on separate ESXi hosts. The system allows one Supervisor control plane VM per ESXi host, so a minimum of three ESXi hosts is required, with four recommended to accommodate upgrades.
During a vCenter Server upgrade, Supervisor control plane VMs might be migrated to the same ESXi host when host availability is limited. During a Supervisor upgrade, a fourth Supervisor control plane VM is created and started on an available ESXi host.
TKG Service Cluster Control Plane Node VMs
TKG Service cluster control plane VMs have an anti-affinity relationship with each other and are placed on separate ESXi hosts.
TKG Service Cluster Worker Node VMs
TKG Service cluster worker node VMs do not have any anti-affinity rules. As a result, you must create such rules manually when deploying clusters in a vSAN stretched cluster topology.
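If you want to add that separation yourself, a VM-VM anti-affinity rule can be created through the same API used in the earlier sketches. The following minimal pyVmomi sketch, reusing the content and cluster objects from the Host Group sketch, separates the worker nodes of one node pool; the VM names and the rule name are placeholder assumptions.

    from pyVmomi import vim

    # Worker node VMs of one node pool (placeholder names).
    pool_worker_names = {"c1-worker-1", "c1-worker-2", "c1-worker-3"}
    vm_view = content.viewManager.CreateContainerView(cluster, [vim.VirtualMachine], True)
    pool_vms = [v for v in vm_view.view if v.name in pool_worker_names]
    vm_view.Destroy()

    rule = vim.cluster.AntiAffinityRuleSpec(
        name="tkg-cluster-1-pool-1-anti-affinity",
        enabled=True,
        mandatory=False,          # soft rule: DRS separates the VMs when it can
        vm=pool_vms)

    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)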

Custom VM Groups and Rules Are Deleted on Update of vSphere IaaS control plane Components

When vCenter Server or the Supervisor is updated, the Supervisor control plane VM Group and its VM to Host Affinity Rule are deleted. You must manually recreate the group and rule after the update completes.

When a TKG Service cluster is updated, the VM Groups and VM to Host Affinity Rules that you created for its control plane and worker nodes are deleted. You must manually recreate the groups and rules after the update completes. Note that rolling updates of clusters can be initiated manually or automatically by the system. See Understanding the Rolling Update Model for TKG Clusters on Supervisor.

If you do not recreate the groups and rules after a system update, the behavior of the vSphere IaaS control plane in a vSAN stretched cluster topology is undefined and not supported.
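One way to catch this situation is to list the cluster's current DRS groups and VM to Host rules after an update and compare them against the configuration you expect. The following is a minimal pyVmomi sketch of such a check, reusing the cluster object from the earlier sketches.

    from pyVmomi import vim

    cfg = cluster.configurationEx

    print("Groups:")
    for group in cfg.group:
        kind = "VM" if isinstance(group, vim.cluster.VmGroup) else "Host"
        print(f"  [{kind}] {group.name}")

    print("VM to Host rules:")
    for rule in cfg.rule:
        if isinstance(rule, vim.cluster.VmHostRuleInfo):
            print(f"  {rule.name}: {rule.vmGroupName} -> {rule.affineHostGroupName}")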