When you deploy vSphere Container Storage Plug-in in a vSphere environment that includes multiple data centers or host clusters, you can use zoning. Zoning enables orchestration systems, such as Kubernetes, to integrate with vSphere storage resources that are not equally available to all nodes. With zoning, the orchestration system can make intelligent decisions when dynamically provisioning volumes, which helps you avoid situations where a pod cannot start because the storage resource it needs is not accessible.

Guidelines and Best Practices for Deployment with Topology

When you deploy vSphere Container Storage Plug-in using zoning, follow these guidelines and best practices.
  • If you already use vSphere Container Storage Plug-in to run applications but have not used the topology feature, you must delete any existing PVCs in the system and re-create the entire cluster before you can use the topology feature.
  • Depending on your vSphere storage environment, you can use different deployment scenarios for availability zones. For example, you can define availability zones per host, per host cluster, per data center, or use a combination of these.
  • Kubernetes assumes that even though the topology labels applied on a node are mutable, a node will not move between zones without being destroyed and re-created. See https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone. As a result, if you define availability zones at the host level, pin the node VMs to their respective hosts to prevent migration of these VMs to other availability zones. You can do this either by creating the node VM on the host's local datastore or by setting DRS VM-Host affinity rules, as shown in the sketch after this list. For information, see VM-Host Affinity Rules.

    An exception to this guideline is an active-passive setup that has storage replicated between two topology domains as specified in the diagram in Deploy Workloads on a Preferential Datastore in a Topology-Aware Environment. In this case, you can migrate node VMs temporarily when either of the topology domains is down.

  • Distribute your control plane VMs across the availability zones to ensure high availability.
  • Have at least one worker node VM in each availability zone. Following this guideline is helpful when you use a StorageClass with no explicit topology requirements and Kubernetes randomly selects a topology domain in which to schedule a pod. In such cases, if the selected topology domain has no worker node, the pod remains in the Pending state.
  • Do not apply topology-related vSphere tags directly to node VMs. vSphere Container Storage Plug-in cannot read topology tags applied on the node VMs themselves.
  • Each node VM in a topology-aware Kubernetes cluster must belong to a tag in each category listed in the topology-categories parameter in Step 2.a. Node registration fails if a node does not belong to a tag in every category.
  • Use the topology-categories parameter in the Labels section of the vSphere configuration file. This parameter adds custom topology labels of the form topology.csi.vmware.com/<category-name> to the nodes.

    vSphere Container Storage Plug-in releases earlier than 2.4 do not support the topology-categories parameter.

    The following example shows the Labels section of the vSphere secret configuration and the resulting labels on a node.

    vSphere Secret Configuration:
    [Labels]
    topology-categories = "k8s-region,k8s-zone"

    Sample Labels on a Node:
    Name:               k8s-node-0179
    Roles:              <none>
    Labels:             topology.csi.vmware.com/k8s-region=region-1
                        topology.csi.vmware.com/k8s-zone=zone-a
    Annotations:        ....
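
If you pin node VMs with DRS VM-Host affinity rules, the following is a minimal sketch using the govc CLI. The group, VM, host, and cluster names (k8s-zone-a-vms, zone-a-hosts, esxi-host-1, Cluster1) are hypothetical placeholders; adapt them to your environment.

    # Create a VM group for the node VMs that belong to the availability zone.
    govc cluster.group.create -cluster Cluster1 -name k8s-zone-a-vms -vm ControlPlaneVM1 WorkerNodeVM1

    # Create a host group for the hosts that back the availability zone.
    govc cluster.group.create -cluster Cluster1 -name zone-a-hosts -host esxi-host-1

    # Pin the VM group to the host group with a mandatory VM-Host affinity rule.
    govc cluster.rule.create -cluster Cluster1 -name pin-zone-a -enable -mandatory -vm-host -vm-group k8s-zone-a-vms -host-affine-group zone-a-hosts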

Sample vCenter Server Topology Configuration

In the following example, the vCenter Server environment consists of four clusters across two data centers, each data center representing a region. Availability zones are defined per data center (category k8s-region) as well as per host cluster (category k8s-zone).

Each vSphere object is listed with its availability zone category and tag, followed by the node VMs it contains.

Datacenter1 (availability zone category k8s-region, tag region-1)
  • Cluster1 (availability zone category k8s-zone, tag zone-A)
    • ControlPlaneVM1
    • WorkerNodeVM1
  • Cluster2 (availability zone category k8s-zone, tag zone-B)
    • ControlPlaneVM2
    • WorkerNodeVM2
    • WorkerNodeVM3
  • Cluster3 (availability zone category k8s-zone, tag zone-C)
    • ControlPlaneVM3
    • WorkerNodeVM4

Datacenter2 (availability zone category k8s-region, tag region-2)
  • Cluster4 (availability zone category k8s-zone, tag zone-D)
    • WorkerNodeVM5
    • WorkerNodeVM6

Deploy vSphere Container Storage Plug-in with Topology

Use this task for a greenfield deployment of vSphere Container Storage Plug-in with topology.

Prerequisites

To be able to create tags for your zones, make sure that you meet the following prerequisites:
  • Have appropriate tagging privileges that control your ability to work with tags.
  • Ancestors of node VMs, such as hosts, clusters, folders, and data centers, must have the ReadOnly role set for the vSphere user configured to use vSphere Container Storage Plug-in. This role is required so that the plug-in can read tags and categories and discover the nodes' topology. For more information, see vSphere Tagging Privileges in the vSphere Security documentation.
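
    As a minimal sketch, assuming the govc CLI and a hypothetical vSphere user k8s-csi@vsphere.local, you can grant the ReadOnly role on the ancestors of the node VMs as follows; the inventory paths are placeholders for your environment.

      # Grant the ReadOnly role on the data center and cluster that contain the node VMs.
      govc permissions.set -principal k8s-csi@vsphere.local -role ReadOnly /Datacenter1
      govc permissions.set -principal k8s-csi@vsphere.local -role ReadOnly /Datacenter1/host/Cluster1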

Procedure

  1. In the vSphere Client, create zones using vSphere tags.
    Follow these naming guidelines:
    • The names you use for the tag categories and tags must be 63 characters or less, begin and end with an alphanumeric character, and contain only dashes, underscores, dots, or alphanumerics in between.
    • Do not use the same name for two different tag categories.
    • Tag names must be unique across topology domains. For example, you cannot have a zone1 tag under Region1 and another zone1 tag under Region2.
    For information, see Create, Edit, or Delete a Tag Category in the vCenter Server and Host Management documentation.
    1. Create two tag categories: k8s-zone and k8s-region.
    2. Under each category, create tags.
      Category      Tags
      k8s-region    region-1, region-2
      k8s-zone      zone-A, zone-B, zone-C, zone-D

    3. Apply the corresponding tags to the data centers and clusters as indicated in the following table.
      For information, see Assign or Remove a Tag in the vCenter Server and Host Management documentation.
      vSphere Object Tag
      Datacenter1 region-1
      Datacenter2 region-2
      Cluster1 zone-A
      Cluster2 zone-B
      Cluster3 zone-C
      Cluster4 zone-D
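
    A minimal sketch of steps 1.a through 1.c using the govc CLI follows. The inventory paths, such as /Datacenter1/host/Cluster1, are hypothetical placeholders for your environment.

      # Create the tag categories (step 1.a).
      govc tags.category.create k8s-region
      govc tags.category.create k8s-zone

      # Create the tags under each category (step 1.b).
      govc tags.create -c k8s-region region-1
      govc tags.create -c k8s-region region-2
      govc tags.create -c k8s-zone zone-A
      govc tags.create -c k8s-zone zone-B
      govc tags.create -c k8s-zone zone-C
      govc tags.create -c k8s-zone zone-D

      # Attach each tag to its vSphere object (step 1.c).
      govc tags.attach -c k8s-region region-1 /Datacenter1
      govc tags.attach -c k8s-region region-2 /Datacenter2
      govc tags.attach -c k8s-zone zone-A /Datacenter1/host/Cluster1
      govc tags.attach -c k8s-zone zone-B /Datacenter1/host/Cluster2
      govc tags.attach -c k8s-zone zone-C /Datacenter1/host/Cluster3
      govc tags.attach -c k8s-zone zone-D /Datacenter2/host/Cluster4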
  2. Enable topology when deploying vSphere Container Storage Plug-in.
    1. In the vSphere configuration file, add entries to create a topology-aware setup.

      For information about the configuration file, see Create a Kubernetes Secret for vSphere Container Storage Plug-in.

      [Labels]
      topology-categories = "k8s-region, k8s-zone"

      The topology-categories parameter takes a comma-separated list of availability zone categories that correspond to the tag categories the vSphere administrator created in Step 1.a. You can add a maximum of five categories to the topology-categories parameter.

    2. Deploy topology-aware vSphere Container Storage Plug-in.
      Make sure the external-provisioner sidecar is deployed with the arguments --feature-gates=Topology=true and --strict-topology.
      To do this, in the deployment YAML files at https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/v2.7.0/manifests/vanilla, search for the following lines and uncomment them.
      Note: To be able to take advantage of the latest bug fixes and feature updates, make sure to use the most recent version of vSphere Container Storage Plug-in. For versions and updates, check VMware vSphere Container Storage Plug-in 2.7 Release Notes.
      # needed only for topology aware setup
      #- "--feature-gates=Topology=true"
      #- "--strict-topology"
      For information about deploying vSphere Container Storage Plug-in, see Install vSphere Container Storage Plug-in.
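      After uncommenting, the relevant fragment of the csi-provisioner sidecar specification looks similar to the following sketch. The image version and the other arguments shown are illustrative and might differ in your manifest.

      containers:
        - name: csi-provisioner
          image: registry.k8s.io/sig-storage/csi-provisioner:v3.2.1  # illustrative version
          args:
            - "--csi-address=$(ADDRESS)"
            # needed only for topology aware setup
            - "--feature-gates=Topology=true"
            - "--strict-topology"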
  3. After the installation, verify the topology-aware setup.
    1. Verify that all csinodes objects include the topologyKeys parameter.
      $ kubectl get csinodes -o jsonpath='{range .items[*]}{.metadata.name} {.spec}{"\n"}{end}'
      k8s-control-1 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-control-1","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
      k8s-control-2 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-control-2","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
      k8s-control-3 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-control-3","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
      k8s-node-1 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-node-1","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
      k8s-node-2 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-node-2","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
      k8s-node-3 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-node-3","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
      k8s-node-4 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-node-4","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
      k8s-node-5 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-node-5","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
      k8s-node-6 {"drivers":[{"name":"csi.vsphere.vmware.com","nodeID":"k8s-node-6","topologyKeys":["topology.csi.vmware.com/k8s-region","topology.csi.vmware.com/k8s-zone"]}]}
    2. Verify that all nodes have the right topology labels set on them.

      Your output should look similar to the following sample.

      $ kubectl get nodes --show-labels
      NAME            STATUS   ROLES                  AGE  VERSION   LABELS
      k8s-control-1   Ready    control-plane          1d   v1.21.1   topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-A
      k8s-control-2   Ready    control-plane          1d   v1.21.1   topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-B
      k8s-control-3   Ready    control-plane          1d   v1.21.1   topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-C
      k8s-node-1      Ready    <none>                 1d   v1.21.1   topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-A
      k8s-node-2      Ready    <none>                 1d   v1.21.1   topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-B
      k8s-node-3      Ready    <none>                 1d   v1.21.1   topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-B
      k8s-node-4      Ready    <none>                 1d   v1.21.1   topology.csi.vmware.com/k8s-region=region-1,topology.csi.vmware.com/k8s-zone=zone-C
      k8s-node-5      Ready    <none>                 1d   v1.21.1   topology.csi.vmware.com/k8s-region=region-2,topology.csi.vmware.com/k8s-zone=zone-D
      k8s-node-6      Ready    <none>                 1d   v1.21.1   topology.csi.vmware.com/k8s-region=region-2,topology.csi.vmware.com/k8s-zone=zone-D

What to do next

Deploy workloads using topology. See Topology-Aware Volume Provisioning.
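
For example, a minimal topology-aware StorageClass might look like the following sketch. The WaitForFirstConsumer binding mode delays volume provisioning until a pod that uses the volume is scheduled, and the optional allowedTopologies section restricts provisioning to specific zones. The StorageClass name and zone values are illustrative.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: topology-aware-sc   # illustrative name
    provisioner: csi.vsphere.vmware.com
    volumeBindingMode: WaitForFirstConsumer
    allowedTopologies:
      - matchLabelExpressions:
          - key: topology.csi.vmware.com/k8s-zone
            values:
              - zone-A
              - zone-B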