This topic describes how to enable zone awareness when using VMware Tanzu GemFire for Kubernetes.

A GemFire cluster can automatically gather zone labels from the nodes hosting its pods and use them to configure the GemFire redundancy-zone property. This ensures that redundant GemFire data is distributed across different zones. The feature is disabled by default, and enabling it requires manually creating RBAC resources that grant the GemFire cluster pods permission to retrieve node information.
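
The zone values come from labels on the Kubernetes nodes. If you want to confirm which zone each node advertises before enabling the feature, one quick check, assuming the default topology.kubernetes.io/zone label used in the examples below, is:

# Show each node with the value of its zone label as an extra column.
kubectl get nodes -L topology.kubernetes.io/zone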

Prerequisites

Create a GemFireCluster Configured for Zone Awareness

To create a GemFireCluster configured for zone awareness:

  1. Save the following ClusterRole and ClusterRoleBinding definitions to zone-aware-rbac.yaml.

    Zone awareness requires RBAC permissions to get node information from the Kubernetes API server. The permissions must be bound to the service account used by the GemFire cluster pods, and you must create them manually.

    This example creates a ClusterRole and ClusterRoleBinding for the default ServiceAccount in the default namespace:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: zone-aware-node-reader-role
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: zone-aware-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: zone-aware-node-reader-role
    subjects:
      - kind: ServiceAccount
        name: default
        namespace: default
    
  2. Apply the ClusterRole and ClusterRoleBinding. You can verify the resulting permission with the check shown after these steps.

    kubectl apply -f zone-aware-rbac.yaml
    
  3. Save the following GemFireCluster definition to my-gemfire-cluster.yaml.

    apiVersion: gemfire.vmware.com/v1
    kind: GemFireCluster
    metadata:
      name: my-gemfire-cluster
    spec:
      zoneAware:
        enabled: true 
        zoneLabel: topology.kubernetes.io/zone
    
  4. Create the GemFireCluster:

    kubectl apply -f my-gemfire-cluster.yaml
    

    Optionally, you can use StatefulSet overrides for additional control over the topology constraints that govern pod scheduling, as shown in the example below.
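
To confirm that the permissions created in step 2 allow the GemFire cluster's service account to get node information, you can run a quick check with kubectl. This is a minimal sketch that assumes the default ServiceAccount in the default namespace, as in the RBAC example above:

# Prints "yes" when the ClusterRoleBinding is in place.
kubectl auth can-i get nodes --as=system:serviceaccount:default:default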

To spread all GemFire cluster pods evenly across zones, you can use the label selector app.kubernetes.io/instance: my-gemfire-cluster. To configure server and locator pods separately, use the label selectors gemfire.vmware.com/app: my-gemfire-cluster-server for server pods and gemfire.vmware.com/app: my-gemfire-cluster-locator for locator pods, as shown in the following example:

apiVersion: gemfire.vmware.com/v1
kind: GemFireCluster
metadata:
  name: my-gemfire-cluster
spec:
  zoneAware:
    enabled: true 
    zoneLabel: topology.kubernetes.io/zone
  locator:
    overrides:
      statefulSet:
        spec:
          template:
            spec:
              containers: []
              topologySpreadConstraints:
                - maxSkew: 1
                  topologyKey: topology.kubernetes.io/zone
                  whenUnsatisfiable: DoNotSchedule
                  labelSelector:
                    matchLabels:
                      gemfire.vmware.com/app: my-gemfire-cluster-locator

  server:
    overrides:
      statefulSet:
        spec:
          template:
            spec:
              containers: []
              topologySpreadConstraints:
                - maxSkew: 1
                  topologyKey: topology.kubernetes.io/zone
                  whenUnsatisfiable: DoNotSchedule
                  labelSelector:
                    matchLabels:
                      gemfire.vmware.com/app: my-gemfire-cluster-server
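
Once the pods are running, you can check how they were spread across nodes and compare the result against the node zone labels shown earlier. This is an informal check that assumes the cluster name my-gemfire-cluster used throughout this topic:

# Show the node that each GemFire cluster pod was scheduled on.
kubectl get pods -l app.kubernetes.io/instance=my-gemfire-cluster -o wide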