Note: The following configuration is specific to Google Kubernetes Engine (GKE). Other cloud vendors have their own prescriptive methods for connecting Kubernetes clusters.

Prerequisites

Set Up Google Kubernetes Engine (GKE) Clusters

  1. Create a unique namespace on each GKE cluster. The GemFire cluster will reside in this namespace. For example, create cluster-a-namespace in GKE Cluster A and cluster-b-namespace in Cluster B.
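
    For example, create the namespace in each cluster with kubectl (the --context names below are placeholders for your GKE cluster contexts):

    kubectl create namespace cluster-a-namespace --context <cluster-a-context>
    kubectl create namespace cluster-b-namespace --context <cluster-b-context>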

  2. Create a LoadBalancer service in both Cluster A and Cluster B. For example:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: kube-dns
      name: kube-dns-lb
      namespace: kube-system
    spec:
      ports:
      - name: dns
        port: 53
        protocol: UDP
        targetPort: 53
      selector:
        k8s-app: kube-dns
      sessionAffinity: None
      type: LoadBalancer
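
    Apply the Service manifest to each cluster. For example, assuming it is saved as kube-dns-lb.yaml (the file and context names are placeholders):

    kubectl apply -f kube-dns-lb.yaml --context <cluster-a-context>
    kubectl apply -f kube-dns-lb.yaml --context <cluster-b-context>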
    
  3. After you create the load balancers, record the External IP address of each cluster’s load balancer. You can retrieve these by running:

    kubectl get services -n kube-system
    

    For example:

    $ kubectl get services -n kube-system
      NAME                 TYPE          CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
      default-http-backend NodePort      10.56.42.20   <none>          80:30147/TCP   18d
      kube-dns             ClusterIP     10.56.31.10   <none>          53/UDP,53/TCP  18d
      kube-dns-lb          LoadBalancer  10.56.46.95   34.136.194.232  53:30735/UDP   68s
      metrics-server       ClusterIP     10.56.47.130  <none>          443/TCP        18d
    
  4. In each cluster, add the External IP address of the load balancer of the other cluster to the ConfigMap for kube-dns.

    1. To the ConfigMap for kube-dns for Cluster A, add the External IP address of the Cluster B load balancer:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: kube-dns
        namespace: kube-system
      data:
        stubDomains: |
          {
            "cluster-b-namespace.svc.cluster.local": ["<Cluster B External IP address>"]
          }
      
    2. To the ConfigMap for kube-dns for Cluster B, add the External IP address of the Cluster A load balancer:

      apiVersion: v1
      kind: ConfigMap
      metadata:    
        name: kube-dns
        namespace: kube-system
      data:
        stubDomains: |
          {
            "cluster-a-namespace.svc.cluster.local": ["<Cluster A External IP address>"]
          }
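
    Apply each ConfigMap to its own cluster. For example, assuming the manifests above are saved as kube-dns-configmap-a.yaml and kube-dns-configmap-b.yaml (the file and context names are placeholders):

    kubectl apply -f kube-dns-configmap-a.yaml --context <cluster-a-context>
    kubectl apply -f kube-dns-configmap-b.yaml --context <cluster-b-context>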
      
  5. Restart the kube-dns pods in both clusters by running:

    kubectl delete pods -l k8s-app=kube-dns --namespace kube-system
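
    The kube-dns Deployment recreates the deleted pods automatically. To confirm that the new pods are running in each cluster:

    kubectl get pods -l k8s-app=kube-dns -n kube-system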
    
  6. Create a firewall rule that allows GKE clusters to communicate with each other as follows:

    1. In a web browser, open Google Cloud Console.
    2. Select Menu > VPC Network > Firewall.
    3. Create a New Firewall Rule with the following settings (a gcloud equivalent is sketched after this list):
      • Targets: All instances in the network
      • Source filter: IPv4 ranges
      • Source IPv4 ranges: 10.0.0.0/8
      • Second source filter: none
      • Protocols and ports: Specified protocols and ports
        • tcp: 10334,1530-1555,53
        • udp: 53
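
    Alternatively, you can create the same rule with the gcloud CLI. A minimal sketch, assuming the clusters use the default VPC network (the rule name is a placeholder):

    gcloud compute firewall-rules create allow-gemfire-wan \
      --network=default \
      --source-ranges=10.0.0.0/8 \
      --allow=tcp:53,tcp:10334,tcp:1530-1555,udp:53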
  7. If necessary, use cert-manager to generate a certificate to be used by the GemFire clusters and Operators as follows:

    1. Use the following example Certificate resource with cert-manager to create a certificate in a Secret named custom-override-secret. Note that the dnsNames field must include the locator and server services for each GemFire cluster, as well as wildcard entries covering the fully qualified names of all pods in the GemFire clusters.

      apiVersion: v1
      kind: Secret
      metadata:
        name: custom-override-secret
      stringData:
        password: password
      ---
      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: custom-selfsigned-issuer
        namespace: cert-manager
      spec:
        selfSigned: {}
      ---
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: custom-ca-cert
        namespace: cert-manager
      spec:
        duration: 17520h # 2y
        renewBefore: 720h # 30d
        subject:
          organizations:
          - VMware
        isCA: true
        commonName: custom-ca-cert
        issuerRef:
          kind: Issuer
          name: custom-selfsigned-issuer
        secretName: custom-ca-cert
      ---
      apiVersion: cert-manager.io/v1
      kind: ClusterIssuer
      metadata:
        name: custom-issuer
      spec:
        ca:
          secretName: custom-ca-cert
      ---
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: custom-override-secret
      spec:
        duration: 17520h # 2yr
        renewBefore: 360h # 15d
        subject:
          organizations:
          - VMware
        commonName: custom-override-secret
        isCA: false
        privateKey:
          algorithm: RSA
          encoding: PKCS1
          size: 2048
        usages:
        - server auth
        - client auth
        dnsNames:
        - 'cluster-a-locator.cluster-a-namespace.svc.cluster.local'
        - 'cluster-a-server.cluster-a-namespace.svc.cluster.local'
        - '*.cluster-a-locator.cluster-a-namespace.svc.cluster.local'
        - '*.cluster-a-server.cluster-a-namespace.svc.cluster.local'
        - 'cluster-b-locator.cluster-b-namespace.svc.cluster.local'
        - 'cluster-b-server.cluster-b-namespace.svc.cluster.local'
        - '*.cluster-b-locator.cluster-b-namespace.svc.cluster.local'
        - '*.cluster-b-server.cluster-b-namespace.svc.cluster.local'
        issuerRef:
          kind: ClusterIssuer
          name: custom-issuer
          group: cert-manager.io
        secretName: custom-override-secret
        keystores:
          pkcs12:
            create: true
            passwordSecretRef: # Password used to encrypt the keystore
              key: password
              name: custom-override-secret
          jks:
            create: true
            passwordSecretRef: # Password used to encrypt the keystore
              key: password
              name: custom-override-secret
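
      When cert-manager has issued the certificate, its READY column reports True. For example, assuming the Certificate and password Secret above were applied to the default namespace:

        kubectl get certificate custom-override-secret
        kubectl get secret custom-override-secret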
      
    2. When the certificate is ready, copy the certificate from the data section of the custom-override-secret Secret into a new Secret located in the namespaces where the GemFire clusters and GemFire operators are located. For more information, see Providing a Custom TLS Certificate for the GemFire Operator in Transport Layer Security.

      For example:
      1. In Cluster A, apply the following YAML file to create a Secret:
        ---
        apiVersion: v1
        kind: Secret
        metadata:
          name: custom-override-secret
          namespace: cluster-a-namespace
        data:
          <COPIED DATA>
        
      2. For the Operator, assuming it will be installed in the default namespace, apply the following YAML file to create a Secret:
        ---
        apiVersion: v1
        kind: Secret
        metadata:
          name: custom-override-secret
          namespace: default
        data:
          <COPIED DATA>
        
      3. In Cluster B, apply the following YAML file to create a Secret:
        ---
        apiVersion: v1
        kind: Secret
        metadata:
          name: custom-override-secret
          namespace: cluster-b-namespace
        data:
          <COPIED DATA>
        
      4. For the Operator, assuming it will be installed in the default namespace, apply the following YAML file to create a Secret:
        ---
        apiVersion: v1
        kind: Secret
        metadata:
          name: custom-override-secret
          namespace: default
        data:
          <COPIED DATA>
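
      To retrieve the data to copy, a minimal sketch (again assuming the custom-override-secret Certificate was issued into the default namespace):

        kubectl get secret custom-override-secret -o yaml

      Copy the keys under data (such as tls.crt, tls.key, keystore.p12, and truststore.jks) into the <COPIED DATA> sections above.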
        

Install GemFire CRD and Operator

  1. Install the GemFire CRD and Operator in both GKE clusters.

    Note: Set tlsSecretName to the name of the Secret when installing the Operator. For example:

    helm install gemfire-operator build/gemfire-operator-2.0.0.tgz --set tlsSecretName=custom-override-secret
    

For more information, see:

  • Install or Uninstall the Tanzu GemFire Operator
  • The Custom Resource Definition

Deploy GemFire Clusters

  1. Deploy the GemFire clusters in Cluster A and Cluster B.

    Note: Ensure that the appropriate GemFire properties are set, specifically remote-locators. For more information, see Multi-site (WAN) Configuration in the VMware Tanzu GemFire documentation.

    In the YAML below, spec.image must reference a GemFire version that is compatible with the installed version of the Operator.

    In Cluster A, apply the following yaml:

    apiVersion: gemfire.vmware.com/v1
    kind: GemFireCluster
    metadata:
      name: cluster-a
      namespace: cluster-a-namespace
    spec:
      image: registry.tanzu.vmware.com/pivotal-gemfire/vmware-gemfire:9.15.0
      security:
        tls:
          secretName: custom-override-secret
        clientAuthenticationRequired: false
      metrics:
        emission: Default
      locators:
        replicas: 2
        resources:
          requests:
            memory: 1Gi
        overrides:
          gemFireProperties:
          - name: "distributed-system-id"
            value: "10"
          - name: "remote-locators"
            value: "cluster-b-locator-0.cluster-b-locator.cluster-b-namespace.svc.cluster.local[10334]"
      servers:
        replicas: 2
        resources:
          requests:
            memory: 1Gi
    

    In Cluster B, apply the following yaml:

    apiVersion: gemfire.vmware.com/v1
    kind: GemFireCluster
    metadata:
      name: cluster-b
      namespace: cluster-b-namespace
    spec:
      image: registry.tanzu.vmware.com/pivotal-gemfire/vmware-gemfire:9.15.0
      security:
        tls:
          secretName: custom-override-secret
      antiAffinityPolicy: None
      metrics:
        emission: Default
      locators:
        replicas: 2
        resources:
          requests:
            memory: 1Gi
        overrides:
          gemFireProperties:
          - name: "distributed-system-id"
            value: "11"
          - name: "remote-locators"
            value: "cluster-a-locator-0.cluster-a-locator.cluster-a-namespace.svc.cluster.local[10334]"
      servers:
        replicas: 2
        resources:
          requests:
            memory: 1Gi
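
    After applying both resources, watch the locator and server pods start in each cluster. For example (the context names are placeholders):

    kubectl get pods -n cluster-a-namespace --context <cluster-a-context>
    kubectl get pods -n cluster-b-namespace --context <cluster-b-context>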
    
  2. Create the GatewaySender and region in Cluster A as follows:

    1. In a terminal window, shell into the locator:

      kubectl exec -it cluster-a-locator-0 -n cluster-a-namespace -- sh
      
    2. Within the shell on the locator pod, retrieve the FQDN of the locator:

      hostname -f # outputs the locator fqdn
      
    3. Launch gfsh. Within gfsh, connect to the locator that you identified:

      gfsh
      ...
      ...
      gfsh> connect --locator=<LOCATOR-FQDN>[10334] --security-properties-file=/security/gfsecurity.properties
      
    4. Create the GatewaySender in Cluster A:

      gfsh> create gateway-sender --id="new-york" --parallel=true --remote-distributed-system-id=11 
      
    5. Create the region:

      gfsh> create region --name=stocks --type=PARTITION --gateway-sender-id="new-york"
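
    6. Optionally, confirm that the gateway sender was created and attached to the region:

      gfsh> list gateways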
      
  3. Create the GatewayReceiver and region in Cluster B as follows:

    1. In a terminal window, shell into the locator:

      kubectl exec -it cluster-b-locator-0 -n cluster-b-namespace -- sh
      
    2. Within the shell on the locator pod, retrieve the FQDN of the locator:

      hostname -f # outputs the locator fqdn
      
    3. Launch gfsh. Within gfsh, connect to the locator that you identified:

      gfsh
      ...
      ...
      gfsh> connect --locator=<LOCATOR-FQDN>[10334] --security-properties-file=/security/gfsecurity.properties
      
    4. Create the region:

      gfsh> create region --name=stocks --type=PARTITION
      
    5. Create the GatewayReceiver:

      gfsh> create gateway-receiver --start-port=1530 --end-port=1551
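
    6. Optionally, confirm that the gateway receiver is running:

      gfsh> list gateways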
      
  4. Verify WAN replication functionality:

    1. In Cluster A, put an entry into the region using gfsh:

      gfsh> put --key="VMW" --value="113" --region=stocks
      
    2. In Cluster B, inspect the region and confirm that its size reflects the new entry:

      gfsh> describe region --name=stocks
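
      You can also query the region directly to see the replicated entry:

      gfsh> query --query="select * from /stocks"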
      