This topic explains how to plan and configure your VMware Tanzu GemFire multi-site topology, and how to configure the regions that are shared between systems.

Prerequisites

Before you start, you should understand how to configure membership and communication in peer-to-peer systems using locators. See Configuring Peer-to-Peer Discovery.

WAN deployments increase the messaging demands on a Tanzu GemFire system. To avoid hangs related to WAN messaging, always use the default setting of conserve-sockets=false for Tanzu GemFire members that participate in a WAN deployment. See Configuring Sockets in Multi-Site (WAN) Deployments and Making Sure You Have Enough Sockets.
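For example, in the gemfire.properties file of each participating member (this is the default value, shown explicitly):

```properties
conserve-sockets=false
```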

Main Steps

Use the following steps to configure a multi-site system:

  1. Plan the topology of your multi-site system. See Multi-site (WAN) Topologies for a description of different multi-site topologies.
  2. Configure membership and communication for each cluster in your multi-site system. You must use locators for peer discovery in a WAN configuration. See Configuring Peer-to-Peer Discovery. Start each cluster using a unique distributed-system-id and identify remote clusters using remote-locators. For example:

    locators=<locator1-address>[<port1>],<locator2-address>[<port2>]
    distributed-system-id=1
    remote-locators=<remote-locator-addr1>[<port1>],<remote-locator-addr2>[<port2>]
    
  3. Configure the gateway senders that you will use to distribute region events to remote systems. See Configure Gateway Senders.

  4. Create the data regions that you want to participate in the multi-site system, specifying the gateway senders that each region should use for WAN distribution. Configure the same regions in the target clusters to apply the distributed events. See Create Data Regions for Multi-site Communication.
  5. Configure gateway receivers in each Tanzu GemFire cluster that will receive region events from another cluster. See Configure Gateway Receivers.
  6. Start cluster member processes in the correct order (locators first, followed by data nodes) to ensure efficient discovery of WAN resources. See Starting Up and Shutting Down Your System.
  7. (Optional.) Deploy custom conflict resolvers to resolve potential conflicts that are detected when applying events received over the WAN. See Resolving Conflicting Events.
  8. (Optional.) Deploy WAN filters to determine which events are distributed over the WAN, or to modify events as they are distributed over the WAN. See Filtering Events for Multi-Site (WAN) Distribution.
  9. (Optional.) Configure persistence, conflation, and/or dispatcher threads for gateway sender queues using the instructions in Configuring Multi-Site (WAN) Event Queues.
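As a sketch of steps 2 through 5, the following hypothetical gfsh session configures one side of a two-cluster system (cluster 1 sending to cluster 2); names such as sender2 and customer are placeholders:

```shell
# On cluster 1 (distributed-system-id=1), after starting locators and servers:
gfsh>create gateway-sender --id=sender2 --parallel=true --remote-distributed-system-id=2
gfsh>create region --name=customer --type=PARTITION --gateway-sender-id=sender2

# On cluster 2 (distributed-system-id=2), create the matching region and a receiver:
gfsh>create gateway-receiver --start-port=1530 --end-port=1551
gfsh>create region --name=customer --type=PARTITION
```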

Configure Gateway Senders

Each gateway sender configuration includes:

  • A unique ID for the gateway sender configuration.
  • The distributed system ID of the remote site to which the sender propagates region events.
  • A property that specifies whether the gateway sender is a serial gateway sender or a parallel gateway sender.
  • Optional properties that configure the gateway sender queue. These queue properties determine features such as the amount of memory used by the queue, whether the queue is persisted to disk, and how one or more gateway sender threads dispatch events from the queue.

Note: You can use gfsh to create the gateway sender configurations described below, as well as additional options; see create gateway-sender.

See WAN Configuration for more information about individual configuration properties.

  1. For each Tanzu GemFire system, choose the members that will host a gateway sender configuration and distribute region events to remote sites:

    • You must deploy a parallel gateway sender configuration on each Tanzu GemFire member that hosts a region that uses the sender. Regions using the same parallel gateway sender ID must be colocated.
    • You may choose to deploy a serial gateway sender configuration on one or more Tanzu GemFire members in order to provide high availability. However, only one instance of a given serial gateway sender configuration distributes region events at any given time.
  2. Configure each gateway sender on a Tanzu GemFire member using gfsh, cache.xml or Java API:

    • gfsh configuration command

      gfsh>create gateway-sender --id="sender2" --parallel=true --remote-distributed-system-id="2"
      
      gfsh>create gateway-sender --id="sender3" --parallel=true --remote-distributed-system-id="3"
      
    • cache.xml configuration

      These example cache.xml entries configure two parallel gateway senders to distribute region events to two remote Tanzu GemFire clusters (clusters “2” and “3”):

      <cache>
        <gateway-sender id="sender2" parallel="true" 
         remote-distributed-system-id="2"/> 
        <gateway-sender id="sender3" parallel="true" 
         remote-distributed-system-id="3"/> 
         ... 
      </cache>
      
    • Java configuration

      This example code shows how to configure a parallel gateway sender using the API:

      // Create or obtain the cache
      Cache cache = new CacheFactory().create();
      
      // Configure and create the gateway sender
      GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
      gateway.setParallel(true);
      GatewaySender sender = gateway.create("sender2", "2");
      sender.start();
      
  3. Depending on your application, you may need to configure additional features for each gateway sender. Considerations include:

    • The maximum amount of memory each gateway sender queue can use. When the queue exceeds the configured amount of memory, the contents of the queue overflow to disk. For example:

      gfsh>create gateway-sender --id=sender2 --parallel=true --remote-distributed-system-id=2 --maximum-queue-memory=150
      

      In cache.xml:

      <gateway-sender id="sender2" parallel="true"
       remote-distributed-system-id="2" 
       maximum-queue-memory="150"/> 
      
    • Whether to enable disk persistence, and whether to use a named disk store for persistence or for overflowing queue events. See Persisting an Event Queue. For example:

      gfsh>create gateway-sender --id=sender2 --parallel=true --remote-distributed-system-id=2 \
      --maximum-queue-memory=150 --enable-persistence=true --disk-store-name=cluster2Store
      

      In cache.xml:

      <gateway-sender id="sender2" parallel="true"
       remote-distributed-system-id="2" 
       enable-persistence="true" disk-store-name="cluster2Store"
       maximum-queue-memory="150"/> 
      
    • The number of dispatcher threads to use for processing events from each gateway queue. The dispatcher-threads attribute of the gateway sender specifies the number of threads that process the queue (default of 5). For example:

      gfsh>create gateway-sender --id=sender2 --parallel=false --remote-distributed-system-id=2 \
      --dispatcher-threads=2 --order-policy=partition
      

      In cache.xml:

      <gateway-sender id="sender2" parallel="false"
       remote-distributed-system-id="2" 
       dispatcher-threads="2" order-policy="partition"/>
      

      Note: When multiple dispatcher threads are configured for a serial queue, each thread operates on its own copy of the gateway sender queue. Queue configuration attributes such as maximum-queue-memory are repeated for each dispatcher thread that you configure.

      See Configuring Dispatcher Threads and Order Policy for Event Distribution.

    • For serial gateway senders (parallel=false) that use multiple dispatcher-threads, also configure the ordering policy to use for dispatching the events. See Configuring Dispatcher Threads and Order Policy for Event Distribution.
    • Determine whether you should conflate events in the queue. See Conflating Events in a Queue.

Note: The gateway sender configuration for a specific sender id must be identical on each Tanzu GemFire member that hosts the gateway sender.
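As a combined sketch of the persistence and conflation options above, the following hypothetical gfsh session first creates a named disk store and then a sender that persists its queue to that store with batch conflation enabled (the directory path is illustrative):

```shell
gfsh>create disk-store --name=cluster2Store --dir=/var/gemfire/cluster2Store
gfsh>create gateway-sender --id=sender2 --parallel=true --remote-distributed-system-id=2 \
--enable-persistence=true --disk-store-name=cluster2Store --enable-batch-conflation=true
```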

Create Data Regions for Multi-site Communication

When using a multi-site configuration, you choose which data regions to share between sites. Because of the high cost of distributing data between disparate geographical locations, not all changes are passed between sites.

Note these important restrictions on the regions:

  • Replicated regions cannot use a parallel gateway sender. Use a serial gateway sender instead.

  • In addition to configuring regions with gateway senders to distribute events, you must configure the same regions in the target clusters to apply the distributed events. The region name in the receiving cluster must exactly match the region name in the sending cluster.

  • Regions using the same parallel gateway sender ID must be colocated.

  • If any gateway sender configured for the region has the group-transaction-events flag set to true, then the regions involved in transactions must all have the same gateway senders configured with this flag set to true. This requires careful configuration of regions with gateway senders according to the transactions expected in the system.

    Example: Assuming the following scenario:

    • Gateway-senders:

      • sender1: group-transaction-events=true
      • sender2: group-transaction-events=true
      • sender3: group-transaction-events=true
      • sender4: group-transaction-events=false
    • Regions:

      • region1: gateway-sender-ids=sender1,sender2,sender4
        type: partition
        colocated-with: region2,region3
      • region2: gateway-sender-ids=sender1,sender2
        type: partition
        colocated-with: region1,region3
      • region3: gateway-sender-ids=sender3
        type: partition
        colocated-with: region1,region2
      • region4: gateway-sender-ids=sender4
        type: replicated
    • Whether events for the same transaction are guaranteed to be sent in the same batch depends on the regions and gateway senders involved in the transaction:

      • For transactions containing events for region1 and region2, events are guaranteed to be delivered in the same batch by sender1 and by sender2.
      • For transactions containing events for region1, region2, and region3, events are NOT guaranteed to be delivered in the same batch, because the events are split across different gateway senders (sender1 and sender2 for region1 and region2, but sender3 for region3).
      • For transactions containing only events for region3, events are guaranteed to be delivered in the same batch by sender3.
      • For transactions containing events for region4, events are NOT guaranteed to be delivered in the same batch, because sender4 has group-transaction-events set to false.
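A sender such as sender1 in the scenario above could be created as follows. This is a sketch only; additional restrictions apply to group-transaction-events (for example, it cannot be combined with batch conflation):

```shell
gfsh>create gateway-sender --id=sender1 --parallel=true --remote-distributed-system-id=2 \
--group-transaction-events=true
```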

After you define gateway senders, configure regions to use the gateway senders to distribute region events.

  • gfsh Configuration

    gfsh>create region --name=customer --gateway-sender-id=sender2,sender3
    

    or to modify an existing region:

    gfsh>alter region --name=customer --gateway-sender-id=sender2,sender3
    
  • cache.xml Configuration

    Use the gateway-sender-ids region attribute to add gateway senders to a region. To assign multiple gateway senders, use a comma-separated list. For example:

    <region-attributes gateway-sender-ids="sender2,sender3">
    </region-attributes>
    
  • Java API Configuration

    This example shows adding two gateway senders (configured in the earlier example) to a partitioned region:

    RegionFactory rf = 
      cache.createRegionFactory(RegionShortcut.PARTITION);
    rf.addCacheListener(new LoggingCacheListener());
    rf.addGatewaySenderId("sender2");
    rf.addGatewaySenderId("sender3");
    custRegion = rf.create("customer");
    

    Note: When using the Java API, you must configure a parallel gateway sender before you add its id to a region. This ensures that the sender distributes region events that were persisted before new cache operations take place. If the gateway sender id does not exist when you add it to a region, you receive an IllegalStateException.

Configure Gateway Receivers

Always configure a gateway receiver in each Tanzu GemFire cluster that will receive and apply region events from another cluster.

A gateway receiver configuration can be applied to multiple Tanzu GemFire servers for load balancing and high availability. However, each Tanzu GemFire member that hosts a gateway receiver must also define all of the regions for which the receiver may receive an event. If a gateway receiver receives an event for a region that the local member does not define, Tanzu GemFire throws an exception. See Create Data Regions for Multi-site Communication.

Note: You can only host one gateway receiver per member.

A gateway receiver configuration specifies a range of possible port numbers on which to listen. The Tanzu GemFire server picks an unused port number from the specified range to use for the receiver process. You can use this functionality to easily deploy the same gateway receiver configuration to multiple members.

You can optionally configure gateway receivers to provide a specific IP address or host name for gateway sender connections. If you configure hostname-for-senders, locators will use the provided host name or IP address when instructing gateway senders on how to connect to gateway receivers. If you provide "" or null as the value, by default the gateway receiver’s bind-address is sent to connecting gateway senders.

In addition, you can configure gateway receivers to start automatically or, by setting manual-start to true, to require a manual start. By default, gateway receivers start automatically.
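For example, the following sketch creates a receiver that requires a manual start and then starts it explicitly:

```shell
gfsh>create gateway-receiver --start-port=1530 --end-port=1551 --manual-start=true
gfsh>start gateway-receiver
```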

Note: To configure a gateway receiver, you can use gfsh, cache.xml or Java API configurations as described below. For more information about configuring gateway receivers in gfsh, see create gateway-receiver.

  • gfsh configuration command

    gfsh>create gateway-receiver --start-port=1530 --end-port=1551 \
        --hostname-for-senders=gateway1.mycompany.com
    
  • cache.xml Configuration

    The following configuration defines a gateway receiver that listens on an unused port in the range from 1530 to 1550:

    <cache>
      <gateway-receiver start-port="1530" end-port="1551"
        hostname-for-senders="gateway1.mycompany.com" /> 
       ... 
    </cache>
    
  • Java API Configuration

    // Create or obtain the cache
    Cache cache = new CacheFactory().create();
    
    // Configure and create the gateway receiver
    GatewayReceiverFactory gateway = cache.createGatewayReceiverFactory();
    gateway.setStartPort(1530);
    gateway.setEndPort(1551);
    gateway.setHostnameForSenders("gateway1.mycompany.com");
    GatewayReceiver receiver = gateway.create();
    

    Note: When using the Java API, you must create any region that might receive events from a remote site before you create the gateway receiver. Otherwise, batches of events could arrive from remote sites before the regions for those events have been created. If this occurs, the local site will throw exceptions because the receiving region does not yet exist. If you define regions in cache.xml, the correct startup order is handled automatically.

After starting new gateway receivers, you can execute the load-balance gateway-sender command in gfsh so that a specific gateway sender rebalances its connections and connects to the new remote gateway receivers. Invoking this command redistributes the gateway sender's connections more evenly among all the gateway receivers.

Another option is to use the GatewaySender.rebalance Java API.
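For example, the following Java sketch retrieves an already-configured sender by its id (here the "ny" sender used in the scenario below) and rebalances its connections:

```java
// Obtain the already-configured gateway sender by its id and
// redistribute its connections among the available receivers
GatewaySender sender = cache.getGatewaySender("ny");
if (sender != null) {
  sender.rebalance();
}
```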

As an example, assume the following scenario:

  1. Create 1 receiver in site NY.
  2. Create 4 senders in site LN.
  3. Create 3 additional receivers in NY.

You can then execute the following in gfsh to see the effects of rebalancing:

gfsh -e "connect --locator=localhost[10331]" -e "list gateways"
...
(2) Executing - list gateways

GatewaySender Section

GatewaySender Id |              Member               | Remote Cluster Id |   Type   | Status  | Queued Events | Receiver Location
---------------- | --------------------------------- | ----------------- | -------- | ------- | ------------- | -----------------
ln               | mymac(ny-1:88641)<v2>:33491       | 2                 | Parallel | Running | 0             | mymac:5037
ln               | mymac(ny-2:88705)<v3>:29329       | 2                 | Parallel | Running | 0             | mymac:5064
ln               | mymac(ny-3:88715)<v4>:36808       | 2                 | Parallel | Running | 0             | mymac:5132
ln               | mymac(ny-4:88724)<v5>:52993       | 2                 | Parallel | Running | 0             | mymac:5324

GatewayReceiver Section

             Member               | Port | Sender Count | Senders Connected
--------------------------------- | ---- | ------------ | --------------------------------------------------------------------------
mymac(ny-1:88641)<v2>:33491       | 5057 | 24           | ["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-2:88662)<v3>:12796","mymac(ln-3:88672)<v4>:43675"]
mymac(ny-2:88705)<v3>:29329       | 5082 | 0            | []
mymac(ny-3:88715)<v4>:36808       | 5371 | 0            | []
mymac(ny-4:88724)<v5>:52993       | 5247 | 0            | []

Execute the load-balance command:

gfsh -e "connect --locator=localhost[10441]" -e "load-balance gateway-sender --id=ny"...

(2) Executing - load-balance gateway-sender --id=ny

             Member               | Result | Message
--------------------------------- | ------ |--------------------------------------------------------------------------
mymac(ln-1:88651)<v2>:48277       | OK     | GatewaySender ny is rebalanced on member mymac(ln-1:88651)<v2>:48277
mymac(ln-4:88681)<v5>:42784       | OK     | GatewaySender ny is rebalanced on member mymac(ln-4:88681)<v5>:42784
mymac(ln-3:88672)<v4>:43675       | OK     | GatewaySender ny is rebalanced on member mymac(ln-3:88672)<v4>:43675
mymac(ln-2:88662)<v3>:12796       | OK     | GatewaySender ny is rebalanced on member mymac(ln-2:88662)<v3>:12796

Listing gateways in ny again shows the connections are spread much better among the receivers.

gfsh -e "connect --locator=localhost[10331]" -e "list gateways"...

(2) Executing - list gateways

GatewaySender Section

GatewaySender Id |              Member               | Remote Cluster Id |  Type    | Status  | Queued Events | Receiver Location
---------------- | --------------------------------- |  ---------------- | -------- | ------- | ------------- | -----------------
ln               | mymac(ny-1:88641)<v2>:33491       | 2                 | Parallel | Running | 0             | mymac:5037
ln               | mymac(ny-2:88705)<v3>:29329       | 2                 | Parallel | Running | 0             | mymac:5064
ln               | mymac(ny-3:88715)<v4>:36808       | 2                 | Parallel | Running | 0             | mymac:5132
ln               | mymac(ny-4:88724)<v5>:52993       | 2                 | Parallel | Running | 0             | mymac:5324

GatewayReceiver Section

         Member                   | Port | Sender Count | Senders Connected
--------------------------------- | ---- | ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------
mymac(ny-1:88641)<v2>:33491       | 5057 | 9            |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-3:88672)<v4>:43675","mymac(ln-2:88662)<v3>:12796"]
mymac(ny-2:88705)<v3>:29329       | 5082 | 4            |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-3:88672)<v4>:43675"]
mymac(ny-3:88715)<v4>:36808       | 5371 | 4            |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-3:88672)<v4>:43675"]
mymac(ny-4:88724)<v5>:52993       | 5247 | 3            |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-3:88672)<v4>:43675"]

Running the load balance command in site ln again produces even better balance.

         Member                   | Port | Sender Count | Senders Connected
--------------------------------- | ---- | ------------ |-------------------------------------------------------------------------------------------------------------------------------------------------
mymac(ny-1:88641)<v2>:33491       | 5057 | 7            |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-2:88662)<v3>:12796","mymac(ln-3:88672)<v4>:43675"]
mymac(ny-2:88705)<v3>:29329       | 5082 | 3            |["mymac(ln-1:88651)<v2>:48277","mymac(ln-3:88672)<v4>:43675","mymac(ln-2:88662)<v3>:12796"]
mymac(ny-3:88715)<v4>:36808       | 5371 | 5            |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-2:88662)<v3>:12796","mymac(ln-3:88672)<v4>:43675"]
mymac(ny-4:88724)<v5>:52993       | 5247 | 6            |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-2:88662)<v3>:12796","mymac(ln-3:88672)<v4>:43675"]

Configuring One IP Address and Port to Access All Gateway Receivers in a Site

You may have a WAN deployment in which you do not want to expose the IP address and port of every gateway receiver to other sites, but instead expose just one IP address and port for all gateway receivers. This way, the internal topology of the site is hidden from other sites. This case is common in cloud deployments, in which a reverse proxy/load balancer distributes incoming requests to the site (in this case, replication requests) among the available servers (in this case, gateway receivers).

Tanzu GemFire supports this configuration by means of a particular use of the hostname-for-senders, start-port, and end-port parameters of the gateway receiver.

In order to configure a WAN deployment that hides the gateway receivers behind a single IP address and port:

  • All gateway receivers must have the same value for the hostname-for-senders parameter (the host name or IP address that gateway senders use to access them).
  • All gateway receivers must have the same values for the start-port and end-port parameters (the port that gateway senders use to access them).

The following example shows a deployment in which all gateway receivers of a site are accessed via the “gateway1.mycompany.com” hostname and port “1971”; every gateway receiver in the site must be configured as follows:

gfsh> create gateway-receiver --hostname-for-senders="gateway1.mycompany.com" --start-port=1971 --end-port=1971

The following output shows what the receiver side would look like with this configuration if four gateway receivers were configured:

Cluster-ny gfsh>list gateways
GatewayReceiver Section

                    Member        | Port | Sender Count | Senders Connected
----------------------------------| ---- | ------------ | --------------------------------------------------------------------------------------------------------------------------------------
192.168.1.20(ny1:21901)<v1>:41000 | 1971 | 1            | 192.168.0.13(ln4:22520)<v4>:41005
192.168.2.20(ny2:22150)<v2>:41000 | 1971 | 2            | 192.168.0.13(ln2:22004)<v2>:41003, 192.168.0.13(ln3:22252)<v3>:41004
192.168.3.20(ny3:22371)<v3>:41000 | 1971 | 2            | 192.168.0.13(ln3:22252)<v3>:41004, 192.168.0.13(ln2:22004)<v2>:41003
192.168.4.20(ny4:22615)<v4>:41000 | 1971 | 3            | 192.168.0.13(ln4:22520)<v4>:41005, 192.168.0.13(ln1:21755)<v1>:41002, 192.168.0.13(ln1:21755)<v1>:41002

When the gateway senders on one site are started, they get the information about the gateway receivers of the remote site from the locators running on the remote site. The remote locator provides a list of gateway receivers to send replication events to (one element per gateway receiver running in the site), with all of them listening on the same hostname and port. From the gateway sender’s standpoint, it is as if only one gateway receiver is on the other side.

The following output shows the gateway information on the sender side. It shows only one host name and port for the receiver location (gateway1.mycompany.com:1971), even though there are actually four gateway receivers on the other side.

Cluster-ln gfsh>list gateways
GatewaySender Section

GatewaySender Id |                    Member         | Remote Cluster Id |   Type   |        Status         | Queued Events | Receiver Location
---------------- | ----------------------------------| ----------------- | -------- | --------------------- | ------------- | ---------------------------
ny               | 192.168.0.13(ln2:22004)<v2>:41003 | 2                 | Parallel | Running and Connected | 0             | gateway1.mycompany.com:1971
ny               | 192.168.0.13(ln3:22252)<v3>:41004 | 2                 | Parallel | Running and Connected | 0             | gateway1.mycompany.com:1971
ny               | 192.168.0.13(ln4:22520)<v4>:41005 | 2                 | Parallel | Running and Connected | 0             | gateway1.mycompany.com:1971
ny               | 192.168.0.13(ln1:21755)<v1>:41002 | 2                 | Parallel | Running and Connected | 0             | gateway1.mycompany.com:1971

For the gateway senders to communicate with the remote gateway receivers, a reverse proxy/load balancer service must be in place in the deployment to receive the requests directed to the configured IP address and port and forward them to one of the gateway receivers on the remote site.
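As an illustration only (GemFire does not ship such a service), a minimal HAProxy TCP pass-through configuration for the four-receiver example above might look like the following; the backend names and addresses are hypothetical:

```
# TCP (layer 4) pass-through for WAN replication traffic
frontend gemfire_wan
    bind *:1971
    mode tcp
    default_backend gateway_receivers

backend gateway_receivers
    mode tcp
    balance leastconn
    server ny1 192.168.1.20:1971 check
    server ny2 192.168.2.20:1971 check
    server ny3 192.168.3.20:1971 check
    server ny4 192.168.4.20:1971 check
```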
