This topic explains how to plan and configure your VMware Tanzu GemFire multi-site topology, and how to configure the regions that are shared between systems.
Before you start, you should understand how to configure membership and communication in peer-to-peer systems using locators. See Configuring Peer-to-Peer Discovery.
WAN deployments increase the messaging demands on a Tanzu GemFire system. To avoid hangs related to WAN messaging, always use the default setting of conserve-sockets=false
for Tanzu GemFire members that participate in a WAN deployment. See Configuring Sockets in Multi-Site (WAN) Deployments and Making Sure You Have Enough Sockets.
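For example, a minimal sketch of a member's gemfire.properties entry that restates this default explicitly:
# keep the default; setting conserve-sockets=true can cause WAN-related hangs
conserve-sockets=false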
Use the following steps to configure a multi-site system:
Configure membership and communication for each cluster in your multi-site system. You must use locators for peer discovery in a WAN configuration. See Configuring Peer-to-Peer Discovery. Start each cluster using a unique distributed-system-id
and identify remote clusters using remote-locators
. For example:
locators=<locator1-address>[<port1>],<locator2-address>[<port2>]
distributed-system-id=1
remote-locators=<remote-locator-addr1>[<port1>],<remote-locator-addr2>[<port2>]
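Each cluster uses a unique distributed-system-id and lists the other clusters' locators in remote-locators. As a sketch, the corresponding properties for a member of the remote cluster (reusing the placeholder addresses above) would be:
locators=<remote-locator-addr1>[<port1>],<remote-locator-addr2>[<port2>]
distributed-system-id=2
remote-locators=<locator1-address>[<port1>],<locator2-address>[<port2>]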
Configure the gateway senders that you will use to distribute region events to remote systems. See Configure Gateway Senders.
Each gateway sender configuration includes, at minimum, a unique sender ID and the distributed system ID of the remote cluster to which the sender distributes region events.
Note: To configure a gateway sender with gfsh, which creates the equivalent of the cache.xml configurations described below and supports additional options, see create gateway-sender.
See WAN Configuration for more information about individual configuration properties.
For each Tanzu GemFire system, choose the members that will host a gateway sender configuration and distribute region events to remote sites:
Configure each gateway sender on a Tanzu GemFire member using gfsh, cache.xml, or the Java API:
gfsh configuration command
gfsh>create gateway-sender --id="sender2" --parallel=true --remote-distributed-system-id="2"
gfsh>create gateway-sender --id="sender3" --parallel=true --remote-distributed-system-id="3"
cache.xml configuration
These example cache.xml
entries configure two parallel gateway senders to distribute region events to two remote Tanzu GemFire clusters (clusters “2” and “3”):
<cache>
<gateway-sender id="sender2" parallel="true"
remote-distributed-system-id="2"/>
<gateway-sender id="sender3" parallel="true"
remote-distributed-system-id="3"/>
...
</cache>
Java configuration
This example code shows how to configure a parallel gateway sender using the API:
// Create or obtain the cache
Cache cache = new CacheFactory().create();
// Configure and create the gateway sender
GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
gateway.setParallel(true);
GatewaySender sender = gateway.create("sender2", 2);
sender.start();
Depending on your applications, you may need to configure additional features in each gateway sender. Consider the following options; a consolidated Java API sketch of these options appears after the list:
The maximum amount of memory, in megabytes, that each gateway sender queue can use. When the queue exceeds the configured amount of memory, the contents of the queue overflow to disk. For example:
gfsh>create gateway-sender --id=sender2 --parallel=true --remote-distributed-system-id=2 --maximum-queue-memory=150
In cache.xml:
<gateway-sender id="sender2" parallel="true"
remote-distributed-system-id="2"
maximum-queue-memory="150"/>
Whether to enable disk persistence, and whether to use a named disk store for persistence or for overflowing queue events. See Persisting an Event Queue. For example:
gfsh>create gateway-sender --id=sender2 --parallel=true --remote-distributed-system-id=2 \
--maximum-queue-memory=150 --enable-persistence=true --disk-store-name=cluster2Store
In cache.xml:
<gateway-sender id="sender2" parallel="true"
remote-distributed-system-id="2"
enable-persistence="true" disk-store-name="cluster2Store"
maximum-queue-memory="150"/>
The number of dispatcher threads to use for processing events from each gateway queue. The dispatcher-threads attribute of the gateway sender specifies the number of threads that process the queue (the default is 5). For example:
gfsh>create gateway-sender --id=sender2 --parallel=false --remote-distributed-system-id=2 \
--dispatcher-threads=2 --order-policy=partition
In cache.xml:
<gateway-sender id="sender2" parallel="false"
remote-distributed-system-id="2"
dispatcher-threads="2" order-policy="partition"/>
Note: When multiple dispatcher threads are configured for a serial queue, each thread operates on its own copy of the gateway sender queue. Queue configuration attributes such as maximum-queue-memory
are repeated for each dispatcher thread that you configure.
See Configuring Dispatcher Threads and Order Policy for Event Distribution.
For serial gateway senders that use multiple dispatcher-threads, also configure the ordering policy to use for dispatching the events. See Configuring Dispatcher Threads and Order Policy for Event Distribution.
Note: The gateway sender configuration for a specific sender id must be identical on each Tanzu GemFire member that hosts the gateway sender.
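The same queue options can also be set through the Java API before creating the sender. The following is a minimal sketch, assuming the GatewaySenderFactory setter methods for these attributes and reusing representative values from the gfsh examples above (configured here for a parallel sender):
// Configure queue overflow, persistence, and dispatching before creating the sender
GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
gateway.setParallel(true);
gateway.setMaximumQueueMemory(150);        // queue memory in MB before overflowing to disk
gateway.setPersistenceEnabled(true);       // persist queue events
gateway.setDiskStoreName("cluster2Store"); // named disk store for persistence/overflow
gateway.setDispatcherThreads(2);           // number of threads that process the queue
GatewaySender sender = gateway.create("sender2", 2);
sender.start();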
When using a multi-site configuration, you choose which data regions to share between sites. Because of the high cost of distributing data between disparate geographical locations, not all changes are passed between sites.
Note these important restrictions on the regions:
Replicated regions cannot use a parallel gateway sender. Use a serial gateway sender instead.
In addition to configuring regions with gateway senders to distribute events, you must configure the same regions in the target clusters to apply the distributed events. The region name in the receiving cluster must exactly match the region name in the sending cluster.
Regions using the same parallel gateway sender ID must be colocated.
If any gateway sender configured for a region has the group-transaction-events
flag set to true, then all regions involved in a transaction must have the same gateway senders configured, each with this flag set to true. This requires careful configuration of regions and gateway senders according to the transactions expected in the system (a gfsh sketch of enabling the flag follows the example below).
Example: Assume a system with several gateway senders, some of which have group-transaction-events set to true, attached to different regions. Whether all the events for a given transaction are guaranteed to be sent in the same batch then depends on which regions the transaction involves and on whether every gateway sender attached to those regions has group-transaction-events set to true.
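The group-transaction-events flag is set when the gateway sender is created. For example, a hedged gfsh sketch, assuming the --group-transaction-events option of create gateway-sender:
gfsh>create gateway-sender --id=sender2 --parallel=true --remote-distributed-system-id=2 \
--group-transaction-events=true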
After you define gateway senders, configure regions to use the gateway senders to distribute region events.
gfsh Configuration
gfsh>create region --name=customer --gateway-sender-id=sender2,sender3
or to modify an existing region:
gfsh>alter region --name=customer --gateway-sender-id=sender2,sender3
cache.xml Configuration
Use the gateway-sender-ids
region attribute to add gateway senders to a region. To assign multiple gateway senders, use a comma-separated list. For example:
<region-attributes gateway-sender-ids="sender2,sender3">
</region-attributes>
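For context, a sketch of a complete region declaration that uses these attributes, with the region name taken from the gfsh example above and the PARTITION shortcut referenced via refid:
<region name="customer" refid="PARTITION">
<region-attributes gateway-sender-ids="sender2,sender3"/>
</region>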
Java API Configuration
This example shows adding two gateway senders (configured in the earlier example) to a partitioned region:
RegionFactory rf =
cache.createRegionFactory(RegionShortcut.PARTITION);
rf.addCacheListener(new LoggingCacheListener());
rf.addGatewaySenderId("sender2");
rf.addGatewaySenderId("sender3");
Region custRegion = rf.create("customer");
Note: When using the Java API, you must configure a parallel gateway sender before you add its id to a region. This ensures that the sender distributes region events that were persisted before new cache operations take place. If the gateway sender id does not exist when you add it to a region, you receive an IllegalStateException.
Always configure a gateway receiver in each Tanzu GemFire cluster that will receive and apply region events from another cluster.
A gateway receiver configuration can be applied to multiple Tanzu GemFire servers for load balancing and high availability. However, each Tanzu GemFire member that hosts a gateway receiver must also define all of the regions for which the receiver may receive an event. If a gateway receiver receives an event for a region that the local member does not define, Tanzu GemFire throws an exception. See Create Data Regions for Multi-site Communication.
Note: You can only host one gateway receiver per member.
A gateway receiver configuration specifies a range of possible port numbers on which to listen. The Tanzu GemFire server picks an unused port number from the specified range to use for the receiver process. You can use this functionality to easily deploy the same gateway receiver configuration to multiple members.
You can optionally configure gateway receivers to provide a specific IP address or host name for gateway sender connections. If you configure hostname-for-senders, locators will use the provided host name or IP address when instructing gateway senders how to connect to gateway receivers. If you provide "" or null as the value, the gateway receiver's bind-address is sent to gateway senders by default.
In addition, you can configure gateway receivers to start automatically or, by setting manual-start
to true, to require a manual start. By default, gateway receivers start automatically.
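For example, a sketch of creating a receiver that requires a manual start and starting it later, assuming the --manual-start option of create gateway-receiver and the start gateway-receiver command (port range reused from the example below):
gfsh>create gateway-receiver --manual-start=true --start-port=1530 --end-port=1551
gfsh>start gateway-receiver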
Note: To configure a gateway receiver, you can use gfsh, cache.xml or Java API configurations as described below. For more information about configuring gateway receivers in gfsh, see create gateway-receiver.
gfsh configuration command
gfsh>create gateway-receiver --start-port=1530 --end-port=1551 \
--hostname-for-senders=gateway1.mycompany.com
cache.xml Configuration
The following configuration defines a gateway receiver that listens on an unused port in the range from 1530 to 1550:
<cache>
<gateway-receiver start-port="1530" end-port="1551"
hostname-for-senders="gateway1.mycompany.com" />
...
</cache>
Java API Configuration
// Create or obtain the cache
Cache cache = new CacheFactory().create();
// Configure and create the gateway receiver
GatewayReceiverFactory gateway = cache.createGatewayReceiverFactory();
gateway.setStartPort(1530);
gateway.setEndPort(1551);
gateway.setHostnameForSenders("gateway1.mycompany.com");
GatewayReceiver receiver = gateway.create();
Note: When using the Java API, you must create any region that might receive events from a remote site before you create the gateway receiver. Otherwise, batches of events could arrive from remote sites before the regions for those events have been created. If this occurs, the local site will throw exceptions because the receiving region does not yet exist. If you define regions in cache.xml
, the correct startup order is handled automatically.
After starting new gateway receivers, you can execute the load-balance gateway-sender command in gfsh
so that a specific gateway sender can rebalance its connections and connect to the newly added remote gateway receivers. Invoking this command redistributes that gateway sender's connections more evenly among all the gateway receivers.
Another option is to use the GatewaySender.rebalance
Java API.
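A minimal Java sketch of the rebalance call, assuming Cache.getGatewaySender to look up the sender by id (the sender id ny matches the example that follows):
// Look up an existing gateway sender by id and redistribute its receiver connections
GatewaySender sender = cache.getGatewaySender("ny");
sender.rebalance();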
As an example, assume two clusters, ln and ny, each of which hosts gateway senders and gateway receivers for the other. In the ny cluster, all of the connections from the ln cluster's gateway senders (id ny) currently go to a single gateway receiver, because the other receivers were started later. Running list gateways against the ny cluster shows the initial, uneven distribution of connections:
gfsh -e "connect --locator=localhost[10331]" -e "list gateways"
...
(2) Executing - list gateways
GatewaySender Section
GatewaySender Id | Member | Remote Cluster Id | Type | Status | Queued Events | Receiver Location
---------------- | --------------------------------- | ----------------- | -------- | ------- | ------------- | -----------------
ln | mymac(ny-1:88641)<v2>:33491 | 2 | Parallel | Running | 0 | mymac:5037
ln | mymac(ny-2:88705)<v3>:29329 | 2 | Parallel | Running | 0 | mymac:5064
ln | mymac(ny-3:88715)<v4>:36808 | 2 | Parallel | Running | 0 | mymac:5132
ln | mymac(ny-4:88724)<v5>:52993 | 2 | Parallel | Running | 0 | mymac:5324
GatewayReceiver Section
Member | Port | Sender Count | Senders Connected
--------------------------------- | ---- | ------------ | --------------------------------------------------------------------------
mymac(ny-1:88641)<v2>:33491 | 5057 | 24 | ["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-2:88662)<v3>:12796","mymac(ln-3:88672)<v4>:43675"]
mymac(ny-2:88705)<v3>:29329 | 5082 | 0 | []
mymac(ny-3:88715)<v4>:36808 | 5371 | 0 | []
mymac(ny-4:88724)<v5>:52993 | 5247 | 0 | []
Execute the load-balance command:
gfsh -e "connect --locator=localhost[10441]" -e "load-balance gateway-sender --id=ny"...
(2) Executing - load-balance gateway-sender --id=ny
Member | Result | Message
--------------------------------- | ------ |--------------------------------------------------------------------------
mymac(ln-1:88651)<v2>:48277 | OK | GatewaySender ny is rebalanced on member mymac(ln-1:88651)<v2>:48277
mymac(ln-4:88681)<v5>:42784 | OK | GatewaySender ny is rebalanced on member mymac(ln-4:88681)<v5>:42784
mymac(ln-3:88672)<v4>:43675 | OK | GatewaySender ny is rebalanced on member mymac(ln-3:88672)<v4>:43675
mymac(ln-2:88662)<v3>:12796 | OK | GatewaySender ny is rebalanced on member mymac(ln-2:88662)<v3>:12796
Listing gateways in ny again shows the connections are spread much better among the receivers.
gfsh -e "connect --locator=localhost[10331]" -e "list gateways"...
(2) Executing - list gateways
GatewaySender Section
GatewaySender Id | Member | Remote Cluster Id | Type | Status | Queued Events | Receiver Location
---------------- | --------------------------------- | ---------------- | -------- | ------- | ------------- | -----------------
ln | mymac(ny-1:88641)<v2>:33491 | 2 | Parallel | Running | 0 | mymac:5037
ln | mymac(ny-2:88705)<v3>:29329 | 2 | Parallel | Running | 0 | mymac:5064
ln | mymac(ny-3:88715)<v4>:36808 | 2 | Parallel | Running | 0 | mymac:5132
ln | mymac(ny-4:88724)<v5>:52993 | 2 | Parallel | Running | 0 | mymac:5324
GatewayReceiver Section
Member | Port | Sender Count | Senders Connected
--------------------------------- | ---- | ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------
mymac(ny-1:88641)<v2>:33491 | 5057 | 9 |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-3:88672)<v4>:43675","mymac(ln-2:88662)<v3>:12796"]
mymac(ny-2:88705)<v3>:29329 | 5082 | 4 |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-3:88672)<v4>:43675"]
mymac(ny-3:88715)<v4>:36808 | 5371 | 4 |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-3:88672)<v4>:43675"]
mymac(ny-4:88724)<v5>:52993 | 5247 | 3 |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-3:88672)<v4>:43675"]
Running the load-balance command in site ln again produces an even better balance.
Member | Port | Sender Count | Senders Connected
--------------------------------- | ---- | ------------ |-------------------------------------------------------------------------------------------------------------------------------------------------
mymac(ny-1:88641)<v2>:33491 | 5057 | 7 |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-2:88662)<v3>:12796","mymac(ln-3:88672)<v4>:43675"]
mymac(ny-2:88705)<v3>:29329 | 5082 | 3 |["mymac(ln-1:88651)<v2>:48277","mymac(ln-3:88672)<v4>:43675","mymac(ln-2:88662)<v3>:12796"]
mymac(ny-3:88715)<v4>:36808 | 5371 | 5 |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-2:88662)<v3>:12796","mymac(ln-3:88672)<v4>:43675"]
mymac(ny-4:88724)<v5>:52993 | 5247 | 6 |["mymac(ln-1:88651)<v2>:48277","mymac(ln-4:88681)<v5>:42784","mymac(ln-2:88662)<v3>:12796","mymac(ln-3:88672)<v4>:43675"]
You may have a WAN deployment in which you do not want to expose the IP address and port of every gateway receiver to other sites, but instead expose just one IP address and port for all gateway receivers. This way, the internal topology of the site is hidden from other sites. This case is common in cloud deployments, in which a reverse proxy/load balancer distributes incoming requests to the site (in this case, replication requests) among the available servers (in this case, gateway receivers).
Tanzu GemFire supports this configuration by means of a particular use of the hostname-for-senders
, start-port
, and end-port
parameters of the gateway receiver.
In order to configure a WAN deployment that hides the gateway receivers behind the same IP address and port, every gateway receiver in the site must be configured with the same value for:
the hostname-for-senders parameter (the host name or IP address to be used by gateway senders to access them)
the start-port and end-port parameters (the port to be used by gateway senders to access them)
The following example shows a deployment in which all gateway receivers of a site are accessed via the “gateway1.mycompany.com” hostname and port “1971”; every gateway receiver in the site must be configured as follows:
gfsh> create gateway-receiver --hostname-for-senders="gateway1.mycompany.com" --start-port=1971 --end-port=1971
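A cache.xml sketch of the same configuration for each receiver, following the gateway-receiver element shown earlier:
<gateway-receiver start-port="1971" end-port="1971"
hostname-for-senders="gateway1.mycompany.com" />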
The following output shows what the receiver side would look like after this configuration if four gateway receivers were configured:
Cluster-ny gfsh>list gateways
GatewayReceiver Section
Member | Port | Sender Count | Senders Connected
----------------------------------| ---- | ------------ | --------------------------------------------------------------------------------------------------------------------------------------
192.168.1.20(ny1:21901)<v1>:41000 | 1971 | 1 | 192.168.0.13(ln4:22520)<v4>:41005
192.168.2.20(ny2:22150)<v2>:41000 | 1971 | 2 | 192.168.0.13(ln2:22004)<v2>:41003, 192.168.0.13(ln3:22252)<v3>:41004
192.168.3.20(ny3:22371)<v3>:41000 | 1971 | 2 | 192.168.0.13(ln3:22252)<v3>:41004, 192.168.0.13(ln2:22004)<v2>:41003
192.168.4.20(ny4:22615)<v4>:41000 | 1971 | 3 | 192.168.0.13(ln4:22520)<v4>:41005, 192.168.0.13(ln1:21755)<v1>:41002, 192.168.0.13(ln1:21755)<v1>:41002
When the gateway senders on one site are started, they get the information about the gateway receivers of the remote site from the locators running on the remote site. The remote locator provides a list of gateway receivers to send replication events to (one element per gateway receiver running in the site), with all of them listening on the same hostname and port. From the gateway sender’s standpoint, it is as if only one gateway receiver is on the other side.
The following output shows the gateway information on the sender side. Only one host name and port appears as the receiver location (gateway1.mycompany.com:1971), even though there are actually four gateway receivers on the other side.
Cluster-ln gfsh>list gateways
GatewaySender Section
GatewaySender Id | Member | Remote Cluster Id | Type | Status | Queued Events | Receiver Location
---------------- | ----------------------------------| ----------------- | -------- | --------------------- | ------------- | ---------------------------
ny | 192.168.0.13(ln2:22004)<v2>:41003 | 2 | Parallel | Running and Connected | 0 | gateway1.mycompany.com:1971
ny | 192.168.0.13(ln3:22252)<v3>:41004 | 2 | Parallel | Running and Connected | 0 | gateway1.mycompany.com:1971
ny | 192.168.0.13(ln4:22520)<v4>:41005 | 2 | Parallel | Running and Connected | 0 | gateway1.mycompany.com:1971
ny | 192.168.0.13(ln1:21755)<v1>:41002 | 2 | Parallel | Running and Connected | 0 | gateway1.mycompany.com:1971
For the gateway senders to communicate with the remote gateway receivers, the deployment must include a reverse proxy/load balancer service that receives the requests directed to the gateway receivers at the configured IP address and port, and forwards each request to one of the gateway receivers on the remote site.