This topic describes how to set up a unidirectional WAN connection, using TLS encryption, between two VMware Tanzu GemFire for Tanzu Application Service instances in two different foundations, such that all operations in Cluster A are replicated in Cluster B. Two design patterns that use unidirectional replication are described in Blue-Green Disaster Recovery and CQRS Pattern Across a WAN.
Log into Foundation A using Foundation A Cloud Foundry credentials.
Use the cf create-service command to create a service instance, which we will call Cluster A in this example. In the cf create-service command, use the -c option with the argument "tls" : true to enable TLS. This example also uses the -c option to set the distributed_system_id of Cluster A to 1:
$ cf create-service p-cloudcache wan-cluster wan1 -c '{ "tls" : true, "distributed_system_id" : 1 }'
Verify the completion of service creation prior to continuing to the next step. Output from the cf services command will show the last operation as create succeeded when service creation is complete.
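One way to watch for completion from the command line (assuming the service instance name wan1 used in this example) is to filter the cf services listing, or to query the single instance directly:

$ cf services | grep wan1
$ cf service wan1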
Log into Foundation B using Foundation B Cloud Foundry credentials.
Use the cf create-service command to create a service instance, which we will call Cluster B. In the cf create-service command, use the -c option with the argument "tls" : true to enable TLS. This example also uses the -c option to set the distributed_system_id of Cluster B to 2:
$ cf create-service p-cloudcache wan-cluster wan2 -c '{ "tls" : true, "distributed_system_id" : 2 }'
Verify the completion of service creation prior to continuing to the next step. Output from the cf services command will show the last operation as create succeeded when service creation is complete.
While logged in to Foundation A, create a service key for Cluster A. The contents of the service key differ based on the cluster configuration, such as whether external authentication such as LDAP via User Account and Authentication (UAA) has been configured.
$ cf create-service-key wan1 k1
Creating service key k1 for service instance wan1 ...
OK
The service key contains generated credentials, in a JSON element called remote_cluster_info, that enable other clusters (Cluster B in this example) to communicate with Cluster A:
$ cf service-key wan1 k1
Getting key k1 for service instance wan1 as admin...

{
 "distributed_system_id": "1",
 "gfsh_login_string": "connect --url=https://cloudcache-url.com/gemfire/v1 --user=cluster_operator_user --password=pass --skip-ssl-validation",
 "locators": [
  "id1.locator.services-subnet.service-instance-id.bosh[55221]",
  "id2.locator.services-subnet.service-instance-id.bosh[55221]",
  "id3.locator.services-subnet.service-instance-id.bosh[55221]"
 ],
 "remote_cluster_info": {
  "recursors": {
   "services-subnet.service-instance-id.bosh": [
    "10.0.8.6:1053",
    "10.0.8.7:1053",
    "10.0.8.5:1053"
   ]
  },
  "remote_locators": [
   "id1.locator.services-subnet.service-instance-id.bosh[55221]",
   "id2.locator.services-subnet.service-instance-id.bosh[55221]",
   "id3.locator.services-subnet.service-instance-id.bosh[55221]"
  ],
  "trusted_sender_credentials": [
   {
    "password": "gws-GHI-password",
    "username": "gateway_sender_GHI"
   }
  ]
 },
 "tls-enabled": "true",
 "urls": {
  "gfsh": "https://cloudcache-1.example.com/gemfire/v1",
  "pulse": "https://cloudcache-1.example.com/pulse"
 },
 "users": [
  {
   "password": "cl-op-ABC-password",
   "roles": [
    "cluster_operator"
   ],
   "username": "cluster_operator_ABC"
  },
  {
   "password": "dev-DEF-password",
   "roles": [
    "developer"
   ],
   "username": "developer_DEF"
  }
 ],
 "wan": {}
}
While logged in to Foundation B, create a service key for Cluster B:
$ cf create-service-key wan2 k2
Creating service key k2 for service instance wan2 ...
OK
As above, the service key contains generated credentials, in a JSON element called remote_cluster_info, that enable other clusters (Cluster A in this example) to communicate with Cluster B:
$ cf service-key wan2 k2
Getting key k2 for service instance wan2 as admin...

{
 "distributed_system_id": "2",
 "gfsh_login_string": "connect --url=https://cloudcache-url.com/gemfire/v1 --user=cluster_operator_user --password=pass --skip-ssl-validation",
 "locators": [
  "id1.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
  "id2.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
  "id3.locator.services-subnet-2.service-instance-id-2.bosh[55221]"
 ],
 "remote_cluster_info": {
  "recursors": {
   "services-subnet-2.service-instance-id-2.bosh": [
    "10.1.16.7:1053",
    "10.1.16.6:1053",
    "10.1.16.8:1053"
   ]
  },
  "remote_locators": [
   "id1.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
   "id2.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
   "id3.locator.services-subnet-2.service-instance-id-2.bosh[55221]"
  ],
  "trusted_sender_credentials": [
   {
    "password": "gws-PQR-password",
    "username": "gateway_sender_PQR"
   }
  ]
 },
 "tls-enabled": "true",
 "urls": {
  "gfsh": "https://cloudcache-2.example.com/gemfire/v1",
  "pulse": "https://cloudcache-2.example.com/pulse"
 },
 "users": [
  {
   "password": "cl-op-JKL-password",
   "roles": [
    "cluster_operator"
   ],
   "username": "cluster_operator_JKL"
  },
  {
   "password": "dev-MNO-password",
   "roles": [
    "developer"
   ],
   "username": "developer_MNO"
  }
 ],
 "wan": {}
}
To establish unidirectional communication between Clusters A and B, the remote_clusters element in each must be updated with selected contents of the other's remote_cluster_info, which contains three elements:

- The recursors array and the remote_locators array enable communication between clusters.
- The trusted_sender_credentials element identifies a cluster from which data can be accepted for replication. For example, when Cluster B has been updated with Cluster A's trusted_sender_credentials, then Cluster B can accept and store data sent from Cluster A.

To implement a unidirectional WAN connection in which Cluster A sends data to Cluster B (as sketched below):

- Each cluster must be updated with the recursors array and the remote_locators array of the other.
- Cluster B must be updated with the trusted_sender_credentials of Cluster A.
- Cluster A must NOT be updated with the trusted_sender_credentials of Cluster B.
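The resulting asymmetry, with field values elided, can be sketched as follows (the complete cf update-service payloads appear in the steps below):

Cluster A's remote_clusters entry (describing Cluster B) carries connectivity information only:

{ "recursors": { ... }, "remote_locators": [ ... ] }

Cluster B's remote_clusters entry (describing Cluster A) carries the connectivity information plus Cluster A's gateway sender credentials, so that Cluster B accepts data sent by Cluster A:

{ "recursors": { ... }, "remote_locators": [ ... ], "trusted_sender_credentials": [ { ... } ] }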
Follow these steps to configure a unidirectional connection in which Cluster A is the active member, and Cluster B accepts data for replication but does not send operations to Cluster A:
While logged into Foundation A, update the Cluster A service instance, using the -c option to specify a remote_clusters element that includes the contents of the Cluster B service key remote_cluster_info element, including Cluster B's recursors array and remote_locators array, but NOT Cluster B's trusted_sender_credentials. This allows Cluster A to send messages to Cluster B, but blocks data flow from Cluster B to Cluster A.
$ cf update-service wan1 -c '
{
 "remote_clusters": [
  {
   "recursors": {
    "services-subnet-2.service-instance-id-2.bosh": [
     "10.1.16.7:1053",
     "10.1.16.6:1053",
     "10.1.16.8:1053"
    ]
   },
   "remote_locators": [
    "id1.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
    "id2.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
    "id3.locator.services-subnet-2.service-instance-id-2.bosh[55221]"
   ]
  }
 ]
}'

Updating service instance wan1 as admin
Verify the completion of the service update prior to continuing to the next step. Output from the cf services command will show the last operation as update succeeded when the service update is complete.
While logged in to Foundation B, update the Cluster B service instance, using the -c option to specify a remote_clusters element that includes the contents of the Cluster A service key remote_cluster_info element, including the recursors array, remote_locators array, and trusted_sender_credentials. This allows Cluster B to communicate with Cluster A, and to accept data from Cluster A:
$ cf update-service wan2 -c '
{
 "remote_clusters": [
  {
   "recursors": {
    "services-subnet.service-instance-id.bosh": [
     "10.0.8.5:1053",
     "10.0.8.7:1053",
     "10.0.8.6:1053"
    ]
   },
   "remote_locators": [
    "id1.locator.services-subnet.service-instance-id.bosh[55221]",
    "id2.locator.services-subnet.service-instance-id.bosh[55221]",
    "id3.locator.services-subnet.service-instance-id.bosh[55221]"
   ],
   "trusted_sender_credentials": [
    {
     "username": "gateway_sender_GHI",
     "password": "gws-GHI-password"
    }
   ]
  }
 ]
}'

Updating service instance wan2 as admin
Verify the completion of the service update prior to continuing to the next step. Output from the cf services command will show the last operation as update succeeded when the service update is complete.
To verify that each service instance has been correctly updated, delete and recreate the cluster service key. The recreated service key will have the same user identifiers and passwords as its predecessor, and will reflect the changes you specified in the recent cf update-service commands. In particular, the wan{} element at the end of each cluster's service key should now be populated with the other cluster's remote connection information. For example, to verify that the Cluster A service key was updated correctly, log in as Cluster A administrator and issue these commands to delete and recreate the Cluster A service key:
$ cf delete-service-key wan1 k1
...
$ cf create-service-key wan1 k1
Verify that the wan{} field of the Cluster A service key contains a remote_clusters element that specifies contact information for Cluster B, including Cluster B's remote_locators array, but NOT Cluster B's trusted_sender_credentials:
$ cf service-key wan1 k1
Getting key k1 for service instance wan1 ...

{
 "distributed_system_id": "1",
 "gfsh_login_string": "connect --url=https://cloudcache-url.com/gemfire/v1 --user=cluster_operator_user --password=pass --skip-ssl-validation",
 "locators": [
  "id1.locator.services-subnet.service-instance-id.bosh[55221]",
  "id2.locator.services-subnet.service-instance-id.bosh[55221]",
  "id3.locator.services-subnet.service-instance-id.bosh[55221]"
 ],
 "remote_cluster_info": {
  "recursors": {
   "services-subnet.service-instance-id.bosh": [
    "10.0.8.6:1053",
    "10.0.8.7:1053",
    "10.0.8.5:1053"
   ]
  },
  "remote_locators": [
   "id1.locator.services-subnet.service-instance-id.bosh[55221]",
   "id2.locator.services-subnet.service-instance-id.bosh[55221]",
   "id3.locator.services-subnet.service-instance-id.bosh[55221]"
  ],
  "trusted_sender_credentials": [
   {
    "password": "gws-GHI-password",
    "username": "gateway_sender_GHI"
   }
  ]
 },
 "tls-enabled": "true",
 "urls": {
  "gfsh": "https://cloudcache-1.example.com/gemfire/v1",
  "pulse": "https://cloudcache-1.example.com/pulse"
 },
 "users": [
  {
   "password": "cl-op-ABC-password",
   "roles": [
    "cluster_operator"
   ],
   "username": "cluster_operator_ABC"
  },
  {
   "password": "dev-DEF-password",
   "roles": [
    "developer"
   ],
   "username": "developer_DEF"
  }
 ],
 "wan": {
  "remote_clusters": [
   {
    "remote_locators": [
     "id1.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
     "id2.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
     "id3.locator.services-subnet-2.service-instance-id-2.bosh[55221]"
    ]
   }
  ]
 }
}
Similarly, to verify that the Cluster B service key was updated correctly, log in as Cluster B administrator and issue these commands to delete and recreate the Cluster B service key:
$ cf delete-service-key wan2 k2
...
$ cf create-service-key wan2 k2
Verify that the wan{} field of the Cluster B service key contains a remote_clusters element that specifies contact information for Cluster A, including Cluster A's remote_locators array and trusted_sender_credentials:
$ cf service-key wan2 k2
Getting key k2 for service instance wan2 ...

{
 "distributed_system_id": "2",
 "gfsh_login_string": "connect --url=https://cloudcache-url.com/gemfire/v1 --user=cluster_operator_user --password=pass --skip-ssl-validation",
 "locators": [
  "id1.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
  "id2.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
  "id3.locator.services-subnet-2.service-instance-id-2.bosh[55221]"
 ],
 "remote_cluster_info": {
  "recursors": {
   "services-subnet-2.service-instance-id-2.bosh": [
    "10.1.16.7:1053",
    "10.1.16.6:1053",
    "10.1.16.8:1053"
   ]
  },
  "remote_locators": [
   "id1.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
   "id2.locator.services-subnet-2.service-instance-id-2.bosh[55221]",
   "id3.locator.services-subnet-2.service-instance-id-2.bosh[55221]"
  ],
  "trusted_sender_credentials": [
   {
    "password": "gws-PQR-password",
    "username": "gateway_sender_PQR"
   }
  ]
 },
 "tls-enabled": "true",
 "urls": {
  "gfsh": "https://cloudcache-2.example.com/gemfire/v1",
  "pulse": "https://cloudcache-2.example.com/pulse"
 },
 "users": [
  {
   "password": "cl-op-JKL-password",
   "roles": [
    "cluster_operator"
   ],
   "username": "cluster_operator_JKL"
  },
  {
   "password": "dev-MNO-password",
   "roles": [
    "developer"
   ],
   "username": "developer_MNO"
  }
 ],
 "wan": {
  "remote_clusters": [
   {
    "remote_locators": [
     "id1.locator.services-subnet.service-instance-id.bosh[55221]",
     "id2.locator.services-subnet.service-instance-id.bosh[55221]",
     "id3.locator.services-subnet.service-instance-id.bosh[55221]"
    ],
    "trusted_sender_credentials": [
     "gateway_sender_GHI"
    ]
   }
  ]
 }
}
Create a gateway sender on the active cluster that sends to the passive cluster, then create regions on each cluster that have the same name and type. Note: the gateway sender must be created before the region that uses it, so there is a window in which region operations that occur after the region is created on Cluster A, but before the corresponding region is created on Cluster B, will be lost.
While logged in to Foundation A, use gfsh to create the Cluster A gateway sender and the region.
Create the Cluster A gateway sender. The required remote-distributed-system-id option identifies the distributed_system_id of the destination cluster. It is 2 for this example:
gfsh>create gateway-sender --id=send_to_2 --remote-distributed-system-id=2 --enable-persistence=true
Create the Cluster A region. The gateway-sender-id associates region operations with a specific gateway sender. The region must have an associated gateway sender in order to propagate region events across the WAN.
gfsh>create region --name=regionX --gateway-sender-id=send_to_2 --type=PARTITION_REDUNDANT
While logged in to Foundation B, use gfsh to create the Cluster B region. (Because Cluster B is the passive member in this unidirectional connection, it does not need a gateway sender.)
Create the Cluster B region:
gfsh>create region --name=regionX --type=PARTITION_REDUNDANT
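As a quick check after creating each region, you can list the regions on each cluster from its gfsh connection; regionX should appear on both Cluster A and Cluster B:

gfsh>list regions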
This verification uses gfsh to put a region entry into Cluster A, in order to observe that the entry appears in Cluster B.
While logged in to Foundation A, run gfsh and connect following the instructions in Connect with gfsh over HTTPS with a role that is able to manage both the cluster and the data.
Verify that Cluster A has a complete set of gateway senders and gateway receivers:
gfsh>list gateways

Also verify that there are no queued events in the gateway senders.
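For per-sender detail, such as confirming that the sender is running and its queue has drained, gfsh can also report the status of an individual gateway sender. A minimal sketch, using this example's sender id:

gfsh>status gateway-sender --id=send_to_2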
Log in to Foundation B in a separate terminal window, then run gfsh and connect following the instructions in Connect with gfsh over HTTPS with a role that is able to manage both the cluster and the data.
Verify that Cluster B has a complete set of gateway receivers (Cluster B should have no gateway senders):
gfsh>list gateways
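Gateway receiver status can also be queried directly from the Cluster B gfsh connection; a minimal sketch:

gfsh>status gateway-receiver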
From the Cluster A gfsh connection, use gfsh to perform a put operation. This example assumes that both the key and the value for the entry are strings. The gfsh help put command shows the put command's options. Here is an example:
gfsh>put --region=regionX --key=mykey1 --value=stringvalue1
Result      : true
Key Class   : java.lang.String
Key         : mykey1
Value Class : java.lang.String
Old Value   : null
Wait approximately 30 seconds, then use the Cluster B gfsh connection to perform a get of the same key that was put into the region on Cluster A.
gfsh>get --region=regionX --key=mykey1
Result      : true
Key Class   : java.lang.String
Key         : mykey1
Value Class : java.lang.String
Value       : "stringvalue1"
You should see that Cluster B has the value.
From the Cluster A gfsh connection, remove the test entry from Cluster A. WAN replication will cause the removal of the test entry from Cluster B.
gfsh>remove --region=regionX --key=mykey1
Result    : true
Key Class : java.lang.String
Key       : mykey1
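Optionally, you can also confirm that replication is one-way. A minimal sketch, using a hypothetical key mykey2: from the Cluster B gfsh connection, put an entry into the region:

gfsh>put --region=regionX --key=mykey2 --value=stringvalue2

Because Cluster B has no gateway sender, this entry should not be replicated to Cluster A, and a get for mykey2 from the Cluster A gfsh connection should find no value. When you are done, remove the test entry from Cluster B:

gfsh>remove --region=regionX --key=mykey2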