This topic describes network partitioning scenarios and what happens to the partitioned sides of the cluster.
In a network partitioning scenario, the “losing side” is the cluster partition in which the membership coordinator has detected that too few members remain to constitute a quorum.
The membership coordinator calculates the membership weight change after sending out its view preparation message. If a quorum of members does not remain after the view preparation phase, the coordinator on the “losing side” declares a network partition event and sends a network-partition-detected UDP message to the members. The coordinator then closes its cluster with a `ForcedDisconnectException`. If a member fails to receive the message before the coordinator closes the connection, it is responsible for detecting the event on its own.
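This behavior applies only when network partition detection is enabled. As a minimal sketch of enabling it programmatically, assuming the Apache Geode API and the `enable-network-partition-detection` property (older GemFire releases use `com.gemstone.gemfire` package names, and some versions default this property to false):

```java
import java.util.Properties;

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;

public class PartitionDetectionConfig {
  public static void main(String[] args) {
    Properties props = new Properties();
    // Enable split-brain handling; use the same setting on every member.
    props.setProperty("enable-network-partition-detection", "true");
    Cache cache = new CacheFactory(props).create();
    // ... normal cache usage ...
    cache.close();
  }
}
```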
When the losing side discovers that a network partition event has occurred, all peer members receive a `RegionDestroyedException` with operation `FORCED_DISCONNECT`.
If a `CacheListener` is installed, the `afterRegionDestroy` callback is invoked with a `RegionDestroyedEvent`, as shown in this example logged by the losing side’s callback. The peer member process IDs are 14291 (the lead member) and 14296, and the locator is 14289.
```
[info 2008/05/01 11:14:51.853 PDT <CloserThread> tid=0x4a]
Invoked splitBrain.SBListener: afterRegionDestroy in client1 whereIWasRegistered: 14291
event.isReinitializing(): false
event.getDistributedMember(): thor(14291):40440/34132
event.getCallbackArgument(): null
event.getRegion(): /TestRegion
event.isOriginRemote(): false
Operation: FORCED_DISCONNECT
Operation.isDistributed(): false
Operation.isExpiration(): false
```
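For illustration, a listener along these lines might look like the following sketch, written against the Apache Geode API (`org.apache.geode` packages; older GemFire releases use `com.gemstone.gemfire` equivalents). The class name `PartitionListener` is hypothetical, not the test listener that produced the log above:

```java
import org.apache.geode.cache.Operation;
import org.apache.geode.cache.RegionEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class PartitionListener extends CacheListenerAdapter<Object, Object> {
  @Override
  public void afterRegionDestroy(RegionEvent<Object, Object> event) {
    // On the losing side, regions are destroyed with FORCED_DISCONNECT.
    if (event.getOperation() == Operation.FORCED_DISCONNECT) {
      System.out.println("Region " + event.getRegion().getFullPath()
          + " destroyed by forced disconnect; reinitializing="
          + event.isReinitializing());
    }
  }
}
```

Such a listener would be installed through the region attributes, for example with `RegionFactory.addCacheListener(new PartitionListener())`.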
Peers still actively performing operations on the cache may see `ShutdownException`s or `CacheClosedException`s with `Caused by: ForcedDisconnectException`.
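If an application needs to distinguish a forced disconnect from an ordinary cache close, one option is to walk the exception’s cause chain. A minimal sketch, assuming the `org.apache.geode` package locations for these exception classes:

```java
import org.apache.geode.ForcedDisconnectException;
import org.apache.geode.cache.CacheClosedException;

public final class DisconnectCheck {
  /** Returns true if the cache was closed by a forced disconnect. */
  public static boolean causedByForcedDisconnect(CacheClosedException e) {
    for (Throwable cause = e.getCause(); cause != null; cause = cause.getCause()) {
      if (cause instanceof ForcedDisconnectException) {
        return true;
      }
    }
    return false;
  }
}
```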
When a member is isolated from all locators, it is unable to receive membership view changes. It cannot know whether the current coordinator is still present or, if the coordinator has left, whether other members are available to take over that role. In this condition, the member eventually detects the loss of all other members and uses the loss threshold to determine whether it should shut itself down. In a cluster with two locators and two cache servers, for example, the loss of communication with the non-lead cache server plus both locators would put the remaining (lead) cache server in this situation, and it would eventually shut itself down.
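To make the arithmetic concrete, assume the default member weights documented for network partition detection: 10 for a cache server, an extra 5 for the lead member, and 3 for each locator. The total weight of the original view is then 15 + 10 + 3 + 3 = 31. The isolated lead cache server retains only 15 of 31, roughly 48 percent, so less than half of the membership weight remains and it cannot claim a quorum.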