Tanzu GemFire handles network outages by using a weighting system to determine whether the remaining available members have a sufficient quorum to continue as a cluster.
Each member is assigned a weight, and quorum is determined by comparing the total weight of currently responsive members to the total weight of members that were responsive in the previous membership view.
Your cluster can split into separate running systems when members lose the ability to see each other. The typical cause of this problem is a failure in the network. When a partitioned system is detected, only one side of the system keeps running and the other side automatically shuts down.
The network partition detection feature is enabled by default: the enable-network-partition-detection property defaults to true. See Configure VMware Tanzu GemFire to Handle Network Partitioning for details. Quorum weight calculations are always performed and logged regardless of this configuration setting.
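Assuming an application that creates its cache programmatically with the org.apache.geode API used by current Tanzu GemFire releases, a minimal sketch of setting these properties might look like the following. The class name and the member-timeout value of 5000 ms are illustrative only.

```java
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;

// Minimal sketch: the class name and property values are illustrative, not prescriptive.
public class PartitionDetectionConfigExample {
    public static void main(String[] args) {
        Cache cache = new CacheFactory()
                // Network partition detection is on by default; setting it here only documents the intent.
                .set("enable-network-partition-detection", "true")
                // member-timeout (milliseconds) also influences the view ack timeout described later.
                .set("member-timeout", "5000")
                .create();
        try {
            // ... create regions and run the application ...
        } finally {
            cache.close();
        }
    }
}
```

The same properties can also be supplied in a gemfire.properties file rather than in code.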
The overall process for detecting a network partition is as follows:
Members join and, if necessary, depart the cluster:
While members are joining the system, it is possible that members are also leaving or being removed through the normal failure detection process. Failure detection removes unresponsive or slow members. See Managing Slow Receivers and Failure Detection and Membership Views for descriptions of the failure detection process. If a new membership view is sent out that includes one or more failed processes, the coordinator logs the new weight calculations. At any point, if quorum loss is detected due to unresponsive processes, the coordinator also logs a severe-level message that identifies the failed processes:
Possible loss of quorum detected due to loss of {0} cache processes: {1}
where {0} is the number of processes that failed and {1} lists the processes.
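For example, if two cache servers stopped responding, the message might read as follows (the member names and list formatting here are illustrative):
Possible loss of quorum detected due to loss of 2 cache processes: [server1, server2]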
Whenever the coordinator is notified of a membership change (a member either joins or leaves the cluster), the coordinator generates a new membership view. The membership view is generated by a two-phase protocol:
In the first phase, the membership coordinator sends a view preparation message to all members and waits for a view preparation acknowledgement from each member. If the coordinator does not receive an ack message from a member within the view ack timeout period (described below), the coordinator attempts to connect to that member's failure-detection socket. If that connection also fails, the coordinator declares the member dead and restarts the membership view protocol from the beginning.
The timeout period for acknowledgement of a view change, the view ack timeout period, is based on the value of the member-timeout system property and defaults to about 12 seconds (12437 ms). The allowable range for the view ack timeout is 1500 ms to 12437 ms.
In the second phase, the coordinator sends out the new membership view to all members that acknowledged the view preparation message or passed the connection test.
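The following sketch outlines these two phases in Java. It is purely illustrative: the Member interface and its methods are hypothetical stand-ins for the membership internals, not actual Tanzu GemFire classes.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of the two-phase membership view protocol (hypothetical types). */
class ViewCoordinatorSketch {

    /** Hypothetical handle for a cluster member as seen by the coordinator. */
    interface Member {
        boolean sendViewPreparationAndAwaitAck(List<Member> proposedView, long ackTimeoutMillis);
        boolean failureDetectionSocketReachable();
        void sendView(List<Member> newView);
    }

    List<Member> installView(List<Member> proposedView, long viewAckTimeoutMillis) {
        // Phase 1: send the view preparation message and wait for acknowledgements.
        List<Member> acknowledged = new ArrayList<>();
        for (Member member : proposedView) {
            if (member.sendViewPreparationAndAwaitAck(proposedView, viewAckTimeoutMillis)) {
                acknowledged.add(member);        // ack received within the view ack timeout
            } else if (member.failureDetectionSocketReachable()) {
                acknowledged.add(member);        // no ack, but the connection test passed
            } else {
                // No ack and the failure-detection socket is unreachable:
                // declare the member dead and restart the protocol without it.
                List<Member> withoutDead = new ArrayList<>(proposedView);
                withoutDead.remove(member);
                return installView(withoutDead, viewAckTimeoutMillis);
            }
        }

        // Phase 2: send the new membership view to every member that acknowledged
        // the preparation message or passed the connection test.
        for (Member member : acknowledged) {
            member.sendView(acknowledged);
        }
        return acknowledged;
    }
}
```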
Each time the membership coordinator sends a view, each member calculates the total weight of members in the current membership view and compares it to the total weight of the previous membership view. Some conditions to note:
With enable-network-partition-detection at its default value of true, any member that detects a loss of quorum declares a network partition event. A loss of quorum occurs when half or more of the total member weight is lost in a single view change; the sketch at the end of this topic illustrates the check. In that case, the coordinator sends a network-partitioned-detected UDP message to all members (even the non-responsive ones) and then closes the cluster with a ForcedDisconnectException. If a member fails to receive the message before the coordinator closes the system, the member is responsible for detecting the event on its own.
The presumption is that when a network partition is declared, the members that comprise a quorum will continue operations. The surviving members elect a new coordinator, designate a lead member, and so on.
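As a worked illustration of the loss-of-quorum check described above, the following sketch compares the total weight of the current membership view with that of the previous view and reports a loss of quorum when half or more of the weight disappears in a single view change. The member names and weight values are hypothetical examples; Tanzu GemFire assigns member weights internally.

```java
import java.util.Map;

/** Illustrative quorum-weight check; member names and weights are hypothetical. */
public class QuorumCheckSketch {

    /** Returns true when half or more of the previous view's weight was lost in one view change. */
    static boolean lossOfQuorum(Map<String, Integer> previousView, Map<String, Integer> currentView) {
        int previousWeight = previousView.values().stream().mapToInt(Integer::intValue).sum();
        int currentWeight = currentView.values().stream().mapToInt(Integer::intValue).sum();
        int lostWeight = previousWeight - currentWeight;
        // "Half or more of the member weight is lost in a single view change."
        return 2 * lostWeight >= previousWeight;
    }

    public static void main(String[] args) {
        // Previous view: a lead member plus three cache servers (example weights only).
        Map<String, Integer> previous = Map.of("lead", 15, "server1", 10, "server2", 10, "server3", 10);
        // Current view: the lead member and server1 dropped out, losing 25 of the 45 total weight.
        Map<String, Integer> current = Map.of("server2", 10, "server3", 10);
        // 25 lost is more than half of 45, so this side has lost quorum.
        System.out.println("Loss of quorum: " + lossOfQuorum(previous, current));  // prints true
    }
}
```

In this illustration, the side holding only server2 and server3 has lost more than half of the previous weight, so it would declare the network partition event and shut down, while the side retaining the quorum continues as the cluster.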