This topic explains how VMware Tanzu GemFire uses failure detection to remove unresponsive members from membership views.

Failure Detection

Network partitioning includes a failure detection protocol that is not subject to hanging when NICs or machines fail. Under this protocol, each member observes messages from the peer to its right within the membership view (see Membership Views below for the view layout).
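
The "peer to its right" is the next member in the ordered view. The following is a minimal sketch of that rule, assuming the view is an ordered list of member IDs and that the ordering wraps around at the end of the view; it is illustrative only, not GemFire code:

import java.util.List;

public class RingWatcher {
    // Each member monitors the next member in the ordered view,
    // wrapping around from the last member back to the first.
    static String neighborToWatch(List<String> view, String self) {
        int i = view.indexOf(self);
        return view.get((i + 1) % view.size());
    }

    public static void main(String[] args) {
        List<String> view = List.of("m1", "m2", "m3", "m4");
        System.out.println(neighborToWatch(view, "m2")); // m3
        System.out.println(neighborToWatch(view, "m4")); // m1 (wraps around)
    }
}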

A member that suspects the failure of its peer to the right sends a heartbeat request to the suspect member. If the suspect member does not respond, the suspecting member broadcasts a SuspectMembersMessage to all other members.

The coordinator attempts to connect to the suspect member. If the connection attempt is unsuccessful, the suspect member is removed from the membership view.

The suspect member is sent a message telling it to disconnect from the cluster and close the cache. If the suspect member is the coordinator, then in parallel with the receipt of the SuspectMembersMessage, a distributed algorithm promotes the leftmost member within the view to act as the coordinator.
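
The steps above can be summarized in a short sketch. SuspectMembersMessage is the message named in this topic; the Transport operations and member IDs are hypothetical stubs rather than GemFire internals:

public class SuspicionFlowSketch {
    record SuspectMembersMessage(String suspect) {}

    // Stubbed messaging operations; the real GemFire internals differ.
    interface Transport {
        boolean sendHeartbeatRequest(String member); // false = no response
        boolean tryConnect(String member);           // coordinator's check
        void broadcast(SuspectMembersMessage msg);
        void removeFromView(String member);          // new view excludes member
    }

    private final Transport transport;

    SuspicionFlowSketch(Transport transport) {
        this.transport = transport;
    }

    // Run by the member watching its peer to the right.
    void onSuspicion(String suspect) {
        if (!transport.sendHeartbeatRequest(suspect)) {
            transport.broadcast(new SuspectMembersMessage(suspect));
        }
    }

    // Run by the coordinator when the broadcast arrives.
    void onSuspectMembersMessage(SuspectMembersMessage msg) {
        if (!transport.tryConnect(msg.suspect())) {
            transport.removeFromView(msg.suspect());
        }
    }

    public static void main(String[] args) {
        Transport stub = new Transport() {
            public boolean sendHeartbeatRequest(String member) { return false; }
            public boolean tryConnect(String member) { return false; }
            public void broadcast(SuspectMembersMessage msg) {
                System.out.println("suspecting " + msg.suspect());
            }
            public void removeFromView(String member) {
                System.out.println("removing " + member + " from the view");
            }
        };
        SuspicionFlowSketch sketch = new SuspicionFlowSketch(stub);
        sketch.onSuspicion("ent(5829)<v1>:48034");
        sketch.onSuspectMembersMessage(new SuspectMembersMessage("ent(5829)<v1>:48034"));
    }
}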

Failure detection processing is also initiated on a member if the gemfire.properties ack-wait-threshold elapses before a response to a message is received, if a TCP/IP connection cannot be made to the member for peer-to-peer (P2P) messaging, or if no other traffic is detected from the member.
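
For example, ack-wait-threshold can be set in gemfire.properties or programmatically when the cache is created. The sketch below assumes the Apache Geode-based Java API (org.apache.geode) used by current Tanzu GemFire releases; the 30-second value is arbitrary (the default is 15 seconds):

import java.util.Properties;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;

public class AckWaitThresholdExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Wait 30 seconds (instead of the 15-second default) for a
        // message acknowledgment before starting failure detection.
        props.setProperty("ack-wait-threshold", "30");
        Cache cache = new CacheFactory(props).create();
        try {
            // ... application work ...
        } finally {
            cache.close();
        }
    }
}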

Note: The TCP connection ping is not used for connection keep-alive purposes; it is used only to detect failed members. For TCP keep-alive configuration, see TCP/IP KeepAlive Configuration.

If a new membership view is sent out that includes one or more failed members, the coordinator logs the new quorum weight calculations. At any point, if quorum loss is detected due to unresponsive processes, the coordinator also logs a severe-level message that identifies the failed members:

Possible loss of quorum detected due to loss of {0} cache processes: {1}

in which {0} is the number of processes that failed and {1} lists the members (cache processes).
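
To make the quorum calculation concrete, the following sketch applies the default member weights documented for GemFire network partition detection (locator = 3, normal cache member = 10, lead member = 15) and the 51% loss threshold; the cluster composition here is invented for illustration:

import java.util.List;

public class QuorumWeightSketch {
    public static void main(String[] args) {
        // Weights: two locators (3 each), the lead member (15),
        // and two normal cache members (10 each).
        List<Integer> previousView = List.of(3, 3, 15, 10, 10); // total = 41
        List<Integer> lostMembers = List.of(15, 10);            // lost = 25
        int total = previousView.stream().mapToInt(Integer::intValue).sum();
        int lost = lostMembers.stream().mapToInt(Integer::intValue).sum();
        // Losing 25 of 41 (about 61%) exceeds the 51% threshold,
        // so the coordinator would declare loss of quorum.
        System.out.printf("lost %d of %d (%.0f%%): quorum %s%n",
                lost, total, 100.0 * lost / total,
                lost * 100 >= total * 51 ? "lost" : "intact");
    }
}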

Membership Views

The following is a sample membership view:

[info 2012/01/06 11:44:08.164 PST bridgegemfire1 <Membership Messenger Non-Blocking> tid=0x1f] 
Membership: received new view: View[ent(5767)<v0>:8700|16] [ent(5767)<v0>:8700/44876, 
ent(5829)<v1>:48034/55334, ent(5875)<v2>:4738/54595, ent(5822)<v5>:49380/39564, 
ent(8788)<v7>:24136/53525]

The components of the membership view are as follows:

  • The first part of the view ([ent(5767)<v0>:8700|16] in the example above) is the view ID. It identifies:
    • the address and processId of the membership coordinator: ent(5767) in the example above.
    • the view-number (<vXX>) of the membership view in which the member first appeared: <v0> in the example above.
    • the membership-port of the membership coordinator: 8700 in the example above.
    • the view-number: 16 in the example above.
  • The second part of the view lists all of the member processes in the current view: [ent(5767)<v0>:8700/44876, ent(5829)<v1>:48034/55334, ent(5875)<v2>:4738/54595, ent(5822)<v5>:49380/39564, ent(8788)<v7>:24136/53525] in the example above.
  • The overall format of each listed member is: Address(processId)<vXX>:membership-port/distribution-port (see the parsing sketch following this list). The membership coordinator is almost always the first member in the view, and the rest are ordered by age.
  • The membership-port is the port used for membership communication. The distribution-port is the TCP/IP port that is used for cache messaging.
  • Each member watches the member to its right for failure detection purposes.
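
The member-entry format can be parsed mechanically. The following sketch uses a regular expression inferred from the sample view above; the class and group names are illustrative, and this is not a GemFire API:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MemberEntryParser {
    // Matches Address(processId)<vXX>:membership-port/distribution-port;
    // the distribution-port is optional so the view-ID prefix also matches.
    private static final Pattern MEMBER = Pattern.compile(
            "(?<address>[^(]+)\\((?<pid>\\d+)\\)<v(?<view>\\d+)>:" +
            "(?<mport>\\d+)(?:/(?<dport>\\d+))?");

    public static void main(String[] args) {
        Matcher m = MEMBER.matcher("ent(5767)<v0>:8700/44876");
        if (m.matches()) {
            System.out.println("address = " + m.group("address"));
            System.out.println("processId = " + m.group("pid"));
            System.out.println("first view = v" + m.group("view"));
            System.out.println("membership-port = " + m.group("mport"));
            System.out.println("distribution-port = " + m.group("dport"));
        }
    }
}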