If a primary node in your cluster fails and then comes back online after you promote a standby node to be the new primary, the repmgr data becomes inaccurate. You can detect these irregularities with the repmgr cluster show command.
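For reference, a typical invocation of the command looks like the following; the configuration file path is an assumption and varies by installation.

```shell
# Run on any node in the cluster; /etc/repmgr.conf is an assumed path.
repmgr -f /etc/repmgr.conf cluster show
```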

Running repmgr cluster show on the Former Primary Node

In the following example, running the repmgr cluster show command on a former primary node that has come back online results in the following system output.

 
 ID | Name       | Role    | Status               | Upstream   | Location | Connection string
----+------------+---------+----------------------+------------+----------+------------------------------------------------
  1 | Node1 name | standby | ! running as primary | Node3 name | default  | host=host IP address user=repmgr dbname=repmgr
  2 | Node2 name | standby |   running            | Node3 name | default  | host=host IP address user=repmgr dbname=repmgr
  3 | Node3 name | primary | * running            |            | default  | host=host IP address user=repmgr dbname=repmgr

WARNING: following issues were detected
  - node "Node1 name" (ID: 1) is registered as standby but running as primary

In the example, node 3 is the current primary node in the cluster. Node 1 is the former primary: it is registered as a standby but is still running as a primary.

When you run the repmgr cluster show command, a status of ! running as primary for a node that is registered as a standby indicates that a former primary node is running in the cluster. In this case, you must shut down and unregister the former primary node.
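A minimal sketch of that remediation, assuming the former primary has node ID 1, Postgres runs as a systemd service named postgresql, and the configuration file is at /etc/repmgr.conf (all assumptions; adapt to your installation):

```shell
# On the former primary: stop the Postgres instance
# (the service name is an assumption).
sudo systemctl stop postgresql

# On any surviving node: remove the former primary's registration.
# "primary unregister --node-id" removes the record of an inactive primary.
repmgr -f /etc/repmgr.conf primary unregister --node-id=1
```

After these steps, rerun repmgr cluster show to confirm that the warning no longer appears.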

Running repmgr cluster show on the New Primary

In the following example, running the repmgr cluster show command on the new primary node results in the following system output.

 ID | Name       | Role    | Status    | Upstream   | Location | Connection string
----+------------+---------+-----------+------------+----------+------------------------------------------------
  1 | Node1 name | primary | * running |            | default  | host=host IP address user=repmgr dbname=repmgr
  2 | Node2 name | standby |   running | Node1 name | default  | host=host IP address user=repmgr dbname=repmgr
  3 | Node3 name | primary | ! running |            | default  | host=host IP address user=repmgr dbname=repmgr

WARNING: following issues were detected
  - node "Node3 name" (ID: 3) is running but the repmgr node record is inactive

In this case, the repmgr data is correct. It accurately indicates that node 1 is running and that it is the current primary node. The warning message about node 3, the former primary, indicates that the repmgr data on that node is not accurate.

Running repmgr cluster show After Promoting a Standby Node Without Running standby follow on the Remaining Standby Nodes

In the following example, you can see the repmgr data on each node in a cluster in which the primary node failed. A standby was promoted manually with the repmgr standby promote command, but repmgr standby follow was not run on the remaining standby nodes.

When you run repmgr cluster show on the new primary, the system output represents correct repmgr data, but the new primary node, node 2, is not followed by any standby nodes.

 ID | Name       | Role    | Status    | Upstream   | Location | Connection string
----+------------+---------+-----------+------------+----------+------------------------------------------------
  1 | Node1 name | primary | ! running |            | default  | host=host IP address user=repmgr dbname=repmgr
  2 | Node2 name | primary | * running |            | default  | host=host IP address user=repmgr dbname=repmgr
  3 | Node3 name | standby |   running | Node1 name | default  | host=host IP address user=repmgr dbname=repmgr

WARNING: following issues were detected
  - node "Node1 name" (ID: 1) is running but the repmgr node record is inactive

Both node 1, the former primary, and node 3, the standby that still follows the former primary, provide inaccurate repmgr data. Running the command on either of these nodes results in the following system output.

 ID | Name       | Role    | Status               | Upstream   | Location | Connection string
----+------------+---------+----------------------+------------+----------+------------------------------------------------
  1 | Node1 name | primary | * running            |            | default  | host=host IP address user=repmgr dbname=repmgr
  2 | Node2 name | standby | ! running as primary | Node1 name | default  | host=host IP address user=repmgr dbname=repmgr
  3 | Node3 name | standby |   running            | Node1 name | default  | host=host IP address user=repmgr dbname=repmgr

WARNING: following issues were detected
  - node "Node2 name" (ID: 2) is registered as standby but running as primary
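To return the cluster to a consistent state, each remaining standby must be re-pointed at the new primary. A hedged sketch, assuming /etc/repmgr.conf is the configuration path:

```shell
# On each standby still following the old primary (node 3 in this example):
# re-point the standby at the current primary and update its repmgr record.
repmgr -f /etc/repmgr.conf standby follow
```

Once every standby follows the new primary, repmgr cluster show reports consistent data from each node.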

Running repmgr cluster show on a Standby Node

Running the command on a standby node that follows the current primary results in system output with accurate repmgr data, identical to the data on the current primary.

Running the command on a standby node that still follows the former primary results in system output with inaccurate repmgr data, identical to the data on the former primary.
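Because a node with stale data reports its stale view, it can help to collect repmgr cluster show output from every node and scan each copy for status markers that flag a problem. A minimal sketch; the sample lines below are illustrative, not real cluster data:

```shell
# Save `repmgr cluster show` output from each node to a file, then scan it.
# The sample content below is illustrative only.
cat > /tmp/cluster_show.txt <<'EOF'
  1 | Node1 name | primary | * running            |            | default
  2 | Node2 name | standby | ! running as primary | Node1 name | default
  3 | Node3 name | standby |   running            | Node1 name | default
EOF

# Print the name of every node whose Status column carries a "!" marker.
awk -F'|' '$4 ~ /!/ { gsub(/^ +| +$/, "", $2); print $2 }' /tmp/cluster_show.txt
```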

Log Entries

If a former primary that failed comes back online after you promote a standby node to be the new primary, the following entries appear in the update-repmgr-data.log file on all nodes with inaccurate repmgr data.

ERROR: An old primary is running in the repmgr cluster.
ERROR: Manual intervention is required to repair the repmgr cluster.
ERROR: The first step should be to shutdown and unregister the old primary.
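A minimal sketch of checking for these entries non-interactively. The messages are taken from the listing above; the file is a stand-in created for illustration, so on a real node point grep at the actual update-repmgr-data.log location instead:

```shell
# Create a stand-in log containing the documented entries (illustrative only).
cat > /tmp/update-repmgr-data.log <<'EOF'
ERROR: An old primary is running in the repmgr cluster.
ERROR: Manual intervention is required to repair the repmgr cluster.
ERROR: The first step should be to shutdown and unregister the old primary.
EOF

# Count ERROR entries; a nonzero count means manual repair is required.
grep -c '^ERROR:' /tmp/update-repmgr-data.log
```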