After a failover occurs, master and replica vRealize Automation appliance nodes might not have the correct role assignment, which affects all services that require database write access.

In a high availability cluster of vRealize Automation appliances, the master database node shuts down or becomes inaccessible. You use the management console on another node to promote that node to be the new master, which restores vRealize Automation database write access.

Later, you bring the old master node back online, but the Database tab in its management console still lists the node as the master even though it is not. Attempts to clear the problem by using the management console on any node to formally promote the old node back to master fail.
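
When the management consoles disagree about which node is the master, you can check the role directly in the database. The following is a minimal sketch, not a supported procedure: it assumes the appliance database is the embedded PostgreSQL instance, that it is reachable on port 5432, and that you have credentials for it. The hostnames, database name, user, and password are placeholders, not values from the product documentation.

    # check_roles.py -- report which appliance database actually accepts writes.
    # Assumptions: psycopg2 is installed, the embedded PostgreSQL instance on
    # each node is reachable, and the placeholder credentials are replaced.
    import psycopg2

    NODES = ["vra-node-1.example.com", "vra-node-2.example.com"]  # placeholders

    def role_of(host):
        """Return 'master' if the node accepts writes, otherwise 'replica'."""
        conn = psycopg2.connect(host=host, port=5432, dbname="vcac",  # dbname is an assumption
                                user="monitor", password="secret",    # placeholder credentials
                                connect_timeout=5)
        try:
            with conn.cursor() as cur:
                # pg_is_in_recovery() is true on a streaming replica and
                # false on the node that currently accepts writes.
                cur.execute("SELECT pg_is_in_recovery();")
                return "replica" if cur.fetchone()[0] else "master"
        finally:
            conn.close()

    if __name__ == "__main__":
        for node in NODES:
            try:
                print(node, role_of(node))
            except Exception as exc:
                print(node, "unreachable:", exc)

Only one node should report master. If the node that accepts writes is not the node the management console lists as master, the roles are out of sync as described above.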

Solution

After a failover occurs, follow these guidelines for configuring the old and new master nodes.

  • Before promoting another node to master, remove the previous master node from the load balancer pool of vRealize Automation appliance nodes.

  • To have vRealize Automation bring an old master node back to the cluster, let the old machine come online. Then open the management console on the new master. On the Database tab, locate the old node, which is listed as invalid, and click its Reset button.

    After a successful reset, you may restore the old node to the load balancer pool of vRealize Automation appliance nodes; see the verification sketches after this list.

  • To manually bring an old master node back to the cluster, bring the machine online, and join it to the cluster as if it were a new node. While joining, specify the newly promoted node as the primary node.

    After successfully joining, you may restore the old node to the load balancer pool of vRealize Automation appliance nodes.

  • Until you correctly reset or rejoin an old master node to the cluster, do not use its management console for cluster management operations, even if the node is back online.

  • After you correctly reset or rejoin the old node, you may promote it back to master.
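
Before returning a reset or rejoined node to the load balancer pool, you can confirm that it is actually replicating from the new master. This is a minimal sketch under the same assumptions as the earlier example (embedded PostgreSQL reachable on port 5432, placeholder hostnames, database name, and credentials); it is not a supported procedure.

    # verify_replication.py -- confirm the old master now follows the new master.
    # Assumptions: psycopg2 is installed, the embedded PostgreSQL instances are
    # reachable, and the placeholder values below are replaced with real ones.
    import psycopg2

    NEW_MASTER = "vra-node-2.example.com"  # node promoted after the failover (placeholder)
    OLD_MASTER = "vra-node-1.example.com"  # node being brought back (placeholder)

    def query(host, sql):
        """Run a query against a node's embedded database and return all rows."""
        conn = psycopg2.connect(host=host, port=5432, dbname="vcac",  # dbname is an assumption
                                user="monitor", password="secret",
                                connect_timeout=5)
        try:
            with conn.cursor() as cur:
                cur.execute(sql)
                return cur.fetchall()
        finally:
            conn.close()

    # The old master must now run in recovery, that is, as a read-only replica.
    old_is_replica = query(OLD_MASTER, "SELECT pg_is_in_recovery();")[0][0]

    # The new master must report at least one standby in the streaming state.
    standbys = query(NEW_MASTER,
                     "SELECT client_addr, state FROM pg_stat_replication;")
    streaming = any(state == "streaming" for _addr, state in standbys)

    if old_is_replica and streaming:
        print("Old node is a streaming replica of the new master.")
    else:
        print("Old node is not replicating yet; keep it out of the pool.")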
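
Independently of replication, the node should also pass whatever health check your load balancer monitor uses before it rejoins the pool. The sketch below assumes the monitor probes the appliance health URL /vcac/services/api/health over HTTPS; treat both the path and the hostname as placeholders and substitute the values from your own load balancer configuration.

    # check_health.py -- probe the appliance health URL before re-adding the node.
    # Assumptions: the requests library is installed, and the health-check path
    # matches the one configured on your load balancer monitor.
    import requests

    NODE = "vra-node-1.example.com"  # placeholder hostname
    URL = f"https://{NODE}/vcac/services/api/health"  # assumed health-check path

    # verify=False is only for lab setups with self-signed certificates.
    resp = requests.get(URL, verify=False, timeout=10)
    print(NODE, "HTTP", resp.status_code)
    if resp.ok:
        print("Health check passed; the node can be returned to the pool.")
    else:
        print("Health check failed; keep the node out of the pool.")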