If a problem occurs with the vRealize Automation appliance Postgres database, you can manually fail over to a replica vRealize Automation appliance node in the cluster.
About this task
Follow these steps when the Postgres database on the master vRealize Automation appliance node fails or stops running.
Prerequisites
Configure a cluster of vRealize Automation appliance nodes. Each node hosts a copy of the embedded Postgres appliance database.
- Remove the master node IP address from the external load balancer.
- Log in to the vRealize Automation appliance management interface as root.
- Click vRA Settings > Database.
- From the list of database nodes, locate the replica node with the lowest priority.
Replica nodes appear in ascending priority order.
- Click Promote and wait for the operation to finish.
When finished, the replica node is listed as the new master node.
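As a supplemental check that is not part of the official procedure, you can confirm from a root console session on the promoted node that Postgres is no longer running in recovery mode. This sketch assumes the embedded vPostgres `psql` client is available to the postgres user on the appliance.

```shell
# Hedged check (assumption: root console on the promoted node, psql usable
# by the postgres user). A master/primary prints 'f' (not in recovery);
# a replica still prints 't'.
query="SELECT pg_is_in_recovery();"
state=$(su - postgres -c "psql -At -c '$query'" 2>/dev/null)
echo "in_recovery=${state:-unknown}"
```

If the promoted node still reports `t`, wait for the promote operation to finish before continuing.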
- Correct issues with the former master node and add it back to the cluster:
Isolate the former master node.
Disconnect the node from its current network, the one that is routing to the remaining vRealize Automation appliance nodes. Select another NIC for management, or manage it directly from the virtual machine management console.
Recover the former master node.
Power the node on or otherwise correct the issue. For example, you might reset the virtual machine if it is unresponsive.
From a console session as root, stop the vpostgres service.
service vpostgres stop
Add the former master node back to its original network, the one that is routing to the other vRealize Automation appliance nodes.
From a console session as root, restart the haproxy service.
service haproxy restart
Log in to the management interface of the new master vRealize Automation appliance node as root.
Locate the former master node, and click Reset.
After a successful reset, restart the former master node.
With the former master powered on, verify that the following services are running.
haproxy, horizon-workspace, rabbitmq-server, vami-lighttp, vcac-server, vco-server
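The service check above can be scripted. This is a minimal sketch, assuming the same SysV-style `service` command used elsewhere in this procedure; it reports any required service that is not running.

```shell
# Hedged sketch: check each required service with 'service <name> status'
# and collect the ones that are not running.
services="haproxy horizon-workspace rabbitmq-server vami-lighttp vcac-server vco-server"
not_running=""
for svc in $services; do
  if ! service "$svc" status >/dev/null 2>&1; then
    not_running="$not_running $svc"
  fi
done
# An empty result means every required service is up.
echo "not running:${not_running:- (none)}"
```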
Re-add the former master node to the external load balancer.
If a node that was demoted from master to replica is still listed as a master, you might need to manually rejoin it to the cluster to correct the problem.