HAProxy with Keepalived configuration guide
Clone the HAProxy VM or install a new VM with the same configuration as the first deployed HAProxy.
Change the hostname and IP address of the cloned VM.
Create a VIP and point the main DNS record for the vRealize Operations cluster at it. For example: acmevrops6.acme.com / 192.168.1.5.
You will now have two HAProxy load balancers running. For example: LB1/192.168.1.6 and LB2/192.168.1.7.
Verify that the HAProxy configuration is present on both load balancers. You should be able to reach the vRealize Operations cluster successfully through either one.
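One quick way to verify both load balancers is a small shell loop that requests the vRealize Operations UI through each one. The addresses are this guide's examples and the /ui/ path is an assumption; adjust both for your environment:

```shell
# Check both load balancers directly (example addresses from this guide).
# -k accepts the self-signed certificate; --connect-timeout fails fast.
for lb in 192.168.1.6 192.168.1.7; do
  code=$(curl -ks --connect-timeout 2 -o /dev/null \
              -w '%{http_code}' "https://${lb}/ui/" || true)
  code=${code:-000}   # 000 if the node is unreachable
  echo "LB ${lb}: HTTP ${code}"
done
```

A 200 (or a redirect such as 302) from both addresses indicates each HAProxy is serving the cluster; 000 means the node could not be reached.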
When both HAProxy instances are confirmed working and contain identical configurations, configure Keepalived to provide failover between the two load balancers.
SSH to LB1, which will act as the PRIMARY (MASTER) node.
yum install keepalived
Configure the kernel to allow services to bind to the VIP (a non-local address) by editing /etc/sysctl.conf (for example, with vi). Add the following line to the file:
net.ipv4.ip_nonlocal_bind=1
For the kernel to pick up the new changes without rebooting, run the following command:
sysctl -p
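The edit above can also be scripted so it is safe to repeat. This sketch works on a scratch copy in /tmp; on a real load balancer you would point CONF at /etc/sysctl.conf and then run sysctl -p as above:

```shell
# Scratch copy so the sketch is safe to experiment with;
# on a real LB, set CONF=/etc/sysctl.conf instead.
CONF=/tmp/sysctl.conf.example
touch "$CONF"
# Append the setting only if it is not already present (idempotent).
grep -q '^net.ipv4.ip_nonlocal_bind=1$' "$CONF" || \
  echo 'net.ipv4.ip_nonlocal_bind=1' >> "$CONF"
```

Because the append is guarded by the grep, running the snippet twice leaves exactly one copy of the line in the file.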
Delete the existing configuration file and create a new, empty one in its place:
/etc/keepalived/keepalived.conf
In the new keepalived.conf file, add the following configuration for the MASTER node:

global_defs {
    router_id haproxy2    # The hostname of this host.
}
vrrp_script haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}
vrrp_instance 50 {
    virtual_router_id 50
    advert_int 1
    priority 100
    state MASTER
    interface eth0
    virtual_ipaddress {
        Virtual_IPaddress dev eth0    # The virtual IP address shared between PRIMARY and SECONDARY.
    }
    track_script {
        haproxy
    }
}
Verify that router_id above is the HOSTNAME of the local load balancer that you are setting up.
Verify that you have specified the correct network device; check whether you are using eth0.
Verify that Virtual_IPaddress above is the VIP address, not the local IP address of the LB1 node.
Set priorities in increments of 50. In this example, this node has the highest priority, so it is set to 100. Verify that the node state is set to MASTER.
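The MASTER configuration above can also be templated from shell variables so the hostname, VIP, interface, and priority are set in one place. This is a sketch using the guide's example values; it writes to /tmp so it cannot clobber a live configuration, and on a real node you would write to /etc/keepalived/keepalived.conf instead:

```shell
# Example values from this guide; adjust for your environment.
LB_HOSTNAME=haproxy2      # hostname of this load balancer
VIP=192.168.1.5           # shared virtual IP
IFACE=eth0                # network device carrying the VIP
PRIORITY=100              # MASTER gets the higher priority

# Written to /tmp for safety; use /etc/keepalived/keepalived.conf on a real LB.
cat > /tmp/keepalived.conf.master <<EOF
global_defs {
    router_id ${LB_HOSTNAME}
}
vrrp_script haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}
vrrp_instance 50 {
    virtual_router_id 50
    advert_int 1
    priority ${PRIORITY}
    state MASTER
    interface ${IFACE}
    virtual_ipaddress {
        ${VIP} dev ${IFACE}
    }
    track_script {
        haproxy
    }
}
EOF
```

Swapping PRIORITY to 50 and state MASTER to BACKUP produces the matching configuration for the second node.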
Save the configuration file and restart the services.
Enable the Keepalived service so that it starts at boot:
systemctl enable keepalived
Run the commands:
service keepalived restart
service haproxy restart
To check whether this node holds the active load balancer (VIP) address, run:
ip a | grep eth0
If the system you are on displays the primary IP address of the load balancer, then it is the active system processing traffic. Verify that only one system displays this address.
If the address is present on both machines, the configuration is incorrect: the two nodes most likely cannot see each other's VRRP advertisements (IP protocol 112, sent to multicast address 224.0.0.18), often because a firewall is blocking them, so each node elects itself MASTER.
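The check can be made explicit by testing the `ip` output for the VIP itself rather than grepping for the interface name. The address below is this guide's example VIP:

```shell
VIP=192.168.1.5   # this guide's example VIP
if ip -4 addr show 2>/dev/null | grep -q "inet ${VIP}/"; then
  HOLDS_VIP=yes
else
  HOLDS_VIP=no
fi
echo "This node holds the VIP: ${HOLDS_VIP}"
```

Run on both nodes: exactly one should report yes. Keepalived typically adds the VIP as a /32 secondary address, which the pattern above matches.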
To configure the Keepalived service on the second load balancer, LB2, perform the same steps as above.
In the new keepalived.conf file on LB2, add the following configuration for the BACKUP node:

global_defs {
    router_id haproxy4    # The hostname of this host.
}
vrrp_script haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}
vrrp_instance 50 {
    virtual_router_id 50
    advert_int 1
    priority 50
    state BACKUP
    interface eth0
    virtual_ipaddress {
        Virtual_IPaddress dev eth0    # The virtual IP address shared between PRIMARY and SECONDARY.
    }
    track_script {
        haproxy
    }
}
Verify that router_id is the HOSTNAME of the local load balancer that you are setting up.
Verify that Virtual_IPaddress above is the VIP address, not the local IP address of the LB2 node.
Set priorities in increments of 50. In this example, this node has the lower priority, so it is set to 50. Verify that the node state is set to BACKUP.
Save the configuration file and restart the services.
Enable the Keepalived service so that it starts at boot:
systemctl enable keepalived
Run the command:
service keepalived restart
To check whether this node holds the active load balancer (VIP) address, run:
ip a | grep eth0
If the system you are on displays the primary IP address of the load balancer, then it is the active system processing traffic. On the BACKUP node, the VIP should not appear unless the MASTER has failed.