HAProxy with Keepalived configuration guide

  1. Clone the HAProxy VM or install a new VM with the same configuration as the first deployed HAProxy.

  2. Change Hostname and IP Address

  3. Create a VIP and point the main DNS record for the VMware Aria Operations cluster to it. For example: acmevrops6.acme.com / 192.168.1.5.

    You will now have two HAProxy load balancers running. For example: LB1/192.168.1.6 and LB2/192.168.1.7.

  4. Verify that the HAProxy configuration is present on both load balancers. You must be able to reach the VMware Aria Operations cluster successfully through either one.

    When both HAProxy instances are confirmed working and contain identical configurations, configure Keepalived so that the two load balancers provide availability between them.
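One way to verify step 4 is to probe each load balancer directly. A minimal sketch, assuming the example IPs above and HTTPS on the root path (adjust the scheme, port, and path to whatever frontend your HAProxy configuration actually exposes):

```shell
# Probe each HAProxy load balancer and report reachability.
# The IPs are the examples from this guide; replace with your own.
results=""
for lb in 192.168.1.6 192.168.1.7; do
  if curl -ksf --connect-timeout 2 "https://${lb}/" >/dev/null 2>&1; then
    state="reachable"
  else
    state="unreachable"
  fi
  echo "${lb}: ${state}"
  results="${results}${state} "
done
```

Both hosts should report reachable before you proceed to the Keepalived setup.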

  5. SSH to LB1, which we will treat as the PRIMARY node, and install Keepalived:

    yum install keepalived
  6. Configure the kernel to allow binding to the VIP even when that address is not assigned locally. Edit /etc/sysctl.conf (for example with vi) and add the following line to the file:

    net.ipv4.ip_nonlocal_bind=1
  7. For the kernel to pick up the new changes without rebooting, run the following command:

    sysctl -p
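To confirm the kernel picked up the setting, you can read the value back. A minimal sketch, assuming a Linux host where /proc/sys is readable:

```shell
# Read back net.ipv4.ip_nonlocal_bind; 1 means binding to a
# non-local address (the VIP) is allowed.
val=$(cat /proc/sys/net/ipv4/ip_nonlocal_bind 2>/dev/null || echo unknown)
if [ "$val" = "1" ]; then
  msg="non-local bind enabled"
else
  msg="non-local bind disabled or unreadable (value: $val)"
fi
echo "$msg"
```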
  8. Delete the file:

    /etc/keepalived/keepalived.conf
  9. Create a new file:

    /etc/keepalived/keepalived.conf 
  10. In the new keepalived.conf file, add the following for the master node:

    global_defs {
      router_id haproxy2 # The hostname of this host.
    }
    vrrp_script haproxy {
      script "killall -0 haproxy"
      interval 2
      weight 2
    }
    vrrp_instance 50 {
      virtual_router_id 50
      advert_int 1
      priority 100
      state MASTER
      interface eth0
      virtual_ipaddress {
         Virtual_IPaddress dev eth0 # The virtual IP address that will be shared between PRIMARY and SECONDARY
      }
      track_script {
          haproxy
      }
    }
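The vrrp_script above relies on "killall -0 haproxy": signal 0 delivers nothing and only tests whether a haproxy process exists (exit status 0 if so), and Keepalived adds the script's weight to the node's priority while the check succeeds. The same signal-0 semantics can be demonstrated against any process; the sleep below is just a stand-in:

```shell
# Signal 0 performs an existence check without actually signalling
# the target process.
sleep 30 &                 # stand-in for the haproxy process
pid=$!
if kill -0 "$pid" 2>/dev/null; then
  status="running"
else
  status="not running"
fi
kill "$pid" 2>/dev/null    # clean up the stand-in process
echo "haproxy stand-in: $status"
```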
    
  11. Verify that the router_id above is the HOSTNAME of the local load balancer that you are setting up.

  12. Verify that you have specified the correct network interface; this example uses eth0.

  13. Verify that Virtual_IPaddress above is the VIP, not the local IP address of the LB1 node.

  14. Set priorities in increments of 50. In this example, this node has the highest priority, so it is set to 100. Verify that the state is set to MASTER so that this node acts as the primary.

  15. Save the configuration file and restart the services.

  16. You must activate the Keepalived service:

    systemctl enable keepalived
  17. Run the commands:

    service keepalived restart
    service haproxy restart
  18. To check whether this node holds the active load balancer IP, run:

    ip a | grep eth0
  19. If the system you are on shows the VIP, it is the active system processing traffic. Verify that only one system shows the VIP.

  20. If the VIP is present on both machines, the configuration is incorrect: the two machines are likely unable to exchange VRRP advertisements with each other.
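The check in steps 18-20 can be scripted. A minimal sketch that greps for the example VIP (192.168.1.5 from step 3) in `ip a`-style output; the sample variable below stands in for the real output of `ip -4 addr show dev eth0`:

```shell
VIP="192.168.1.5"   # the example VIP from this guide
# In real use: sample=$(ip -4 addr show dev eth0)
sample="inet 192.168.1.6/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.5/32 scope global eth0"
if printf '%s\n' "$sample" | grep -q "inet ${VIP}/"; then
  role="active (holds the VIP)"
else
  role="standby (no VIP)"
fi
echo "$role"
```

Run against real interface output on both nodes, exactly one should report active.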

  21. To configure Keepalived on the second load balancer, perform the same steps as above on LB2.

  22. In the new keepalived.conf file, add the following for the backup node:

    global_defs {
      router_id haproxy4 # The hostname of this host.
    }
    vrrp_script haproxy {
      script "killall -0 haproxy"
      interval 2
      weight 2
    }
    vrrp_instance 50 {
      virtual_router_id 50
      advert_int 1
      priority 50
      state BACKUP
      interface eth0
      virtual_ipaddress {
         Virtual_IPaddress dev eth0 # The virtual IP address that will be shared between PRIMARY and SECONDARY.
      }
      track_script {
        haproxy
      }
    }
    
  23. Verify that the router_id is the HOSTNAME of the local load balancer that you are setting up.

  24. Verify that Virtual_IPaddress above is the VIP, not the local IP address of the LB2 node.

  25. Set priorities in increments of 50. This node is the backup, so it has a lower priority; in this example it is set to 50. Verify that the state is set to BACKUP.

  26. Save the configuration file and restart the services.

  27. You must activate the Keepalived service:

    systemctl enable keepalived
  28. Run the commands:

    service keepalived restart
  29. To check whether this node holds the active load balancer IP, run:

    ip a | grep eth0
  30. If the system you are on shows the VIP, it is the active system processing traffic.