Deploying VMware Tanzu RabbitMQ for VMs through Ops Manager deploys a RabbitMQ cluster of three nodes by default. The deployment includes a single HAProxy load balancer, which spreads connections across all of the default ports, for all of the shipped plugins, across all of the machines within the cluster. The deployment occurs in a single availability zone (AZ).
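As an illustration of that load balancing, a minimal HAProxy section covering just the AMQP port might look like the sketch below. The node IP addresses and server names are placeholders, and the tile generates the real configuration covering the ports of every shipped plugin:

```
# Hypothetical sketch only; the tile renders the actual configuration.
listen amqp
  bind :5672
  mode tcp
  balance roundrobin
  server rabbit-0 10.0.1.10:5672 check
  server rabbit-1 10.0.1.11:5672 check
  server rabbit-2 10.0.1.12:5672 check
```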
The default configuration is for testing purposes only. VMware recommends that customers have a minimum of three RabbitMQ nodes and two HAProxy nodes.
The diagram below shows the default Tanzu RabbitMQ pre-provisioned deployment.
VMware recommends deploying Tanzu RabbitMQ across at least two AZs, and scaling the RabbitMQ server nodes to an odd number of three or more.
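The odd-number recommendation follows from quorum arithmetic: a cluster partition must hold a strict majority of nodes to stay available, and an even node count adds cost without adding failure tolerance. A minimal sketch:

```python
# Sketch of the quorum arithmetic behind the "odd number of nodes"
# recommendation: a partition needs a strict majority to stay available.

def quorum(nodes: int) -> int:
    """Smallest number of nodes that forms a majority."""
    return nodes // 2 + 1

def tolerated_failures(nodes: int) -> int:
    """Node failures the cluster survives while keeping a majority."""
    return nodes - quorum(nodes)

# A 4th node does not let the cluster survive more failures than
# 3 nodes do, which is why odd cluster sizes are recommended.
print(tolerated_failures(3))  # 1
print(tolerated_failures(4))  # 1
print(tolerated_failures(5))  # 2
```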
Only use queue replication where required, because it can significantly degrade system performance.
Increase the HAProxy job instance count to match the number of AZs, so that each AZ has its own HAProxy. This removes HAProxy as a single point of failure (SPOF) and provides further redundancy.
The diagram below shows the recommended Tanzu RabbitMQ pre-provisioned deployment. With this configuration, even if a single HAProxy and a single RabbitMQ node fail, the cluster remains online and apps remain connected.
You cannot upgrade to this setup from the default single-AZ deployment, because the AZ setup cannot be changed after the tile has been deployed for the first time. This restriction protects against data loss when jobs move between AZs.
If you have deployed the tile across two AZs, but with a single HAProxy instance, you can migrate to this setup by deploying an additional HAProxy instance through Ops Manager. Apps newly bound or re-bound to the Tanzu RabbitMQ service instance see the IP addresses of both HAProxies immediately. Existing bound apps continue to work, but only through the previously deployed HAProxy IP address; re-bind them as required at your discretion.
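The addresses an app actually uses come from its `VCAP_SERVICES` environment variable, so you can confirm which HAProxy IPs a bound app sees by parsing it. The sketch below uses a hypothetical payload: the service label (`p-rabbitmq`) and the credential layout (`credentials.uris`) are assumptions for illustration, not guaranteed broker output.

```python
import json
import os

# Hypothetical VCAP_SERVICES payload for illustration only; the label
# "p-rabbitmq" and the credential layout are assumptions, not the
# exact output of the broker.
SAMPLE = json.dumps({
    "p-rabbitmq": [{
        "name": "my-rabbit",
        "credentials": {
            # One AMQP URI per advertised HAProxy / load balancer.
            "uris": [
                "amqp://user:pass@10.0.1.10/vhost",
                "amqp://user:pass@10.0.2.10/vhost",
            ],
        },
    }],
})

# In a real app the platform injects VCAP_SERVICES; fall back to the
# sample so the sketch runs anywhere.
vcap = json.loads(os.environ.get("VCAP_SERVICES", SAMPLE))

# Pull the host part out of each advertised AMQP URI.
hosts = [
    uri.split("@", 1)[1].split("/", 1)[0]
    for binding in vcap.get("p-rabbitmq", [])
    for uri in binding["credentials"]["uris"]
]
print(hosts)
```

If an existing app lists only one HAProxy address here, re-binding it refreshes the cached credentials.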
This deployment builds on the recommended deployment above, so it follows the same upgrade paths. It replaces HAProxy with your own external load balancer. You might choose to do this to hide the topology of the Tanzu RabbitMQ setup from app developers.
The diagram below shows an advanced Tanzu RabbitMQ pre-provisioned deployment.
You can first deploy with multiple HAProxy jobs, as in the recommended deployment, and later move to your own external load balancer. This can be achieved without downtime to your apps. Follow these steps to do so:
1. Configure your external load balancer to point to the RabbitMQ node IP addresses.
2. Configure the DNS name or IP address of the external load balancer on the tile in Ops Manager, then redeploy. New service instances include this address in their VCAP_SERVICES. Any existing service instances continue to use the HAProxy IP addresses in their VCAP_SERVICES.
3. Re-bind existing apps as required so that they pick up the external load balancer address.
4. After no apps use the HAProxy IP addresses, reduce the instance count of the HAProxy job in Ops Manager to 0, then redeploy.
This approach works because any existing bound apps have their VCAP_SERVICES information cached in the Cloud Controller; it is only updated by a re-bind request.
If you are currently using an external load balancer, you can move back to using HAProxies by following the above steps in reverse order and reinstating the HAProxy jobs.
The following table shows the default resource and IP requirements for installing the tile:
|Product|Resource|Instances|Cores|RAM (MB)|Ephemeral disk (MB)|Persistent disk (MB)|Static IPs|Dynamic IPs|
|---|---|---|---|---|---|---|---|---|
|RabbitMQ|HAProxy for RabbitMQ|1|1|2048|4096|0|1|0|
|RabbitMQ|Tanzu RabbitMQ service broker|1|1|2048|4096|0|1|0|
|RabbitMQ|RabbitMQ on-demand broker|1|1|1024|8192|1024|0|1|
|RabbitMQ|Register On-Demand Service Broker|1|1|1024|2048|0|0|1|
|RabbitMQ|Deregister On-Demand Service Broker|1|1|1024|2048|0|0|1|
|RabbitMQ|Delete All Service Instances|1|1|1024|2048|0|0|1|
|RabbitMQ|Upgrade All Service Instances|1|1|1024|2048|0|0|1|
|RabbitMQ|Recreate All Service Instances|1|1|1024|2048|0|0|1|
The number of RabbitMQ node instances can be increased if required.