This topic describes clustering and network partition behavior for VMware Tanzu RabbitMQ for Tanzu Application Service. It applies to both the on-demand and pre-provisioned services.

Clustering in Tanzu RabbitMQ for Tanzu Application Service

In Tanzu RabbitMQ for Tanzu Application Service, the RabbitMQ broker is always deployed as a cluster of one or more virtual machines (nodes). A RabbitMQ broker is a logical grouping of one or several Erlang nodes, each running the RabbitMQ application and sharing users, virtual hosts, queues, exchanges, bindings, and runtime parameters.

What Is Replicated Between Nodes in a RabbitMQ Cluster?

All data and state required for the operation of a RabbitMQ broker is replicated across all nodes. The exception is message queues, which by default reside on one node, though they are visible and reachable from all nodes. This means that a RabbitMQ cluster can be available and serving requests while an individual queue that resides on a single node is offline.
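For example, you can see which node a queue resides on by listing its pid, because the node name is embedded in the pid. This is a minimal sketch, assuming a shell prepared as described in Detecting a Network Partition below; the queue name is illustrative:

    $ ./rabbitmqctl list_queues name pid
    Listing queues ...
    my-queue    <rabbit@da3be74c053640fe92c6a39e2d7a5e46.3.1298.0>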

Replicating message queues across nodes is an expensive operation and should be done only to the extent that your application requires. To understand more about replicating queues across nodes in a cluster, see the documentation on high availability.
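Queue replication is typically enabled through a policy. The following sketch mirrors every queue whose name begins with ha. across all nodes in the cluster; the policy name and name pattern are illustrative:

    # Mirror all queues matching ^ha\. across all cluster nodes
    $ ./rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'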

Automatic Network Partition Behaviors in RabbitMQ Clusters

The Tanzu RabbitMQ for Tanzu Application Service tile uses the pause_minority option for handling cluster partitions by default. This preserves data integrity by pausing the nodes in the minority partition; when the partition is resolved, those nodes resume with the data from the majority partition. You must maintain a cluster of more than two nodes: if a partition occurs when you have only two nodes, neither node is in a majority and both pause immediately.

You can instead choose the autoheal option in the Pre-Provisioned RabbitMQ tab. In this mode, both sides of a partition continue to accept connections. When the partition is resolved, RabbitMQ automatically decides on a winning partition and restarts all nodes that are not in it.
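Both behaviors correspond to the standard RabbitMQ cluster_partition_handling setting, which the tile configures for you. If you want to confirm which mode a node is running, one option is to query the node's application environment. This is a sketch, assuming a shell prepared as described in Detecting a Network Partition below:

    # Ask the running broker which partition-handling mode is active
    $ ./rabbitmqctl eval 'application:get_env(rabbit, cluster_partition_handling).'
    {ok,pause_minority}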

Detecting a Network Partition

When a network partition occurs, RabbitMQ writes a message like the following to the node log:

=ERROR REPORT==== 15-Oct-2012::18:02:30 ===
Mnesia(rabbit@da3be74c053640fe92c6a39e2d7a5e46): ** ERROR ** mnesia_event got
    {inconsistent_database, running_partitioned_network, rabbit@21b6557b73f343201277dbf290ae8b79}
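One way to check for this message on a node is to search the server logs. This is a sketch; the log path is an assumption and can vary between tile versions:

    # Search the node's server logs for partition events (path is an assumption)
    $ grep -i "running_partitioned_network" /var/vcap/sys/log/rabbitmq-server/*.log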

You can also run the rabbitmqctl cluster_status command on any of the RabbitMQ nodes to see whether the cluster is partitioned. To run rabbitmqctl cluster_status, do the following:

  1. $ sudo su -
  2. $ cd /var/vcap/packages
  3. $ export PATH=$PATH:/var/vcap/packages/erlang/bin
  4. $ cd rabbitmq-server/bin/
  5. $ ./rabbitmqctl cluster_status

     [...
     {partitions,
         [{rabbit@da3be74c053640fe92c6a39e2d7a5e46,
              [rabbit@21b6557b73f343201277dbf290ae8b79]}]}]
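If the RabbitMQ management plugin is enabled, you can also read each node's partitions list over the HTTP API. This is a sketch; the credentials and port are assumptions for a default management listener:

    # Extract the partitions field for each node (credentials are placeholders)
    $ curl -s -u admin:password http://localhost:15672/api/nodes | grep -o '"partitions":\[[^]]*\]'
    "partitions":["rabbit@21b6557b73f343201277dbf290ae8b79"]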
    

Recovering

Because the Tanzu RabbitMQ for Tanzu Application Service tile uses the pause_minority option by default, minority nodes recover automatically after the partition is resolved. After a node recovers, it rejoins the cluster and regains access to the queues and data on the other nodes. However, if your queues use ha-mode: all, they become fully synchronized again only after consuming all the messages that arrived while the node was down. This is similar to how messages synchronize when you create a new queue.
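One way to watch a mirrored queue catch up is to compare its message count with its synchronized mirrors. This is a sketch, assuming classic mirrored queues and a shell prepared as described above:

    # A queue is fully synchronized when its mirrors appear in synchronised_slave_pids
    $ ./rabbitmqctl list_queues name messages synchronised_slave_pids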

Manually Synchronizing after a Partition

After a network partition, a queue on a minority node synchronizes after consuming all the messages that arrived while it was down. You can also run the sync_queue command to synchronize a queue manually. To run sync_queue, do the following on each node, replacing name in the final step with the name of the queue you want to synchronize:

  1. $ sudo su -
  2. $ cd /var/vcap/packages
  3. $ export PATH=$PATH:/var/vcap/packages/erlang/bin
  4. $ cd rabbitmq-server/bin/
  5. $ ./rabbitmqctl list_queues
  6. $ ./rabbitmqctl sync_queue name
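For queues outside the default virtual host, sync_queue accepts a -p flag, and you can stop an in-progress synchronization with cancel_sync_queue. A brief sketch; the vhost and queue name are illustrative:

    # Synchronize a queue in a specific vhost, then cancel the synchronization
    $ ./rabbitmqctl sync_queue -p my-vhost my-queue
    $ ./rabbitmqctl cancel_sync_queue -p my-vhost my-queue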