The RabbitMQ persistence layer is intended to provide reasonably good throughput in the majority of situations without configuration. However, tuning a few settings can be useful for specific workloads. This guide covers the configurable values that affect throughput, latency and I/O characteristics of a node. Consider reading the entire guide and getting accustomed to benchmarking with PerfTest before drawing any conclusions.
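For example, a baseline measurement with persistent messages can be taken with PerfTest. This is a minimal sketch assuming the standalone PerfTest distribution; the queue name, message size and duration are arbitrary illustrative values:

```bash
# Run 1 producer and 2 consumers against a test queue for 120 seconds,
# publishing 4000-byte persistent messages.
bin/runjava com.rabbitmq.perf.PerfTest \
  --producers 1 \
  --consumers 2 \
  --queue persistence-baseline \
  --flag persistent \
  --size 4000 \
  --time 120
```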
First, some background: both persistent and transient messages can be written to disk. Persistent messages will be written to disk as soon as they reach the queue, while transient messages will be written to disk only so that they can be evicted from memory while under memory pressure. Persistent messages are also kept in memory when possible and only evicted from memory under memory pressure. The "persistence layer" refers to the mechanism used to store messages of both types to disk.
On this page we say "queue" to refer to a non-replicated queue, a queue leader, or a queue mirror. Queue mirroring is a "layer above" persistence.
The persistence layer has two components: the queue index and the message store. The queue index is responsible for maintaining knowledge about where a given message is in a queue, along with whether it has been delivered and acknowledged. There is therefore one queue index per queue.
The message store is a key-value store for messages, shared among all queues in the server. Messages (the body, and any metadata fields: properties and/or headers) can either be stored directly in the queue index, or written to the message store. There are technically two message stores (one for transient and one for persistent messages) but they are usually considered together as "the message store".
Under memory pressure, the persistence layer tries to write as much out to disk as possible, and remove as much as possible from memory. There are some things however which must remain in memory:

- Each queue maintains some metadata for each unacknowledged message. The message itself can be removed from memory if its destination is the message store.
- The default message store index uses a small amount of memory for every message in the store.
Since RabbitMQ 3.10.0, the broker has a new implementation of classic queues, named version 2. Version 2 queues have a new index file format and implementation as well as a new per-queue storage file format to replace the embedding of messages directly in the index.
The main improvement in version 2 is better stability under high memory pressure.

In RabbitMQ 3.10.0, version 1 remains the default. It is possible to switch back and forth between version 1 and version 2.
The version can be changed using the queue-version policy. When setting a new version via policy, the queue immediately converts its data on disk. It is possible to upgrade to version 2 or downgrade to version 1. Note that for large queues the conversion may take some time and results in the queue being unavailable while the conversion is running.

The default version can be set through configuration by setting classic_queue.default_version in rabbitmq.conf.
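As an illustration, the policy can be applied with rabbitmqctl; the policy name and queue name pattern below are arbitrary examples:

```bash
# Convert classic queues whose names begin with "cq." to version 2.
# The policy name "cq-version" and the pattern are illustrative only.
rabbitmqctl set_policy cq-version "^cq\." '{"queue-version": 2}' --apply-to queues
```

The corresponding rabbitmq.conf setting makes version 2 the default for newly declared classic queues:

```ini
# rabbitmq.conf
classic_queue.default_version = 2
```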
Writing messages to the queue index has advantages and disadvantages.

Main advantages are:

- Messages can be written to disk in one operation rather than two; for tiny messages this can be a substantial gain.

Disadvantages are:

- The queue index keeps blocks of a fixed number of records in memory; if non-tiny messages are written to it then memory usage of those blocks can be substantial.
- When a message is routed to multiple queues by an exchange, it needs to be written to multiple queue indices. If such messages were written to the message store instead, only one copy would need to be written.
The intent is for very small messages to be stored in the queue index as an optimisation, and for all other messages to be written to the message store. This is controlled by the configuration item queue_index_embed_msgs_below. By default, messages with a serialised size of less than 4096 bytes (including properties and headers) are stored in the queue index.
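For example, the threshold can be adjusted in rabbitmq.conf; the value below simply restates the default:

```ini
# rabbitmq.conf: embed messages serialised to fewer than 4096 bytes
# (including properties and headers) directly in the queue index.
queue_index_embed_msgs_below = 4096
```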
Each queue index needs to keep at least one segment file in memory when reading messages from disk. A segment file contains records for 16,384 messages. Therefore be cautious if increasing queue_index_embed_msgs_below; a small increase can lead to a large amount of additional memory being used.
It is possible for persistence to underperform because the persister is limited in the number of file handles or async threads it has to work with. In both cases this can happen when you have a large number of queues which need to access the disk simultaneously.
The RabbitMQ server is limited in the number of file handles it can open. Every running network connection requires one file handle, and the rest are available for queues to use. If there are more disk-accessing queues than file handles after network connections have been taken into account, then the disk-accessing queues will share the file handles among themselves; each gets to use a file handle for a while before it is taken back and given to another queue.
This prevents the server from crashing due to there being too many disk-accessing queues, but it can become expensive. The management plugin can show I/O statistics for each node in the cluster; as well as showing rates of reads, writes, seeks and so on it will also show a rate of file handle churn — the rate at which file handles are recycled in this way. A busy server with too few file handles might be doing hundreds of reopens per second - in which case its performance is likely to increase notably if given more file handles.
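How many file handles are available is determined by the operating system limit on open file descriptors. As a sketch, on a systemd-based Linux distribution the limit for the RabbitMQ service could be raised with a drop-in unit; the value 64000 is an illustrative choice, not a recommendation:

```ini
# /etc/systemd/system/rabbitmq-server.service.d/limits.conf
# Raise the open file descriptor limit for the rabbitmq-server unit,
# then run "systemctl daemon-reload" and restart the service.
[Service]
LimitNOFILE=64000
```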
The runtime uses a pool of threads to handle long-running file I/O operations. These are shared among all virtual hosts and queues. Every active file I/O operation uses one async thread while it is occurring. Having too few async threads can therefore hurt performance.
Note that the situation with async threads is not exactly analogous to the situation with file handles. If a queue executes a number of I/O operations in sequence it will perform best if it holds onto a file handle for all the operations; otherwise we may flush and seek too much and use additional CPU orchestrating it. However, queues do not benefit from holding an async thread across a sequence of operations (in fact they cannot do so).
Therefore there should ideally be enough file handles for all the queues that are executing streams of I/O operations, and enough async threads for the number of simultaneous I/O operations your storage layer can plausibly execute.
It's less obvious when a lack of async threads is causing performance problems. (It's also less likely in general; check for other things first!) Typical symptoms of too few async threads include the number of I/O operations per second dropping to zero (as reported by the management plugin) for brief periods when the server should be busy with persistence, while the reported time per I/O operation increases.
The number of async threads is configured by the +A runtime flag. It is a good idea to experiment with several different values before settling on one.
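One way to pass the flag, assuming an installation that reads rabbitmq-env.conf, is via RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS; the value 256 below is only an example starting point:

```bash
# rabbitmq-env.conf: pass extra flags to the Erlang runtime.
# "+A 256" sets the async thread pool size; tune by experiment.
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+A 256"
```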
As mentioned above, each message which is written to the message store uses a small amount of memory for its index entry. The message store index is pluggable in RabbitMQ, and other implementations are available as plugins which can remove this limitation.
The reason they are not shipped with the RabbitMQ distribution is that they all use native code. Note that such plugins typically make the message store run more slowly.
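For reference, the index implementation is selected with the msg_store_index_module key in the advanced config file. The sketch below restates the built-in default; a plugin providing an alternative index would supply its own module name instead:

```erlang
%% advanced.config: select the message store index implementation.
%% rabbit_msg_store_ets_index is the built-in default.
[
  {rabbit, [
    {msg_store_index_module, rabbit_msg_store_ets_index}
  ]}
].
```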