When you determine buffer size settings, you must strike a balance between communication needs and other processing.
Larger socket buffers allow your members to distribute data and events more quickly, but they also take memory away from other processing. If you store very large data objects in your cache, finding the right buffer size while leaving enough memory for the cached data can become critical to system performance.
Ideally, you should have buffers large enough for the distribution of any single data object so you don’t get message fragmentation, which lowers performance. Your buffers should be at least as large as your largest stored objects and their keys plus some overhead for message headers. The overhead varies depending on who is sending and receiving, but 100 bytes should be sufficient. You can also look at the statistics for the communication between your processes to see how many bytes are being sent and received.
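As a rough sketch, the arithmetic might look like this in gemfire.properties (the object and key sizes below are illustrative assumptions, not values from this guide):

```
# Hypothetical sizing:
#   largest serialized value  ~40,000 bytes
#   largest key               ~1,000 bytes
#   message-header overhead   ~100 bytes
# Total: at least 41,100 bytes so a single message is not fragmented
socket-buffer-size=41100
```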
If you see performance problems and logging messages indicating blocked writers, increasing your buffer sizes may help.
This table lists the settings for the various member relationships and protocols, and tells where to set them.
| Protocol / Area Affected | Configuration Location | Property Name |
|---|---|---|
| **TCP/IP** | | |
| Peer-to-peer send/receive | gemfire.properties | socket-buffer-size |
| Client send/receive | cache.xml `<pool>` | socket-buffer-size |
| Server send/receive | gfsh start server or cache.xml `<cache-server>` | socket-buffer-size |
| **UDP Multicast** | | |
| Peer-to-peer send | gemfire.properties | mcast-send-buffer-size |
| Peer-to-peer receive | gemfire.properties | mcast-recv-buffer-size |
| **UDP Unicast** | | |
| Peer-to-peer send | gemfire.properties | udp-send-buffer-size |
| Peer-to-peer receive | gemfire.properties | udp-recv-buffer-size |
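For example, the server setting in the table can be supplied when the server starts (the member name and port below are illustrative):

```
gfsh> start server --name=server1 --server-port=40404 --socket-buffer-size=42000
```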
## TCP/IP Buffer Sizes
If possible, your TCP/IP buffer size settings should match across your Tanzu GemFire installation. At a minimum, follow the guidelines listed here.
- **Peer-to-peer.** The gemfire.properties socket-buffer-size setting should be the same throughout your cluster (see the gemfire.properties example following the snippets below).
- **Client/server.** The client's pool socket-buffer-size should match the setting for the servers the pool uses, as in these example cache.xml snippets:
**Client Socket Buffer Size cache.xml Configuration:**

```
<pool name="PoolA" server-group="dataSetA" socket-buffer-size="42000" ...>
```
**Server Socket Buffer Size cache.xml Configuration:**

```
<cache-server port="40404" socket-buffer-size="42000">
  <group>dataSetA</group>
</cache-server>
```
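For the peer-to-peer guideline, the matching gemfire.properties entry would be set identically on every member (the 42000 value carries over from the snippets above for illustration):

```
# Set identically in gemfire.properties on every peer
socket-buffer-size=42000
```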
## UDP Multicast and Unicast Buffer Sizes
With UDP communication, one receiver can have many senders sending to it at once. To accommodate all of the transmissions, the receiving buffer should be larger than the sum of the sending buffers. If you have a system with at most five members running at any time, in which all members update their data regions, you would set the receiving buffer to at least five times the size of the sending buffer. If you have a system with producer and consumer members, where at most two producer members ever run at once, the receiving buffer sizes should be set to more than two times the sending buffer sizes, as shown in this example:
```
mcast-send-buffer-size=42000
mcast-recv-buffer-size=90000
udp-send-buffer-size=42000
udp-recv-buffer-size=90000
```
## Operating System Limits
Your operating system sets limits on the buffer sizes it allows. If you request a size larger than the operating system allows, you may get warnings or exceptions about the setting during startup. These are two examples of the type of message you may see:
```
[warning 2008/06/24 16:32:20.286 PDT CacheRunner <main> tid=0x1]
requested multicast send buffer size of 9999999 but got 262144: see
system administration guide for how to adjust your OS
```

```
Exception in thread "main" java.lang.IllegalArgumentException: Could not
set "socket-buffer-size" to "99262144" because its value can not be
greater than "20000000".
```
If you think you are requesting more space for your buffer sizes than your system allows, check with your system administrator about adjusting the operating system limits.
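On Linux, for example, the kernel's maximum socket buffer sizes can be inspected and raised with sysctl (the keys below are the standard Linux ones; the values are illustrative, and your administrator may prefer to make the change permanent in /etc/sysctl.conf):

```
# Inspect the current OS maximums for receive and send buffers (Linux)
sysctl net.core.rmem_max net.core.wmem_max

# Raise them for the running system (requires root; values are examples)
sysctl -w net.core.rmem_max=20000000
sysctl -w net.core.wmem_max=20000000
```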