This topic explains how eviction works in VMware Tanzu GemFire.
Eviction keeps a region’s resource use under a specified level by removing least recently used (LRU) entries to make way for new entries. You can configure whether evicted entries are overflowed to disk or destroyed. See Persistence and Overflow.
Eviction is triggered when a size-based threshold is exceeded. A region’s eviction threshold can be based on:

- Entry count
- Absolute memory usage
- Percentage of available heap
These eviction algorithms are mutually exclusive; only one can be in effect for a given region.
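For illustration, here is a minimal Java sketch of entry-count eviction, assuming the org.apache.geode API that Tanzu GemFire exposes; the region name and the 10,000-entry limit are arbitrary:

```java
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.EvictionAction;
import org.apache.geode.cache.EvictionAttributes;
import org.apache.geode.cache.RegionShortcut;

public class EntryCountEvictionExample {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // Cap the region at 10,000 entries (illustrative limit). Once the
    // limit is reached, each new entry triggers a one-to-one local
    // destroy of an older entry (here, a partitioned region with no
    // redundancy, where local destroy is acceptable).
    cache.createRegionFactory(RegionShortcut.PARTITION)
        .setEvictionAttributes(
            EvictionAttributes.createLRUEntryAttributes(
                10_000, EvictionAction.LOCAL_DESTROY))
        .create("exampleRegion");
  }
}
```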
When Tanzu GemFire determines that adding or updating an entry would take the region over the specified level, it overflows or removes enough older entries to make room. For entry count eviction, this means a one-to-one trade of an older entry for the newer one. For the memory settings, the number of older entries that need to be removed to make space depends on the sizes of the older and newer entries.
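The memory-based variant looks the same except for the eviction attributes. In this sketch (again assuming the org.apache.geode API, with an arbitrary 50 MB limit), the region is bounded by in-memory size rather than entry count:

```java
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.EvictionAttributes;
import org.apache.geode.cache.RegionShortcut;

public class MemoryEvictionExample {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // Evict once the region's in-memory data exceeds 50 MB (illustrative
    // limit). Several small older entries may be removed to make room
    // for one large new entry. The default eviction action is local
    // destroy.
    cache.createRegionFactory(RegionShortcut.PARTITION)
        .setEvictionAttributes(
            EvictionAttributes.createLRUMemoryAttributes(50))
        .create("memoryBoundRegion");
  }
}
```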
For efficiency, the selection of entries for removal is not strictly LRU; instead, eviction candidates are chosen from among the region’s oldest entries. As a result, eviction may leave some of the region’s older entries in the local data store.
VMware Tanzu GemFire provides the following eviction actions:
- local destroy: Removes the entry from the local cache, but does not distribute the removal operation to remote members. This action can be applied to an entry in a partitioned region, but is not recommended if redundancy is enabled (redundant-copies > 0), as it introduces inconsistencies between the redundant buckets. When applied to an entry in a replicated region, Tanzu GemFire silently changes the region type to “preloaded” to accommodate the local modification.
- overflow to disk: The entry’s value is overflowed to disk and set to null in memory, while the entry’s key is retained in the cache. This is the only eviction action fully supported for partitioned regions; a configuration sketch follows this list.
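As a sketch of configuring overflow to disk (the disk store name, directory, and entry limit are illustrative, assuming the org.apache.geode API), the region is given a named disk store to receive overflowed values:

```java
import java.io.File;

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.EvictionAction;
import org.apache.geode.cache.EvictionAttributes;
import org.apache.geode.cache.RegionShortcut;

public class OverflowExample {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // Create the overflow directory and a named disk store to receive
    // overflowed values (names are illustrative).
    File overflowDir = new File("overflowDir");
    overflowDir.mkdirs();
    cache.createDiskStoreFactory()
        .setDiskDirs(new File[] { overflowDir })
        .create("overflowStore");

    // Keep at most 10,000 entry values in memory. Beyond that, values
    // are written to the disk store and set to null in memory, while
    // keys stay in the cache.
    cache.createRegionFactory(RegionShortcut.PARTITION)
        .setEvictionAttributes(
            EvictionAttributes.createLRUEntryAttributes(
                10_000, EvictionAction.OVERFLOW_TO_DISK))
        .setDiskStoreName("overflowStore")
        .create("overflowRegion");
  }
}
```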
In partitioned regions, Tanzu GemFire removes the oldest entry it can find in the bucket where the new entry operation is being performed. Tanzu GemFire maintains LRU entry information on a bucket-by-bucket basis, as the cost of maintaining information across the partitioned region would slow the system’s performance.
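Finally, heap-percentage eviction combines a resource-manager threshold with heap LRU attributes on the region. In this sketch (the 75 percent threshold and names are illustrative, assuming the org.apache.geode API), bucket-local LRU values are overflowed to disk whenever heap use crosses the threshold:

```java
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.EvictionAction;
import org.apache.geode.cache.EvictionAttributes;
import org.apache.geode.cache.RegionShortcut;

public class HeapEvictionExample {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // Begin evicting when heap use crosses 75% (illustrative threshold).
    cache.getResourceManager().setEvictionHeapPercentage(75.0f);

    // Heap LRU eviction overflows bucket-local LRU values to disk (the
    // default disk store, since no name is set) until heap use falls
    // back below the threshold. A null sizer selects the built-in
    // default object sizer.
    cache.createRegionFactory(RegionShortcut.PARTITION)
        .setEvictionAttributes(
            EvictionAttributes.createLRUHeapAttributes(
                null, EvictionAction.OVERFLOW_TO_DISK))
        .create("heapBoundRegion");
  }
}
```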