To use partitioned regions, you should understand how they work and your options for managing them.

During operation, a partitioned region looks like one large virtual region, with the same logical view held by all of the members where the region is defined.

Figure: Logical view compared to physical view. The logical view shows entries X, Y, and Z in Partitioned Region A. The physical view shows X stored in Partitioned Region A on Machine 1, Y on Machine 2, and Z on Machine 3.

For each member where you define the region, you can choose how much space to allow for region data storage, including no local storage at all. The member can access all region data regardless of how much is stored locally.

Figure: Logical view compared to physical view. The logical view shows entries X, Y, and Z in Partitioned Region A. The physical view shows no local data in Partitioned Region A on Machine 1, X and Z on Machine 2, and Y on Machine 3.
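For example, a member can participate purely as an accessor by allowing the region no local storage. The following Java sketch, written against the Apache Geode API that underlies Tanzu GemFire, shows a data store configuration; the memory and bucket values are illustrative assumptions, not recommendations.

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.PartitionAttributesFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class PartitionedRegionSetup {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // Data store member: allow up to 512 MB of local bucket storage.
    // An accessor member would use setLocalMaxMemory(0): no local storage,
    // but full read/write access to all region data.
    Region<String, String> regionA = cache
        .<String, String>createRegionFactory(RegionShortcut.PARTITION)
        .setPartitionAttributes(new PartitionAttributesFactory<String, String>()
            .setLocalMaxMemory(512)   // megabytes; 0 for a pure accessor
            .setTotalNumBuckets(113)  // total for the region across the cluster
            .create())
        .create("PartitionedRegionA");
  }
}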

A cluster can have multiple partitioned regions, and it can mix partitioned regions with distributed regions and local regions. The usual requirement for unique region names, except for regions with local scope, still applies. A single member can host multiple partitioned regions.

Data Partitioning

Tanzu GemFire automatically determines the physical location of data in the members that host a partitioned region’s data. Tanzu GemFire breaks partitioned region data into units of storage known as buckets and stores each bucket in one of the region’s host members. Buckets are distributed according to each member’s region attribute settings.

When an entry is created, it is assigned to a bucket based on its key. Keys are grouped together in a bucket and always remain there; if the configuration allows, the buckets themselves may be moved between members to balance the load.
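Conceptually, the default key-to-bucket assignment behaves like a hash of the key modulo the total number of buckets. The following one-method sketch is a simplified model for illustration only, not GemFire’s exact internal routine:

// Simplified model of default bucket assignment, for illustration only.
// Custom partitioning (below) substitutes a routing object for the key first.
static int bucketFor(Object key, int totalNumBuckets) {
  // Mask the sign bit rather than using Math.abs, which fails for Integer.MIN_VALUE.
  return (key.hashCode() & Integer.MAX_VALUE) % totalNumBuckets;
}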

You must run the data stores needed to accommodate storage for the partitioned region’s buckets. You can start new data stores on the fly. When a new data store creates the region, it takes responsibility for as many buckets as allowed by the partitioned region and member configuration.

You can customize how Tanzu GemFire groups your partitioned region data with custom partitioning and data colocation.
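As a sketch of custom partitioning, the following hypothetical PartitionResolver routes each entry by a customer ID embedded in its key, so all of one customer’s entries land in the same bucket. The key format "customerId|orderId" is an assumption for illustration; the resolver would be installed with PartitionAttributesFactory.setPartitionResolver.

import org.apache.geode.cache.EntryOperation;
import org.apache.geode.cache.PartitionResolver;

// Hypothetical resolver: keys are assumed to have the form "customerId|orderId".
// Entries that return the same routing object are grouped into the same bucket.
public class CustomerPartitionResolver implements PartitionResolver<String, Object> {

  @Override
  public Object getRoutingObject(EntryOperation<String, Object> operation) {
    String key = operation.getKey();
    return key.substring(0, key.indexOf('|')); // route on the customer ID portion
  }

  @Override
  public String getName() {
    return "CustomerPartitionResolver";
  }

  @Override
  public void close() {
    // no resources to release
  }
}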

Partitioned Region Operation

A partitioned region operates much like a non-partitioned region with distributed scope. Most of the standard Region methods are available, although some methods that are normally local operations become distributed operations, because they work on the partitioned region as a whole instead of the local cache. For example, a put or create into a partitioned region may not actually be stored into the cache of the member that called the operation. The retrieval of any entry requires no more than one hop between members.
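A minimal usage sketch, assuming the region created earlier; the entry may be stored on a different member than the caller, but the API is unchanged:

// The put may be stored in another member's cache rather than the caller's.
regionA.put("k1", "value one");

// The get locates the entry's bucket in at most one hop between members.
String value = regionA.get("k1");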

Partitioned regions support the client/server model, just like other regions. If you need to connect dozens of clients to a single partitioned region, using servers greatly improves performance.
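A minimal client-side sketch, assuming a locator at localhost:10334 and the region name used above (both are illustrative assumptions):

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class PartitionedRegionClient {
  public static void main(String[] args) {
    // Connect through a locator; the pool discovers the servers hosting the region.
    ClientCache clientCache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .create();

    // PROXY keeps no local data; every operation is forwarded to the servers.
    Region<String, String> regionA = clientCache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("PartitionedRegionA");

    regionA.put("k1", "value one");
    System.out.println(regionA.get("k1"));

    clientCache.close();
  }
}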

Additional Information About Partitioned Regions

Keep the following in mind about partitioned regions:

  • Partitioned regions never run asynchronously. Operations in partitioned regions always wait for acknowledgement from the caches containing the original data entry and any redundant copies.
  • A partitioned region needs a cache loader in every region data store (local-max-memory > 0).
  • Tanzu GemFire distributes the data buckets as evenly as possible across all members storing the partitioned region data, within the limits of any custom partitioning or data colocation that you use. The number of buckets allotted for the partitioned region determines the granularity of data storage and thus how evenly the data can be distributed. The number of buckets is a total for the entire region across the cluster.
  • In rebalancing data for the region, Tanzu GemFire moves buckets, but does not move data around inside the buckets. You can trigger a rebalance programmatically, as shown in the sketch after this list.
  • You can query partitioned regions, but there are certain limitations. See Querying Partitioned Regions for more information.
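A minimal sketch of triggering a rebalance from a member, using the resource manager API; getResults blocks until the operation completes:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.control.RebalanceOperation;
import org.apache.geode.cache.control.RebalanceResults;

// Rebalancing moves whole buckets between members; it never moves
// individual entries from one bucket to another.
static void rebalance(Cache cache) throws InterruptedException {
  RebalanceOperation op = cache.getResourceManager()
      .createRebalanceFactory()
      .start();
  RebalanceResults results = op.getResults(); // blocks until rebalancing completes
  System.out.println("Buckets transferred: " + results.getTotalBucketTransfersCompleted());
}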