This topic describes the create command in gfsh, the VMware Tanzu GemFire command-line interface.
Use this command to create async-event-queues, disk-stores, gateway receivers, gateway senders, indexes, JNDI bindings, and regions.
- create async-event-queue: Creates an asynchronous event queue for batching events before they are delivered by a gateway sender.
- create defined indexes: Creates all the defined indexes.
- create disk-store: Defines a pool of one or more disk stores, which can be used by regions, client subscription queues, and gateway sender queues for WAN distribution.
- create gateway-receiver: Creates a gateway receiver. You can only have one gateway receiver on each member, and unlike a gateway sender, you do not need to specify an identifier for the gateway receiver.
- create gateway-sender: Creates a gateway sender on one or more members of a cluster.
- create index: Creates an index that can be used when executing queries.
- create jndi-binding: Creates a JNDI binding that specifies resource attributes which describe a JDBC connection.
- create region: Creates a region with the given path and configuration.
Note: The order in which components are created matters. For example, the recommendation for a WAN setup is to create regions before creating gateway receivers. This ensures that when WAN receivers are started, their associated regions are in place. Otherwise, the create region command may fail if events are received before the region exists. For more on this topic, see Configuring a Multi-site (WAN) System.
Creates an asynchronous event queue for batching events before they are delivered by a gateway sender.
See Configuring Multi-Site (WAN) Event Queues.
Availability: Online. You must be connected in gfsh to a JMX Manager member to use this command.
Syntax:
create async-event-queue --id=value --listener=value [--groups=value(,value)*]
[--parallel(=value)?] [--enable-batch-conflation(=value)?] [--batch-size=value]
[--batch-time-interval=value] [--persistent(=value)?] [--disk-store=value]
[--disk-synchronous(=value)?] [--max-queue-memory=value]
[--dispatcher-threads=value] [--order-policy=value]
[--gateway-event-filter=value(,value)*]
[--gateway-event-substitution-filter=value]
[--listener-param=value(,value)*] [--forward-expiration-destroy(=value)?]
[--pause-event-processing(=value)?]
Parameters, create async-event-queue:
Name | Description | Default Value |
---|---|---|
‑‑id | Required. ID of the asynchronous event queue | |
‑‑groups | The queue is created on all members of the groups. If you do not specify a group, the queue is created on all members. | |
‑‑parallel | Specifies whether the queue is parallel. | false |
‑‑enable-batch-conflation | Enables batch conflation. | false |
‑‑batch-size | Maximum number of messages that a batch can contain. | 100 |
‑‑batch-time-interval | Maximum amount of time, in ms, that can elapse before a batch is delivered when the queue contains fewer events than batch-size. | 5 |
‑‑persistent | Boolean value that determines whether Tanzu GemFire persists this queue. | false. If the option is specified without a value, true is used. |
‑‑disk-store | Named disk store to use for storing queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, Tanzu GemFire uses the default disk store for overflow and queue persistence. | |
‑‑disk-synchronous | Specifies whether disk writes are synchronous. | true |
‑‑max-queue-memory | Maximum amount of memory in megabytes that the queue can consume before overflowing to disk. | 100 |
‑‑dispatcher-threads | Number of threads used for sending events. | 5 |
‑‑order-policy | Policy for dispatching events when ‑‑dispatcher-threads is > 1. Possible values are THREAD, KEY, PARTITION. | KEY |
‑‑gateway-event-filter | List of fully qualified class names of GatewayEventFilters for this queue. These classes filter events before dispatching to remote servers. | |
‑‑gateway-event-substitution-filter | Fully-qualified class name of the GatewayEventSubstitutionFilter for this queue. | |
‑‑listener | Required. Fully-qualified class name of Async Event Listener for this queue | |
‑‑listener-param | Parameter name and value to be passed to the Async Event Listener class. Optionally, you can specify a value by following the parameter name with the # character and the value (for example, paramName#paramValue). | |
‑‑forward-expiration-destroy | Enables forwarding of expiration destroy operations to AsyncEventListener instances. If specified without a value, this parameter is set to "false". | false |
‑‑pause-event-processing | Specifies whether event dispatching from the queue to the listeners will be paused when the AsyncEventQueue is started. If specified without a value, this parameter is set to "true". | false |
Example Commands:
create async-event-queue --id=myAEQ --listener=myApp.myListener
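A fuller sketch combining several of the options above: the queue ID, listener class, listener parameter, and disk store name are illustrative, and the named disk store is assumed to already exist.
create async-event-queue --id=persistentAEQ --listener=myApp.myListener \
  --listener-param=batchLimit#50 --persistent --disk-store=aeqDiskStore \
  --batch-size=50 --dispatcher-threads=3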
Creates all the defined indexes.
See also define index and clear defined indexes.
Availability: Online. You must be connected in gfsh to a JMX Manager member to use this command.
Syntax:
create defined indexes [--members=value(,value)*] [--groups=value(,value)*]
Parameters, create defined indexes:
Name | Description | Default |
---|---|---|
‑‑members | Name/ID of the members on which the defined indexes will be created. | |
‑‑groups | The indexes will be created on all members of the specified groups. |
Example Commands:
create defined indexes
Sample Output:
gfsh>create defined indexes
Indexes successfully created. Use list indexes to get details.
1. ubuntu(server1:17682)<v1>:27574
If index creation fails, you may receive an error message in gfsh similar to the following:
gfsh>create defined indexes
Exception : org.apache.geode.cache.query.RegionNotFoundException ,
Message : Region '/r3' not found: from /r3
Occurred on following members
1. india(s1:17866)<v1>:27809
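A further sketch, restricting creation of the defined indexes to a hypothetical member group; the group name is illustrative, and the indexes must already have been defined with define index.
create defined indexes --groups=analyticsGroup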
Defines a pool of one or more disk stores, which can be used by regions, client subscription queues, and gateway sender queues for WAN distribution.
See Disk Storage.
Availability: Online. You must be connected in gfsh to a JMX Manager member to use this command.
Syntax:
create disk-store --name=value --dir=value(,value)* [--allow-force-compaction(=value)?]
[--auto-compact(=value)?] [--compaction-threshold=value] [--max-oplog-size=value]
[--queue-size=value] [--time-interval=value] [--write-buffer-size=value]
[--groups=value(,value)*]
[--disk-usage-warning-percentage=value] [--disk-usage-critical-percentage=value]
Parameters, create disk-store:
Name | Description | Default Value |
---|---|---|
‑‑name | Required. The name of this disk store. | |
‑‑dir | Required. One or more directory names where the disk store files are written. Optionally, a directory name may be followed by # and the maximum number of megabytes that the disk store can use in the directory (for example, directoryName#maxSizeInMB). If a specified directory does not exist, the command creates it. | If the maximum directory size in megabytes is not specified, it is set to 2147483647 (the value of Integer.MAX_VALUE). |
‑‑allow-force-compaction | Set to true to allow disk compaction to be forced on this disk store. | false |
‑‑auto-compact | Set to true to automatically compact the disk files. | true |
‑‑compaction-threshold | Percentage of non-garbage remaining, below which the disk store is eligible for compaction. | 50 |
‑‑max-oplog-size | Maximum size, in megabytes, for an oplog file. When the oplog file reaches this size, the file is rolled over to a new file. | 1024 |
‑‑queue-size | Maximum number of operations that can be asynchronously queued to be written to disk. | 0 |
‑‑time-interval | The number of milliseconds that can elapse before unwritten data is written to disk. | 1000 |
‑‑groups | The disk store is created on all members of the groups. If no group is specified, the disk store is created on all members. | |
‑‑write-buffer-size | The size in bytes of the write buffer that this disk store uses when writing data to disk. Larger values may increase performance but use more memory. The disk store allocates one direct memory buffer of this size. | 32768 |
‑‑disk-usage-warning-percentage | Disk usage above this threshold generates a warning message. For example, if the threshold is set to 90%, then on a 1 TB drive, falling under 100 GB of free disk space generates the warning. Set to "0" (zero) to deactivate. | 90 |
‑‑disk-usage-critical-percentage | Disk usage above this threshold generates an error message and shuts down the member's cache. For example, if the threshold is set to 99%, then falling under 10 GB of free disk space on a 1 TB drive generates the error and shuts down the cache. Set to "0" (zero) to deactivate. | 99 |
Example Commands:
create disk-store --name=store1 --dir=/data/ds1
Sample Output:
gfsh>create disk-store --name=store1 --dir=/data/ds1
Member | Result
------- | -------
server1 | Success
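A fuller sketch combining the directory size cap and compaction options described above; the disk store name, directory path, and group name are illustrative.
create disk-store --name=store2 --dir=/data/ds2#4096 --max-oplog-size=512 \
  --compaction-threshold=40 --auto-compact=true --groups=storageGroup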
Creates gateway receivers. You can only have one gateway receiver on each member, and unlike a gateway sender, you do not need to specify an identifier for the gateway receiver.
The create occurs on all servers, unless the --groups or --members option is specified. If the gateway receiver creation succeeds on at least one member, this gfsh command exits with an exit code indicating success. The command outputs the status of each member's gateway receiver in tabular format, regardless of whether the creation succeeded or failed.
See Gateway Receivers.
Availability: Online. You must be connected in gfsh to a JMX Manager member to use this command.
Syntax:
create gateway-receiver [--groups=value(,value)*] [--members=value(,value)*]
[--manual-start(=value)?] [--start-port=value] [--end-port=value] [--bind-address=value]
[--maximum-time-between-pings=value] [--socket-buffer-size=value]
[--gateway-transport-filter=value(,value)*] [--hostname-for-senders=value]
[--if-not-exists(=value)?]
Parameters, create gateway-receiver:
Name | Description | Default Value |
---|---|---|
‑‑groups | Gateway receivers are created on the members of the groups. | |
‑‑members | Name of the members on which to create the gateway receiver. For backward compatibility, no gateway receiver configuration is persisted if this option is specified and cluster configuration is enabled. | |
‑‑manual-start | Boolean value that specifies whether you need to manually start the gateway receiver. When specified without providing a boolean value or when specified and set to "true", the gateway receiver must be started manually. | false |
‑‑start-port | Starting port number of the range of possible port numbers that this gateway receiver will use to listen for connections from gateway senders in other sites. Tanzu GemFire chooses an unused port number in the specified port number range to start the receiver. If no port numbers in the range are available, an exception is thrown. The port range is defined by the --start-port and --end-port values. | 5000 |
‑‑end-port | Ending port number of the range of possible port numbers that this gateway receiver will use to listen for connections from gateway senders in other sites. Tanzu GemFire chooses an unused port number in the specified port number range to start the receiver. If no port numbers in the range are available, an exception is thrown. The port range is defined by the --start-port and --end-port values. | 5500 |
‑‑bind-address | Network address for connections from gateway senders in other sites. Specify the address as a literal string value. | |
‑‑socket-buffer-size | An integer value that sets the buffer size (in bytes) of the socket connection for this gateway receiver. This value should match the socket-buffer-size setting of gateway senders that connect to this receiver. | 524288 |
‑‑gateway-transport-filter | The fully qualified class name of the GatewayTransportFilter to be added to the Gateway receiver. | |
‑‑maximum-time-between-pings | Integer value that specifies the time interval (in milliseconds) to use between pings to connected WAN sites. This value determines the maximum amount of time that can elapse before a remote WAN site is considered offline. | 60000 |
‑‑hostname-for-senders | The host name or IP address that gateway senders are told to use when connecting to this gateway receiver. The locator informs gateway senders of this value. | |
‑‑if-not-exists | When specified without providing a boolean value or when specified and set to "true", gateway receivers will not be created if they already exist. Command output reports the status of each creation attempt. | false |
Example Commands:
gfsh>create gateway-receiver --members=server1
Sample Output:
gfsh>create gateway-receiver --members=server1
Member | Status
------- | ---------------------------------------------------------------------------
server1 | GatewayReceiver created on member "server1" and will listen on the port "0"
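A fuller sketch setting an explicit port range and the address advertised to remote gateway senders; the member name, host name, and port values are illustrative.
create gateway-receiver --members=server1 --start-port=5100 --end-port=5200 \
  --hostname-for-senders=gemfire-site1.example.com --socket-buffer-size=524288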
Creates a gateway sender on one or more members of a cluster.
See Gateway Senders.
Note: The gateway sender configuration for a specific sender id must be identical on each Tanzu GemFire member that hosts the gateway sender.
Availability: Online. You must be connected in gfsh to a JMX Manager member to use this command.
Syntax:
create gateway-sender --id=value --remote-distributed-system-id=value
[--groups=value(,value)*] [--members=value(,value)*] [--parallel(=value)?]
[--manual-start=value] [--socket-buffer-size=value] [--socket-read-timeout=value]
[--enable-batch-conflation=value] [--batch-size=value] [--batch-time-interval=value]
[--enable-persistence=value] [--disk-store-name=value] [--disk-synchronous=value]
[--maximum-queue-memory=value] [--alert-threshold=value] [--dispatcher-threads=value]
[--order-policy=value] [--gateway-event-filter=value(,value)*]
[--gateway-transport-filter=value(,value)*]
[--group-transaction-events(=value)?]
[--enforce-threads-connect-same-receiver(=value)?]
Parameters, create gateway-sender:
Name | Description | Default |
---|---|---|
‑‑id | Required. Unique identifier for the gateway sender, usually an identifier associated with a physical location. | |
‑‑remote-distributed-system-id | Required. ID of the remote cluster where this gateway sender sends events. | |
‑‑groups | Gateway senders are created on the members of the groups. | |
‑‑members | Name of the members on which to create the gateway sender. | |
‑‑parallel | When set to true, specifies a parallel Gateway Sender. | false |
‑‑enable-batch-conflation | Boolean value that determines whether Tanzu GemFire should conflate messages. | false |
‑‑manual-start | Deprecated. Boolean value that specifies whether you need to manually start the gateway sender. If you supply a null value, the default value of false is used, and the gateway sender starts automatically. A manual start is likely to cause data loss, so manual start should never be used in a production system. | false |
‑‑socket-buffer-size | Size of the socket buffer that sends messages to remote sites. This size should match the size of the socket-buffer-size attribute of remote gateway receivers that process region events. | 524288 |
‑‑socket-read-timeout | Amount of time in milliseconds that the gateway sender will wait to receive an acknowledgment from a remote site. By default this is set to 0, which means there is no timeout. If you do set this timeout, you must set it to a minimum of 30000 (milliseconds). Setting it to a lower number will generate an error message and reset the value to the default of 0. | 0 |
‑‑batch-size | Maximum number of messages that a batch can contain. | 100 |
‑‑batch-time-interval | Maximum amount of time, in ms, that can elapse before a batch is delivered when the queue contains fewer events than batch-size. | 1000 |
‑‑enable-persistence | Boolean value that determines whether Tanzu GemFire persists the gateway queue. | false |
‑‑disk-store-name | Named disk store to use for storing the queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, Tanzu GemFire uses the default disk store for overflow and queue persistence. | |
‑‑disk-synchronous | For regions that write to disk, boolean that specifies whether disk writes are done synchronously for the region. | true |
‑‑maximum-queue-memory | Maximum amount of memory in megabytes that the queue can consume before overflowing to disk. | 100 MB |
‑‑alert-threshold | Maximum number of milliseconds that a region event can remain in the gateway sender queue before Tanzu GemFire logs an alert. | 0 |
‑‑dispatcher-threads | Number of dispatcher threads that are used to process region events from a gateway sender queue or asynchronous event queue. | 5 |
‑‑order-policy | When the dispatcher-threads attribute is greater than 1, order-policy configures the way in which multiple dispatcher threads process region events from a serial gateway queue or serial asynchronous event queue. This attribute can have one of the following values: key, thread, or partition. You cannot configure the order-policy for a parallel gateway sender. | key |
‑‑gateway-event-filter | A list of fully-qualified class names of GatewayEventFilters (separated by commas) to be associated with the GatewaySender. This serves as a callback for users to filter out events before dispatching to a remote cluster. | |
‑‑gateway-transport-filter | The fully-qualified class name of the GatewayTransportFilter to be added to the GatewaySender. | |
‑‑group-transaction-events | Boolean value to ensure that all the events of a transaction are sent in the same batch, that is, they are never spread across different batches. Only allowed to be set on gateway senders with the parallel flag set to false and dispatcher-threads set to 1, or on gateway senders with the parallel flag set to true. Note: In order to work for a transaction, the regions to which the transaction events belong must be replicated by the same set of senders with this flag enabled. Note: If the above condition is not fulfilled, or under very high load traffic conditions, it may not be guaranteed that all the events for a transaction will be sent in the same batch, even if group-transaction-events is enabled. | false |
‑‑enforce-threads-connect-same-receiver | This parameter applies only to serial gateway senders. If true, receiver member id is checked by all dispatcher threads when the connection is established to ensure they connect to the same receiver. Instead of starting all dispatcher threads in parallel, one thread is started first, and after that the rest are started in parallel. | false |
Example Commands:
gfsh>create gateway-sender --remote-distributed-system-id="2" --id="sender2"
Sample Output:
gfsh>create gateway-sender --remote-distributed-system-id="2" --id="sender2"
Member | Status
------- | --------------------------------------------
server1 | GatewaySender "sender2" created on "server1"
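A fuller sketch of a parallel, persistent sender; the sender ID, remote system ID, and disk store name are illustrative, and the named disk store is assumed to already exist.
create gateway-sender --id=sender3 --remote-distributed-system-id=3 --parallel=true \
  --enable-persistence=true --disk-store-name=senderStore --dispatcher-threads=4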
Create an index that can be used when executing queries.
Availability: Online. You must be connected in gfsh to a JMX Manager member to use this command.
See Working with Indexes.
Syntax:
create index --name=value --expression=value --region=value
[--members=value(,value)*] [--type=value] [--groups=value(,value)*]
Parameters, create index:
Name | Description | Default |
---|---|---|
‑‑name | Required. Name of the index to create. | |
‑‑expression | Required. Field of the region values that are referenced by the index. | |
‑‑region | Required. Name/Path of the region which corresponds to the “from” clause in a query. | |
‑‑members | Name/ID of the members on which index will be created. | |
‑‑type | Type of the index. Valid values are: range and key. (A third type, hash, is still recognized, but hash indexes are deprecated.) | range |
‑‑groups | The index will be created on all the members in the groups. |
Example Commands:
create index --name=myKeyIndex --expression=region1.Id --region=region1 --type=key
Sample Output:
gfsh>create index --name=myKeyIndex --expression=region1.Id --region=region1 --type=key
Index successfully created with following details
Name : myKeyIndex
Expression : region1.Id
RegionPath : /region1
Members which contain the index
1. ubuntu(server1:17682)<v1>:27574
gfsh>create index --name=myIndex2 --expression=exp2 --region=/exampleRegion
Failed to create index "myIndex2" due to following reasons
Index "myIndex2" already exists. Create failed due to duplicate name.
Occurred on following members
1. ubuntu(server1:17682)<v1>:27574
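A further sketch creating a range index on selected members; the index name, field, region, and member names are illustrative.
create index --name=statusIndex --expression=status --region=/exampleRegion \
  --type=range --members=server1,server2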
Create a JNDI binding that specifies resource attributes which describe a JDBC connection.
Availability: Online. You must be connected in gfsh to a JMX Manager member to use this command.
Syntax:
create jndi-binding --name=value --url=value
[--jdbc-driver-class=value] [--type=value] [--blocking-timeout-seconds=value]
[--conn-pooled-datasource-class=value] [--idle-timeout-seconds=value]
[--init-pool-size=value] [--login-timeout-seconds=value]
[--managed-conn-factory-class=value] [--max-pool-size=value] [--password=value]
[--transaction-type=value] [--username=value] [--xa-datasource-class=value]
[--if-not-exists(=value)?] [--datasource-config-properties=value(,value)*]
Parameters, create jndi-binding:
Name | Description | Default |
---|---|---|
‑‑name | Required. Name of the binding to create. | |
‑‑url or ‑‑connection-url | Required. The JDBC driver connection URL string. For example, jdbc:hsqldb:hsql://localhost:1701. | |
‑‑jdbc-driver-class | The fully qualified name of the JDBC driver class. | |
‑‑type | Type of the XA datasource. One of: MANAGED, SIMPLE, POOLED, or XAPOOLED. If --type=POOLED and a --conn-pooled-datasource-class option is not specified, a pool will be created using Hikari. For more information about Hikari, see https://brettwooldridge.github.io/HikariCP. | SIMPLE |
‑‑blocking-timeout-seconds | Specifies the maximum time, in seconds, to block while waiting for a connection before throwing an exception. | |
‑‑conn-pooled-datasource-class | The fully qualified name of the connection pool implementation that holds XA datasource connections. If --type=POOLED, then this class must implement org.apache.geode.datasource.PooledDataSourceFactory. | |
‑‑idle-timeout-seconds | Specifies the time, in seconds, that a connection may be idle before being closed. | |
‑‑init-pool-size | Specifies the initial number of connections the pool should hold. | |
‑‑login-timeout-seconds | The number of seconds after which the client thread will be disconnected due to inactivity. | |
‑‑managed-conn-factory-class | The fully qualified name of the connection factory implementation. | |
‑‑max-pool-size | The maximum number of connections that may be created in a pool. | |
‑‑password | The default password used when creating a new connection. | |
‑‑transaction-type | Type of the transaction. One of XATransaction, NoTransaction, or LocalTransaction. | |
‑‑username | Specifies the user name to be used when creating a new connection. When specified, if the --password option is not also specified, gfsh will prompt for the password. | |
‑‑xa-datasource-class | The fully qualified name of the javax.sql.XADataSource implementation class. | |
‑‑if-not-exists | When true, a duplicate JNDI binding will not be created if one with the same name already exists. When false, an attempt to create a duplicate JNDI binding results in an error. The option is set to true if it is specified without a value. | false |
‑‑datasource-config-properties | Properties for the custom XADataSource driver. Append a JSON string containing a (name, type, value) tuple to set any property. If --type=POOLED , the properties will configure the database data source. If --type=POOLED and the value of a name within the tuple begins with the string “pool.”, then the properties will configure the pool data source. For example: --datasource-config-properties={'name':'name1','type':'type1','value':'value1'},{'name':'pool.name2','type':'type2','value':'value2'} |
Example Commands:
gfsh>create jndi-binding --name=jndi1 --type=SIMPLE \
--jdbc-driver-class=org.apache.derby.jdbc.EmbeddedDriver \
--url="jdbc:derby:newDB;create=true"
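A further sketch of a pooled data source; the binding name, driver class, URL, user name, and pool property are illustrative (the pool property assumes HikariCP's maximumPoolSize setting). Per the --username description, gfsh prompts for the password because --password is not supplied.
gfsh>create jndi-binding --name=jndi2 --type=POOLED \
--jdbc-driver-class=org.postgresql.Driver \
--url="jdbc:postgresql://localhost:5432/mydb" \
--username=dbuser \
--datasource-config-properties={'name':'pool.maximumPoolSize','type':'java.lang.Integer','value':'10'}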
Create a region with given path and configuration.
You must specify either a --type or a --template-region for initial configuration when creating a region. Specifying a --key-constraint and a --value-constraint makes object type information available during querying and indexing.
See Region Data Storage and Distribution.
See Specifying JSON within Command-Line Options for syntax details.
Availability: Online. You must be connected in gfsh to a JMX Manager member to use this command.
Syntax:
create region --name=value [--type=value] [--template-region=value]
[--groups=value(,value)*] [--if-not-exists(=value)?]
[--key-constraint=value] [--value-constraint=value]
[--enable-statistics=value] [--entry-idle-time-expiration=value]
[--entry-idle-time-expiration-action=value]
[--entry-time-to-live-expiration=value]
[--entry-time-to-live-expiration-action=value]
[--entry-idle-time-custom-expiry=value] [--entry-time-to-live-custom-expiry=value]
[--region-idle-time-expiration=value]
[--region-idle-time-expiration-action=value]
[--region-time-to-live-expiration=value]
[--region-time-to-live-expiration-action=value] [--disk-store=value]
[--enable-synchronous-disk=value] [--enable-async-conflation=value]
[--enable-subscription-conflation=value] [--cache-listener=value(,value)*]
[--cache-loader=value] [--cache-writer=value]
[--async-event-queue-id=value(,value)*]
[--gateway-sender-id=value(,value)*] [--enable-concurrency-checks=value]
[--enable-cloning=value] [--concurrency-level=value]
[--colocated-with=value] [--local-max-memory=value]
[--recovery-delay=value] [--redundant-copies=value]
[--startup-recovery-delay=value] [--total-max-memory=value]
[--total-num-buckets=value] [--compressor=value] [--off-heap(=value)?]
[--partition-listener=value(,value)*] [--partition-resolver=value]
[--eviction-entry-count=value] [--scope=value]
[--eviction-max-memory=value] [--eviction-action=value]
[--eviction-object-sizer=value]
Parameters, create region:
Name | Description | Default |
---|---|---|
‑‑name | Required. Name/Path of the region to be created. | |
‑‑type | Required (if template-region is not specified). Type of region to create. Options include: PARTITION, PARTITION_REDUNDANT, REPLICATE, LOCAL, etc. To get a list of all region type options, add the ‑‑type parameter and then press the Tab key to display the full list. | |
‑‑template-region | Required (if type is not specified). Name/Path of the region whose attributes should be duplicated when creating this region. | |
‑‑groups | Groups of members on which the region will be created. | |
‑‑if-not-exists | A new region will not be created if a region with the same name already exists. By default, an attempt to create a duplicate region is reported as an error. If this option is specified without a value or is specified with a value of true, then gfsh displays a "Skipping..." acknowledgement, but does not throw an error. | false |
‑‑key-constraint | Fully qualified class name of the objects allowed as region keys. Ensures that keys for region entries are all of the same class. | |
‑‑value-constraint | Fully qualified class name of the objects allowed as region values. If not specified, then region values can be of any class. | |
‑‑enable-statistics | Whether to gather statistics for the region. Must be true to use expiration on the region. | |
‑‑entry-idle-time-expiration | How long, in seconds, the region's entries can remain in the cache without being accessed. | no expiration |
‑‑entry-idle-time-expiration-action | Action to be taken on an entry that has exceeded the idle expiration. Valid expiration actions include destroy, local-destroy, invalidate (default), and local-invalidate. | |
‑‑entry-time-to-live-expiration | How long, in seconds, the region's entries can remain in the cache without being accessed or updated. | no expiration |
‑‑entry-time-to-live-expiration-action | Action to be taken on an entry that has exceeded the TTL expiration. Valid expiration actions include destroy, local-destroy, invalidate (default), and local-invalidate. | |
‑‑entry-idle-time-custom-expiry | The name of a class implementing CustomExpiry for entry idle time. Append a JSON string for initialization properties. | |
‑‑entry-time-to-live-custom-expiry | The name of a class implementing CustomExpiry for entry time to live. Append a JSON string for initialization properties. | |
‑‑region-idle-time-expiration | How long, in seconds, the region can remain in the cache without its entries being accessed. | no expiration |
‑‑region-idle-time-expiration-action | Action to be taken on a region that has exceeded the idle expiration. Valid expiration actions include destroy, local-destroy, invalidate (default), and local-invalidate. The destroy and local-destroy actions destroy the region. The invalidate and local-invalidate actions leave the region in place, but invalidate all of its entries. | |
‑‑region-time-to-live-expiration | How long, in seconds, the region can remain in the cache without its entries being accessed or updated. | no expiration |
‑‑region-time-to-live-expiration-action | Action to be taken on a region that has exceeded the TTL expiration. Valid expiration actions include destroy, local-destroy, invalidate (default), and local-invalidate. The destroy and local-destroy actions destroy the region. The invalidate and local-invalidate actions leave the region in place, but invalidate all of its entries. | |
‑‑disk-store | Disk store to be used by this region. The list disk-stores command can be used to display existing disk stores. | |
‑‑enable-synchronous-disk | Whether writes are done synchronously for regions that persist data to disk. | |
‑‑enable-async-conflation | Whether to allow aggregation of asynchronous TCP/IP messages sent by the producer member of the region. A false value causes all asynchronous messages to be sent individually. | |
‑‑enable-subscription-conflation | Whether the server should conflate its messages to the client. A false value causes all server-client messages to be sent individually. | |
‑‑cache-listener | Fully qualified class name of a plug-in to be instantiated for receiving after-event notification of changes to the region and its entries. Any number of cache listeners can be configured. A fully qualified class name may be appended with a JSON specification that will be parsed to become the fields of the parameter to the init() method for a class that implements the Declarable interface. | |
‑‑cache-loader | Fully qualified class name of a plug-in to be instantiated for receiving notification of cache misses in the region. At most, one cache loader can be defined in each member for the region. For distributed regions, a cache loader may be invoked remotely from other members that have the region defined. A fully qualified class name may be appended with a JSON specification that will be parsed to become the fields of the parameter to the initialize() method for a class that implements the Declarable interface. | |
‑‑cache-writer | Fully qualified class name of a plug-in to be instantiated for receiving before-event notification of changes to the region and its entries. The plug-in may cancel the event. At most, one cache writer can be defined in each member for the region. A fully qualified class name may be appended with a JSON specification that will be parsed to become the fields of the parameter to the init() method for a class that implements the Declarable interface. | |
‑‑async-event-queue-id | IDs of the Async Event Queues that will be used for write-behind operations. | |
‑‑gateway-sender-id | IDs of the Gateway Senders to which data will be routed. | |
‑‑enable-concurrency-checks | Whether Region Version Vectors are implemented. Region Version Vectors are an extension to the versioning scheme that aids in synchronization of replicated regions. | |
‑‑enable-cloning | Determines how fromDelta applies deltas to the local cache for delta propagation. When true, the updates are applied to a clone of the value and then the clone is saved to the cache. When false, the value is modified in place in the cache. | |
‑‑concurrency-level | Estimate of the maximum number of application threads that will concurrently access a region entry at one time. This attribute does not apply to partitioned regions. | |
‑‑colocated-with | Central region with which this region should be colocated. | |
‑‑local-max-memory | Maximum amount of memory, in megabytes, to be used by the region in this process. | 90% of available heap |
‑‑recovery-delay | Delay in milliseconds that existing members will wait after a member crashes before restoring this region's redundancy on the remaining members. The default value (-1) indicates that redundancy will not be recovered after a failure. | -1 |
‑‑redundant-copies | Number of extra copies of buckets desired. Extra copies allow for both high availability in the face of VM departure (intended or unintended) and load balancing read operations. Allowed values: 0, 1, 2, and 3. | |
‑‑startup-recovery-delay | Delay in milliseconds that new members will wait before assuming their share of cluster-level redundancy. This allows time for multiple regions to start before the redundancy workload is parceled out to the new members. A value of -1 indicates that adding new members will not trigger redundancy recovery. | The default is to recover redundancy immediately when a new member is added. |
‑‑total-max-memory | Maximum amount of memory, in megabytes, to be used by the region in all processes. | |
‑‑total-num-buckets | Total number of hash buckets to be used by the region in all processes. | 113 |
‑‑compressor | Java class name that implements compression for the region. You can write a custom compressor that implements org.apache.geode.compression.Compressor, or you can specify the Snappy compressor (org.apache.geode.compression.SnappyCompressor), which is bundled with Tanzu GemFire. See Region Compression. | no compression |
‑‑off-heap | Specifies whether the region values are stored in heap memory or off-heap memory. When true, region values are in off-heap memory. If the parameter is specified without a value, the value of true is used. | false |
‑‑partition-listener | Specifies fully-qualified class names of one or more custom partition listeners. | |
‑‑partition-resolver | Specifies the full path to a custom partition resolver. Specify org.apache.geode.cache.util.StringPrefixPartitionResolver to use the included string prefix PartitionResolver. | |
‑‑eviction-entry-count | Enables eviction, where the eviction policy is based on the number of entries in the region. | |
‑‑eviction-max-memory | Enables eviction, where the eviction policy is based on the amount of memory consumed by the region, specified in megabytes. | |
‑‑eviction-action | Action to take when the eviction threshold is reached. Valid values are local-destroy and overflow-to-disk. | |
‑‑eviction-object-sizer | Specifies your implementation of the ObjectSizer interface to measure the size of objects in the region. The sizer applies only to heap and memory based eviction. | |
‑‑scope | Specifies the scope of the replicated region. This option can be used only if the --type parameter is set to a replicated region type; it is invalid for all other region types. If this parameter is not set and --type is set to a replicated region type, the default scope DISTRIBUTED_ACK is used. | DISTRIBUTED_ACK |
Example Commands:
create region --name=region1 --type=REPLICATE_PERSISTENT \
--cache-writer=org.apache.geode.examples.MyCacheWriter \
--group=Group1 --disk-store=DiskStore1
create region --name=region12 --template-region=/region1
create region --name=region2 --type=REPLICATE \
--cache-listener=org.apache.geode.examples.MyCacheListener1,\
org.apache.geode.examples.MyCacheListener2 \
--group=Group1,Group2
create region --name=region3 --type=PARTITION_PERSISTENT --redundant-copies=2 \
--total-max-memory=1000 --startup-recovery-delay=5 --total-num-buckets=100 \
--disk-store=DiskStore2 --cache-listener=org.apache.geode.examples.MyCacheListener3 \
--group=Group2
create region --name=region4 --type=REPLICATE_PROXY \
--cache-listener=org.apache.geode.examples.MyCacheListener1 --group=Group1,Group2
create region --name=myRegion --type=REPLICATE --eviction-max-memory=100 \
--eviction-action=overflow-to-disk --eviction-object-sizer=my.company.geode.MySizer
create region --name=r1 --type=PARTITION \
--cache-loader=org.example.myLoader{'URL':'jdbc:cloudscape:rmi:MyData'}
Sample Output:
gfsh>create region --name=myRegion --type=LOCAL
Member | Status
------- | ---------------------------------------
server1 | Region "/myRegion" created on "server1"
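One more sketch, combining entry expiration and eviction options from the table above; the region name and numeric values are illustrative. Statistics are enabled because expiration requires them, and overflow-to-disk uses the default disk store since none is named.
create region --name=sessionData --type=PARTITION_REDUNDANT --redundant-copies=1 \
  --enable-statistics=true --entry-idle-time-expiration=600 \
  --entry-idle-time-expiration-action=destroy \
  --eviction-entry-count=10000 --eviction-action=overflow-to-disk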