This topic describes a typical sequence of tasks that you execute after starting gfsh, the VMware Tanzu GemFire command-line interface.
Step 1: Create a scratch working directory, navigate to that directory, and start a gfsh prompt. For example:
$ mkdir gfsh_tutorial
$ cd gfsh_tutorial
$ gfsh
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/
Monitor and Manage Tanzu GemFire
gfsh>
See Starting gfsh for details.
Step 2: Start up a locator. Enter the following command:
gfsh>start locator --name=locator1
The following output appears:
gfsh>start locator --name=locator1
Starting a GemFire Locator in /home/username/gfsh_tutorial/locator1...
......................
Locator in /home/username/gfsh_tutorial/locator1 on 192.168.100.102[10334] as locator1 is currently online.
Process ID: 96393
Uptime: 12 seconds
Geode Version: 10.1.0
Java Version: 11.0.10
Log File: /home/username/gfsh_tutorial/locator1/locator1.log
JVM Arguments: --add-exports=java.management/com.sun.jmx.remote.security=ALL-UNNAMED --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /home/username/gfsh_tutorial/lib/gemfire-bootstrap-10.1.0.jar
Successfully connected to: JMX Manager [host=192.168.100.102, port=1099]
Cluster configuration service is up and running.
If you run start locator from gfsh without specifying the member name, gfsh automatically provides a random member name. This is useful for automation.
In your file system, examine the folder location where you executed gfsh. Notice that the start locator command has automatically created a working directory (using the name of the locator), and within that working directory, it has created a log file, a status file, and a .pid file (containing the locator's process ID) for this locator.
In addition, because no other JMX Manager exists yet, notice that gfsh has automatically started an embedded JMX Manager on port 1099 within the locator and has connected you to that JMX Manager.
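The interactive steps so far can also be scripted, which is useful for the automation mentioned above. As a sketch (assuming gfsh is on your PATH, and using the locator name from this tutorial), you can pass commands to gfsh non-interactively with the -e option, or collect gfsh commands in a file and execute them with the run command:
$ gfsh -e "start locator --name=locator1"
$ gfsh run --file=start_cluster.gfsh
Each -e option executes one gfsh command, in order, and gfsh exits after the last command completes.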
Step 3: Examine the existing gfsh connection.
In the current shell, type the following command:
gfsh>describe connection
If you are connected to the JMX Manager started within the locator that you started in Step 2, the following output appears:
gfsh>describe connection
Connection Endpoints
---------------------
192.168.100.102[1099]
Notice that the JMX Manager is on port 1099, while the locator was assigned the default port of 10334.
Step 4: Connect to the same locator/JMX Manager from a different terminal.
This step shows you how to connect to a locator/JMX Manager. Open a second terminal window, and start a second gfsh prompt. Type the same command in the second prompt as you did in Step 3:
gfsh>describe connection
This time, notice that you are not connected to a JMX Manager, and the following output appears:
gfsh>describe connection
Connection Endpoints
--------------------
Not connected
Type the following command in the second gfsh terminal:
gfsh>connect
The command connects you to the currently running local locator that you started in Step 2.
gfsh>connect
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=192.168.100.102, port=1099] ..
Successfully connected to: [host=192.168.100.102, port=1099]
You are connected to a cluster of version 10.1.0.
If you had used a custom --port when starting your locator, or if you were connecting from the gfsh prompt on another member, you would also need to specify --locator=hostname[port] when connecting to the cluster. For example (type disconnect first if you want to try this next command):
gfsh>connect --locator=localhost[10334]
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=192.168.100.102, port=1099] ..
Successfully connected to: [host=192.168.100.102, port=1099]
You are connected to a cluster of version 10.1.0.
Another way to connect gfsh to the cluster is to connect directly to the JMX Manager running inside the locator. For example (type disconnect first if you want to try this next command):
gfsh>connect --jmx-manager=localhost[1099]
Connecting to Manager at [host=localhost, port=1099] ..
Successfully connected to: [host=localhost, port=1099]
You are connected to a cluster of version 10.1.0.
In addition, you can connect to remote clusters over HTTP. See Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS.
Step 5: Disconnect and close the second terminal window. Type the following commands to disconnect and exit the second gfsh prompt:
gfsh>disconnect
Disconnecting from: localhost[1099]
Disconnected from : localhost[1099]
gfsh>exit
Close the second terminal window.
Step 6: Start a server. Return to your first terminal window, and start a cache server that uses the locator that you started in Step 2. Enter the following command:
gfsh>start server --name=server1 --locators=localhost[10334]
If the server starts successfully, the following output appears:
gfsh>start server --name=server1 --locators=localhost[10334]
Starting a GemFire Server in /home/username/gfsh_tutorial/server1...
...
Server in /home/username/gfsh_tutorial/server1 on 192.168.100.102[40404] as server1 is currently online.
Process ID: 96851
Uptime: 3 seconds
Geode Version: 10.1.0
Java Version: 11.0.10
Log File: /home/username/gfsh_tutorial/server1/server1.log
JVM Arguments: --add-exports=java.management/com.sun.jmx.remote.security=ALL-UNNAMED --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED -Dgemfire.start-dev-rest-api=false -Dgemfire.locators=localhost[10334] -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /home/username/gfsh_tutorial/lib/gemfire-bootstrap-10.1.0.jar
If you run start server from gfsh without specifying the member name, gfsh automatically provides a random member name. This is useful for automation.
In your file system, examine the folder location where you executed gfsh. Notice that, just like the start locator command, the start server command has automatically created a working directory (named after the server), and within that working directory, it has created a log file and a .pid file (containing the server's process ID) for this cache server.
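You can also ask gfsh for the running state of a member you started. As a sketch (using the server name from this tutorial), while connected to the JMX Manager you can check a server's status by name:
gfsh>status server --name=server1
When you are not connected to a JMX Manager, status server also accepts a --dir or --pid option so that it can locate the server process through its working directory or process ID instead.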
Step 7: List members. Use the list members command to view the current members of the cluster you have just created.
gfsh>list members
Member Count : 2
Name | Id | Type | Status
-------- | ----------------------------------------------------- | ------- | ------
locator1 | 192.168.100.102(locator1:96393:locator)<ec><v0>:463.. | Locator | Ready
server1 | 192.168.100.102(server1:96851)<v1>:43740 | Server | Ready
Step 8: View member details by executing the describe member command.
gfsh>describe member --name=server1
Name : server1
Id : 192.168.100.102(server1:96851)<v1>:43740
Type : Server
Host : 192.168.100.102
Regions :
Timezone : America/Los_Angeles -08:00
Metrics URL : Not Available
PID : 96851
Groups :
Redundancy-Zone :
Used Heap : 32M
Max Heap : 8192M
Load Average1 : 3.22
Working Dir : /home/username/gfsh_tutorial/server1
Log file : /home/username/gfsh_tutorial/server1/server1.log
Locators : localhost[10334]
Cache Server Information
Server Bind :
Server Port : 40404
Running : true
Client Connections : 0
No regions have been assigned to this member yet.
Step 9: Create your first region. Type the following command, then press the Tab key:
gfsh>create region --name=region1 --type=
A list of possible region types appears:
gfsh>create region --name=region1 --type=
LOCAL
LOCAL_HEAP_LRU
LOCAL_OVERFLOW
LOCAL_PERSISTENT
LOCAL_PERSISTENT_OVERFLOW
PARTITION
PARTITION_HEAP_LRU
PARTITION_OVERFLOW
PARTITION_PERSISTENT
PARTITION_PERSISTENT_OVERFLOW
PARTITION_PROXY
PARTITION_PROXY_REDUNDANT
PARTITION_REDUNDANT
PARTITION_REDUNDANT_HEAP_LRU
PARTITION_REDUNDANT_OVERFLOW
PARTITION_REDUNDANT_PERSISTENT
PARTITION_REDUNDANT_PERSISTENT_OVERFLOW
REPLICATE
REPLICATE_HEAP_LRU
REPLICATE_OVERFLOW
REPLICATE_PERSISTENT
REPLICATE_PERSISTENT_OVERFLOW
REPLICATE_PROXY
gfsh>create region --name=region1 --type=
Complete the command with the type of region you want to create. For example, create a local region:
gfsh>create region --name=region1 --type=LOCAL
Member | Status | Message
------- | ------ | --------------------------------------
server1 | OK | Region "/region1" created on "server1"
Cluster configuration for group 'cluster' is updated.
Because only one server is in the cluster at the moment, the command creates the local region on server1.
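Most of the region types listed above accept additional options on create region. As a sketch (exampleRegion is a hypothetical name, not part of this tutorial), a redundant partitioned region can specify how many redundant copies of each entry the cluster should maintain:
gfsh>create region --name=exampleRegion --type=PARTITION_REDUNDANT --redundant-copies=1
The --redundant-copies option applies only to partitioned region types.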
Step 10: Start another server. This time, specify a --server-port argument with a different port number, because you are starting a second cache server process on the same host machine.
gfsh>start server --name=server2 --server-port=40405
Starting a GemFire Server in /home/username/gfsh_tutorial/server2...
...
Server in /home/username/gfsh_tutorial/server2 on 192.168.100.102[40405] as server2 is currently online.
Process ID: 97094
Uptime: 3 seconds
Geode Version: 10.1.0
Java Version: 11.0.10
Log File: /home/username/gfsh_tutorial/server2/server2.log
JVM Arguments: --add-exports=java.management/com.sun.jmx.remote.security=ALL-UNNAMED --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED -Dgemfire.default.locators=192.168.100.102[10334] -Dgemfire.start-dev-rest-api=false -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /home/username/gfsh_tutorial/lib/gemfire-bootstrap-10.1.0.jar
Step 11: Create a replicated region.
gfsh>create region --name=region2 --type=REPLICATE
Member | Status | Message
------- | ------ | --------------------------------------
server1 | OK | Region "/region2" created on "server1"
server2 | OK | Region "/region2" created on "server2"
Cluster configuration for group 'cluster' is updated.
Step 12: Create a partitioned region.
gfsh>create region --name=region3 --type=PARTITION
Member | Status | Message
------- | ------ | --------------------------------------
server1 | OK | Region "/region3" created on "server1"
server2 | OK | Region "/region3" created on "server2"
Cluster configuration for group 'cluster' is updated.
Step 13: Create a replicated, persistent region.
gfsh>create region --name=region4 --type=REPLICATE_PERSISTENT
Member | Status | Message
------- | ------ | --------------------------------------
server1 | OK | Region "/region4" created on "server1"
server2 | OK | Region "/region4" created on "server2"
Cluster configuration for group 'cluster' is updated.
Step 14: List regions. A list of all the regions you just created appears.
gfsh>list regions
List of regions
---------------
region1
region2
region3
region4
Step 15: View member details again by executing the describe member command.
gfsh>describe member --name=server1
Name : server1
Id : 192.168.100.102(server1:96851)<v1>:43740
Type : Server
Host : 192.168.100.102
Regions : region1
region2
region3
region4
Timezone : America/Los_Angeles -08:00
Metrics URL : Not Available
PID : 96851
Groups :
Redundancy-Zone :
Used Heap : 58M
Max Heap : 8192M
Load Average1 : 2.27
Working Dir : /home/username/gfsh_tutorial/server1
Log file : /home/username/gfsh_tutorial/server1/server1.log
Locators : localhost[10334]
Cache Server Information
Server Bind :
Server Port : 40404
Running : true
Client Connections : 0
Notice that all the regions that you created now appear in the “Regions” section of the member description.
Note: The Metrics URL specifies the location of your metrics module. If metrics are not enabled, or if errors occurred while starting the metrics endpoint, the Metrics URL shows as Not Available.
gfsh>describe member --name=server2
Name : server2
Id : 192.168.100.102(server2:97094)<v2>:43723
Type : Server
Host : 192.168.100.102
Regions : region1
region2
region3
region4
Timezone : America/Los_Angeles -08:00
Metrics URL : Not Available
PID : 97094
Groups :
Redundancy-Zone :
Used Heap : 57M
Max Heap : 8192M
Load Average1 : 2.03
Working Dir : /home/username/gfsh_tutorial/server2
Log file : /home/username/gfsh_tutorial/server2/server2.log
Locators : 192.168.100.102[10334]
Cache Server Information
Server Bind :
Server Port : 40405
Running : true
Client Connections : 0
Note that even though you brought up the second server after creating the first region (region1), the second server still lists region1 because it picked up its configuration from the cluster configuration service.
gfsh>describe member --name=locator1
Name : locator1
Id : 192.168.100.102(locator1:96393:locator)<ec><v0>:46363
Type : Locator
Host : 192.168.100.102
Regions :
Timezone : America/Los_Angeles -08:00
Metrics URL : Not Available
PID : 96393
Groups :
Redundancy-Zone :
Used Heap : 160M
Max Heap : 8192M
Load Average1 : 2.08
Working Dir : /home/username/gfsh_tutorial/locator1
Log file : /home/username/gfsh_tutorial/locator1/locator1.log
Locators : 192.168.100.102[10334]
Step 16: Put data in a local region. Enter the following put command:
gfsh>put --key=('123') --value=('ABC') --region=region1
Result : true
Key Class : java.lang.String
Key : ('123')
Value Class : java.lang.String
Old Value : <NULL>
Step 17: Put data in a replicated region. Enter the following put command:
gfsh>put --key=('123abc') --value="('Hello World!!')" --region=region2
Result : true
Key Class : java.lang.String
Key : ('123abc')
Value Class : java.lang.String
Old Value : <NULL>
Step 18: Retrieve data. You can use the locate entry, query, or get command to return the data you just put into the region.
For example, using the get command:
gfsh>get --key=('123') --region=region1
Result : true
Key Class : java.lang.String
Key : ('123')
Value Class : java.lang.String
Value : ('ABC')
For example, using the locate entry command:
gfsh>locate entry --key=('123abc') --region=region2
Result : true
Key Class : java.lang.String
Key : ('123abc')
Locations Found : 2
MemberName | MemberId
---------- | -------------------------------
server2 | ubuntu(server2:6092)<v2>:17443
server1 | ubuntu(server1:5931)<v1>:35285
Notice that because the entry was put into a replicated region, the entry is located on both cluster members.
For example, using the query command:
gfsh>query --query='SELECT * FROM /region2'
Result : true
startCount : 0
endCount : 20
Rows : 1
Result
-----------------
('Hello World!!')
NEXT_STEP_NAME : END
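OQL queries are not limited to SELECT *. As a sketch (using region2 from this tutorial), you can query a region's entries collection to see keys and values together:
gfsh>query --query="SELECT e.key, e.value FROM /region2.entries e"
The entries collection is a standard queryable region attribute in OQL, so this pattern works on any region you can query.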
Step 19: Export your data. To save region data, you can use the export data command. For example:
gfsh>export data --region=region1 --file=region1.gfd --member=server1
You can later use the import data command to import that data into the same region on another member.
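As a sketch (assuming the same file path is accessible on the target member), an import of the file exported above looks like this:
gfsh>import data --region=region1 --file=region1.gfd --member=server1
The --file path is resolved on the specified member, so the .gfd file must exist in that member's file system, not on the machine running gfsh.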
Step 20: Shut down the cluster.
gfsh>shutdown --include-locators=true