This topic discusses using data with Spring Boot for VMware GemFire.
One of the most important tasks during development is ensuring your Spring Boot application handles data correctly. To verify the accuracy, integrity, and availability of your data, your application needs data with which to work.
For those of you already familiar with Spring Boot’s support for SQL database initialization, the approach when using VMware GemFire should be easy to understand.
VMware GemFire provides built-in support, similar in function to Spring Boot’s SQL database initialization, by using:
Gfsh’s import and export data commands.
Persistence with disk storage.
The Snapshot Service.
For example, by enabling persistence with disk storage, you could back up and restore persistent DiskStore files from one cluster to another.
Alternatively, using VMware GemFire’s Snapshot Service, you can export data contained in targeted Regions from one cluster during shutdown and import the data into another cluster on startup. The Snapshot Service lets you filter data while it is being imported and exported.
Finally, you can use VMware GemFire shell (Gfsh) commands to export data and import data.
In all cases, the files generated by persistence, the Snapshot Service, and Gfsh’s export command are in a proprietary binary format.
Furthermore, none of these approaches are as convenient as Spring Boot’s database initialization automation. Therefore, Spring Boot for VMware GemFire offers support to import data from JSON into VMware GemFire as PDX, as well as support to export data. By default, data is imported and exported in JSON format.
Spring Boot for VMware GemFire does not provide an equivalent to Spring Boot’s schema.sql file. The best way to define the data structures (the Region instances) that manage your data is with Spring Data for VMware GemFire’s annotation-based configuration support for defining cache Region instances, either from your application’s entity classes or indirectly from Spring and JSR-107 (JCache) caching annotations.

Warning: While this feature works, and many edge cases were thought through and tested thoroughly, there are still some limitations. The Spring team strongly recommends that this feature be used only for development and testing purposes.
You can import data into a Region by defining a JSON file that contains the JSON objects you wish to load. The JSON file must follow a predefined naming convention and be placed in the root of your application classpath:

data-<regionName>.json

Note: <regionName> refers to the lowercase "name" of the Region, as defined by Region.getName() (see the VMware GemFire Java API Reference).

For example, if you have a Region named “Orders”, you would create a JSON file called data-orders.json and place it in the root of your application classpath (for example, in src/test/resources).
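The naming convention is simple enough to sketch in plain Java. The helper below is hypothetical (Spring Boot for VMware GemFire derives the name internally); it only illustrates how a Region name maps to its data file:

```java
public class RegionDataFileNames {

    // Hypothetical helper illustrating the data-<regionName>.json naming
    // convention used for import files on the application classpath.
    public static String dataFileFor(String regionName) {
        return "data-" + regionName.toLowerCase() + ".json";
    }
}
```

For a Region named “Orders”, dataFileFor("Orders") yields data-orders.json.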
Create a JSON file for each Region that is implicitly defined (for example, by using @EnableEntityDefinedRegions) or explicitly defined (with ClientRegionFactoryBean in Java configuration) in your Spring Boot application configuration and that you want to load with data.

The JSON file containing the JSON data for the “Orders” Region might appear as follows:
Example 1. data-orders.json

```json
[{
  "@type": "example.app.pos.model.PurchaseOrder",
  "id": 1,
  "lineItems": [
    {
      "@type": "example.app.pos.model.LineItem",
      "product": {
        "@type": "example.app.pos.model.Product",
        "name": "Apple iPad Pro",
        "price": 1499.00,
        "category": "SHOPPING"
      },
      "quantity": 1
    },
    {
      "@type": "example.app.pos.model.LineItem",
      "product": {
        "@type": "example.app.pos.model.Product",
        "name": "Apple iPhone 11 Pro Max",
        "price": 1249.00,
        "category": "SHOPPING"
      },
      "quantity": 2
    }
  ]
}, {
  "@type": "example.app.pos.model.PurchaseOrder",
  "id": 2,
  "lineItems": [
    {
      "@type": "example.app.pos.model.LineItem",
      "product": {
        "@type": "example.app.pos.model.Product",
        "name": "Starbucks Venti Caramel Macchiato",
        "price": 5.49,
        "category": "SHOPPING"
      },
      "quantity": 1
    }
  ]
}]
```
The application entity classes that match the JSON data from the JSON file might look something like the following listing:

Example 2. Point-of-Sale (POS) Application Domain Model Classes

```java
@Region("Orders")
class PurchaseOrder {

    @Id
    Long id;

    List<LineItem> lineItems;

}

class LineItem {

    Product product;

    Integer quantity;

}

@Region("Products")
class Product {

    String name;

    Category category;

    BigDecimal price;

}
```
As the preceding listings show, the object model and corresponding JSON can be arbitrarily complex with a hierarchy of objects that have complex types.
We want to draw your attention to a few other details contained in the object model and JSON.
The @type metadata field

First, we declared a @type JSON metadata field. This field does not map to any specific field or property of the application domain model class (such as PurchaseOrder). Rather, it tells the framework and VMware GemFire’s JSON/PDX converter the type of object to which the JSON data would map if you were to request an object (by calling PdxInstance.getObject()).
Consider the following example:
Example 3. Deserializing PDX as an Object

```java
@Repository
class OrdersRepository {

    @Resource(name = "Orders")
    Region<Long, PurchaseOrder> orders;

    PurchaseOrder findBy(Long id) {

        Object value = this.orders.get(id);

        return value instanceof PurchaseOrder ? (PurchaseOrder) value
            : value instanceof PdxInstance ? (PurchaseOrder) ((PdxInstance) value).getObject()
            : null;
    }
}
```
Basically, the @type JSON metadata field informs the PdxInstance.getObject() method about the type of Java object to which the JSON object maps. Otherwise, the PdxInstance.getObject() method would silently return a PdxInstance.

It is possible for VMware GemFire’s PDX serialization framework to return a PurchaseOrder from Region.get(key) as well, but it depends on the value of PDX’s read-serialized cache-level configuration setting, among other factors.
Note: When JSON is imported into a Region as PDX, PdxInstance.getClassName() does not refer to a valid Java class. It is JSONFormatter.JSON_CLASSNAME. As a result, Region data access operations, such as Region.get(key), return a PdxInstance and not a Java object.

Tip: You may need to proxy Region read data access operations (such as Region.get(key)) by setting the Spring Boot for VMware GemFire property spring.boot.data.gemfire.cache.region.advice.enabled to true. When this property is set, Region instances are proxied to wrap a PdxInstance in a PdxInstanceWrapper to appropriately handle the PdxInstance.getObject() call in your application code.
The id field and the @identifier metadata field

Top-level objects in your JSON must have an identifier, such as an id field. This identifier is used as the identity and key of the object (or PdxInstance) when it is stored in the Region (for example, Region.put(key, object)).

You may have noticed that the JSON for the “Orders” Region shown earlier declared an id field as the identifier:
Example 4. PurchaseOrder identifier (“id”)

```json
[{
  "@type": "example.app.pos.model.PurchaseOrder",
  "id": 1,
  ...
```
This follows the same convention used in Spring Data. Typically, the Spring Data mapping infrastructure looks for a POJO field or property annotated with @Id. If no field or property is annotated with @Id, the framework falls back to searching for a field or property named id.

In Spring Data for VMware GemFire, this @Id-annotated or id-named field or property is used as the identifier and as the key for the object when storing it into a Region.
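The lookup order can be sketched with plain Java reflection. The @Id annotation below is a local stand-in for Spring Data’s annotation, and the resolution logic is an illustration of the convention, not Spring Data’s actual implementation:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class IdentifierResolution {

    // Stand-in for Spring Data's org.springframework.data.annotation.Id.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Id { }

    public static class Book {
        @Id String isbn;   // explicit identifier via @Id
        String title;
    }

    public static class Order {
        Long id;           // no @Id; falls back to the field named "id"
    }

    // Sketch of the convention: prefer an @Id-annotated field, then fall
    // back to a field named "id".
    public static String identifierFieldOf(Class<?> type) {
        for (Field field : type.getDeclaredFields()) {
            if (field.isAnnotationPresent(Id.class)) {
                return field.getName();
            }
        }
        for (Field field : type.getDeclaredFields()) {
            if ("id".equals(field.getName())) {
                return field.getName();
            }
        }
        throw new IllegalStateException("No identifier for " + type.getName());
    }
}
```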
However, what happens when an object or entity does not have a surrogate ID defined? Perhaps the application domain model class is appropriately using natural identifiers, which is common in practice.
Consider a Book class defined as follows:
Example 5. Book class

```java
@Region("Books")
class Book {

    Author author;

    @Id
    ISBN isbn;

    LocalDate publishedDate;

    String title;

}
```
As declared in the Book class, the identifier for Book is its ISBN, since the isbn field was annotated with Spring Data’s @Id mapping annotation. However, we cannot know this by searching for an @Id annotation in JSON.

You might be tempted to argue that if the @type metadata field is set, we would know the class type and could load the class definition to learn about the identifier. That is all fine until the class is not actually on the application classpath in the first place. This is one of the reasons why Spring Boot for VMware GemFire’s JSON support serializes JSON to VMware GemFire’s PDX format. There might not be a class definition, which would lead to a NoClassDefFoundError or ClassNotFoundException.
So, what then?
In this case, Spring Boot for VMware GemFire lets you declare the @identifier JSON metadata field to inform the framework what to use as the identifier for the object.
Consider the following example:
Example 6. Using “@identifier”

```json
{
  "@type": "example.app.books.model.Book",
  "@identifier": "isbn",
  "author": {
    "id": 1,
    "name": "Josh Long"
  },
  "isbn": "978-1-449-374640-8",
  "publishedDate": "2017-08-01",
  "title": "Cloud Native Java"
}
```
The @identifier JSON metadata field informs the framework that the isbn field is the identifier for a Book.
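As an illustration of how such metadata can drive key selection (a simplified sketch, not SBDG’s actual implementation), consider resolving the Region key from a JSON object that has been decoded into a Map:

```java
import java.util.Map;

public class JsonIdentifierResolution {

    // Sketch: given a decoded JSON object, use the "@identifier" metadata
    // field to locate the key field, falling back to a top-level "id" field.
    public static Object resolveKey(Map<String, Object> json) {
        Object identifierField = json.get("@identifier");
        if (identifierField != null) {
            return json.get(identifierField.toString());
        }
        return json.get("id");
    }
}
```

For the Book JSON above, resolveKey(..) would return the value of the isbn field; for the PurchaseOrder JSON, it would fall back to the id field.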
While the Spring team recommends that you use this feature only when developing and testing your Spring Boot applications with VMware GemFire, you may still occasionally use this feature in production.
You might use this feature in production to preload a (REPLICATE) Region with reference data. Reference data is largely static, infrequently changing, and non-transactional. Preloading reference data is particularly useful when you want to warm the cache.
When you use this feature for development and testing purposes, you can put your Region-specific JSON files in src/test/resources. This ensures that the files are not included in your application artifact (such as a JAR or WAR) when it is built and deployed to production.

However, if you must use this feature to preload data in your production environment, you can still conditionally load data from JSON. To do so, set the spring.boot.data.gemfire.cache.data.import.active-profiles property to the Spring profiles that must be active for the import to take effect.
Consider the following example:
Example 7. Conditionally Importing JSON

```properties
# Spring Boot application.properties
spring.boot.data.gemfire.cache.data.import.active-profiles=DEV, QA
```

For the import to take effect in this example, you must set the spring.profiles.active property to one of the profiles listed in the import property (such as QA). Only one needs to match.
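The matching rule can be sketched as a simple intersection check. This is an illustration of the documented behavior (any one profile matching activates the import), not the framework’s actual code:

```java
import java.util.Arrays;
import java.util.Set;

public class ProfileMatch {

    // Sketch of the "only one needs to match" rule: the import is active
    // when any configured active-profile appears among the application's
    // active Spring profiles.
    public static boolean importActive(String activeProfilesProperty, Set<String> springActiveProfiles) {
        return Arrays.stream(activeProfilesProperty.split(","))
            .map(String::trim)
            .anyMatch(springActiveProfiles::contains);
    }
}
```

With the property set to "DEV, QA", activating the QA profile alone is enough to trigger the import.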
Note: There are many ways to conditionally build application artifacts. You might prefer to handle this concern in your Gradle or Maven build.
Certain data stored in your application’s Regions may be sensitive or confidential, and keeping that data secure is of the utmost concern and priority. Therefore, exporting data is disabled by default.
However, if you use this feature for development and testing purposes, enabling the export capability may be useful to move data from one environment to another. For example, if your QA team finds a bug in the application that uses a particular data set, they can export the data and pass it back to the development team to import in their local development environment to help debug the issue.
To enable export, set the spring.boot.data.gemfire.cache.data.export.enabled property to true:

Example 8. Enable Export

```properties
# Spring Boot application.properties
spring.boot.data.gemfire.cache.data.export.enabled=true
```
Spring Boot for VMware GemFire is careful to export data to JSON in a format that VMware GemFire expects on import, including details such as the @type metadata fields.
Warning: The @identifier metadata field is not generated automatically. While it is possible for POJOs stored in a Region to include an @identifier metadata field when exported to JSON, it is not possible when the Region value is a PdxInstance that did not originate from JSON. In this case, you must manually ensure that the PdxInstance includes an @identifier metadata field before it is exported to JSON, if necessary (for example, Book.isbn). This is only necessary if your entity classes do not declare an explicit identifier field, such as with the @Id mapping annotation, or do not have an id field. This scenario can also occur when interoperating with native clients that model the application domain objects differently and then serialize the objects by using PDX, storing them in Regions on the server that are later consumed by your Java-based, Spring Boot application.
Warning: When using export, you may need to set the -Dgemfire.disableShutdownHook JVM System property to true before your Spring Boot application starts up. Unfortunately, this Java Runtime shutdown hook is registered and enabled in VMware GemFire by default, which results in the cache and the Regions being closed before the Spring Boot for VMware GemFire export functionality can export the data, thereby resulting in a CacheClosedException. Spring Boot for VMware GemFire makes a best effort to disable the VMware GemFire JVM shutdown hook when export is enabled, but it is at the mercy of the JVM ClassLoader, since VMware GemFire's JVM shutdown hook registration is declared in a static initializer.
The API in Spring Boot for VMware GemFire for import and export functionality is separated into the following concerns:
Data Format
Resource Resolving
Resource Reading
Resource Writing
By breaking each of these functions apart into separate concerns, a developer can customize each aspect of the import and export functions.
For example, you could import XML from the filesystem and then export JSON to a REST-based Web Service. By default, Spring Boot for VMware GemFire imports JSON from the classpath and exports JSON to the filesystem.
However, not all environments expose a filesystem, such as cloud environments like the Tanzu Platform for Cloud Foundry. Therefore, giving users control over each aspect of the import and export processes is essential for performing the functions in any environment.
The primary interface to import data into a Region is CacheDataImporter.

CacheDataImporter is a @FunctionalInterface that extends Spring’s BeanPostProcessor interface to trigger the import of data after the Region has been initialized.
The interface is defined as follows:
Example 9. CacheDataImporter

```java
interface CacheDataImporter extends BeanPostProcessor {

    Region importInto(Region region);

}
```
You can code the importInto(:Region) method to handle any data format (JSON, XML, and others) you prefer. Register a bean that implements the CacheDataImporter interface in the Spring container, and the importer does its job.
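To make the contract concrete without a running cache, the sketch below uses local stand-ins: a Map-backed Region rather than an org.apache.geode.cache.Region, and a simplified CacheDataImporter without the BeanPostProcessor parent. It shows an importer that loads pre-decoded JSON entries keyed by their id field:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ImporterSketch {

    // Local stand-in for a Region; the real importer operates on an
    // org.apache.geode.cache.Region, not a Map.
    public static class Region extends LinkedHashMap<Object, Object> { }

    // Simplified shape of SBDG's CacheDataImporter (minus BeanPostProcessor).
    @FunctionalInterface
    public interface CacheDataImporter {
        Region importInto(Region region);
    }

    // An importer that puts pre-decoded entries into the Region keyed by
    // their "id" field, mirroring what happens after data-<region>.json
    // has been parsed.
    public static CacheDataImporter importerFor(List<Map<String, Object>> decodedJson) {
        return region -> {
            decodedJson.forEach(object -> region.put(object.get("id"), object));
            return region;
        };
    }
}
```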
On the flip side, the primary interface to export data from a Region is CacheDataExporter.

CacheDataExporter is a @FunctionalInterface that extends Spring’s DestructionAwareBeanPostProcessor interface to trigger the export of data before the Region is destroyed.
The interface is defined as follows:
Example 10. CacheDataExporter

```java
interface CacheDataExporter extends DestructionAwareBeanPostProcessor {

    Region exportFrom(Region region);

}
```
You can code the exportFrom(:Region) method to handle any data format (JSON, XML, and others) you prefer. Register a bean that implements the CacheDataExporter interface in the Spring container, and the exporter does its job.
For convenience, when you want to implement both import and export functionality, Spring Boot for VMware GemFire provides the CacheDataImporterExporter interface, which extends both CacheDataImporter and CacheDataExporter:

Example 11. CacheDataImporterExporter

```java
interface CacheDataImporterExporter extends CacheDataExporter, CacheDataImporter { }
```
For added support, Spring Boot for VMware GemFire also provides the AbstractCacheDataImporterExporter abstract base class to simplify the implementation of your importer/exporter.
Sometimes, it is necessary to precisely control when data is imported or exported.
This is especially true on import, since different Region instances may be collocated or tied together through a cache callback, such as a CacheListener. In these cases, the other Region may need to exist before the import into the dependent Region proceeds, particularly if the dependencies were loosely defined.
Controlling the import is also important when you use Spring Boot for VMware GemFire’s @EnableClusterAware annotation to push configuration metadata from the client to the cluster in order to define server-side Region instances that match the client-side Region instances, especially client Region instances targeted for import. The matching Region instances on the server side must exist before data is imported into client (PROXY) Region instances.
In all cases, Spring Boot for VMware GemFire provides the LifecycleAwareCacheDataImporterExporter class to wrap your CacheDataImporterExporter implementation. This class implements Spring’s SmartLifecycle interface.

By implementing the SmartLifecycle interface, you can control the phase of the Spring container in which the import occurs. Spring Boot for VMware GemFire also exposes two more properties to control the lifecycle:
Example 12. Lifecycle Management Properties

```properties
# Spring Boot application.properties
spring.boot.data.gemfire.cache.data.import.lifecycle=[EAGER|LAZY]
spring.boot.data.gemfire.cache.data.import.phase=1000000
```
EAGER acts immediately, after the Region is initialized (the default behavior). LAZY delays the import until the start() method is called, which is invoked according to the phase, thereby ordering the import relative to the other lifecycle-aware components registered in the Spring container.
The following example shows how to make your CacheDataImporterExporter lifecycle-aware:
```java
@Configuration
class MyApplicationConfiguration {

    @Bean
    CacheDataImporterExporter importerExporter() {
        return new LifecycleAwareCacheDataImporterExporter(new MyCacheDataImporterExporter());
    }
}
```
Resolving the resources used for import and export results in the creation of a Spring Resource handle.

Resource resolution is a vital step in qualifying a resource, especially if the resource requires special logic or permissions to access it. In this case, specific Resource handles can be returned and used by the reader and writer of the Resource as appropriate for the import or export operation.
Spring Boot for VMware GemFire encapsulates the algorithm for resolving Resources in the ResourceResolver (Strategy) interface:

Example 13. ResourceResolver

```java
@FunctionalInterface
interface ResourceResolver {

    Optional<Resource> resolve(String location);

    default Resource required(String location) {
        // ...
    }
}
```
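The resolve/required pairing can be illustrated with a simplified stand-in that resolves to a String location instead of a Spring Resource (an assumption for this sketch; the real required(..) body is elided above):

```java
import java.util.Optional;

public class ResolverSketch {

    // Simplified form of the resolve/required contract: resolve() may return
    // empty, while required() insists on a resolution or fails fast.
    @FunctionalInterface
    public interface ResourceResolver {

        Optional<String> resolve(String location);

        default String required(String location) {
            return resolve(location).orElseThrow(() ->
                new IllegalStateException("Unresolvable resource: " + location));
        }
    }
}
```

A resolver that only accepts classpath locations would return Optional.empty() for a file: location, causing required(..) to throw.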
Additionally, Spring Boot for VMware GemFire provides the ImportResourceResolver and ExportResourceResolver marker interfaces, along with the AbstractImportResourceResolver and AbstractExportResourceResolver abstract base classes, for implementing the resource resolution logic used by the import and export operations.
If you wish to customize the resolution of the Resources used for import or export, your CacheDataImporterExporter implementation can extend the ResourceCapableCacheDataImporterExporter abstract base class, which provides the aforementioned interfaces and base classes.
As stated earlier, Spring Boot for VMware GemFire resolves resources on import from the classpath and resources on export to the filesystem.
You can customize this behavior by implementing the ImportResourceResolver or ExportResourceResolver interface (or both) and declaring instances as beans in the Spring context:
Example 14. Import and Export ResourceResolver beans

```java
@Configuration
class MyApplicationConfiguration {

    @Bean
    ImportResourceResolver importResourceResolver() {
        return new MyImportResourceResolver();
    }

    @Bean
    ExportResourceResolver exportResourceResolver() {
        return new MyExportResourceResolver();
    }
}
```
Tip: If you need to customize the resource resolution process for each location (or Region) on import or export, you can use the Composite software design pattern.
If you are content with the provided defaults but want to target specific locations on the classpath or filesystem used by the import or export, Spring Boot for VMware GemFire additionally provides the following properties:
Example 15. Import/Export Resource Location Properties

```properties
# Spring Boot application.properties
spring.boot.data.gemfire.cache.data.import.resource.location=...
spring.boot.data.gemfire.cache.data.export.resource.location=...
```
The properties accept any valid resource string, as specified in the Spring documentation (see Table 10. Resource strings).
This means that, even though import defaults from the classpath, you can change the location from classpath to filesystem, or even network (for example, https://) by changing the prefix (or protocol).
Import/export resource location properties can refer to other properties through property placeholders, but Spring Boot for VMware GemFire further lets you use SpEL inside the property values.
Consider the following example:
Example 16. Using SpEL

```properties
# Spring Boot application.properties
spring.boot.data.gemfire.cache.data.import.resource.location=\
  https://#{#env['user.name']}:#{someBean.lookupPassword(#env['user.name'])}@#{host}:#{port}/cache/#{#regionName}/data/import
```
In this case, the import resource location refers to a rather sophisticated resource string by using a complex SpEL expression.
Spring Boot for VMware GemFire populates the SpEL EvaluationContext with three sources of information:

Access to the Spring BeanFactory
Access to the Spring Environment
Access to the current Region

Simple Java System properties or environment variables can be accessed with the following expression:

#{propertyName}

You can access more complex property names (including properties that use dot notation, such as the user.home Java System property) directly from the Environment by using map-style syntax, as follows:

#{#env['property.name']}
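To show what such an expression ultimately resolves to, here is a toy resolver for the #{#env['property.name']} form backed by Java System properties. Real resolution is performed by SpEL against the full Spring Environment; this regex-based version is only an illustration:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EnvPlaceholder {

    // Matches expressions of the form #{#env['property.name']}.
    private static final Pattern ENV_EXPRESSION = Pattern.compile("#\\{#env\\['([^']+)'\\]\\}");

    // Toy resolver: substitutes each matched expression with the value of
    // the named Java System property.
    public static String resolve(String template) {
        Matcher matcher = ENV_EXPRESSION.matcher(template);
        StringBuffer result = new StringBuffer();
        while (matcher.find()) {
            matcher.appendReplacement(result,
                Matcher.quoteReplacement(String.valueOf(System.getProperty(matcher.group(1)))));
        }
        matcher.appendTail(result);
        return result.toString();
    }
}
```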
The #env variable is set in the SpEL EvaluationContext to the Spring Environment.

Because the SpEL EvaluationContext is evaluated with the Spring ApplicationContext as the root object, you also have access to the beans declared and registered in the Spring container and can invoke methods on them, as shown earlier with someBean.lookupPassword(..). someBean must be the name of the bean as declared and registered in the Spring container.
Note: Be careful when accessing beans declared in the Spring container with SpEL, particularly when using EAGER import, as doing so may force those beans to be eagerly (or even prematurely) initialized.
Spring Boot for VMware GemFire also sets the #regionName variable in the EvaluationContext to the name of the Region, as determined by Region.getName() (see the VMware GemFire Java API Reference), targeted for import or export.
This lets you not only change the location of the resource but also change the resource name (such as a filename).
Consider the following example:
Example 17. Using #regionName

```properties
# Spring Boot application.properties
spring.boot.data.gemfire.cache.data.export.resource.location=\
  file://#{#env['user.home']}/gemfire/cache/data/custom-filename-for-#{#regionName}.json
```

Note: By default, the exported file is stored in the working directory (System.getProperty("user.dir")) of the Spring Boot application process.
Tip: See the Spring Framework documentation for more information about SpEL.
The Spring Resource handle specifies the location of a resource, not how the resource is read or written. Even the Spring ResourceLoader, which is an interface for loading Resources, does not specifically read or write any content to the Resource.
Spring Boot for VMware GemFire separates these concerns into two interfaces: ResourceReader and ResourceWriter, respectively. The design follows the same pattern used by Java’s InputStream/OutputStream and Reader/Writer classes in the java.io package.
The ResourceReader interface is defined as:

Example 18. ResourceReader

```java
@FunctionalInterface
interface ResourceReader {

    byte[] read(Resource resource);

}
```
The ResourceWriter interface is defined as:

Example 19. ResourceWriter

```java
@FunctionalInterface
interface ResourceWriter {

    void write(Resource resource, byte[] data);

}
```
Both interfaces provide additional methods to compose readers and writers, much like Java’s Consumer and Function interfaces in the java.util.function package. If a particular reader or writer is used in a composition and is unable to handle the given Resource, it should throw an UnhandledResourceException to let the next reader or writer in the composition try to read from or write to the Resource.
A reader or writer is free to throw a ResourceReadException or ResourceWriteException to break the chain of reader and writer invocations in the composition.
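The composition semantics can be sketched with simplified local types: a reader over a String location instead of a Spring Resource, a stand-in UnhandledResourceException, and a thenReadFrom composition method with the same shape as the framework's. This is an illustration, not the actual SBDG types:

```java
import java.nio.charset.StandardCharsets;

public class ReaderComposition {

    // Stand-in mirroring SBDG's UnhandledResourceException.
    public static class UnhandledResourceException extends RuntimeException { }

    // Simplified ResourceReader over a String location instead of a Spring
    // Resource, with thenReadFrom composition semantics.
    @FunctionalInterface
    public interface ResourceReader {

        byte[] read(String location);

        default ResourceReader thenReadFrom(ResourceReader next) {
            return location -> {
                try {
                    return this.read(location);
                }
                catch (UnhandledResourceException ignore) {
                    // This reader could not handle the resource; try the next.
                    return next.read(location);
                }
            };
        }
    }

    // A reader that only handles classpath locations.
    public static final ResourceReader CLASSPATH_ONLY = location -> {
        if (!location.startsWith("classpath:")) {
            throw new UnhandledResourceException();
        }
        return "from-classpath".getBytes(StandardCharsets.UTF_8);
    };

    // A catch-all fallback reader.
    public static final ResourceReader FALLBACK = location ->
        "from-fallback".getBytes(StandardCharsets.UTF_8);
}
```

Composing CLASSPATH_ONLY.thenReadFrom(FALLBACK) lets the fallback handle any location the first reader rejects.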
To override the default import/export reader and writer used by Spring Boot for VMware GemFire, you can implement the ResourceReader or ResourceWriter interface, as appropriate, and declare instances of these classes as beans in the Spring container:

Example 20. Custom ResourceReader and ResourceWriter beans
```java
@Configuration
class MyApplicationConfiguration {

    @Bean
    ResourceReader myResourceReader() {
        return new MyResourceReader()
            .thenReadFrom(new MyOtherResourceReader());
    }

    @Bean
    ResourceWriter myResourceWriter() {
        return new MyResourceWriter();
    }
}
```