The Greenplum Platform Extension Framework (PXF) provides connectors that enable you to access data stored in sources external to your Greenplum Database deployment. These connectors map an external data source to a Greenplum Database external table definition. When you create the Greenplum Database external table, you identify the external data store and the format of the data via a server name and a profile name that you provide in the command.
You can query the external table via Greenplum Database, leaving the referenced data in place. Or, you can use the external table to load the data into Greenplum Database for higher performance.
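For example, assuming an external table named `pxf_ext_sales` has already been defined (a hypothetical name), either access pattern is ordinary SQL:

```sql
-- Query the external data in place; PXF fetches the rows at query time.
SELECT * FROM pxf_ext_sales WHERE month = 'Jan';

-- Or load the data into a native Greenplum table for faster repeated access.
CREATE TABLE sales_local AS SELECT * FROM pxf_ext_sales;
```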
PXF is compatible with these operating system platforms and Greenplum Database versions:
| OS Version | Greenplum Version |
|---|---|
| RHEL 7.x, CentOS 7.x | 5.21.2+, 6.x |
| OEL 7.x, Ubuntu 18.04 LTS | 6.x |
| RHEL 8.x | 6.20+, 7.x |
| RHEL 9.x | 6.26+ |
PXF supports Java 8 and Java 11.
PXF bundles all of the Hadoop JAR files on which it depends, and supports the following Hadoop component versions:
| PXF Version | Hadoop Version | Hive Server Version | HBase Server Version |
|---|---|---|---|
| 6.x | 2.x, 3.1+ | 1.x, 2.x, 3.1+ | 1.3.2 |
| 5.9+ | 2.x, 3.1+ | 1.x, 2.x, 3.1+ | 1.3.2 |
| 5.8 | 2.x | 1.x | 1.3.2 |
Your Greenplum Database deployment consists of a coordinator host, a standby coordinator host, and multiple segment hosts. A single PXF Service process runs on each Greenplum Database host. The PXF Service process running on a segment host allocates a worker thread for each segment instance on the host that participates in a query against an external table. The PXF Services on multiple segment hosts communicate with the external data store in parallel. The PXF Service processes running on the coordinator and standby coordinator hosts are not currently involved in data transfer; these processes may be used for other purposes in the future.
Connector is a generic term that encapsulates the implementation details required to read from or write to an external data store. PXF provides built-in connectors to Hadoop (HDFS, Hive, HBase), object stores (Azure, Google Cloud Storage, MinIO, AWS S3, and Dell ECS), and SQL databases (via JDBC).
A PXF Server is a named configuration for a connector. A server definition provides the information required for PXF to access an external data source. This configuration information is data-store-specific, and may include server location, access credentials, and other relevant properties.
The Greenplum Database administrator configures at least one server definition for each external data store that Greenplum Database users are permitted to access, and publishes the available server names as appropriate.
You specify a `SERVER=<server_name>` setting when you create the external table to identify the server configuration from which PXF obtains the credentials and other properties required to access the external data store. The default PXF server is named `default` (reserved); when configured, it provides the location and access information for the external data source in the absence of a `SERVER=<server_name>` setting.
Finally, a PXF profile is a named mapping identifying a specific data format or protocol supported by a specific external data store. PXF supports text, Avro, JSON, RCFile, Parquet, SequenceFile, and ORC data formats, and the JDBC protocol, and provides several built-in profiles as discussed in the following section.
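For example, the `LOCATION` URI fragment below pairs the built-in `s3:parquet` object store profile with a server configuration named `s3srv` (the bucket path and server name are hypothetical):

```
pxf://my-bucket/sales/?PROFILE=s3:parquet&SERVER=s3srv
```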
PXF implements a Greenplum Database protocol named `pxf` that you can use to create an external table that references data in an external data store. The syntax for a `CREATE EXTERNAL TABLE` command that specifies the `pxf` protocol follows:
```
CREATE [WRITABLE] EXTERNAL TABLE <table_name>
    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION('pxf://<path-to-data>?PROFILE=<profile_name>[&SERVER=<server_name>][&<custom-option>=<value>[...]]')
FORMAT '[TEXT|CSV|CUSTOM]' (<formatting-properties>);
```
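For example, the following statement creates a readable external table over a comma-delimited text file in HDFS using the built-in `hdfs:text` profile and the `default` server (the HDFS path and column definitions are illustrative):

```sql
-- Read a comma-delimited file stored at a hypothetical HDFS path.
CREATE EXTERNAL TABLE pxf_hdfs_sales (location text, month text, num_orders int, total_sales numeric)
LOCATION ('pxf://data/pxf_examples/sales.csv?PROFILE=hdfs:text')
FORMAT 'TEXT' (delimiter=E',');
```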
The `LOCATION` clause in a `CREATE EXTERNAL TABLE` statement specifying the `pxf` protocol is a URI. This URI identifies the path to, or other information describing, the location of the external data. For example, if the external data store is HDFS, the <path-to-data> identifies the absolute path to a specific HDFS file. If the external data store is Hive, <path-to-data> identifies a schema-qualified Hive table name.
You use the query portion of the URI, introduced by the question mark (?), to identify the PXF server and profile names.
PXF may require additional information to read or write certain data formats. You provide profile-specific information using the optional <custom-option>=<value> component of the `LOCATION` string, and formatting information via the <formatting-properties> component of the string. The custom options and formatting properties supported by a specific profile vary; they are identified in the usage documentation for the profile.
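As a sketch, the following writable external table passes one such custom option, `COMPRESSION_CODEC`, to the `hdfs:text` profile, and a `delimiter` formatting property to the formatter (the HDFS directory is hypothetical; consult the documentation for a profile to confirm the options it actually supports):

```sql
-- Write pipe-delimited, gzip-compressed text to a hypothetical HDFS directory.
CREATE WRITABLE EXTERNAL TABLE pxf_hdfs_sales_out (location text, month text, total_sales numeric)
LOCATION ('pxf://data/pxf_examples/sales_out?PROFILE=hdfs:text&COMPRESSION_CODEC=org.apache.hadoop.io.compress.GzipCodec')
FORMAT 'TEXT' (delimiter=E'|');
```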
| Keyword | Value and Description |
|---|---|
| <path-to-data> | A directory, file name, wildcard pattern, table name, and so forth. The syntax of <path-to-data> depends on the external data source. |
| PROFILE=<profile_name> | The profile that PXF uses to access the data. PXF supports profiles that access text, Avro, JSON, RCFile, Parquet, SequenceFile, and ORC data in Hadoop services, object stores, network file systems, and other SQL databases. |
| SERVER=<server_name> | The named server configuration that PXF uses to access the data. PXF uses the `default` server if not specified. |
| <custom-option>=<value> | Additional options and their values supported by the profile or the server. |
| FORMAT <value> | PXF profiles support the `TEXT`, `CSV`, and `CUSTOM` formats. |
| <formatting-properties> | Formatting properties supported by the profile; for example, the `FORMATTER` or `delimiter`. |
Note: When you create a PXF external table, you cannot use the `HEADER` option in your formatter specification.
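Profiles that return pre-processed binary data use the `CUSTOM` format with a `FORMATTER` formatting property. For example, a table reading Avro data with the built-in `hdfs:avro` profile might look like this (the file path and columns are illustrative):

```sql
-- Avro data is deserialized by PXF, so the table uses the CUSTOM format
-- with the pxfwritable_import formatter for reads.
CREATE EXTERNAL TABLE pxf_hdfs_avro (id bigint, username text)
LOCATION ('pxf://data/pxf_examples/users.avro?PROFILE=hdfs:avro')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```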
Certain PXF connectors and profiles support filter pushdown and column projection. Refer to the following topics for detailed information about this support: