Use the PXF HDFS connector to read and write Parquet-format data. This section describes how to read and write HDFS files that are stored in Parquet format, including how to create, query, and insert into external tables that reference files in the HDFS data store.
PXF supports reading or writing Parquet files compressed with these codecs: snappy, gzip, and lzo.
PXF supports reading and writing primitive Parquet data types, as well as reading and writing one-dimensional arrays of certain Parquet primitive types.
Ensure that you have met the PXF Hadoop Prerequisites before you attempt to read data from or write data to HDFS.
To read and write Parquet primitive data types in Greenplum Database, map Parquet data values to Greenplum Database columns of the same type.
Parquet supports a small set of primitive data types, and uses metadata annotations to extend the data types that it supports. These annotations specify how to interpret the primitive type. For example, Parquet stores both INTEGER and DATE types as the INT32 primitive type. An annotation identifies the original type as a DATE.
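For example, a Parquet schema (shown here in Parquet's textual message format as a sketch; the message and field names are purely illustrative) might annotate INT32 and BINARY fields like this:

```
message example_schema {
  required int32  id;
  optional int32  order_date    (DATE);
  optional binary customer_name (UTF8);
}
```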
PXF uses the following data type mapping when reading Parquet data:
Parquet Physical Type | Parquet Logical Type | PXF/Greenplum Data Type |
---|---|---|
boolean | – | Boolean |
binary (byte_array) | – | Bytea |
binary (byte_array) | Date | Date |
binary (byte_array) | Timestamp_millis | Timestamp |
binary (byte_array) | UTF8 | Text |
double | – | Float8 |
fixed_len_byte_array | Decimal | Numeric |
float | – | Real |
int32 | int_8 | Smallint |
int32 | Date | Date |
int32 | Decimal | Numeric |
int32 | – | Integer |
int64 | Decimal | Numeric |
int64 | – | Bigint |
int96 | – | Timestamp |
Note: PXF supports filter predicate pushdown on all Parquet data types listed above, except the fixed_len_byte_array and int96 types.
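To illustrate these read mappings, a readable external table over a Parquet file that stores UTF8 strings, int96 timestamps, and Decimal values might be declared as follows; the table name, column names, and HDFS path are hypothetical:

```sql
CREATE EXTERNAL TABLE pxf_parquet_read_example (
    event_name  text,       -- binary (byte_array) with UTF8 annotation
    event_time  timestamp,  -- int96
    amount      numeric,    -- fixed_len_byte_array with Decimal annotation
    is_active   boolean     -- boolean
)
LOCATION ('pxf://data/pxf_examples/events?PROFILE=hdfs:parquet')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```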
PXF can read a Parquet LIST nested type when it represents a one-dimensional array of certain Parquet types. The supported mappings follow:
Parquet Data Type | PXF/Greenplum Data Type |
---|---|
list of <boolean> | Boolean[] |
list of <binary> | Bytea[] |
list of <binary> (Date) | Date[] |
list of <binary> (Timestamp_millis) | Timestamp[] |
list of <binary> (UTF8) | Text[] |
list of <double> | Float8[] |
list of <fixed_len_byte_array> (Decimal) | Numeric[] |
list of <float> | Real[] |
list of <int32> (int_8) | Smallint[] |
list of <int32> (Date) | Date[] |
list of <int32> (Decimal) | Numeric[] |
list of <int32> | Integer[] |
list of <int64> (Decimal) | Numeric[] |
list of <int64> | Bigint[] |
list of <int96> | Timestamp[] |
PXF uses the following data type mapping when writing Parquet data:
PXF/Greenplum Data Type | Parquet Physical Type | Parquet Logical Type |
---|---|---|
Bigint | int64 | – |
Boolean | boolean | – |
Bpchar¹ | binary (byte_array) | UTF8 |
Bytea | binary (byte_array) | – |
Date | int32 | Date |
Float8 | double | – |
Integer | int32 | – |
Numeric/Decimal | fixed_len_byte_array | Decimal |
Real | float | – |
SmallInt | int32 | int_8 |
Text | binary (byte_array) | UTF8 |
Timestamp² | int96 | – |
Timestamptz³ | int96 | – |
Varchar | binary (byte_array) | UTF8 |
OTHERS | UNSUPPORTED | – |
¹ Because Parquet does not save the field length, a Bpchar that PXF writes to Parquet will be a text of undefined length.
² PXF localizes a Timestamp to the current system time zone and converts it to universal time (UTC) before finally converting to int96.
³ PXF converts a Timestamptz to a UTC timestamp and then converts to int96. PXF loses the time zone information during this conversion.
PXF can write a one-dimensional LIST of certain Parquet data types. The supported mappings follow:
PXF/Greenplum Data Type | Parquet Data Type |
---|---|
Bigint[] | list of <int64> |
Boolean[] | list of <boolean> |
Bpchar[]¹ | list of <binary> (UTF8) |
Bytea[] | list of <binary> |
Date[] | list of <int32> (Date) |
Float8[] | list of <double> |
Integer[] | list of <int32> |
Numeric[]/Decimal[] | list of <fixed_len_byte_array> (Decimal) |
Real[] | list of <float> |
SmallInt[] | list of <int32> (int_8) |
Text[] | list of <binary> (UTF8) |
Timestamp[]² | list of <int96> |
Timestamptz[]³ | list of <int96> |
Varchar[] | list of <binary> (UTF8) |
OTHERS | UNSUPPORTED |
Parquet is a columnar storage format. A Parquet data file contains a compact binary representation of the data. The schema defines the structure of the data, and is composed of the same primitive and complex types identified in the data type mapping section above.
A Parquet data file includes an embedded schema. You can choose to provide the schema that PXF uses to write the data to HDFS via the SCHEMA custom option in the CREATE WRITABLE EXTERNAL TABLE LOCATION clause (described below):
External Table Type | SCHEMA Specified? | Behaviour |
---|---|---|
writable | yes | PXF uses the specified schema. |
writable | no | PXF creates the Parquet schema based on the external table definition. |
When you provide the Parquet schema file to PXF, you must specify the absolute path to the file, and the file must reside on the Hadoop file system.
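For example, a writable external table LOCATION clause that supplies a schema file staged in HDFS might look like the following sketch (the data path and schema path are hypothetical):

```sql
LOCATION ('pxf://data/pxf_examples/sales?PROFILE=hdfs:parquet&SCHEMA=/data/pxf_examples/schemas/sales.schema')
```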
The PXF HDFS connector hdfs:parquet profile supports reading and writing HDFS data in Parquet format. When you insert records into a writable external table, the block(s) of data that you insert are written to one or more files in the directory that you specified.
Use the following syntax to create a Greenplum Database external table that references an HDFS directory:
```sql
CREATE [WRITABLE] EXTERNAL TABLE <table_name>
    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION ('pxf://<path-to-hdfs-dir>
    ?PROFILE=hdfs:parquet[&SERVER=<server_name>][&<custom-option>=<value>[...]]')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import'|'pxfwritable_export')
[DISTRIBUTED BY (<column_name> [, ... ] ) | DISTRIBUTED RANDOMLY];
```
The specific keywords and values used in the Greenplum Database CREATE EXTERNAL TABLE command are described in the table below.
Keyword | Value |
---|---|
<path-to-hdfs-dir> | The path to the directory in the HDFS data store. When the <server_name> configuration includes a pxf.fs.basePath property setting, PXF considers <path-to-hdfs-dir> to be relative to the base path specified. Otherwise, PXF considers it to be an absolute path. <path-to-hdfs-dir> must not specify a relative path nor include the dollar sign ($) character. |
PROFILE | The PROFILE keyword must specify hdfs:parquet. |
SERVER=<server_name> | The named server configuration that PXF uses to access the data. PXF uses the default server if not specified. |
<custom-option> | <custom-option>s are described below. |
FORMAT 'CUSTOM' | Use FORMAT 'CUSTOM' with (FORMATTER='pxfwritable_export') (write) or (FORMATTER='pxfwritable_import') (read). |
DISTRIBUTED BY | If you want to load data from an existing Greenplum Database table into the writable external table, consider specifying the same distribution policy or <column_name> on both tables. Doing so will avoid extra motion of data between segments on the load operation. |
The PXF hdfs:parquet profile supports the following read option. You specify this option in the CREATE EXTERNAL TABLE LOCATION clause:
Read Option | Value Description |
---|---|
IGNORE_MISSING_PATH | A Boolean value that specifies the action to take when <path-to-hdfs-dir> is missing or invalid. The default value is false; PXF returns an error in this situation. When the value is true, PXF ignores missing path errors and returns an empty fragment. |
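For example, to have a query return zero rows rather than an error when the directory does not yet exist, you could add the option to the LOCATION clause as in this sketch (the table name and path are hypothetical):

```sql
CREATE EXTERNAL TABLE pxf_parquet_optional_path (id int, name text)
LOCATION ('pxf://data/pxf_examples/maybe_missing?PROFILE=hdfs:parquet&IGNORE_MISSING_PATH=true')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```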
The PXF hdfs:parquet profile supports the following encoding- and compression-related custom write options. You specify these options in the CREATE WRITABLE EXTERNAL TABLE LOCATION clause:
Write Option | Value Description |
---|---|
COMPRESSION_CODEC | The compression codec alias. Supported compression codecs for writing Parquet data include: snappy, gzip, lzo, and uncompressed. If this option is not provided, PXF compresses the data using snappy compression. |
ROWGROUP_SIZE | A Parquet file consists of one or more row groups, a logical partitioning of the data into rows. ROWGROUP_SIZE identifies the size (in bytes) of the row group. The default row group size is 8 * 1024 * 1024 bytes. |
PAGE_SIZE | A row group consists of column chunks that are divided up into pages. PAGE_SIZE is the size (in bytes) of such a page. The default page size is 1 * 1024 * 1024 bytes. |
ENABLE_DICTIONARY | A boolean value that specifies whether or not to enable dictionary encoding. The default value is true; dictionary encoding is enabled when PXF writes Parquet files. |
DICTIONARY_PAGE_SIZE | When dictionary encoding is enabled, there is a single dictionary page per column, per row group. DICTIONARY_PAGE_SIZE is similar to PAGE_SIZE, but for the dictionary. The default dictionary page size is 1 * 1024 * 1024 bytes. |
PARQUET_VERSION | The Parquet version; PXF supports the values v1 and v2 for this option. The default Parquet version is v1. |
SCHEMA | The absolute path to the Parquet schema file on the Greenplum host or on HDFS. |
Note: You must explicitly specify uncompressed if you do not want PXF to compress the data.
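The following sketch combines several of these write options in a single writable external table definition; the table name, path, and option values are illustrative only:

```sql
CREATE WRITABLE EXTERNAL TABLE pxf_parquet_write_opts (id int, name text, amount numeric)
LOCATION ('pxf://data/pxf_examples/opts_demo?PROFILE=hdfs:parquet&COMPRESSION_CODEC=gzip&ROWGROUP_SIZE=16777216&PARQUET_VERSION=v2')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');
```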
Parquet files that you write to HDFS with PXF have the following naming format: <file>.<compress_extension>.parquet, for example 1547061635-0000004417_0.gz.parquet.
This example utilizes the data schema introduced in Example: Reading Text Data on HDFS and adds a new column, item_quantity_per_order, an array with length equal to number_of_orders that identifies the number of items in each order.
Column Name | Data Type |
---|---|
location | text |
month | text |
number_of_orders | int |
item_quantity_per_order | int[] |
total_sales | float8 |
In this example, you create a Parquet-format writable external table that uses the default PXF server to reference Parquet-format data in HDFS, insert some data into the table, and then create a readable external table to read the data.
Use the hdfs:parquet profile to create a writable external table. For example:
```sql
postgres=# CREATE WRITABLE EXTERNAL TABLE pxf_tbl_parquet (location text, month text, number_of_orders int, item_quantity_per_order int[], total_sales double precision)
             LOCATION ('pxf://data/pxf_examples/pxf_parquet?PROFILE=hdfs:parquet')
           FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');
```
Write a few records to the pxf_parquet HDFS directory by inserting directly into the pxf_tbl_parquet table. For example:
```sql
postgres=# INSERT INTO pxf_tbl_parquet VALUES ( 'Frankfurt', 'Mar', 3, '{1,11,111}', 3956.98 );
postgres=# INSERT INTO pxf_tbl_parquet VALUES ( 'Cleveland', 'Oct', 2, '{3333,7777}', 96645.37 );
```
Recall that Greenplum Database does not support directly querying a writable external table. To read the data in pxf_parquet, create a readable external Greenplum Database table referencing this HDFS directory:
```sql
postgres=# CREATE EXTERNAL TABLE read_pxf_parquet(location text, month text, number_of_orders int, item_quantity_per_order int[], total_sales double precision)
             LOCATION ('pxf://data/pxf_examples/pxf_parquet?PROFILE=hdfs:parquet')
           FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```
Query the readable external table read_pxf_parquet:
```sql
postgres=# SELECT * FROM read_pxf_parquet ORDER BY total_sales;

 location  | month | number_of_orders | item_quantity_per_order | total_sales
-----------+-------+------------------+-------------------------+-------------
 Frankfurt | Mar   |                3 | {1,11,111}              |     3956.98
 Cleveland | Oct   |                2 | {3333,7777}             |    96645.37
(2 rows)
```
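As noted in the read-mapping section above, PXF supports filter predicate pushdown for these column types, so a query with a WHERE clause such as the following sketch can be evaluated by PXF against the Parquet data rather than by Greenplum Database after all rows are returned; the expected output reflects the two rows inserted above:

```sql
postgres=# SELECT location, number_of_orders FROM read_pxf_parquet WHERE number_of_orders > 2;
 location  | number_of_orders
-----------+------------------
 Frankfurt |                3
(1 row)
```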
PXF uses the HiveDecimal class to write numeric Parquet data. HiveDecimal limits both the precision and the scale of a numeric type to a maximum of 38.
When you define a NUMERIC column in an external table without specifying a precision or scale, PXF internally maps the column to a DECIMAL(38, 18).
PXF handles the following precision overflow conditions:

- You define a NUMERIC column in the external table, and the integer digit count of a value exceeds the maximum supported precision of 38. For example, 1234567890123456789012345678901234567890.12345, which has an integer digit count of 40.
- You define a NUMERIC(<precision>) column with a <precision> greater than 38. For example, NUMERIC(55).
- You define a NUMERIC column in the external table, and the integer digit count of a value is greater than 20 (38-18). For example, 123456789012345678901234567890.12345, which has an integer digit count of 30.

If you define a NUMERIC(<precision>, <scale>) column and the integer digit count of a value is greater than <precision> - <scale>, PXF returns an error. For example, you define a NUMERIC(20,4) column and the value is 12345678901234567.12, which has an integer digit count of 17, greater than 20-4=16.
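As an illustration of the third condition above, the sketch below declares an unbounded NUMERIC column, which PXF maps to DECIMAL(38, 18); writing a value with more than 20 integer digits then triggers PXF's configured overflow action (the table name, path, and values are hypothetical):

```sql
CREATE WRITABLE EXTERNAL TABLE pxf_parquet_numeric_demo (id int, amount numeric)
LOCATION ('pxf://data/pxf_examples/numeric_demo?PROFILE=hdfs:parquet')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');

-- 30 integer digits > 20 (38 - 18): PXF applies the overflow action
-- described below (round by default) when writing this value
INSERT INTO pxf_parquet_numeric_demo VALUES (1, 123456789012345678901234567890.12345);
```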
PXF can take one of three actions when it detects an overflow while writing numeric data to a Parquet file: round the value (the default), return an error, or ignore the overflow. The pxf.parquet.write.decimal.overflow property in the pxf-site.xml server configuration governs PXF's action in this circumstance; valid values for this property follow:
Value | PXF Action |
---|---|
round | When PXF encounters an overflow, it attempts to round the value to meet both precision and scale requirements before writing. PXF reports an error if rounding fails. This may potentially leave an incomplete data set in the external system. round is the default. |
error | PXF reports an error when it encounters an overflow, and the transaction fails. |
ignore | PXF attempts to round the value to meet both precision and scale requirements; otherwise PXF writes a NULL value. (This was PXF's behavior prior to version 6.6.0.) |
PXF logs a warning when it detects an overflow and the pxf.parquet.write.decimal.overflow property is set to ignore.
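For example, to have PXF fail the transaction on overflow rather than round, you might add the following property to the server's pxf-site.xml; this is a minimal sketch, assuming the standard Hadoop-style property format that pxf-site.xml uses:

```xml
<property>
    <name>pxf.parquet.write.decimal.overflow</name>
    <value>error</value>
</property>
```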