The PXF object store connectors support reading and writing Avro-format data. This section describes how to use PXF to read and write Avro data in an object store, including how to create, query, and insert into an external table that references an Avro file in the store.

Note: Accessing Avro-format data from an object store is very similar to accessing Avro-format data in HDFS. This topic identifies object store-specific information required to read Avro data, and links to the PXF HDFS Avro documentation where appropriate for common information.

Prerequisites

Ensure that you have met the PXF Object Store Prerequisites before you attempt to read data from an object store.

Working with Avro Data

Refer to Working with Avro Data in the PXF HDFS Avro documentation for a description of the Apache Avro data serialization framework.

When you read or write Avro data in an object store:

  • If the Avro schema file resides in the object store:

    • You must include the bucket in the schema file path. This need not be the same bucket that stores the Avro data file (see the sketch following this list).
    • The secrets that you specify in the SERVER configuration must provide access to both the data file and schema file buckets.
  • The schema file path must not include spaces.
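
For illustration, here is a minimal sketch of a readable external table whose schema file resides in a different bucket than its data file. The bucket, path, and table names are hypothetical; the SCHEMA custom option, including its exact path syntax, is described in the PXF HDFS Avro documentation:

    CREATE EXTERNAL TABLE pxf_avro_two_buckets(id bigint, username text)
      -- SCHEMA includes its bucket, which differs from the data file bucket (hypothetical names)
      LOCATION ('pxf://DATA_BUCKET/pxf_examples/pxf_avro.avro?PROFILE=s3:avro&SERVER=s3srvcfg&SCHEMA=SCHEMA_BUCKET/schemas/pxf_avro.avsc')
    FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');

The secrets configured for the s3srvcfg server must grant access to both DATA_BUCKET and SCHEMA_BUCKET.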

Creating the External Table

Use the <objstore>:avro profiles to read and write Avro-format files in an object store. PXF supports the following <objstore> profile prefixes:

Object Store           Profile Prefix
Azure Blob Storage     wasbs
Azure Data Lake        adl
Google Cloud Storage   gs
MinIO                  s3
S3                     s3

The following syntax creates a Greenplum Database external table that references an Avro-format file:

CREATE [WRITABLE] EXTERNAL TABLE <table_name>
    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION ('pxf://<path-to-file>?PROFILE=<objstore>:avro&SERVER=<server_name>[&<custom-option>=<value>[...]]')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import'|'pxfwritable_export');

The specific keywords and values used in the Greenplum Database CREATE EXTERNAL TABLE command are described in the table below.

Keyword                    Value
<path-to-file>             The path to the directory or file in the object store. When the <server_name> configuration includes a pxf.fs.basePath property setting, PXF considers <path-to-file> to be relative to the base path specified. Otherwise, PXF considers it to be an absolute path. <path-to-file> must not specify a relative path nor include the dollar sign ($) character.
PROFILE=<objstore>:avro    The PROFILE keyword must identify the specific object store. For example, s3:avro.
SERVER=<server_name>       The named server configuration that PXF uses to access the data.
<custom-option>=<value>    Avro-specific custom options are described in the PXF HDFS Avro documentation.
FORMAT 'CUSTOM'            Use FORMAT 'CUSTOM' with (FORMATTER='pxfwritable_export') (write) or (FORMATTER='pxfwritable_import') (read).

If you are accessing an S3 object store, you can provide S3 credentials via custom options in the CREATE EXTERNAL TABLE command as described in Overriding the S3 Server Configuration with DDL.
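
For example, a minimal sketch that supplies the credentials inline via the accesskey and secretkey custom options described in that topic (the table name, bucket, and credential values here are placeholders):

    CREATE EXTERNAL TABLE pxf_s3_avro_ddl(id bigint, username text)
      -- YOURKEY and YOURSECRET are placeholders for real S3 credentials
      LOCATION ('pxf://BUCKET/pxf_examples/pxf_avro.avro?PROFILE=s3:avro&SERVER=s3srvcfg&accesskey=YOURKEY&secretkey=YOURSECRET')
    FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');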

Example

Refer to Example: Reading Avro Data in the PXF HDFS Avro documentation for an Avro example. Modifications that you must make to run the example with an object store include:

  • Copying the file to the object store instead of HDFS. For example, to copy the file to S3:

    $ aws s3 cp /tmp/pxf_avro.avro s3://BUCKET/pxf_examples/
    
  • Using the CREATE EXTERNAL TABLE syntax and LOCATION keywords and settings described above. For example, if your server name is s3srvcfg:

    CREATE EXTERNAL TABLE pxf_s3_avro(id bigint, username text, followers text[], fmap text, relationship text, address text)
      LOCATION ('pxf://BUCKET/pxf_examples/pxf_avro.avro?PROFILE=s3:avro&SERVER=s3srvcfg&COLLECTION_DELIM=,&MAPKEY_DELIM=:&RECORDKEY_DELIM=:')
    FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
    

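After you create the external table, you query it as you would any other Greenplum Database table; for example:

    SELECT * FROM pxf_s3_avro;
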
You make similar modifications to follow the steps in Example: Writing Avro Data.
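
For instance, a minimal sketch of a writable counterpart to the table above, following the syntax described earlier (the table name, column list, and write directory are hypothetical):

    CREATE WRITABLE EXTERNAL TABLE pxf_s3_avrowrite(id bigint, username text)
      LOCATION ('pxf://BUCKET/pxf_examples/avro_write/?PROFILE=s3:avro&SERVER=s3srvcfg')
    FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');

    INSERT INTO pxf_s3_avrowrite VALUES (1, 'john');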
