Runs a load job as defined in a YAML formatted control file.
gpload -f <control_file> [-l <log_file>] [-h <hostname>] [-p <port>]
[-U <username>] [-d <database>] [-W] [--gpfdist_timeout <seconds>]
[--no_auto_trans] [--max_retries <retry_times>] [[-v | -V] [-q]] [-D]
gpload -?
gpload --version
The client machine where gpload is run must have the gpfdist parallel file distribution program installed and available in your $PATH. This program is located in $GPHOME/bin of your Greenplum Database server installation.
gpload is a data loading utility that acts as an interface to the Greenplum Database external table parallel loading feature. Using a load specification defined in a YAML formatted control file, gpload runs a load by invoking the Greenplum Database parallel file server (gpfdist), creating an external table definition based on the source data defined, and running an INSERT, UPDATE, or MERGE operation to load the source data into the target table in the database.
Note: gpfdist is compatible only with the Greenplum Database major version in which it is shipped. For example, a gpfdist utility that is installed with Greenplum Database 4.x cannot be used with Greenplum Database 5.x or 6.x.
Note: The Greenplum Database 5.22 and later gpload for Linux is compatible with Greenplum Database 6.x. The Greenplum Database 6.x gpload for both Linux and Windows is compatible with Greenplum 5.x.
Note: MERGE and UPDATE operations are not supported if the target table column name is a reserved keyword, has capital letters, or includes any character that requires quotes (" ") to identify the column.
The operation, including any SQL commands specified in the SQL collection of the YAML control file (see Control File Format), is performed as a single transaction to prevent inconsistent data when performing multiple, simultaneous load operations on a target table.
--gpfdist_timeout <seconds>
Sets the timeout for the gpfdist parallel file distribution program to send a response. Enter a value from 0 to 30 seconds (a value of 0 deactivates timeouts). Note that you might need to increase this value when operating on high-traffic networks.
-l <log_file>
Specifies where to write the log file. Defaults to ~/gpAdminLogs/gpload_YYYYMMDD. For more information about the log file, see Log File Format.
--no_auto_trans
Specify --no_auto_trans to deactivate processing the load operation as a single transaction if you are performing a single load operation on the target table.
By default, gpload processes each load operation as a single transaction to prevent inconsistent data when performing multiple, simultaneous operations on a target table.
Connection Options
-d <database>
The database to load into. If not specified, reads from the load control file, the environment variable $PGDATABASE, or defaults to the current system user name.
-h <hostname>
Specifies the host name of the machine on which the Greenplum Database master database server is running. If not specified, reads from the load control file, the environment variable $PGHOST, or defaults to localhost.
-p <port>
Specifies the TCP port on which the Greenplum Database master database server is listening for connections. If not specified, reads from the load control file, the environment variable $PGPORT, or defaults to 5432.
--max_retries <retry_times>
Specifies the maximum number of times gpload attempts to connect to Greenplum Database after a connection timeout. The default value is 0 (do not attempt to connect after a connection timeout). A negative integer, such as -1, specifies an unlimited number of attempts.
-U <username>
The database role name to connect as. If not specified, reads from the load control file, the environment variable $PGUSER, or defaults to the current system user name.
-W
Force a password prompt. If not specified, reads the password from the environment variable $PGPASSWORD, or from a password file specified by $PGPASSFILE or in ~/.pgpass. If these are not set, then gpload will prompt for a password even if -W is not supplied.
Control File Format
The gpload control file uses the YAML 1.1 document format and then implements its own schema for defining the various steps of a Greenplum Database load operation. The control file must be a valid YAML document.
The gpload program processes the control file document in order and uses indentation (spaces) to determine the document hierarchy and the relationships of the sections to one another. The use of white space is significant. White space should not be used simply for formatting purposes, and tabs should not be used at all.
The basic structure of a load control file is:
---
VERSION: 1.0.0.1
DATABASE: <db_name>
USER: <db_username>
HOST: <master_hostname>
PORT: <master_port>
GPLOAD:
   INPUT:
    - SOURCE:
         LOCAL_HOSTNAME:
           - <hostname_or_ip>
         PORT: <http_port>
       | PORT_RANGE: [<start_port_range>, <end_port_range>]
         FILE:
           - </path/to/input_file>
         SSL: true | false
         CERTIFICATES_PATH: </path/to/certificates>
    - FULLY_QUALIFIED_DOMAIN_NAME: true | false
    - COLUMNS:
         - <field_name>: <data_type>
    - TRANSFORM: '<transformation>'
    - TRANSFORM_CONFIG: '<configuration-file-path>'
    - MAX_LINE_LENGTH: <integer>
    - FORMAT: text | csv
    - DELIMITER: '<delimiter_character>'
    - ESCAPE: '<escape_character>' | 'OFF'
    - NEWLINE: 'LF' | 'CR' | 'CRLF'
    - NULL_AS: '<null_string>'
    - FILL_MISSING_FIELDS: true | false
    - FORCE_NOT_NULL: true | false
    - QUOTE: '<csv_quote_character>'
    - HEADER: true | false
    - ENCODING: <database_encoding>
    - ERROR_LIMIT: <integer>
    - LOG_ERRORS: true | false
   EXTERNAL:
    - SCHEMA: <schema> | '%'
   OUTPUT:
    - TABLE: <schema.table_name>
    - MODE: insert | update | merge
    - MATCH_COLUMNS:
         - <target_column_name>
    - UPDATE_COLUMNS:
         - <target_column_name>
    - UPDATE_CONDITION: '<boolean_condition>'
    - MAPPING:
         <target_column_name>: <source_column_name> | '<expression>'
   PRELOAD:
    - TRUNCATE: true | false
    - REUSE_TABLES: true | false
    - STAGING_TABLE: <external_table_name>
    - FAST_MATCH: true | false
   SQL:
    - BEFORE: "<sql_command>"
    - AFTER: "<sql_command>"
VERSION
Optional. The version of the gpload control file schema. The current version is 1.0.0.1.
DATABASE
Optional. Specifies which database in the Greenplum Database system to connect to. If not specified, defaults to $PGDATABASE if set or the current system user name. You can also specify the database on the command line using the -d option.
USER
Optional. Specifies which database role to use to connect. If not specified, defaults to the current user or $PGUSER if set. You can also specify the database role on the command line using the -U option.
If the user running gpload is not a Greenplum Database superuser, then the appropriate rights must be granted to the user for the load to be processed. See the Greenplum Database Reference Guide for more information.
HOST
Optional. Specifies the Greenplum Database master host name. If not specified, defaults to localhost or $PGHOST if set. You can also specify the master host name on the command line using the -h option.
PORT
Optional. Specifies the Greenplum Database master port. If not specified, defaults to 5432 or $PGPORT if set. You can also specify the master port on the command line using the -p option.
GPLOAD
Required. Begins the load specification section. A GPLOAD specification must have an INPUT and an OUTPUT section defined.
INPUT
Required. Defines the location and the format of the input data to be loaded. gpload will start one or more instances of the gpfdist file distribution program on the current host and create the required external table definition(s) in Greenplum Database that point to the source data. Note that the host from which you run gpload must be accessible over the network by all Greenplum Database hosts (master and segments).
SOURCE
Required. The SOURCE block of an INPUT specification defines the location of a source file. An INPUT section can have more than one SOURCE block defined. Each SOURCE block defined corresponds to one instance of the gpfdist file distribution program that will be started on the local machine. Each SOURCE block defined must have a FILE specification.
For more information about using the gpfdist parallel file server and single and multiple gpfdist instances, see Loading and Unloading Data.
LOCAL_HOSTNAME
Optional. Specifies the host name or IP address of the local machine on which gpload is running. If this machine is configured with multiple network interface cards (NICs), you can specify the host name or IP of each individual NIC to allow network traffic to use all NICs simultaneously. The default is to use the local machine's primary host name or IP only.
PORT
Optional. Specifies the specific port number that the gpfdist file distribution program should use. You can also supply a PORT_RANGE to select an available port from the specified range. If both PORT and PORT_RANGE are defined, then PORT takes precedence. If neither PORT nor PORT_RANGE is defined, the default is to select an available port between 8000 and 9000.
If multiple host names are declared in LOCAL_HOSTNAME, this port number is used for all hosts. This configuration is desired if you want to use all NICs to load the same file or set of files in a given directory location.
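For example, a SOURCE block that lists two NICs of a hypothetical ETL host and pins gpfdist to port 8081 on both might look like the following sketch (the host names and file path are illustrative only):
    - SOURCE:
         LOCAL_HOSTNAME:
           - etl-host-nic1
           - etl-host-nic2
         PORT: 8081
         FILE:
           - /data/staging/*.txt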
PORT_RANGE
Optional. Can be used instead of PORT to supply a range of port numbers from which gpload can choose an available port for this instance of the gpfdist file distribution program.
FILE
Required. Specifies the location of a file, named pipe, or directory location on the local file system that contains data to be loaded. You can declare more than one file so long as the data is of the same format in all files specified.
If the files are compressed using gzip or bzip2 (have a .gz or .bz2 file extension), the files will be uncompressed automatically (provided that gunzip or bunzip2 is in your path).
When specifying which source files to load, you can use the wildcard character (*) or other C-style pattern matching to denote multiple files. The files specified are assumed to be relative to the current directory from which gpload is run (or you can declare an absolute path).
SSL
Optional. Specifies usage of SSL encryption. If SSL is set to true, gpload starts the gpfdist server with the --ssl option and uses the gpfdists:// protocol.
CERTIFICATES_PATH
Required when SSL is true; cannot be specified when SSL is false or unspecified. The location specified in CERTIFICATES_PATH must contain the following files:
server.crt
server.key
root.crt
The root directory (/) cannot be specified as CERTIFICATES_PATH.
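The following SOURCE sketch serves the source files over the gpfdists:// protocol; the file pattern and certificate directory are hypothetical, and the directory must contain server.crt, server.key, and root.crt:
    - SOURCE:
         FILE:
           - /data/secure/orders_*.csv
         SSL: true
         CERTIFICATES_PATH: /home/gpadmin/certs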
FULLY_QUALIFIED_DOMAIN_NAME
Optional. Specifies whether gpload resolves hostnames to the fully qualified domain name (FQDN) or the local hostname. If the value is set to true, names are resolved to the FQDN. If the value is set to false, resolution is to the local hostname. The default is false.
A fully qualified domain name might be required in some situations. For example, if the Greenplum Database system is in a different domain than an ETL application that is being accessed by gpload.
COLUMNS
Optional. Specifies the schema of the source data file(s) in the format of <field_name>: <data_type>. The DELIMITER character in the source file is what separates two data value fields (columns). A row is determined by a line feed character (0x0a).
If the input COLUMNS are not specified, then the schema of the output TABLE is implied, meaning that the source data must have the same column order, number of columns, and data format as the target table.
The default source-to-target mapping is based on a match of column names as defined in this section and the column names in the target TABLE. This default mapping can be overridden using the MAPPING section.
TRANSFORM
Optional. Specifies the name of the input transformation passed to gpload. For information about XML transformations, see "Loading and Unloading Data" in the Greenplum Database Administrator Guide.
TRANSFORM_CONFIG
Required when TRANSFORM is specified. Specifies the location of the transformation configuration file that is specified in the TRANSFORM parameter, above.
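As a sketch, assuming a transformation named prices_input defined in a configuration file prices.yaml (both names are hypothetical), the INPUT section would reference them as:
    - TRANSFORM: 'prices_input'
    - TRANSFORM_CONFIG: 'prices.yaml'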
MAX_LINE_LENGTH
Optional. An integer that specifies the maximum length of a line in the XML transformation data passed to gpload. Use it only when transformation input lines exceed the default limit (a line too long error message occurs). Should not be used otherwise as it increases resource allocation. Valid range is 32K to 256MB. The upper limit is 1MB on Windows systems.
FORMAT
Optional. Specifies the format of the source data file(s) - either plain text (TEXT) or comma separated values (CSV) format. Defaults to TEXT if not specified. For more information about the format of the source data, see Loading and Unloading Data.
"\x1B"
or
"\u001B"
. The escape string syntax,
E'<character-code>'
, is also supported for non-printable characters. The ASCII or unicode character must be enclosed in single quotes. For example:
E'\x1B'
or
E'\u001B'
.
ESCAPE
Optional. Specifies the single character that is used for C escape sequences (such as \n, \t, \100, and so on) and for escaping data characters that might otherwise be taken as row or column delimiters. Make sure to choose an escape character that is not used anywhere in your actual column data. The default escape character is a \ (backslash) for text-formatted files and a " (double quote) for csv-formatted files, however it is possible to specify another character to represent an escape. It is also possible to deactivate escaping in text-formatted files by specifying the value 'OFF' as the escape value. This is very useful for data such as text-formatted web log data that has many embedded backslashes that are not intended to be escapes.
NEWLINE
Optional. Specifies the type of newline used in your data files: LF (line feed, 0x0A), CR (carriage return, 0x0D), or CRLF (carriage return plus line feed, 0x0D 0x0A). If not specified, Greenplum Database detects the newline type by examining the first row of data that it receives, and uses the first newline type that it encounters.
NULL_AS
Optional. Specifies the string that represents a null value. The default is \N (backslash-N) in TEXT mode, and an empty value with no quotations in CSV mode. You might prefer an empty string even in TEXT mode for cases where you do not want to distinguish nulls from empty strings. Any source data item that matches this string will be considered a null value.
FILL_MISSING_FIELDS
Optional. The default value is false. When reading a row of data that has missing trailing field values (the row of data has missing data fields at the end of a line or row), Greenplum Database returns an error.
If the value is true, when reading a row of data that has missing trailing field values, the values are set to NULL. Blank rows, fields with a NOT NULL constraint, and trailing delimiters on a line will still report an error.
See the FILL MISSING FIELDS clause of the CREATE EXTERNAL TABLE command.
QUOTE
Required when FORMAT is CSV. Specifies the quotation character for CSV mode. The default is double-quote (").
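The fragment below shows one way these formatting parameters might be combined for a quoted CSV source with a header row; the specific values are illustrative, not required:
    - FORMAT: csv
    - DELIMITER: ','
    - QUOTE: '"'
    - NULL_AS: ''
    - FILL_MISSING_FIELDS: true
    - HEADER: true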
ENCODING
Optional. Character set encoding of the source data. Specify a string constant (such as 'SQL_ASCII'), an integer encoding number, or 'DEFAULT' to use the default client encoding. If not specified, the default client encoding is used. For information about supported character sets, see the Greenplum Database Reference Guide.
Note: If you change the ENCODING value in an existing gpload control file, you must manually drop any external tables that were created using the previous ENCODING configuration. gpload does not drop and recreate external tables to use the new ENCODING if REUSE_TABLES is set to true.
LOG_ERRORS
Optional when ERROR_LIMIT is declared. Value is either true or false. The default value is false. If the value is true, rows with formatting errors are logged internally when running in single row error isolation mode. You can examine formatting errors with the Greenplum Database built-in SQL function gp_read_error_log('<table_name>'). If formatting errors are detected when loading data, gpload generates a warning message with the name of the table that contains the error information, similar to this message:
<timestamp>|WARN|1 bad row, please use GPDB built-in function gp_read_error_log('table-name') to access the detailed error row
If LOG_ERRORS: true is specified, REUSE_TABLES: true must be specified to retain the formatting errors in Greenplum Database error logs. If REUSE_TABLES: true is not specified, the error information is deleted after the gpload operation. Only summary information about formatting errors is returned. You can delete the formatting errors from the error logs with the Greenplum Database function gp_truncate_error_log().
Note: When gpfdist reads data and encounters a data formatting error, the error message includes a row number indicating the location of the formatting error. gpfdist attempts to capture the row that contains the error. However, gpfdist might not capture the exact row for some formatting errors.
For more information about handling load errors, see "Loading and Unloading Data" in the Greenplum Database Administrator Guide. For information about the gp_read_error_log() function, see the CREATE EXTERNAL TABLE command.
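A minimal sketch that enables single row error isolation and keeps the logged error rows available after the load (the error limit of 25 is arbitrary); the logged rows can then be examined with gp_read_error_log('<table_name>'):
   INPUT:
    - ERROR_LIMIT: 25
    - LOG_ERRORS: true
   PRELOAD:
    - REUSE_TABLES: true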
EXTERNAL
Optional. Defines the schema of the external table database objects created by gpload. The default is to use the Greenplum Database search_path.
SCHEMA
Required when EXTERNAL is declared. The name of the schema of the external table. If the schema does not exist, an error is returned.
If % (percent character) is specified, the schema of the table name specified by TABLE in the OUTPUT section is used. If the table name does not specify a schema, the default schema is used.
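For example, to create the external table objects in the same schema as the target table rather than in a search_path schema:
   EXTERNAL:
    - SCHEMA: '%'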
MODE
Optional. Defaults to INSERT if not specified. There are three available load modes:
INSERT - Loads data into the target table using the following method:
INSERT INTO <target_table> SELECT * FROM <input_data>;
UPDATE - Updates the UPDATE_COLUMNS of the target table where the rows have MATCH_COLUMNS attribute values equal to those of the input data, and the optional UPDATE_CONDITION is true. UPDATE is not supported if the target table column name is a reserved keyword, has capital letters, or includes any character that requires quotes (" ") to identify the column.
MERGE - Inserts new rows and updates the UPDATE_COLUMNS of existing rows where MATCH_COLUMNS attribute values are equal to those of the input data, and the optional UPDATE_CONDITION is true. New rows are identified when the MATCH_COLUMNS value in the source data does not have a corresponding value in the existing data of the target table. In those cases, the entire row from the source file is inserted, not only the MATCH and UPDATE columns. If there are multiple new MATCH_COLUMNS values that are the same, only one new row for that value will be inserted. Use UPDATE_CONDITION to filter out the rows to discard. MERGE is not supported if the target table column name is a reserved keyword, has capital letters, or includes any character that requires quotes (" ") to identify the column.
MATCH_COLUMNS
Required if MODE is UPDATE or MERGE. Specifies the column(s) to use as the join condition for the update. The attribute value in the specified target column(s) must be equal to that of the corresponding source data column(s) in order for the row to be updated in the target table.
UPDATE_COLUMNS
Required if MODE is UPDATE or MERGE. Specifies the column(s) to update for the rows that meet the MATCH_COLUMNS criteria and the optional UPDATE_CONDITION.
UPDATE_CONDITION
Optional. Specifies a Boolean condition (similar to what you would declare in a WHERE clause) that must be met in order for a row in the target table to be updated.
MAPPING
Optional. If a mapping is specified, it overrides the default source-to-target column mapping. The default source-to-target mapping is based on a match of column names as defined in the source COLUMNS section and the column names of the target TABLE. A mapping is specified as either:
<target_column_name>: <source_column_name>
or
<target_column_name>: '<expression>'
Where <expression> is any expression that you would specify in the SELECT list of a query, such as a constant value, a column reference, an operator invocation, a function call, and so on.
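The OUTPUT sketch below (table and column names are hypothetical) merges source rows into a target table on an id column, updates only the amount column, restricts the update with a simple condition, and computes the descr column from an expression:
   OUTPUT:
    - TABLE: public.expenses
    - MODE: merge
    - MATCH_COLUMNS:
         - id
    - UPDATE_COLUMNS:
         - amount
    - UPDATE_CONDITION: 'amount <> 0'
    - MAPPING:
         id: id
         amount: amount
         descr: 'upper(descr)'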
PRELOAD
Optional. Specifies operations to run prior to the load operation. Right now the only preload operation is TRUNCATE.
TRUNCATE
Optional. If set to true, gpload will remove all rows in the target table prior to loading it. Default is false.
REUSE_TABLES
Optional. If set to true, gpload will not drop the external table objects and staging table objects it creates. These objects will be reused for future load operations that use the same load specifications. This improves performance of trickle loads (ongoing small loads to the same target table).
If LOG_ERRORS: true is specified, REUSE_TABLES: true must be specified to retain the formatting errors in Greenplum Database error logs. If REUSE_TABLES: true is not specified, formatting error information is deleted after the gpload operation.
STAGING_TABLE
Optional. Specify the name of the external table that is created during a gpload operation. The external table is used by gpfdist. REUSE_TABLES: true must also be specified. If REUSE_TABLES is false or not specified, STAGING_TABLE is ignored. By default, gpload creates a temporary external table with a randomly generated name.
If <external_table_name> contains a period (.), gpload returns an error. If the <external_table_name> exists, the utility uses the existing table. The utility returns an error if the existing table schema does not match the OUTPUT table schema.
The utility uses the value of SCHEMA in the EXTERNAL section as the schema for <external_table_name>. If the SCHEMA value is %, the schema for <external_table_name> is the same as the schema of the target table, the schema of TABLE in the OUTPUT section.
If SCHEMA is not set, the utility searches for the table (using the schemas in the database search_path). If the table is not found, <external_table_name> is created in the default PUBLIC schema.
gpload creates the staging table using the distribution key(s) of the target table as the distribution key(s) for the staging table. If the target table was created DISTRIBUTED RANDOMLY, gpload uses MATCH_COLUMNS as the staging table's distribution key(s).
FAST_MATCH
Optional. If set to true, gpload only searches the database for matching external table objects when reusing external tables. The utility does not check the external table column names and column types in the catalog table pg_attribute to ensure that the table can be used for a gpload operation. Set the value to true to improve gpload performance when reusing external table objects and the database catalog table pg_attribute contains a large number of rows. The utility returns an error and quits if the column definitions are not compatible.
The default value is false; the utility checks the external table definition column names and column types.
REUSE_TABLES: true must also be specified. If REUSE_TABLES is false or not specified and FAST_MATCH: true is specified, gpload returns a warning message.
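A PRELOAD sketch for repeated trickle loads into the same target that reuses a named staging table (the staging table name is hypothetical; note that it must not contain a period):
   PRELOAD:
    - TRUNCATE: false
    - REUSE_TABLES: true
    - STAGING_TABLE: expenses_staging
    - FAST_MATCH: true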
SQL
Optional. Defines SQL commands to run before and/or after the load operation. You can specify multiple BEFORE and/or AFTER commands. List commands in the order of desired execution.
Log File Format
Log files output by gpload have the following format:
<timestamp>|<level>|<message>
Where <timestamp> takes the form YYYY-MM-DD HH:MM:SS, <level> is one of DEBUG, LOG, INFO, or ERROR, and <message> is a normal text message.
Some INFO messages that may be of interest in the log files are (where # corresponds to the actual number of seconds, units of data, or failed rows):
INFO|running time: <#.##> seconds
INFO|transferred <#.#> kB of <#.#> kB.
INFO|gpload succeeded
INFO|gpload succeeded with warnings
INFO|gpload failed
INFO|1 bad row
INFO|<#> bad rows
Notes
If your database object names were created using a double-quoted identifier (delimited identifier), you must specify the delimited name within single quotes in the gpload control file. For example, if you create a table as follows:
CREATE TABLE "MyTable" ("MyColumn" text);
Your YAML-formatted gpload control file would refer to the above table and column names as follows:
    - COLUMNS:
         - '"MyColumn"': text
   OUTPUT:
    - TABLE: public.'"MyTable"'
If the YAML control file contains the ERROR_TABLE element that was available in Greenplum Database 4.3.x, gpload logs a warning stating that ERROR_TABLE is not supported, and load errors are handled as if the LOG_ERRORS and REUSE_TABLES elements were set to true. Rows with formatting errors are logged internally when running in single row error isolation mode.
Examples
Run a load job as defined in my_load.yml:
gpload -f my_load.yml
Example load control file:
---
VERSION: 1.0.0.1
DATABASE: ops
USER: gpadmin
HOST: mdw-1
PORT: 5432
GPLOAD:
   INPUT:
    - SOURCE:
         LOCAL_HOSTNAME:
           - etl1-1
           - etl1-2
           - etl1-3
           - etl1-4
         PORT: 8081
         FILE:
           - /var/load/data/*
    - COLUMNS:
         - name: text
         - amount: float4
         - category: text
         - descr: text
         - date: date
    - FORMAT: text
    - DELIMITER: '|'
    - ERROR_LIMIT: 25
    - LOG_ERRORS: true
   OUTPUT:
    - TABLE: payables.expenses
    - MODE: INSERT
   PRELOAD:
    - REUSE_TABLES: true
   SQL:
    - BEFORE: "INSERT INTO audit VALUES('start', current_timestamp)"
    - AFTER: "INSERT INTO audit VALUES('end', current_timestamp)"