Extracts a database into a single script file or other archive file.
pg_dump [<connection-option> ...] [<dump_option> ...] [<dbname>]
pg_dump -? | --help
pg_dump -V | --version
pg_dump
is a standard PostgreSQL utility for backing up a database, and is also supported in Greenplum Database. It creates a single (non-parallel) dump file. For routine backups of Greenplum Database, it is better to use the Greenplum Database backup utility, gpbackup, for the best performance.
Use pg_dump
if you are migrating your data to another database vendor's system, or to another Greenplum Database system with a different segment configuration (for example, if the system you are migrating to has greater or fewer segment instances). To restore, you must use the corresponding pg_restore utility (if the dump file is in archive format), or you can use a client program such as psql (if the dump file is in plain text format).
Since pg_dump
is compatible with regular PostgreSQL, it can be used to migrate data into Greenplum Database. The pg_dump
utility in Greenplum Database is very similar to the PostgreSQL pg_dump
utility, with the following exceptions and limitations:
If you use pg_dump to back up a Greenplum Database database, keep in mind that the dump operation can take a long time (several hours) for very large databases. Also, you must make sure you have sufficient disk space to create the dump file.
If you are migrating data to another Greenplum Database system, use the --gp-syntax command-line option to include the DISTRIBUTED BY clause in CREATE TABLE statements. This ensures that Greenplum Database table data is distributed with the correct distribution key columns upon restore.
pg_dump makes consistent backups even if the database is being used concurrently. pg_dump does not block other users accessing the database (readers or writers).
When used with one of the archive file formats and combined with pg_restore
, pg_dump
provides a flexible archival and transfer mechanism. pg_dump
can be used to backup an entire database, then pg_restore
can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file formats are the custom format (-Fc
) and the directory format (-Fd
). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The directory format is the only format that supports parallel dumps.
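For example, assuming a database named mydb, a table named mytab, and an existing target database named newdb (all placeholder names), you might create a custom-format archive and later restore just that one table with pg_restore:
pg_dump -Fc mydb > db.dump
pg_restore -t mytab -d newdb db.dump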
If no database name is specified on the command line, the environment variable PGDATABASE is used. If that is not set, the user name specified for the connection is used.
Dump Options
Dump only the data, not the schema (data definitions). Table data and sequence values are dumped.
This option is similar to, but for historical reasons not identical to, specifying --section=data.
Include large objects in the dump. This is the default behavior except when --schema, --table, or --schema-only is specified. The -b switch is therefore only useful to add large objects to dumps where a specific schema or table has been requested. Note that blobs are considered data and therefore will be included when --data-only is used, but not when --schema-only is.
Note: Greenplum Database does not support the PostgreSQL large object facility for streaming user data that is stored in large-object structures.
Exclude large objects in the dump.
When both -b and -B are specified, the behavior is to output large objects when data is being dumped; see the -b documentation.
Note: Greenplum Database does not support the PostgreSQL large object facility for streaming user data that is stored in large-object structures.
DROP commands are added to the DDL dump output files so that when you use those files to do a restore, the DROP commands are run prior to the CREATE commands. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call pg_restore.
Begin the output with a command to create the database itself and to reconnect to the created database. If --clean is also specified, the script drops and recreates the target database before reconnecting to it. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call pg_restore.
With --create, the output also includes the database's comment if any, and any configuration variable settings that are specific to this database, that is, any ALTER DATABASE ... SET ... and ALTER ROLE ... IN DATABASE ... SET ... commands that mention this database. Access privileges for the database itself are also dumped, unless --no-acl is specified.
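As an illustration, the following commands (with mydb as a placeholder database name) produce a plain-text dump that recreates the database itself, and then run that script while connected to the postgres maintenance database:
pg_dump -C mydb > db.sql
psql -d postgres -f db.sql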
Create the dump in the specified character set encoding. By default, the dump is created in the database encoding. (Another way to get the same result is to set the PGCLIENTENCODING environment variable to the desired dump encoding.)
For the directory output format, this specifies the target directory instead of a file. In this case the directory is created by pg_dump and must not exist before.
Output a directory-format archive suitable for input into pg_restore. This will create a directory with one file for each table and blob being dumped, plus a so-called Table of Contents file describing the dumped objects in a machine-readable format that pg_restore can read. A directory format archive can be manipulated with standard Unix tools; for example, files in an uncompressed archive can be compressed with the gzip tool. This format is compressed by default and also supports parallel dumps.
Run the dump in parallel by dumping <njobs> tables simultaneously. This option reduces the time of the dump but it also increases the load on the database server. You can only use this option with the directory output format because this is the only output format where multiple processes can write their data at the same time.
Note: Parallel dumps using pg_dump are parallelized only on the query dispatcher (coordinator) node, not across the query executor (segment) nodes as is the case when you use gpbackup.
pg_dump
will open <njobs> + 1 connections to the database, so make sure your max_connections setting is high enough to accommodate all connections.
Requesting exclusive locks on database objects while running a parallel dump could cause the dump to fail. The reason is that the pg_dump
coordinator process requests shared locks on the objects that the worker processes are going to dump later in order to make sure that nobody deletes them and makes them go away while the dump is running. If another client then requests an exclusive lock on a table, that lock will not be granted but will be queued waiting for the shared lock of the coordinator process to be released. Consequently, any other access to the table will not be granted either and will queue after the exclusive lock request. This includes the worker process trying to dump the table. Without any precautions this would be a classic deadlock situation. To detect this conflict, the pg_dump
worker process requests another shared lock using the NOWAIT
option. If the worker process is not granted this shared lock, somebody else must have requested an exclusive lock in the meantime and there is no way to continue with the dump, so pg_dump
has no choice but to cancel the dump.
For a consistent backup, the database server needs to support synchronized snapshots, a feature that was introduced in Greenplum Database 6.0. With this feature, database clients can ensure they see the same data set even though they use different connections. pg_dump -j
uses multiple database connections; it connects to the database once with the coordinator process and once again for each worker job. Without the synchronized snapshot feature, the different worker jobs wouldn't be guaranteed to see the same data in each connection, which could lead to an inconsistent backup.
If you need to run a parallel dump against a pre-6.0 Greenplum Database server, you must also specify the --no-synchronized-snapshots parameter when running pg_dump -j against that server.
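For instance, a directory-format dump using four parallel worker jobs might be started as follows (mydb and /tmp/mydb_dump are placeholder names), keeping in mind that this opens five connections to the database:
pg_dump -Fd -j 4 -f /tmp/mydb_dump mydb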
Dump only schemas matching the schema pattern; this selects both the schema itself, and all its contained objects. When this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be selected by writing multiple -n
switches. Also, the pattern parameter is interpreted as a pattern according to the same rules used by psql
's \d
commands, so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards.
When -n
is specified, pg_dump
makes no attempt to dump any other database objects that the selected schema(s) may depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be successfully restored by themselves into a clean database.
Note: Non-schema objects such as blobs are not dumped when -n is specified. You can add blobs back to the dump with the --blobs switch.
Do not dump any schemas matching the schema pattern. The pattern is interpreted according to the same rules as for -n.
-N can be specified more than once to exclude schemas matching any of several patterns. When both -n and -N are specified, the behavior is to dump just the schemas that match at least one -n switch but no -N switches. If -N appears without -n, then schemas matching -N are excluded from what is otherwise a normal dump.
Do not output commands to set ownership of objects to match the original database. By default, pg_dump issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify -O. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call pg_restore.
Dump only the object definitions (schema), not data.
This option is the inverse of --data-only. It is similar to, but for historical reasons not identical to, specifying --section=pre-data --section=post-data.
(Do not confuse this with the --schema option, which uses the word "schema" in a different meaning.)
To exclude table data for only a subset of tables in the database, see --exclude-table-data.
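For example, to dump only the object definitions of a database named mydb (a placeholder name) into a plain-text script:
pg_dump --schema-only mydb > schema.sql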
Specify the superuser user name to use when deactivating triggers. This is relevant only if --disable-triggers
is used. It is better to leave this out, and instead start the resulting script as a superuser.
Note: Greenplum Database does not support user-defined triggers.
Dump only tables (or views or materialized views or sequences or foreign tables) matching the pattern. Specify the table in the format schema.table.
Multiple tables can be selected by writing multiple -t switches. Also, the pattern parameter is interpreted as a pattern according to the same rules used by psql's \d commands, so multiple tables can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards.
The -n
and -N
switches have no effect when -t
is used, because tables selected by -t
will be dumped regardless of those switches, and non-table objects will not be dumped.
Note: When -t is specified, pg_dump makes no attempt to dump any other database objects that the selected table(s) may depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully restored by themselves into a clean database.
Also, -t
cannot be used to specify a child table of a partitioned table. To dump a partitioned table, you must specify the root partitioned table name.
Do not dump any tables matching the pattern. The pattern is interpreted according to the same rules as for -t.
-T can be specified more than once to exclude tables matching any of several patterns. When both -t and -T are given, the behavior is to dump just the tables that match at least one -t switch but no -T switches. If -T appears without -t, then tables matching -T are excluded from what is otherwise a normal dump.
Specifies verbose mode. This causes pg_dump to output detailed object comments and start/stop times to the dump file, and progress messages to standard error.
Print the pg_dump version and exit.
Prevent dumping of access privileges (GRANT/REVOKE commands).
Specify the compression level to use. Zero means no compression. For the custom archive format, this specifies compression of individual table-data segments, and the default is to compress at a moderate level. For plain-text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip; but the default is not to compress. The tar archive format currently does not support compression at all.
Dump data as INSERT commands with explicit column names (INSERT INTO <table> (<column>, ...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL-based databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents.
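A dump intended for loading into a non-PostgreSQL database might therefore be produced as follows (mydb is a placeholder database name):
pg_dump --column-inserts mydb > db.sql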
This option is relevant only when creating a data-only dump. It instructs pg_dump
to include commands to temporarily deactivate triggers on the target tables while the data is reloaded. Use this if you have triggers on the tables that you do not want to invoke during data reload. The commands emitted for --disable-triggers
must be done as superuser. So, you should also specify a superuser name with -S
, or preferably be careful to start the resulting script as a superuser. This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call pg_restore.
Note: Greenplum Database does not support user-defined triggers.
This option is relevant only when dumping the contents of a table which has row security. By default, pg_dump
sets row_security
to off, to ensure that all data is dumped from the table. If the user does not have sufficient privileges to bypass row security, then an error is thrown. This parameter instructs pg_dump
to set row_security
to on instead, allowing the user to dump the parts of the contents of the table that they have access to.
Note that if you use this option, you probably also want the dump to be in INSERT format, as the COPY FROM during restore does not support row security.
Same as -T or --exclude-table, except that it also excludes any partitions of child tables that inherit from the table(s) matching the <table> name.
Do not dump data for any tables matching the pattern. The pattern is interpreted according to the same rules as for -t
. --exclude-table-data
can be specified more than once to exclude tables matching any of several patterns. This option is useful when you need the definition of a particular table even though you do not need the data in it.
To exclude data for all tables in the database, see --schema-only.
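For example, to dump the definition of a hypothetical log table named events_archive while skipping its contents (mydb and the table name are placeholders):
pg_dump --exclude-table-data='events_archive' mydb > db.sql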
Same as --exclude-table-data, except that it also excludes any partitions of child tables that inherit from the table(s) matching the <table> name.
Use the specified value of extra_float_digits when dumping floating-point data, instead of the maximum available precision. Routine dumps made for backup purposes should not use this option.
Use conditional commands (that is, add an IF EXISTS clause) when cleaning database objects. This option is not valid unless --clean is also specified.
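For instance, to produce a plain-text dump whose cleanup commands use DROP ... IF EXISTS (mydb is a placeholder database name):
pg_dump --clean --if-exists mydb > db.sql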
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL-based databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents. Note that the restore may fail altogether if you have rearranged column order. The --column-inserts option is safe against column order changes, though even slower.
When dumping data for a table partition, make the COPY or INSERT statements target the root of the partitioning hierarchy that contains it, rather than the partition itself. This causes the appropriate partition to be re-determined for each row when the data is loaded. This may be useful when restoring data on a server where rows do not always fall into the same partitions as they did on the original server. That could happen, for example, if the partitioning column is of type text and the two systems have different definitions of the collation used to sort the partitioning column.
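As a sketch, a dump whose data is routed through the root partitioned tables on restore could be created like this (mydb is a placeholder database name):
pg_dump --load-via-partition-root mydb > db.sql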
Do not wait forever to acquire shared table locks at the beginning of the dump. Instead, fail if unable to lock a table within the specified timeout. The timeout may be specified in any of the formats accepted by SET statement_timeout; specify timeout as a number of milliseconds.
By default, pg_dump waits for all files to be written safely to disk. This option causes pg_dump to return without waiting, which is faster, but could leave the dump corrupt if there is a subsequent operating system crash. This option is generally useful for testing but should not be used when dumping data from a production installation.
This option allows running pg_dump -j against a pre-6.0 Greenplum Database server; see the documentation of the -j parameter for more details.
Do not output commands to select tablespaces. With this option, all objects will be created in whichever tablespace is the default during restore.
This option is only meaningful for the plain-text format. For the archive formats, you may specify the option when you call pg_restore.
Add ON CONFLICT DO NOTHING to INSERT commands. This option is not valid unless --inserts, --column-inserts, or --rows-per-insert is also specified.
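For example, combined with --inserts, this produces a script whose rows are silently skipped if they already exist in the target tables (mydb is a placeholder database name):
pg_dump --inserts --on-conflict-do-nothing mydb > db.sql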
Force quoting of all identifiers. This option is recommended when dumping a database from a server whose major version is different from pg_dump's, or when the output is intended to be loaded into a server of a different major version. By default, pg_dump quotes only identifiers that are reserved words in its own major version. This sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets of reserved words. Using --quote-all-identifiers prevents such issues, at the price of a harder-to-read dump script.
Dump data as INSERT commands (rather than COPY), and control the maximum number of rows per INSERT command. The value specified must be a number greater than zero. Any error during restoring will cause only rows that are part of the problematic INSERT to be lost, rather than the entire table contents.
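For instance, to emit multi-row INSERT commands of at most 1000 rows each (mydb is a placeholder database name):
pg_dump --rows-per-insert=1000 mydb > db.sql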
Only dump the named section. The section name can be pre-data
, data
, or post-data
. This option can be specified more than once to select multiple sections. The default is to dump all sections.
The data section contains actual table data and sequence values. Post-data items include definitions of indexes, triggers, rules, and constraints other than validated check constraints. Pre-data items include all other data definition items.
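For example, to dump the table definitions and data but defer indexes and constraints to a later pass (mydb is a placeholder database name):
pg_dump --section=pre-data --section=data mydb > db.sql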
Use a serializable transaction for the dump, to ensure that the snapshot used is consistent with later database states; but do this by waiting for a point in the transaction stream at which no anomalies can be present, so that there isn't a risk of the dump failing or causing other transactions to roll back with a serialization_failure
.
This option is not beneficial for a dump which is intended only for disaster recovery. It could be useful for a dump used to load a copy of the database for reporting or other read-only load sharing while the original database continues to be updated. Without it the dump may reflect a state which is not consistent with any serial execution of the transactions eventually committed. For example, if batch processing techniques are used, a batch may show as closed in the dump without all of the items which are in the batch appearing.
This option will make no difference if there are no read-write transactions active when pg_dump
is started. If read-write transactions are active, the start of the dump may be delayed for an indeterminate length of time. Once running, performance with or without the switch is the same.
Note: Because Greenplum Database does not support serializable transactions, the --serializable-deferrable option has no effect in Greenplum Database.
Use the specified synchronized snapshot when making a dump of the database.
This option is useful when you need to synchronize the dump with a concurrent session.
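As a sketch, in a separate psql session that remains open for the duration of the dump, run BEGIN; followed by SELECT pg_export_snapshot(); and note the identifier it returns. Then pass that identifier (shown here as a placeholder value) to pg_dump:
pg_dump --snapshot=00000003-0000001B-1 mydb > db.sql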
Require that each schema (-n
/--schema
) and table (-t
/--table
) qualifier match at least one schema/table in the database to be dumped. Note that if none of the schema/table qualifiers find matches, pg_dump
generates an error even without --strict-names
.
This option has no effect on -N/--exclude-schema, -T/--exclude-table, or --exclude-table-data. An exclude pattern failing to match any objects is not considered an error.
Same as -t or --table, except that it also includes any partitions of child tables that inherit from the table(s) matching the <table> name.
Output SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, may not restore properly. A dump using SET SESSION AUTHORIZATION requires superuser privileges to restore correctly, whereas ALTER OWNER requires lesser privileges.
Use --gp-syntax to dump Greenplum Database syntax in the CREATE TABLE statements. This allows the distribution policy (DISTRIBUTED BY or DISTRIBUTED RANDOMLY clauses) of a Greenplum Database table to be dumped, which is useful for restoring into other Greenplum Database systems. The default is to include Greenplum Database syntax when connected to a Greenplum Database system, and to exclude it when connected to a regular PostgreSQL system.
Dump the function(s) specified in the oids list of object identifiers.
Note: This option is provided solely for use by other administration utilities; its use for any other purpose is not recommended or supported. The behavior of the option may change in future releases without notice.
Dump the relation(s) specified in the oids list of object identifiers.
Note: This option is provided solely for use by other administration utilities; its use for any other purpose is not recommended or supported. The behavior of the option may change in future releases without notice.
Show help about pg_dump command line arguments, and exit.
Connection Options
Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line.
If this parameter contains an = sign or starts with a valid URI prefix (postgresql:// or postgres://), it is treated as a conninfo string. See Connection Strings in the PostgreSQL documentation for more information. Connection string parameters override any conflicting command-line options.
The host name of the machine on which the Greenplum Database coordinator database server is running. If not specified, reads from the environment variable PGHOST or defaults to localhost.
The TCP port on which the Greenplum Database coordinator database server is listening for connections. If not specified, reads from the environment variable PGPORT or defaults to 5432.
The database role name to connect as. If not specified, reads from the environment variable PGUSER or defaults to the current system role name.
Force a password prompt.
pg_dump
automatically prompts for a password if the server demands password authentication. However,
pg_dump
will waste a connection attempt determining that the server wants a password. In some cases it is worth typing
-W
to avoid the extra connection attempt.
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
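For example, a scheduled job that relies on a ~/.pgpass entry for authentication (mydb and the output path are placeholders) might run:
pg_dump -w -Fc mydb -f /backups/mydb_nightly.dump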
Specifies a role name to be used to create the dump. This option causes pg_dump to issue a SET ROLE <rolename> command after connecting to the database. It is useful when the authenticated user (specified by -U) lacks privileges needed by pg_dump, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy.
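For instance, assuming a login role named jsmith that can switch to a hypothetical dump_admin role holding the needed privileges (all names are placeholders):
pg_dump -U jsmith --role=dump_admin mydb > db.sql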
PG_COLOR specifies whether to use color in diagnostic messages. Possible values are always, auto, and never.
This utility also uses the environment variables supported by libpq.
If your database cluster has any local additions to the template1
database, be careful to restore the output of pg_dump
into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database without any local additions, copy from template0
not template1
, for example:
CREATE DATABASE foo WITH TEMPLATE template0;
When a data-only dump is chosen and the option --disable-triggers
is used, pg_dump
emits commands to deactivate triggers on user tables before inserting the data and commands to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs may be left in the wrong state.
The dump file produced by pg_dump
does not contain the statistics used by the optimizer to make query planning decisions. Therefore, it is wise to run ANALYZE
after restoring from a dump file to ensure optimal performance.
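For example, after restoring into a database named newdb (a placeholder name), statistics can be gathered with:
psql -d newdb -c 'ANALYZE;'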
The database activity of pg_dump
is normally collected by the statistics collector. If this is undesirable, you can set parameter track_counts
to false via PGOPTIONS
or the ALTER USER
command.
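A sketch of the PGOPTIONS approach, assuming the dump is run as a role permitted to change track_counts (ordinarily a superuser), with mydb as a placeholder database name:
PGOPTIONS='-c track_counts=off' pg_dump mydb > db.sql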
Because pg_dump
may be used to transfer data to newer versions of Greenplum Database, the output of pg_dump
can be expected to load into Greenplum Database versions newer than pg_dump
's version. pg_dump
can also dump from Greenplum Database versions older than its own version. However, pg_dump
cannot dump from Greenplum Database versions newer than its own major version; it will refuse to even try, rather than risk making an invalid dump. Also, it is not guaranteed that pg_dump
's output can be loaded into a server of an older major version — not even if the dump was taken from a server of that version. Loading a dump file into an older server may require manual editing of the dump file to remove syntax not understood by the older server. Use of the --quote-all-identifiers
option is recommended in cross-version cases, as it can prevent problems arising from varying reserved-word lists in different Greenplum Database versions.
Dump a database called mydb
into a SQL-script file:
pg_dump mydb > db.sql
To reload such a script into a (freshly created) database named newdb
:
psql -d newdb -f db.sql
Dump a Greenplum Database in tar file format and include distribution policy information:
pg_dump -Ft --gp-syntax mydb > db.tar
To dump a database into a custom-format archive file:
pg_dump -Fc mydb > db.dump
To dump a database into a directory-format archive:
pg_dump -Fd mydb -f dumpdir
To dump a database into a directory-format archive in parallel with 5 worker jobs:
pg_dump -Fd mydb -j 5 -f dumpdir
To reload an archive file into a (freshly created) database named newdb
:
pg_restore -d newdb db.dump
To reload an archive file into the same database it was dumped from, discarding the current contents of that database:
pg_restore -d postgres --clean --create db.dump
To dump a single table named mytab
:
pg_dump -t mytab mydb > db.sql
To dump all tables whose names start with emp
in the detroit
schema, except for the table named employee_log
:
pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql
To dump all schemas whose names start with east
or west
and end in gsm
, excluding any schemas whose names contain the word test
:
pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql
The same dump as above, using regular expression notation to consolidate the switches:
pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql
To dump all database objects except for tables whose names begin with ts_
:
pg_dump -T 'ts_*' mydb > db.sql
To specify an upper-case or mixed-case name in -t
and related switches, you need to double-quote the name; else it will be folded to lower case. But double quotes are special to the shell, so they in turn must be quoted. To dump a single table with a mixed-case name, you must specify something like:
pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql