Descriptions of the Greenplum Database server configuration parameters listed alphabetically.
Sets the application name for a client session. For example, if connecting via psql, this will be set to psql. Setting an application name allows it to be reported in log messages and statistics views.
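For example, assuming this entry describes the standard application_name parameter, a client could label its session with an illustrative name that then appears in log messages and statistics views:
-- illustrative session label; 'nightly_report' is a made-up name
SET application_name = 'nightly_report';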
Value Range | Default | Set Classifications |
---|---|---|
string | | master, session, reload |
This controls whether the array input parser recognizes unquoted NULL as specifying a null array element. By default, this is on, allowing array values containing null values to be entered. Greenplum Database versions before 3.0 did not support null values in arrays, and therefore would treat NULL as specifying a normal array element with the string value 'NULL'.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Maximum time to complete client authentication. This prevents hung clients from occupying a connection indefinitely.
Value Range | Default | Set Classifications |
---|---|---|
Any valid time expression (number and unit) | 1min | local, system, restart |
This controls whether a quote mark can be represented by \' in a string literal. The preferred, SQL-standard way to represent a quote mark is by doubling it ('') but PostgreSQL has historically also accepted \'. However, use of \' creates security risks because in some client character set encodings, there are multibyte characters in which the last byte is numerically equivalent to ASCII \.
Value Range | Default | Set Classifications |
---|---|---|
on (allow \' always) off (reject always) safe_encoding (allow only if client encoding does not allow ASCII \ within a multibyte character) | safe_encoding | master, session, reload |
Reports the size of a disk block.
Value Range | Default | Set Classifications |
---|---|---|
number of bytes | 32768 | read only |
Specifies the Bonjour broadcast name. By default, the computer name is used, specified as an empty string. This option is ignored if the server was not compiled with Bonjour support.
Value Range | Default | Set Classifications |
---|---|---|
string | unset | master, system, restart |
When set to off, deactivates validation of the function body string during CREATE FUNCTION. Deactivating validation is occasionally useful to avoid problems such as forward references when restoring function definitions from a dump.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Sets the time interval between optional checks that the client is still connected, while running queries. 0 deactivates connection checks.
Value Range | Default | Set Classifications |
---|---|---|
number of milliseconds | 0 | master, session, reload |
Sets the client-side encoding (character set). The default is to use the same as the database encoding. See Supported Character Sets in the PostgreSQL documentation.
Value Range | Default | Set Classifications |
---|---|---|
character set | UTF8 | master, session, reload |
Controls which message levels are sent to the client. Each level includes all the levels that follow it. The later the level, the fewer messages are sent.
Value Range | Default | Set Classifications |
---|---|---|
DEBUG5 DEBUG4 DEBUG3 DEBUG2 DEBUG1 LOG NOTICE WARNING ERROR FATAL PANIC | NOTICE | master, session, reload |
INFO level messages are always sent to the client.
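For example, to see fewer messages in the current session only (the level shown is illustrative):
-- suppress NOTICE-level messages for this session
SET client_min_messages = WARNING;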
For the Postgres Planner, sets the estimate of the cost of processing each index row during an index scan. This is measured as a fraction of the cost of a sequential page fetch.
Value Range | Default | Set Classifications |
---|---|---|
floating point | 0.005 | master, session, reload |
For the Postgres Planner, sets the estimate of the cost of processing each operator in a WHERE clause. This is measured as a fraction of the cost of a sequential page fetch.
Value Range | Default | Set Classifications |
---|---|---|
floating point | 0.0025 | master, session, reload |
For the Postgres Planner, sets the estimate of the cost of processing each row during a query. This is measured as a fraction of the cost of a sequential page fetch.
Value Range | Default | Set Classifications |
---|---|---|
floating point | 0.01 | master, session, reload |
Tells the Postgres Planner how many rows are expected to be fetched in a cursor query, thereby allowing the Postgres Planner to use this information to optimize the query plan. The default of 1 means all rows will be fetched.
Value Range | Default | Set Classifications |
---|---|---|
integer | 1 | master, session, reload |
Reports whether checksums are enabled for heap data storage in the database system. Checksums for heap data are enabled or deactivated when the database system is initialized and cannot be changed.
Heap data pages store heap tables, catalog tables, indexes, and database metadata. Append-optimized storage has built-in checksum support that is unrelated to this parameter.
Greenplum Database uses checksums to prevent loading data corrupted in the file system into memory managed by database processes. When heap data checksums are enabled, Greenplum Database computes and stores checksums on heap data pages when they are written to disk. When a page is retrieved from disk, the checksum is verified. If the verification fails, an error is generated and the page is not permitted to load into managed memory.
If the ignore_checksum_failure configuration parameter has been set to on, a failed checksum verification generates a warning, but the page is allowed to be loaded into managed memory. If the page is then updated, it is flushed to disk and replicated to the mirror. This can cause data corruption to propagate to the mirror and prevent a complete recovery. Because of the potential for data loss, the ignore_checksum_failure parameter should only be enabled when needed to recover data. See ignore_checksum_failure for more information.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | read only |
Sets the display format for date and time values, as well as the rules for interpreting ambiguous date input values. This variable contains two independent components: the output format specification and the input/output specification for year/month/day ordering.
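For example, assuming this entry describes the DateStyle parameter, a session could request ISO output with day-month-year input interpretation:
-- output format ISO, ambiguous input read as day/month/year
SET DateStyle = 'ISO, DMY';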
Value Range | Default | Set Classifications |
---|---|---|
<format>, <date style> where: <format> is ISO, Postgres, SQL, or German; <date style> is DMY, MDY, or YMD | ISO, MDY | master, session, reload |
This enables per-database user names. If on, you should create users as username@dbname. To create ordinary global users, simply append @ when specifying the user name in the client.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | local, system, restart |
The time to wait on a lock before checking to see if there is a deadlock condition. On a heavily loaded server you might want to raise this value. Ideally the setting should exceed your typical transaction time, so as to improve the odds that a lock will be released before the waiter decides to check for deadlock.
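For example, assuming this entry describes the deadlock_timeout parameter, a heavily loaded system could raise the check interval for a session (the value shown is illustrative):
-- wait 5 seconds on a lock before checking for a deadlock
SET deadlock_timeout = '5s';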
Value Range | Default | Set Classifications |
---|---|---|
Any valid time expression (number and unit) | 1s | master, session, reload |
Turns on various assertion checks.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | local, system, restart |
Indents debug output to produce a more readable but much longer output format. client_min_messages or log_min_messages must be DEBUG1 or lower.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
For each query run, prints the resulting parse tree. client_min_messages or log_min_messages must be DEBUG1 or lower.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
For each query run, prints the Greenplum parallel query execution plan. client_min_messages or log_min_messages must be DEBUG1 or lower.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
For each query run, prints the preliminary query plan. client_min_messages or log_min_messages must be DEBUG1 or lower.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
For each query run, prints the query rewriter output. client_min_messages or log_min_messages must be DEBUG1 or lower.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
For each query run, prints the Greenplum query slice plan. client_min_messages or log_min_messages must be DEBUG1 or lower.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Sets the default statistics sampling target (the number of values that are stored in the list of common values) for table columns that have not had a column-specific target set via ALTER TABLE SET STATISTICS. Larger values may improve the quality of the Postgres Planner estimates.
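For example, assuming this entry describes the default_statistics_target parameter, a session about to ANALYZE tables with skewed data could raise the target (the value shown is illustrative):
-- store up to 250 common values per column for subsequent ANALYZE runs
SET default_statistics_target = 250;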
Value Range | Default | Set Classifications |
---|---|---|
0 - 10000 | 100 | master, session, reload |
The default tablespace in which to create objects (tables and indexes) when a CREATE command does not explicitly specify a tablespace.
Value Range | Default | Set Classifications |
---|---|---|
name of a tablespace | unset | master, session, reload |
Selects the text search configuration that is used by those variants of the text search functions that do not have an explicit argument specifying the configuration. See Using Full Text Search for further information. The built-in default is pg_catalog.simple, but initdb will initialize the configuration file with a setting that corresponds to the chosen lc_ctype locale, if a configuration matching that locale can be identified.
Value Range | Default | Set Classifications |
---|---|---|
The name of a text search configuration. | pg_catalog.simple | master, session, reload |
When running at the SERIALIZABLE isolation level, a deferrable read-only SQL transaction may be delayed before it is allowed to proceed. However, once it begins running it does not incur any of the overhead required to ensure serializability; so serialization code will have no reason to force it to abort because of concurrent updates, making this option suitable for long-running read-only transactions.
This parameter controls the default deferrable status of each new transaction. It currently has no effect on read-write transactions or those operating at isolation levels lower than SERIALIZABLE. The default is off.
Note: Setting default_transaction_deferrable to on has no effect in Greenplum Database. Only read-only, SERIALIZABLE transactions can be deferred. However, Greenplum Database does not support the SERIALIZABLE transaction isolation level. See SET TRANSACTION.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Controls the default isolation level of each new transaction. Greenplum Database treats read uncommitted the same as read committed, and treats serializable the same as repeatable read.
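For example, assuming this entry describes the default_transaction_isolation parameter, a session could make new transactions default to repeatable read:
-- new transactions in this session start at REPEATABLE READ
SET default_transaction_isolation = 'repeatable read';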
Value Range | Default | Set Classifications |
---|---|---|
read committed read uncommitted repeatable read serializable | read committed | master, session, reload |
Controls the default read-only status of each new transaction. A read-only SQL transaction cannot alter non-temporary tables.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
The maximum number of retries attempted by Greenplum Database during the second phase of a two-phase commit. When one or more segments cannot successfully complete the commit phase, the master retries the commit a maximum of dtx_phase2_retry_count times. If the commit continues to fail on the last retry attempt, the master generates a PANIC.
When the network is unstable, the master may be unable to connect to one or more segments; increasing the number of two-phase commit retries may improve high availability of Greenplum when the master encounters transient network issues.
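For example, on a system that hits transient network issues during commits, the retry count could be raised with gpconfig (the value shown is illustrative; a restart is required per the set classification below):
# illustrative value; requires a Greenplum restart (for example, gpstop -ar) to take effect
gpconfig -c dtx_phase2_retry_count -v 15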
Value Range | Default | Set Classifications |
---|---|---|
0 - INT_MAX | 10 | master, system, restart |
If a dynamically loadable module needs to be opened and the file name specified in the CREATE FUNCTION or LOAD command does not have a directory component (i.e. the name does not contain a slash), the system will search this path for the required file. The compiled-in PostgreSQL package library directory is substituted for $libdir. This is where the modules provided by the standard PostgreSQL distribution are installed.
Value Range | Default | Set Classifications |
---|---|---|
a list of absolute directory paths separated by colons | $libdir | local, system, reload |
Sets the assumption about the effective size of the disk cache that is available to a single query for the Postgres Planner. This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it more likely sequential scans will be used. When setting this parameter, you should consider both Greenplum Database's shared buffers and the portion of the kernel's disk cache that will be used for data files (though some data might exist in both places). Take also into account the expected number of concurrent queries on different tables, since they will have to share the available space. This parameter has no effect on the size of shared memory allocated by a Greenplum server instance, nor does it reserve kernel disk cache; it is used only for estimation purposes.
Set this parameter to a number of block_size blocks (default 32K) with no units; for example, 262144 for 8GB. You can also directly specify the size of the effective cache; for example, '1GB' specifies a size of 32768 blocks. The gpconfig utility and SHOW command display the effective cache size value in units such as 'GB', 'MB', or 'kB'.
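For example, assuming this entry describes the effective_cache_size parameter, either of these illustrative session-level settings describes roughly 8GB of cache (262144 blocks of 32KB each):
-- 262144 blocks x 32KB block size = 8GB
SET effective_cache_size = 262144;
-- equivalent, using explicit units
SET effective_cache_size = '8GB';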
Value Range | Default | Set Classifications |
---|---|---|
1 - INT_MAX or number and unit | 524288 (16GB) | master, session, reload |
Activates or deactivates the use of bitmap-scan plan types by the Postgres Planner. Note that this is different than a Bitmap Index Scan. A Bitmap Scan means that indexes will be dynamically converted to bitmaps in memory when appropriate, giving faster index performance on complex queries against very large tables. It is used when there are multiple predicates on different indexed columns. Each bitmap per column can be compared to create a final list of selected tuples.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates the use of group aggregation plan types by the Postgres Planner.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates the use of hash aggregation plan types by the Postgres Planner.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates the use of hash-join plan types by the Postgres Planner.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates the deprecated implicit conversion of a string with the YYYYMMDDHH24MISS timestamp format to a valid date/time type.
The default value is off. When this parameter is set to on, Greenplum Database converts a string with the timestamp format YYYYMMDDHH24MISS into a valid date/time type. You may need this conversion when loading data from Greenplum Database 5.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Activates or deactivates the use of index-scan plan types by the Postgres Planner.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates the use of merge-join plan types by the Postgres Planner. Merge join is based on the idea of sorting the left- and right-hand tables into order and then scanning them in parallel. So, both data types must be capable of being fully ordered, and the join operator must be one that can only succeed for pairs of values that fall at the 'same place' in the sort order. In practice this means that the join operator must behave like equality.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Activates or deactivates the use of nested-loop join plans by the Postgres Planner. It's not possible to suppress nested-loop joins entirely, but turning this variable off discourages the Postgres Planner from using one if there are other methods available.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Activates or deactivates the use of sequential scan plan types by the Postgres Planner. It's not possible to suppress sequential scans entirely, but turning this variable off discourages the Postgres Planner from using one if there are other methods available.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates the use of explicit sort steps by the Postgres Planner. It's not possible to suppress explicit sorts entirely, but turning this variable off discourages the Postgres Planner from using one if there are other methods available.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates the use of tuple identifier (TID) scan plan types by the Postgres Planner.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
When on, a warning is issued if a backslash (\) appears in an ordinary string literal ('...' syntax). Escape string syntax (E'...') should be used for escapes, because in future versions, ordinary strings will have the SQL standard-conforming behavior of treating backslashes literally.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Determines whether EXPLAIN VERBOSE uses the indented or non-indented format for displaying detailed query-tree dumps.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Adjusts the number of digits displayed for floating-point values, including float4, float8, and geometric data types. The parameter value is added to the standard number of digits. The value can be set as high as 3, to include partially-significant digits; this is especially useful for dumping float data that needs to be restored exactly. Or it can be set negative to suppress unwanted digits.
Value Range | Default | Set Classifications |
---|---|---|
integer (-15 to 3) | 0 | master, session, reload |
The Postgres Planner will merge sub-queries into upper queries if the resulting FROM list would have no more than this many items. Smaller values reduce planning time but may yield inferior query plans.
Value Range | Default | Set Classifications |
---|---|---|
1-n | 20 | master, session, reload |
When adding a column to an append-optimized, column-oriented table with the ALTER TABLE command, this parameter controls whether the table's data compression parameters for a column (compresstype, compresslevel, and blocksize) can be inherited from the table values. The default is off; the table's data compression settings are not considered when adding a column to the table. If the value is on, the table's settings are considered.
When you create an append-optimized column-oriented table, you can set the table's data compression parameters compresstype, compresslevel, and blocksize for the table in the WITH clause. When you add a column, Greenplum Database sets each data compression parameter based on one of the following settings, in order of preference:
- The data compression setting specified in the ENCODING clause of the ALTER TABLE command.
- If this parameter is on, the table's data compression setting specified in the WITH clause when the table was created. Otherwise, the table's data compression setting is ignored.
You must specify --skipvalidation when modifying this parameter as it is a restricted configuration parameter. Use extreme caution when setting configuration parameters with this option. For example:
gpconfig --skipvalidation -c gp_add_column_inherits_table_setting -v on
For information about the data storage compression parameters, see CREATE TABLE.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Enables the selectivity of NULL tests over outer joins.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Enables compacting segment files during VACUUM commands. When deactivated, VACUUM only truncates the segment files to the EOF value, as is the current behavior. The administrator may want to deactivate compaction in high I/O load situations or low space situations.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Sets the minimum number of available segment files for inserts. If the number of available segment files falls below this limit, compaction is skipped to prevent excessive segments entering the AWAITING_DROP state. This GUC protects against segment file exhaustion. VACUUM, when run with concurrent transactions having older snapshots, can retain segment files in the AWAITING_DROP state.
Value Range | Default | Set Classifications |
---|---|---|
0 to 127 | 10 | coordinator, session, reload |
Specifies the threshold ratio (as a percentage) of hidden rows to total rows that triggers compaction of the segment file when VACUUM is run without the FULL option (a lazy vacuum). If the ratio of hidden rows in a segment file on a segment is less than this threshold, the segment file is not compacted, and a log message is issued.
Value Range | Default | Set Classifications |
---|---|---|
integer (%) | 10 | master, session, reload |
The gp_autostats_allow_nonowner server configuration parameter determines whether or not to allow Greenplum Database to trigger automatic statistics collection when a table is modified by a non-owner.
The default value is false; Greenplum Database does not trigger automatic statistics collection on a table that is updated by a non-owner.
When set to true, Greenplum Database will also trigger automatic statistics collection on a table when:
- gp_autostats_mode=on_change and the table is modified by a non-owner.
- gp_autostats_mode=on_no_stats and the first user to INSERT or COPY into the table is a non-owner.
The gp_autostats_allow_nonowner configuration parameter can be changed only by a superuser.
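For example, a superuser could allow non-owner changes to trigger automatic statistics collection for the current session:
-- requires superuser; applies to the current session only
SET gp_autostats_allow_nonowner = true;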
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload, superuser |
Specifies the mode for triggering automatic statistics collection with ANALYZE. The on_no_stats option triggers statistics collection for CREATE TABLE AS SELECT, INSERT, or COPY operations on any table that has no existing statistics.
The on_change option triggers statistics collection only when the number of rows affected exceeds the threshold defined by gp_autostats_on_change_threshold. Operations that can trigger automatic statistics collection with on_change are:
- CREATE TABLE AS SELECT
- UPDATE
- DELETE
- INSERT
- COPY
Default is on_no_stats.
Note: For partitioned tables, automatic statistics collection is not triggered if data is inserted from the top-level parent table of a partitioned table. Automatic statistics collection is triggered if data is inserted directly in a leaf table (where the data is stored) of the partitioned table. Statistics are collected only on the leaf table.
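For example, a session that loads into already-analyzed tables could switch to the on_change mode for the duration of the load (see also gp_autostats_on_change_threshold below):
-- collect statistics only when a qualifying operation affects many rows
SET gp_autostats_mode = 'on_change';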
Value Range | Default | Set Classifications |
---|---|---|
none on_change on_no_stats | on_no_stats | master, session, reload |
Specifies the mode for triggering automatic statistics collection with ANALYZE for statements in procedural language functions. The none option deactivates statistics collection. The on_no_stats option triggers statistics collection for CREATE TABLE AS SELECT, INSERT, or COPY operations that are run in functions on any table that has no existing statistics.
The on_change option triggers statistics collection only when the number of rows affected exceeds the threshold defined by gp_autostats_on_change_threshold. Operations in functions that can trigger automatic statistics collection with on_change are:
- CREATE TABLE AS SELECT
- UPDATE
- DELETE
- INSERT
- COPY
Value Range | Default | Set Classifications |
---|---|---|
none on_change on_no_stats | none | master, session, reload |
Specifies the threshold for automatic statistics collection when gp_autostats_mode is set to on_change. When a triggering table operation affects a number of rows exceeding this threshold, ANALYZE is added and statistics are collected for the table.
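For example, when gp_autostats_mode is on_change, a session could lower the threshold so that moderately large loads trigger ANALYZE (the value shown is illustrative):
-- trigger ANALYZE when a qualifying operation affects more than 1 million rows
SET gp_autostats_on_change_threshold = 1000000;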
Value Range | Default | Set Classifications |
---|---|---|
integer | 2147483647 | master, session, reload |
When a user starts a session with Greenplum Database and issues a query, the system creates groups or 'gangs' of worker processes on each segment to do the work. After the work is done, the segment worker processes are destroyed except for a cached number which is set by this parameter. A lower setting conserves system resources on the segment hosts, but a higher setting may improve performance for power-users that want to issue many complex queries in a row.
Value Range | Default | Set Classifications |
---|---|---|
integer > 0 | 5 | master, session, reload |
Shows how many commands the master has received from the client. Note that a single SQL command might actually involve more than one command internally, so the counter may increment by more than one for a single query. This counter is also shared by all of the segment processes working on the command.
Value Range | Default | Set Classifications |
---|---|---|
integer > 0 | 1 | read only |
Timeout for sending data to unresponsive Greenplum Database user clients during query processing. A value of 0 deactivates the timeout; Greenplum Database waits indefinitely for a client. When the timeout is reached, the query is cancelled with this message:
Could not send data to client: Connection timed out.
Value Range | Default | Set Classifications |
---|---|---|
number of seconds | 3600 (1 hour) | master, system, reload |
The local content id of a segment.
Value Range | Default | Set Classifications |
---|---|---|
integer | | read only |
Beginning in version 6.21.0, the Resource Groups implementation was changed to calculate segment memory using gp_segment_configuration.hostname instead of gp_segment_configuration.address. This implementation can result in a lower memory limit value compared to the earlier code, for deployments where each host uses multiple IP addresses. In some cases, this change in behavior could lead to Out Of Memory errors when upgrading from an earlier version. Version 6.23.4 introduces a configuration parameter, gp_count_host_segments_using_address, that can be enabled to calculate segment memory using gp_segment_configuration.address if Out Of Memory errors are encountered after an upgrade. This parameter is disabled by default. This parameter will not be provided in Greenplum Version 7 because resource group memory calculation will no longer be dependent on the segments per host value.
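For example, if Out Of Memory errors appear after such an upgrade, the address-based calculation could be re-enabled with gpconfig (a system restart is required per the set classification below):
# requires a full restart to take effect
gpconfig -c gp_count_host_segments_using_address -v on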
Value Range | Default | Set Classifications |
---|---|---|
boolean | off | master, system, restart |
Controls table creation when a Greenplum Database table is created with a CREATE TABLE or CREATE TABLE AS command that does not contain a DISTRIBUTED BY clause.
For CREATE TABLE, if the value of the parameter is off (the default), and the table creation command does not contain a DISTRIBUTED BY clause, Greenplum Database chooses the table distribution key based on the command:
- If a LIKE or INHERITS clause is specified, then Greenplum copies the distribution key from the source or parent table.
- If PRIMARY KEY or UNIQUE constraints are specified, then Greenplum chooses the largest subset of all the key columns as the distribution key.
- If neither constraints nor a LIKE or INHERITS clause is specified, then Greenplum chooses the first suitable column as the distribution key. (Columns with geometric or user-defined data types are not eligible as Greenplum distribution key columns.)
If the value of the parameter is set to on, Greenplum Database applies a random distribution policy to the table when the DISTRIBUTED BY clause is not specified.
For a CREATE TABLE AS command that does not contain a distribution clause:
- If the value of the parameter is off, the table distribution policy is determined based on the command.
- If the value of the parameter is on, the table distribution policy is random.
For information about the Postgres Planner and GPORCA, see "Querying Data" in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
boolean | off | master, session, reload |
The local content dbid of a segment.
Value Range | Default | Set Classifications |
---|---|---|
integer | | read only |
Number of seconds for a Greenplum process to linger after a fatal internal error.
Value Range | Default | Set Classifications |
---|---|---|
Any valid time expression (number and unit) | 0 | master, session, reload |
Set the default values for the following table storage options when a table is created with the CREATE TABLE command:
- appendoptimized
  Note: You use the appendoptimized=value syntax to specify the append-optimized table storage type. appendoptimized is a thin alias for the appendonly legacy storage option. Greenplum Database stores appendonly in the catalog, and displays the same when listing the storage options for append-optimized tables.
- blocksize
- checksum
- compresstype
- compresslevel
- orientation
Specify multiple storage option values as a comma separated list.
You can set the storage options with this parameter instead of specifying the table storage options in the WITH clause of the CREATE TABLE command. The table storage options that are specified with the CREATE TABLE command override the values specified by this parameter.
Not all combinations of storage option values are valid. If the specified storage options are not valid, an error is returned. See the CREATE TABLE command for information about table storage options.
The defaults can be set for a database and user. If the server configuration parameter is set at different levels, this is the order of precedence, from highest to lowest, of the table storage values when a user logs into a database and creates a table:
1. The values specified in a CREATE TABLE command with the WITH clause or ENCODING clause.
2. The value of gp_default_storage_options that is set for the user with the ALTER ROLE...SET command.
3. The value of gp_default_storage_options that is set for the database with the ALTER DATABASE...SET command.
4. The value of gp_default_storage_options that is set for the Greenplum Database system with the gpconfig utility.
The parameter value is not cumulative. For example, if the parameter specifies the appendoptimized and compresstype options for a database, and a user logs in and sets the parameter to specify the value for the orientation option, the appendoptimized and compresstype values set at the database level are ignored.
This example ALTER DATABASE command sets the default orientation and compresstype table storage options for the database mytest.
ALTER DATABASE mytest SET gp_default_storage_options = 'orientation=column, compresstype=rle_type'
To create an append-optimized, column-oriented table with RLE compression in the mytest database, the user then needs to specify only appendoptimized=TRUE in the WITH clause.
This example gpconfig utility command sets the default storage option for a Greenplum Database system. If you set the defaults for multiple table storage options, the value must be enclosed in single quotes.
gpconfig -c 'gp_default_storage_options' -v 'appendoptimized=true, orientation=column'
This example gpconfig utility command shows the value of the parameter. The parameter value must be consistent across the Greenplum Database master and all segments.
gpconfig -s 'gp_default_storage_options'
Value Range | Default | Set Classifications1 |
---|---|---|
appendoptimized = TRUE or FALSE; blocksize = integer between 8192 and 2097152; checksum = TRUE or FALSE; compresstype = ZLIB or ZSTD or QUICKLZ or RLE_TYPE or NONE; compresslevel = integer between 0 and 19; orientation = ROW or COLUMN | appendoptimized=FALSE; blocksize=32768; checksum=TRUE; compresstype=none; compresslevel=0; orientation=ROW | master, session, reload |
Note: 1 The set classification when the parameter is set at the system level with the gpconfig utility.
Note: QuickLZ compression is available only in the commercial release of VMware Greenplum. Support for the QuickLZ compression algorithm is deprecated and will be removed in the next major release of VMware Greenplum.
Maximum number of TCP keepalive retransmits from a Greenplum Query Dispatcher to its Query Executors. It controls the number of consecutive keepalive retransmits that can be lost before a connection between a Query Dispatcher and a Query Executor is considered dead.
Value Range | Default | Set Classifications |
---|---|---|
0 to 127 | 0 (it uses the system default) | master, system, restart |
Time in seconds between issuing TCP keepalives from a Greenplum Query Dispatcher to its Query Executors.
Value Range | Default | Set Classifications |
---|---|---|
0 to 32767 | 0 (it uses the system default) | master, system, restart |
Time in seconds between TCP keepalive retransmits from a Greenplum Query Dispatcher to its Query Executors.
Value Range | Default | Set Classifications |
---|---|---|
0 to 32767 | 0 (it uses the system default) | master, system, restart |
Enables plans that can dynamically eliminate the scanning of partitions.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates two-phase aggregation for the Postgres Planner.
The default value is off; the Planner chooses the best aggregate path for a query based on the cost. When set to on, the Planner adds a deactivation cost to each of the first stage aggregate paths, which in turn forces the Planner to generate and choose a multi-stage aggregate path.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Activates or deactivates two-phase aggregation to compute a single distinct-qualified aggregate. This applies only to subqueries that include a single distinct-qualified aggregate function.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates three-phase aggregation and join to compute distinct-qualified aggregates. This applies only to subqueries that include one or more distinct-qualified aggregate functions.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates the dispatching of targeted query plans for queries that access data on a single segment. When on, queries that target rows on a single segment will only have their query plan dispatched to that segment (rather than to all segments). This significantly reduces the response time of qualifying queries as there is no interconnect setup involved. Direct dispatch does require more CPU utilization on the master.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Controls availability of the EXCHANGE DEFAULT PARTITION clause for ALTER TABLE. The default value for the parameter is off. The clause is not available and Greenplum Database returns an error if the clause is specified in an ALTER TABLE command.
If the value is on, Greenplum Database returns a warning stating that exchanging the default partition might result in incorrect results due to invalid data in the default partition.
Caution: Before you exchange the default partition, you must ensure the data in the table to be exchanged, the new default partition, is valid for the default partition. For example, the data in the new default partition must not contain data that would be valid in other leaf child partitions of the partitioned table. Otherwise, queries against the partitioned table with the exchanged default partition that are run by GPORCA might return incorrect results.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
When set to on, the Postgres Planner plans single row inserts so that they are sent directly to the correct segment instance (no motion operation required). This significantly improves performance of single-row-insert statements.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Controls whether the Greenplum Database Global Deadlock Detector is enabled to manage concurrent UPDATE and DELETE operations on heap tables to improve performance. See Inserting, Updating, and Deleting Data in the Greenplum Database Administrator Guide. The default is off; the Global Deadlock Detector is deactivated.
If the Global Deadlock Detector is deactivated (the default), Greenplum Database runs concurrent update and delete operations on a heap table serially.
If the Global Deadlock Detector is enabled, concurrent updates are permitted and the Global Deadlock Detector determines when a deadlock exists, and breaks the deadlock by cancelling one or more backend processes associated with the youngest transaction(s) involved.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, system, restart |
Activates or deactivates the data collection agents that populate the gpperfmon database.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | local, system, restart |
Activates or deactivates gathering data to a single node to compute distinct-qualified aggregates on grouping extension queries. When this parameter and gp_enable_groupext_distinct_pruning are both enabled, the Postgres Planner uses the cheaper plan.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates three-phase aggregation and join to compute distinct-qualified aggregates on grouping extension queries. Usually, enabling this parameter generates a cheaper query plan that the Postgres Planner will use in preference to the existing plan.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates the use of two or three-stage parallel aggregation plans by the Postgres Planner. This approach applies to any subquery with aggregation. If gp_enable_multiphase_agg is off, then gp_enable_agg_distinct and gp_enable_agg_distinct_pruning are deactivated.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
When enabled, the Postgres Planner applies query predicates to both table expressions in cases where the tables are joined on their distribution key column(s). Filtering both tables prior to doing the join (when possible) is more efficient.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Enables two-phase duplicate removal for SELECT DISTINCT queries (not SELECT COUNT(DISTINCT)). When enabled, it adds an extra SORT DISTINCT set of plan nodes before motioning. In cases where the distinct operation greatly reduces the number of rows, this extra SORT DISTINCT is much cheaper than the cost of sending the rows across the Interconnect.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Enables collection of query metrics. When query metrics collection is enabled, Greenplum Database collects metrics during query execution. The default is off.
After changing this configuration parameter, Greenplum Database must be restarted for the change to take effect.
The Greenplum Database metrics collection extension, when enabled, sends the collected metrics over UDP to a VMware Greenplum Command Center agent1.
Note: 1 The metrics collection extension is included in VMware's commercial version of Greenplum Database. VMware Greenplum Command Center is supported only with VMware Greenplum.
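For example, metrics collection could be enabled cluster-wide with gpconfig, followed by the restart this parameter requires:
# requires a full restart because the set classification is master, system, restart
gpconfig -c gp_enable_query_metrics -v on
gpstop -ar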
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, system, restart |
Enables GPORCA and the Postgres Planner to use the estimated size of a table (pg_relation_size function) if there are no statistics for the table. By default, GPORCA and the planner use a default value to estimate the number of rows if statistics are not available. The default behavior improves query optimization time and reduces resource queue usage in heavy workloads, but can lead to suboptimal plans.
This parameter is ignored for a root partition of a partitioned table. When GPORCA is enabled and the root partition does not have statistics, GPORCA always uses the default value. You can use ANALYZE ROOTPARTITION to collect statistics on the root partition. See ANALYZE.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Controls whether the distribution policy for a table (from the table DISTRIBUTED clause) is checked when data is copied into the table with the COPY FROM...ON SEGMENT command. If true, an error is returned if a row of data violates the distribution policy for a segment instance. The default is true.
If the value is false, the distribution policy is not checked. The data added to the table might violate the table distribution policy for the segment instance. Manual redistribution of table data might be required. See the ALTER TABLE clause WITH REORGANIZE.
The parameter can be set for a database system or a session. The parameter cannot be set for a specific database.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
Enable duplicates to be removed while sorting.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Enable LIMIT operation to be performed while sorting. Sorts more efficiently when the plan requires the first limit_number of rows at most.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Activates or deactivates the use of external tables that run OS commands or scripts on the segment hosts (CREATE EXTERNAL TABLE EXECUTE syntax). Must be enabled if using the Command Center or MapReduce features.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, system, restart |
Sets the number of segments that will scan external table data during an external table operation, the purpose being not to overload the system with scanning data and take away resources from other concurrent operations. This only applies to external tables that use the gpfdist:// protocol to access external table data.
Value Range | Default | Set Classifications |
---|---|---|
integer | 64 | master, session, reload |
Enable filter pushdown when reading data from external tables. If pushdown fails, a query is run without pushing filters to the external data source (instead, Greenplum Database applies the same constraints to the result). See Defining External Tables for more information.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Specifies the polling interval for the fault detection process (ftsprobe). The ftsprobe process will take approximately this amount of time to detect a segment failure.
Value Range | Default | Set Classifications |
---|---|---|
10 - 3600 seconds | 1min | master, system, reload |
Specifies the number of times the fault detection process (ftsprobe) attempts to connect to a segment before reporting segment failure.
Value Range | Default | Set Classifications |
---|---|---|
integer | 5 | master, system, reload |
Specifies the allowed timeout for the fault detection process (ftsprobe) to establish a connection to a segment before declaring it down.
Value Range | Default | Set Classifications |
---|---|---|
10 - 3600 seconds | 20 secs | master, system, reload |
Specifies the maximum number of times that Greenplum Database attempts to establish a primary-mirror replication connection. When this count is exceeded, the fault detection process (ftsprobe) stops retrying and marks the mirror down.
Value Range | Default | Set Classifications |
---|---|---|
0 - 100 | 10 | master, system, reload |
Specifies the executing interval (in seconds) of the global deadlock detector background worker process.
Value Range | Default | Set Classifications |
---|---|---|
5 - INT_MAX secs | 120 secs | master, system, reload |
If turned on, maintains the relation locks on all append-optimized leaf partitions involved in a query until the end of a transaction. Turning this parameter on can help avoid relatively rare visibility issues in queries, such as read beyond eof, when running concurrently with lazy VACUUMs directly on the leaves.
Note: Turning gp_keep_partition_children_locks on implies that an additional lock will be held for each append-optimized child in each partition hierarchy involved in a query, until the end of the transaction. You may need to increase the value of max_locks_per_transaction.
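For example, a session whose queries run alongside lazy VACUUMs on append-optimized leaves could enable the extra locking for that session (keep the max_locks_per_transaction note above in mind):
-- hold leaf-partition locks until the end of the transaction
SET gp_keep_partition_children_locks = on;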
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload |
Controls the amount of parallel retrieve cursor endpoint detail that Greenplum Database writes to the server log file.
The default value is false; Greenplum Database does not log endpoint details to the log file. When set to true, Greenplum writes endpoint detail information to the log file.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload |
Controls the amount of detail the fault detection process (ftsprobe) writes to the log file.
Value Range | Default | Set Classifications |
---|---|---|
OFF TERSE VERBOSE DEBUG | TERSE | master, system, restart |
Controls the amount of information that is written to the log file about communication between Greenplum Database segment instance worker processes. The default value is terse. The log information is written to both the master and segment instance logs.
Increasing the amount of logging could affect performance and increase disk space usage.
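For example, assuming this entry describes the gp_log_interconnect parameter referenced later in this section, a session debugging interconnect problems could raise the level temporarily:
-- verbose interconnect logging for this session only
SET gp_log_interconnect = 'debug';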
Value Range | Default | Set Classifications |
---|---|---|
off terse verbose debug | terse | master, session, reload |
Controls the amount of information that is written to the log file about query worker process creation and query management. The default value is OFF; no information is logged.
Value Range | Default | Set Classifications |
---|---|---|
OFF TERSE VERBOSE DEBUG | OFF | master, session, restart |
Controls the logging of per-statement sleep time when resource queue-based resource management is active. You can use this information for analysis of sleep time for queries.
The default value is false; the statement sleep time is not logged. When set to true, Greenplum Database logs the sleep time for each statement. The information is written to the server log.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload |
Controls whether Greenplum logs statements that cause subtransaction overflow. See Checking for and Terminating Overflowed Backends.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload, superuser |
Sets the frequency that the Greenplum Database server processes send query execution updates to the data collection agent processes used to populate the gpperfmon database. Query operations executed during this interval are sent through UDP to the segment monitor agents. If you find that an excessive number of UDP packets are dropped during long-running, complex queries, you may consider increasing this value.
Value Range | Default | Set Classifications |
---|---|---|
Any valid time expression (number and unit) | 1sec | master, session, reload, superuser |
Controls the time (in seconds) that Greenplum Database waits before returning an error when Greenplum Database is attempting to connect or write to a gpfdist server and gpfdist does not respond. The default value is 300 (5 minutes). A value of 0 deactivates the timeout.
Value Range | Default | Set Classifications |
---|---|---|
0 - INT_MAX (2147483647) | 300 | local, session, reload |
Controls which message levels are written to the gpperfmon log. Each level includes all the levels that follow it. The later the level, the fewer messages are sent to the log.
Note: If the gpperfmon database is installed and is monitoring the database, the default value is warning.
Value Range | Default | Set Classifications |
---|---|---|
none warning error fatal panic | none | local, session, reload |
Sets the target density of the hash table used by HashJoin operations. A smaller value will tend to produce larger hash tables, which can increase join performance.
Value Range | Default | Set Classifications |
---|---|---|
integer | 5 | master, session, reload |
Controls Greenplum Database behavior when the deprecated INTO ERROR TABLE clause is specified in a CREATE EXTERNAL TABLE or COPY command.
Note: The INTO ERROR TABLE clause was deprecated and removed in Greenplum Database 5. In Greenplum Database 7, this parameter will be removed as well, causing all INTO ERROR TABLE invocations to yield a syntax error.
The default value is false; Greenplum Database returns an error if the INTO ERROR TABLE clause is specified in a command.
If the value is true, Greenplum Database ignores the clause, issues a warning, and runs the command without the INTO ERROR TABLE clause. In Greenplum Database 5.x and later, you access the error log information with built-in SQL functions. See the CREATE EXTERNAL TABLE or COPY command.
You can set this value to true to avoid the Greenplum Database error when you run applications that run CREATE EXTERNAL TABLE or COPY commands that include the Greenplum Database 4.3.x INTO ERROR TABLE clause.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload |
For the parameter value n, Greenplum Database stops processing input rows when you import data with the COPY command or from an external table if the first n rows processed contain formatting errors. If a valid row is processed within the first n rows, Greenplum Database continues processing input rows.
Setting the value to 0 deactivates this limit.
The SEGMENT REJECT LIMIT clause can also be specified for the COPY command or the external table definition to limit the number of rejected rows.
INT_MAX is the largest value that can be stored as an integer on your system.
Value Range | Default | Set Classifications |
---|---|---|
integer 0 - INT_MAX | 1000 | master, session, reload |
The amount of shared memory, in kilobytes, allocated for query metrics. The default is 5120 and the maximum is 131072. At startup, if gp_enable_query_metrics is set to on, Greenplum Database allocates space in shared memory to save query metrics. This memory is organized as a header and a list of slots. The number of slots needed depends on the number of concurrent queries and the number of execution plan nodes per query. The default value, 5120, is based on a Greenplum Database system that runs a maximum of about 250 concurrent queries with 120 nodes per query. If the gp_enable_query_metrics configuration parameter is off, or if the slots are exhausted, the metrics are maintained in local memory instead of in shared memory.
Value Range | Default | Set Classifications |
---|---|---|
integer 0 - 131072 | 5120 | master, system, restart |
Specifies the type of address binding strategy Greenplum Database uses for communication between segment host sockets. There are two types: unicast and wildcard. The default is wildcard.
When this parameter is set to unicast, Greenplum Database uses the gp_segment_configuration.address field to perform address binding. This reduces port usage on segment hosts and prevents interconnect traffic from being routed through unintended (and possibly slower) network interfaces.
When this parameter is set to wildcard, Greenplum Database uses a wildcard address for binding, enabling the use of any network interface compliant with routing rules.
Note: In some cases, inter-segment communication using the unicast strategy may not be possible. One example is if the source segment's address field and the destination segment's address field are on different subnets and/or existing routing rules do not allow for such communication. In these cases, you must configure this parameter to use a wildcard address for address binding.
Value Range | Default | Set Classifications |
---|---|---|
wildcard, unicast | wildcard | local, system, reload |
Specifies the size of the Cursor History Table for UDP interconnect. Although it is not usually necessary, you may increase it if running a user-defined function which contains many concurrent cursor queries hangs. The default value is 128.
Value Range | Default | Set Classifications |
---|---|---|
128-102400 | 128 | master, session, reload |
Specifies the interval, in seconds, to log Greenplum Database interconnect debugging messages when the server configuration parameter gp_log_interconnect is set to DEBUG. The default is 10 seconds.
The log messages contain information about the interconnect communication between Greenplum Database segment instance worker processes. The information can be helpful when debugging network issues between segment instances.
Value Range | Default | Set Classifications |
---|---|---|
1 <= Integer < 4096 | 10 | master, session, reload |
Specifies the flow control method used for the default Greenplum Database UDPIFC interconnect.
For capacity based flow control, senders do not send packets when receivers do not have the capacity.
Loss based flow control is based on capacity based flow control, and also tunes the sending speed according to packet losses.
Value Range | Default | Set Classifications |
---|---|---|
CAPACITY LOSS | LOSS | master, session, reload |
Sets the proxy ports that Greenplum Database uses when the server configuration parameter gp_interconnect_type is set to proxy. Otherwise, this parameter is ignored. The default value is an empty string ("").
When the gp_interconnect_type parameter is set to proxy, you must specify a proxy port for the master, standby master, and all primary and mirror segment instances in this format:
<db_id>:<cont_id>:<seg_address>:<port>[, ... ]
For the master, standby master, and segment instance, the first three fields, db_id, cont_id, and seg_address, can be found in the gp_segment_configuration catalog table. The fourth field, port, is the proxy port for the Greenplum master or a segment instance.
- db_id corresponds to the dbid column in the catalog table.
- cont_id corresponds to the content column in the catalog table.
- seg_address corresponds to the address column in the catalog table.
Important: If a segment instance hostname is bound to a different IP address at runtime, you must run gpstop -U to re-load the gp_interconnect_proxy_addresses value.
You must specify the value as a single-quoted string. This gpconfig command sets the value for gp_interconnect_proxy_addresses as a single-quoted string. The Greenplum system consists of a master and a single segment instance.
gpconfig --skipvalidation -c gp_interconnect_proxy_addresses -v "'1:-1:192.168.180.50:35432,2:0:192.168.180.54:35000'"
For an example of setting gp_interconnect_proxy_addresses, see Configuring Proxies for the Greenplum Interconnect.
Value Range | Default | Set Classifications |
---|---|---|
string (maximum length - 16384 bytes) | "" (empty string) | local, system, reload |
Sets the amount of data per-peer to be queued by the Greenplum Database interconnect on receivers (when data is received but no space is available to receive it the data will be dropped, and the transmitter will need to resend it) for the default UDPIFC interconnect. Increasing the depth from its default value will cause the system to use more memory, but may increase performance. It is reasonable to set this value between 1 and 10. Queries with data skew potentially perform better with an increased queue depth. Increasing this may radically increase the amount of memory used by the system.
Value Range | Default | Set Classifications |
---|---|---|
1-2048 | 4 | master, session, reload |
Specifies the amount of time, in seconds, that Greenplum Database waits for the interconnect to complete setup before it times out.
Value Range | Default | Set Classifications |
---|---|---|
0 - 7200 seconds | 7200 seconds (2 hours) | master, session, reload |
Sets the amount of data per-peer to be queued by the default UDPIFC interconnect on senders. Increasing the depth from its default value will cause the system to use more memory, but may increase performance. Reasonable values for this parameter are between 1 and 4. Increasing the value might radically increase the amount of memory used by the system.
Value Range | Default | Set Classifications |
---|---|---|
1 - 4096 | 2 | master, session, reload |
Specifies the amount of time, in seconds, that Greenplum Database waits for network transmission of interconnect traffic to complete before it times out.
Value Range | Default | Set Classifications |
---|---|---|
1 - 7200 seconds | 3600 seconds (1 hour) | master, session, reload |
Sets the networking protocol used for Greenplum Database interconnect traffic. Valid values are UDPIFC, TCP, and PROXY.
UDPIFC (the default) specifies using UDP with flow control for interconnect traffic. Specify the interconnect flow control method with gp_interconnect_fc_method.
With TCP as the interconnect protocol, Greenplum Database has an upper limit of 1000 segment instances - less than that if the query workload involves complex, multi-slice queries.
The PROXY
value specifies using the TCP protocol, and when running queries, using a proxy for Greenplum interconnect communication between the master instance and segment instances and between two segment instances. When this parameter is set to PROXY
, you must specify the proxy ports for the master and segment instances with the server configuration parameter gp_interconnect_proxy_addresses. For information about configuring and using proxies with the Greenplum interconnect, see Configuring Proxies for the Greenplum Interconnect.
Value Range | Default | Set Classifications |
---|---|---|
UDPIFC, TCP, PROXY | UDPIFC | local, session, reload |
Specifies the format of the server log files. If you use the gp_toolkit administrative schema, the log files must be in CSV format.
Value Range | Default | Set Classifications |
---|---|---|
csv text |
csv | local, system, restart |
Sets the maximum number of distributed transaction log entries to cache in the backend process memory of a segment instance.
The log entries contain information about the state of rows that are being accessed by an SQL statement. The information is used to determine which rows are visible to an SQL transaction when running multiple simultaneous SQL statements in an MVCC environment. Caching distributed transaction log entries locally improves transaction processing speed by improving performance of the row visibility determination process.
The default value is optimal for a wide variety of SQL processing environments.
Value Range | Default | Set Classifications |
---|---|---|
integer | 1024 | local, system, restart |
Sets the tuple-serialization chunk size for the Greenplum Database interconnect.
Value Range | Default | Set Classifications |
---|---|---|
512-65536 | 8192 | master, system, reload |
Specifies the maximum number of active parallel retrieve cursors allowed on a Greenplum Database cluster. A parallel retrieve cursor is considered active after it has been DECLARE
d, but before it is CLOSE
d or returns an error.
The default value is -1
; there is no limit on the number of open parallel retrieve cursors that may be concurrently active in the cluster (up to the maximum value of 1024).
You must be a superuser to change the gp_max_parallel_cursors
setting.
Value Range | Default | Set Classifications |
---|---|---|
-1 - 1024 | -1 | master, superuser, session, reload |
Specifies the total maximum uncompressed size of a query execution plan multiplied by the number of Motion operators (slices) in the plan. If the size of the query plan exceeds the value, the query is cancelled and an error is returned. A value of 0 means that the size of the plan is not monitored.
You can specify a value in kB
, MB
, or GB
. The default unit is kB
. For example, a value of 200
is 200kB. A value of 1GB
is the same as 1024MB
or 1048576kB
.
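For example, a superuser could cap plan size for the current session as follows (a minimal illustration; the limit value is arbitrary):
SET gp_max_plan_size = '200MB';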
Value Range | Default | Set Classifications |
---|---|---|
integer | 0 | master, superuser, session, reload |
Specifies the maximum number of slices (portions of a query plan that are run on segment instances) that can be generated by a query. If the query generates more than the specified number of slices, Greenplum Database returns an error and does not run the query. The default value is 0
, no maximum value.
Running a query that generates a large number of slices might affect Greenplum Database performance. For example, a query that contains UNION
or UNION ALL
operators over several complex views can generate a large number of slices. You can run EXPLAIN ANALYZE
on the query to view slice statistics for the query.
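For example, the following session-level commands (an illustrative sketch; the sales table is hypothetical) cap a query at 50 slices and display the slice statistics that EXPLAIN ANALYZE reports:
SET gp_max_slices = 50;
EXPLAIN ANALYZE SELECT region, count(*) FROM sales GROUP BY region;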
Value Range | Default | Set Classifications |
---|---|---|
0 - INT_MAX | 0 | master, session, reload |
Sets the Postgres Planner cost estimate for a Motion operator to transfer a row from one segment to another, measured as a fraction of the cost of a sequential page fetch. If 0, then the value used is two times the value of cpu_tuple_cost.
Value Range | Default | Set Classifications |
---|---|---|
floating point | 0 | master, session, reload |
When a user starts a session with Greenplum Database and issues a query, the system creates groups or 'gangs' of worker processes on each segment to do the work. gp_print_create_gang_time
controls the display of additional information about gang creation, including gang reuse status and the shortest and longest connection establishment time to the segment.
The default value is false
, Greenplum Database does not display the additional gang creation information.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload |
Controls the availability of the RECURSIVE
keyword in the WITH
clause of a SELECT [INTO]
command, or a DELETE
, INSERT
or UPDATE
command. The keyword allows a subquery in the WITH
clause of a command to reference itself. The default value is true
, the RECURSIVE
keyword is allowed in the WITH
clause of a command.
For information about the RECURSIVE
keyword, see the SELECT command and WITH Queries (Common Table Expressions).
The parameter can be set for a database system, an individual database, or a session or query.
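For example, with gp_recursive_cte set to true (the default), a self-referencing subquery such as the following minimal illustration runs as expected:
WITH RECURSIVE t(n) AS (
    SELECT 1
  UNION ALL
    SELECT n + 1 FROM t WHERE n < 5
)
SELECT n FROM t;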
Note: This parameter was previously named
gp_recursive_cte_prototype
, but has been renamed to reflect the current status of the implementation.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, restart |
For single row error handling on COPY and external table SELECTs, sets the number of rows processed before SEGMENT REJECT LIMIT n PERCENT starts calculating.
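For example, in the following external table definition (an illustrative sketch; the host, file, and column names are hypothetical), the 5 percent reject limit is evaluated only after the default threshold of 300 rows has been processed:
CREATE EXTERNAL TABLE ext_sales (id int, amount numeric)
LOCATION ('gpfdist://filehost:8081/sales.txt')
FORMAT 'TEXT' (DELIMITER '|')
LOG ERRORS SEGMENT REJECT LIMIT 5 PERCENT;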
Value Range | Default | Set Classifications |
---|---|---|
1-n | 300 | master, session, reload |
If enabled, will attempt to dump core if a fatal server error occurs.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Note: The
gp_resgroup_memory_policy
server configuration parameter is enforced only when resource group-based resource management is active.
Used by a resource group to manage memory allocation to query operators.
When set to auto
, Greenplum Database uses resource group memory limits to distribute memory across query operators, allocating a fixed size of memory to non-memory-intensive operators and the rest to memory-intensive operators.
When you specify eager_free
, Greenplum Database distributes memory among operators more optimally by re-allocating memory released by operators that have completed their processing to operators in a later query stage.
Value Range | Default | Set Classifications |
---|---|---|
auto, eager_free | eager_free | local, system, superuser, reload |
Note: The
gp_resource_group_bypass
server configuration parameter is enforced only when resource group-based resource management is active.
Activates or deactivates the enforcement of resource group concurrent transaction limits on Greenplum Database resources. The default value is false
, which enforces resource group transaction limits. Resource groups manage resources such as CPU, memory, and the number of concurrent transactions that are used by queries and external components such as PL/Container.
You can set this parameter to true
to bypass resource group concurrent transaction limitations so that a query can run immediately. For example, you can set the parameter to true
for a session to run a system catalog query or a similar query that requires a minimal amount of resources.
When you set this parameter to true
and run a query, the query runs immediately, bypassing the resource group's concurrent transaction limits.
This parameter can be set for a session. The parameter cannot be set within a transaction or a function.
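For example (a minimal sketch), a session can bypass its resource group's concurrency limits to run a lightweight catalog query, then restore the default behavior:
SET gp_resource_group_bypass = true;
SELECT relname FROM pg_catalog.pg_class LIMIT 10;
SET gp_resource_group_bypass = false;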
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | local, session, reload |
Note: The
gp_resource_group_bypass_catalog_query
server configuration parameter is enforced only when resource group-based resource management is active.
The default value for this configuration parameter is false
, Greenplum Database's resource group scheduler enforces resource group limits on catalog queries. Note that when false
and the database has reached the maximum amount of concurrent transactions, the scheduler can block queries that exclusively read from system catalogs.
When set to true
Greenplum Database's resource group scheduler bypasses all queries that fulfill both of the following criteria:
- The query reads from pg_catalog schema tables only.
Note: If a query contains a mix of pg_catalog and any other schema tables, the scheduler will not bypass the query.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | local, session, reload |
Enables the Ceiling Enforcement mode when assigning CPU resources by Percentage. When deactivated, the Elastic mode will be used.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | local, system, restart |
Note: The
gp_resource_group_cpu_limit
server configuration parameter is enforced only when resource group-based resource management is active.
Identifies the maximum percentage of system CPU resources to allocate to resource groups on each Greenplum Database segment node.
Value Range | Default | Set Classifications |
---|---|---|
0.1 - 1.0 | 0.9 | local, system, restart |
Sets the CPU priority for Greenplum processes relative to non-Greenplum processes when resource groups are enabled. For example, setting this parameter to 10
sets the ratio of allotted CPU resources for Greenplum processes to non-Greenplum processes to 10:1.
Note: This ratio calculation applies only when the machine's CPU usage is at 100%.
Value Range | Default | Set Classifications |
---|---|---|
1 - 50 | 10 | local, system, restart |
Note: The
gp_resource_group_enable_recalculate_query_mem
server configuration parameter is enforced only when resource group-based resource management is active.
Specifies whether or not Greenplum Database recalculates the maximum amount of memory to allocate on a segment host per query running in a resource group. The default value is false
, which means Greenplum Database calculates the maximum per-query memory on a segment host based on the memory configuration and the number of primary segments on the master host. When set to true
, Greenplum Database recalculates the maximum per-query memory on a segment host based on the memory and the number of primary segments configured for that segment host.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload |
Note: The
gp_resource_group_memory_limit
server configuration parameter is enforced only when resource group-based resource management is active.
Identifies the maximum percentage of system memory resources to allocate to resource groups on each Greenplum Database segment node.
Value Range | Default | Set Classifications |
---|---|---|
0.1 - 1.0 | 0.7 | local, system, restart |
Note: When resource group-based resource management is active, the memory allotted to a segment host is equally shared by active primary segments. Greenplum Database assigns memory to primary segments when the segment takes the primary role. The initial memory allotment to a primary segment does not change, even in a failover situation. This may result in a segment host utilizing more memory than the
gp_resource_group_memory_limit
setting permits.
For example, suppose your Greenplum Database cluster is utilizing the default gp_resource_group_memory_limit
of 0.7
and a segment host named seghost1
has 4 primary segments and 4 mirror segments. Greenplum Database assigns each primary segment on seghost1
(0.7 / 4 = 0.175)
of overall system memory. If failover occurs and two mirrors on seghost1
fail over to become primary segments, each of the original 4 primaries retain their memory allotment of 0.175
, and the two new primary segments are each allotted (0.7 / 6 = 0.116)
of system memory. seghost1
's overall memory allocation in this scenario is
0.7 + (0.116 * 2) = 0.932
which is above the percentage configured in the gp_resource_group_memory_limit
setting.
Note: The
gp_resource_group_queuing_timeout
server configuration parameter is enforced only when resource group-based resource management is active.
Cancel a transaction queued in a resource group that waits longer than the specified number of milliseconds. The time limit applies separately to each transaction. The default value is zero; transactions are queued indefinitely and never time out.
Value Range | Default | Set Classifications |
---|---|---|
0 - INT_MAX millisecs |
0 millisecs | master, session, reload |
Identifies the resource management scheme currently enabled in the Greenplum Database cluster. The default scheme is to use resource queues. For information about Greenplum Database resource management, see Managing Resources.
Value Range | Default | Set Classifications |
---|---|---|
group queue |
queue | local, system, restart |
Note: The
gp_resqueue_memory_policy
server configuration parameter is enforced only when resource queue-based resource management is active.
Enables Greenplum memory management features. The distribution algorithm eager_free
takes advantage of the fact that not all operators run at the same time (in Greenplum Database 4.2 and later). The query plan is divided into stages and Greenplum Database eagerly frees memory allocated to a previous stage at the end of that stage's execution, then allocates the eagerly freed memory to the new stage.
When set to none
, memory management is the same as in Greenplum Database releases prior to 4.1.
When set to auto
, query memory usage is controlled by statement_mem and resource queue memory limits.
Value Range | Default | Set Classifications |
---|---|---|
none, auto, eager_free | eager_free | local, session, reload |
Note: The
gp_resqueue_priority
server configuration parameter is enforced only when resource queue-based resource management is active.
Activates or deactivates query prioritization. When this parameter is deactivated, existing priority settings are not evaluated at query run time.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | local, system, restart |
Note: The
gp_resqueue_priority_cpucores_per_segment
server configuration parameter is enforced only when resource queue-based resource management is active.
Specifies the number of CPU units allocated to each segment instance on a segment host. If the segment is configured with primary-mirror segment instance pairs, use the number of primary segment instances on the host in the calculation. Include any CPU core that is available to the operating system, including virtual CPU cores, in the total number of available cores.
For example, if a Greenplum Database cluster has 10-core segment hosts that are configured with four primary segments, set the value to 2.5 on each segment host (10 divided by 4). A master host typically has only a single running master instance, so set the value on the master and standby master hosts to reflect the usage of all available CPU cores, in this case 10.
Incorrect settings can result in CPU under-utilization or query prioritization not working as designed.
Value Range | Default | Set Classifications |
---|---|---|
0.1 - 512.0 | 4 | local, system, restart |
Note: The
gp_resqueue_priority_sweeper_interval
server configuration parameter is enforced only when resource queue-based resource management is active.
Specifies the interval at which the sweeper process evaluates current CPU usage. When a new statement becomes active, its priority is evaluated and its CPU share determined when the next interval is reached.
Value Range | Default | Set Classifications |
---|---|---|
500 - 15000 ms | 1000 | local, system, restart |
A session that you initiate with PGOPTIONS='-c gp_retrieve_conn=true'
is a retrieve session. You use a retrieve session to retrieve query result tuples from a specific endpoint instantiated for a parallel retrieve cursor.
The default value is false
.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | read only |
Specifies a minimum size for safe write operations to append-optimized tables in a non-mature file system. When a number of bytes greater than zero is specified, the append-optimized writer adds padding data up to that number in order to prevent data corruption due to file system errors. Each non-mature file system has a known safe write size that must be specified here when using Greenplum Database with that type of file system. This is commonly set to a multiple of the extent size of the file system; for example, Linux ext3 is 4096 bytes, so a value of 32768 is commonly used.
Value Range | Default | Set Classifications |
---|---|---|
integer | 0 | local, system, reload |
Time that the Greenplum interconnect will try to connect to a segment instance over the network before timing out. Controls the network connection timeout between master and primary segments, and primary to mirror segment replication processes.
Value Range | Default | Set Classifications |
---|---|---|
Any valid time expression (number and unit) | 3min | local, session, reload |
Sets the number of primary segment instances for the Postgres Planner to assume in its cost and size estimates. If 0, then the value used is the actual number of primary segments. This variable affects the Postgres Planner's estimates of the number of rows handled by each sending and receiving process in Motion operators.
Value Range | Default | Set Classifications |
---|---|---|
0-n | 0 | master, session, reload |
Reports the version number of the server as a string. A version modifier argument might be appended to the numeric portion of the version string, example: 5.0.0 beta.
Value Range | Default | Set Classifications |
---|---|---|
String. Examples: 5.0.0 | n/a | read only |
Reports the version number of the server as an integer. The number is guaranteed to always be increasing for each version and can be used for numeric comparisons. The major version is represented as is, the minor and patch versions are zero-padded to always be double digit wide.
Value Range | Default | Set Classifications |
---|---|---|
Mmmpp where M is the major version, mm is the minor version zero-padded and pp is the patch version zero-padded. Example: 50000 | n/a | read only |
A system assigned ID number for a client session. Starts counting from 1 when the master instance is first started.
Value Range | Default | Set Classifications |
---|---|---|
1-n | 14 | read only |
The role of this server process is set to dispatch for the master and execute for a segment.
Value Range | Default | Set Classifications |
---|---|---|
dispatch execute utility |
read only |
If enabled, when a Greenplum server process (postmaster) is started it will bind to a CPU.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, system, restart |
Set to on to deactivate writes to the database. Any in-progress transactions must finish before read-only mode takes effect.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, system, restart |
Enables the use of statistics from child tables when planning queries on the parent table by the Postgres Planner.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
When enabled, the Postgres Planner uses the statistics of the referenced column in the parent table when a column is a foreign key reference to another table, instead of the statistics of the column itself.
Note: This parameter is deprecated and will be removed in a future Greenplum Database release.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
For a table that is defined with a DISTRIBUTED BY key_column
clause, this parameter controls the hash algorithm that is used to distribute table data among segment instances. The default value is false
, use the jump consistent hash algorithm.
Setting the value to true
uses the modulo hash algorithm that is compatible with Greenplum Database 5.x and earlier releases.
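For example (an illustrative sketch; the table name is hypothetical), a table created while the parameter is true uses the legacy modulo hash operator classes for its distribution key:
SET gp_use_legacy_hashops = true;
CREATE TABLE legacy_dist_tbl (id int, val text) DISTRIBUTED BY (id);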
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload |
If a database session is idle for longer than the time specified, the session will free system resources (such as shared memory), but remain connected to the database. This allows more concurrent connections to the database at one time.
Value Range | Default | Set Classifications |
---|---|---|
Any valid time expression (number and unit) | 18s | master, session, reload |
Note: The
gp_vmem_protect_limit
server configuration parameter is enforced only when resource queue-based resource management is active.
Sets the amount of memory (in number of MBs) that all postgres
processes of an active segment instance can consume. If a query causes this limit to be exceeded, memory will not be allocated and the query will fail. Note that this is a local parameter and must be set for every segment in the system (primary and mirrors). When setting the parameter value, specify only the numeric value. For example, to specify 4096MB, use the value 4096
. Do not add the units MB
to the value.
To prevent over-allocation of memory, these calculations can estimate a safe gp_vmem_protect_limit
value.
First calculate the value of gp_vmem
. This is the Greenplum Database memory available on a host.
If the total system memory is less than 256 GB, use this formula:
gp_vmem = ((SWAP + RAM) – (7.5GB + 0.05 * RAM)) / 1.7
If the total system memory is equal to or greater than 256 GB, use this formula:
gp_vmem = ((SWAP + RAM) – (7.5GB + 0.05 * RAM)) / 1.17
where SWAP is the host swap space and RAM is the RAM on the host in GB.
Next, calculate the max_acting_primary_segments
. This is the maximum number of primary segments that can be running on a host when mirror segments are activated due to a failure. With mirrors arranged in a 4-host block with 8 primary segments per host, for example, a single segment host failure would activate two or three mirror segments on each remaining host in the failed host's block. The max_acting_primary_segments
value for this configuration is 11 (8 primary segments plus 3 mirrors activated on failure).
This is the calculation for gp_vmem_protect_limit
. The value should be converted to MB.
gp_vmem_protect_limit = <gp_vmem> / <acting_primary_segments>
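As a worked illustration only (assumed values: 256GB RAM, 64GB swap, and the 11 acting primary segments from the example above):
gp_vmem = ((64GB + 256GB) - (7.5GB + 0.05 * 256GB)) / 1.17 ≈ 256GB
gp_vmem_protect_limit = 256GB / 11 ≈ 23GB, so you would set the parameter to approximately 23552 (MB)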
For scenarios where a large number of workfiles are generated, this is the calculation for gp_vmem
that accounts for the workfiles.
If the total system memory is less than 256 GB:
<gp_vmem> = ((<SWAP> + <RAM>) – (7.5GB + 0.05 * <RAM> - (300KB * <total_#_workfiles>))) / 1.7
If the total system memory is equal to or greater than 256 GB:
<gp_vmem> = ((<SWAP> + <RAM>) – (7.5GB + 0.05 * <RAM> - (300KB * <total_#_workfiles>))) / 1.17
For information about monitoring and managing workfile usage, see the Greenplum Database Administrator Guide.
Based on the gp_vmem
value you can calculate the value for the vm.overcommit_ratio
operating system kernel parameter. This parameter is set when you configure each Greenplum Database host.
vm.overcommit_ratio = (<RAM> - (0.026 * <gp_vmem>)) / <RAM>
Note: The default value for the kernel parameter
vm.overcommit_ratio
in Red Hat Enterprise Linux is 50.
For information about the kernel parameter, see the Greenplum Database Installation Guide.
Value Range | Default | Set Classifications |
---|---|---|
integer | 8192 | local, system, restart |
If a query executor process consumes more than this configured amount, then the process will not be cached for use in subsequent queries after the process completes. Systems with lots of connections or idle processes may want to reduce this number to free more memory on the segments. Note that this is a local parameter and must be set for every segment.
Value Range | Default | Set Classifications |
---|---|---|
number of megabytes | 500 | local, system, restart |
Specifies whether the temporary files created when a hash aggregation or hash join operation spills to disk are compressed.
If your Greenplum Database installation uses serial ATA (SATA) disk drives, enabling compression might help to avoid overloading the disk subsystem with IO operations.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Sets the maximum number of temporary spill files (also known as workfiles) allowed per query per segment. Spill files are created when running a query that requires more memory than it is allocated. The current query is terminated when the limit is exceeded.
Set the value to 0 (zero) to allow an unlimited number of spill files.
Value Range | Default | Set Classifications |
---|---|---|
integer | 100000 | master, session, reload |
Sets the maximum disk size an individual query is allowed to use for creating temporary spill files at each segment. The default value is 0, which means a limit is not enforced.
Value Range | Default | Set Classifications |
---|---|---|
kilobytes | 0 | master, session, reload |
Sets the maximum total disk size that all running queries are allowed to use for creating temporary spill files at each segment. The default value is 0, which means a limit is not enforced.
Value Range | Default | Set Classifications |
---|---|---|
kilobytes | 0 | local, system, restart |
Controls the time (in seconds) that Greenplum Database waits before returning an error when Greenplum Database is attempting to connect or write to a gpfdist server and gpfdist
does not respond. The default value is 300 (5 minutes). A value of 0 deactivates the timeout.
Value Range | Default | Set Classifications |
---|---|---|
integer | 300 | master, system, restart |
Only has effect if data_checksums is enabled.
Greenplum Database uses checksums to prevent loading data that has been corrupted in the file system into memory managed by database processes.
By default, when a checksum verify error occurs when reading a heap data page, Greenplum Database generates an error and prevents the page from being loaded into managed memory. When ignore_checksum_failure
is set to on and a checksum verify failure occurs, Greenplum Database generates a warning, and allows the page to be read into managed memory. If the page is then updated it is saved to disk and replicated to the mirror. If the page header is corrupt an error is reported even if this option is enabled.
Caution: Setting
ignore_checksum_failure
to on may propagate or hide data corruption or lead to other serious problems. However, if a checksum failure has already been detected and the page header is uncorrupted, settingignore_checksum_failure
to on may allow you to bypass the error and recover undamaged tuples that may still be present in the table.
The default setting is off, and it can only be changed by a superuser.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | local, session, reload |
Reports whether PostgreSQL was built with support for 64-bit-integer dates and times.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | read only |
Sets the display format for interval values. The value sql_standard produces output matching SQL standard interval literals. The value postgres produces output matching PostgreSQL releases prior to 8.4 when the DateStyle parameter was set to ISO.
The value postgres_verbose produces output matching Greenplum releases prior to 3.3 when the DateStyle parameter was set to non-ISO output.
The value iso_8601 will produce output matching the time interval format with designators defined in section 4.4.3.2 of ISO 8601. See the PostgreSQL 9.4 documentation for more information.
Value Range | Default | Set Classifications |
---|---|---|
postgres postgres_verbose sql_standard iso_8601 |
postgres | master, session, reload |
The Postgres Planner will rewrite explicit inner JOIN
constructs into lists of FROM
items whenever a list of no more than this many items in total would result. By default, this variable is set the same as from_collapse_limit, which is appropriate for most uses. Setting it to 1 prevents any reordering of inner JOINs. Setting this variable to a value between 1 and from_collapse_limit might be useful to trade off planning time against the quality of the chosen plan (higher values produce better plans).
Value Range | Default | Set Classifications |
---|---|---|
1-n | 20 | master, session, reload |
Sets whether Kerberos user names should be treated case-insensitively. The default is case sensitive (off).
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, system, reload |
Sets the location of the Kerberos server key file.
Value Range | Default | Set Classifications |
---|---|---|
path and file name | unset | master, system, restart |
Reports the locale in which sorting of textual data is done. The value is determined when the Greenplum Database array is initialized.
Value Range | Default | Set Classifications |
---|---|---|
<system dependent> | read only |
Reports the locale that determines character classifications. The value is determined when the Greenplum Database array is initialized.
Value Range | Default | Set Classifications |
---|---|---|
<system dependent> | read only |
Sets the language in which messages are displayed. The locales available depend on what was installed with your operating system - use locale -a to list available locales. The default value is inherited from the execution environment of the server. On some systems, this locale category does not exist. Setting this variable will still work, but there will be no effect. Also, there is a chance that no translated messages for the desired language exist. In that case you will continue to see the English messages.
Value Range | Default | Set Classifications |
---|---|---|
<system dependent> | local, session, reload |
Sets the locale to use for formatting monetary amounts, for example with the to_char family of functions. The locales available depend on what was installed with your operating system - use locale -a to list available locales. The default value is inherited from the execution environment of the server.
Value Range | Default | Set Classifications |
---|---|---|
<system dependent> | local, session, reload |
Sets the locale to use for formatting numbers, for example with the to_char family of functions. The locales available depend on what was installed with your operating system - use locale -a to list available locales. The default value is inherited from the execution environment of the server.
Value Range | Default | Set Classifications |
---|---|---|
<system dependent> | local, system, restart |
This parameter currently does nothing, but may in the future.
Value Range | Default | Set Classifications |
---|---|---|
<system dependent> | local, system, restart |
Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications - a comma-separated list of host names and/or numeric IP addresses. The special entry * corresponds to all available IP interfaces. If the list is empty, only UNIX-domain sockets can connect.
Value Range | Default | Set Classifications |
---|---|---|
localhost, host names, IP addresses, * (all available IP interfaces) | * | master, system, restart |
Comma separated list of shared library files to preload at the start of a client session.
Value Range | Default | Set Classifications |
---|---|---|
local, system, restart |
Abort any statement that waits longer than the specified number of milliseconds while attempting to acquire a lock on a table, index, row, or other database object. The time limit applies separately to each lock acquisition attempt. The limit applies both to explicit locking requests (such as LOCK TABLE
or SELECT FOR UPDATE
) and to implicitly-acquired locks. If log_min_error_statement
is set to ERROR
or lower, Greenplum Database logs the statement that timed out. A value of zero (the default) turns off this lock wait monitoring.
Unlike statement_timeout, this timeout can only occur while waiting for locks. Note that if statement_timeout
is nonzero, it is rather pointless to set lock_timeout
to the same or larger value, since the statement timeout would always trigger first.
Greenplum Database uses the deadlock_timeout and gp_global_deadlock_detector_period to trigger local and global deadlock detection. Note that if lock_timeout
is turned on and set to a value smaller than these deadlock detection timeouts, Greenplum Database will abort a statement before it would ever trigger a deadlock check in that session.
Note: Setting
lock_timeout
inpostgresql.conf
is not recommended because it would affect all sessions.
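For example (a minimal sketch; the table name is hypothetical), setting the timeout for a single session causes a blocked lock request to fail after five seconds instead of waiting indefinitely:
SET lock_timeout = '5s';
BEGIN;
LOCK TABLE sales IN ACCESS EXCLUSIVE MODE;  -- errors out if the lock is not acquired within 5 seconds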
Value Range | Default | Set Classifications |
---|---|---|
0 - INT_MAX millisecs |
0 millisecs | master, session, reload |
Logs information about automatic ANALYZE
operations related to gp_autostats_mode and gp_autostats_on_change_threshold.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload, superuser |
This outputs a line to the server log detailing each successful connection. Some client programs, like psql, attempt to connect twice while determining if a password is required, so duplicate "connection received" messages do not always indicate a problem.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | local, system, reload |
Causes checkpoints and restartpoints to be logged in the server log. Some statistics are included in the log messages, including the number of buffers written and the time spent writing them.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | local, system, reload |
This outputs a line in the server log at termination of a client session, and includes the duration of the session.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | local, system, reload |
When set to "on," this parameter adds a log message with verbose information about the dispatch of the statement.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | local, system, reload |
Causes the duration of every completed statement which satisfies log_statement to be logged.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload, superuser |
Controls the amount of detail written in the server log for each message that is logged.
Value Range | Default | Set Classifications |
---|---|---|
TERSE DEFAULT VERBOSE |
DEFAULT | master, session, reload, superuser |
For each query, write performance statistics of the query executor to the server log. This is a crude profiling instrument. Cannot be enabled together with log_statement_stats.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | local, system, restart |
On Unix systems this parameter sets the permissions for log files when logging_collector
is enabled. The parameter value is expected to be a numeric mode specified in the format accepted by the chmod
and umask
system calls.
Value Range | Default | Set Classifications |
---|---|---|
numeric UNIX file permission mode (as accepted by the chmod or umask commands) | 0600 | local, system, reload |
By default, connection log messages only show the IP address of the connecting host. Turning on this option causes logging of the IP address and host name of the Greenplum Database master. Note that depending on your host name resolution setup this might impose a non-negligible performance penalty.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, system, reload |
Logs the statement and its duration on a single log line if its duration is greater than or equal to the specified number of milliseconds. Setting this to 0 will print all statements and their durations. -1 deactivates the feature. For example, if you set it to 250 then all SQL statements that run 250ms or longer will be logged. Enabling this option can be useful in tracking down unoptimized queries in your applications.
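For example, a superuser can enable slow-statement logging for the current session as follows (a minimal illustration):
SET log_min_duration_statement = 250;  -- log any statement that runs for 250ms or longer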
Value Range | Default | Set Classifications |
---|---|---|
number of milliseconds, 0, -1 | -1 | master, session, reload, superuser |
Controls whether or not the SQL statement that causes an error condition will also be recorded in the server log. All SQL statements that cause an error of the specified level or higher are logged. The default is ERROR. To effectively turn off logging of failing statements, set this parameter to PANIC.
Value Range | Default | Set Classifications |
---|---|---|
DEBUG5 DEBUG4 DEBUG3 DEBUG2 DEBUG1 INFO NOTICE WARNING ERROR FATAL PANIC |
ERROR | master, session, reload, superuser |
Controls which message levels are written to the server log. Each level includes all the levels that follow it. The later the level, the fewer messages are sent to the log.
If the Greenplum Database PL/Container extension is installed, this parameter also controls the PL/Container log level. For information about the extension, see PL/pgSQL Language.
Value Range | Default | Set Classifications |
---|---|---|
DEBUG5 DEBUG4 DEBUG3 DEBUG2 DEBUG1 INFO NOTICE WARNING LOG ERROR FATAL PANIC |
WARNING | master, session, reload, superuser |
For each query, write performance statistics of the query parser to the server log. This is a crude profiling instrument. Cannot be enabled together with log_statement_stats.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload, superuser |
For each query, write performance statistics of the Postgres Planner to the server log. This is a crude profiling instrument. Cannot be enabled together with log_statement_stats.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload, superuser |
Determines the amount of time Greenplum Database writes messages to the active log file. When this amount of time has elapsed, the file is closed and a new log file is created. Set to zero to deactivate time-based creation of new log files.
Value Range | Default | Set Classifications |
---|---|---|
Any valid time expression (number and unit) | 1d | local, system, restart |
Determines the size of an individual log file that triggers rotation. When the log file size is equal to or greater than this size, the file is closed and a new log file is created. Set to zero to deactivate size-based creation of new log files.
The maximum value is INT_MAX/1024. If an invalid value is specified, the default value is used. INT_MAX is the largest value that can be stored as an integer on your system.
Value Range | Default | Set Classifications |
---|---|---|
number of kilobytes | 1048576 | local, system, restart |
Controls which SQL statements are logged. DDL logs all data definition commands like CREATE, ALTER, and DROP commands. MOD logs all DDL statements, plus INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM. PREPARE and EXPLAIN ANALYZE statements are also logged if their contained command is of an appropriate type.
Value Range | Default | Set Classifications |
---|---|---|
NONE DDL MOD ALL |
ALL | master, session, reload, superuser |
For each query, write total performance statistics of the query parser, planner, and executor to the server log. This is a crude profiling instrument.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload, superuser |
Controls logging of temporary file names and sizes. Temporary files can be created for sorts, hashes, temporary query results and spill files. A log entry is made in log
for each temporary file when it is deleted. Depending on the source of the temporary files, the log entry could be created on either the master and/or segments. A log_temp_files
value of zero logs all temporary file information, while positive values log only files whose size is greater than or equal to the specified number of kilobytes. The default setting is -1
, which deactivates logging. Only superusers can change this setting.
Value Range | Default | Set Classifications |
---|---|---|
Integer | -1 | local, session, reload |
Sets the time zone used for timestamps written in the log. Unlike TimeZone, this value is system-wide, so that all sessions will report timestamps consistently. The default is unknown
, which means to use whatever the system environment specifies as the time zone.
Value Range | Default | Set Classifications |
---|---|---|
string | unknown | local, system, restart |
Truncates (overwrites), rather than appends to, any existing log file of the same name. Truncation will occur only when a new file is being opened due to time-based rotation. For example, using this setting in combination with a log_filename such as gpseg#-%H.log
would result in generating twenty-four hourly log files and then cyclically overwriting them. When off, pre-existing files will be appended to in all cases.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | local, system, reload |
Specifies the maximum amount of memory to be used in maintenance operations, such as VACUUM
and CREATE INDEX
. It defaults to 16 megabytes (16MB). Larger settings might improve performance for vacuuming and for restoring database dumps.
Value Range | Default | Set Classifications |
---|---|---|
Integer | 16 | local, system, reload |
Sets the maximum number of concurrent transactions that can write to or update append-optimized tables. Transactions that exceed the maximum return an error.
Operations that are counted are INSERT
, UPDATE
, COPY
, and VACUUM
operations. The limit is only for in-progress transactions. Once a transaction ends (either aborted or committed), it is no longer counted against this limit.
Note: Greenplum Database limits the maximum number of concurrent inserts into an append-only table to 127.
For operations against a partitioned table, each subpartition (child table) that is an append-optimized table and is changed counts as a single table towards the maximum. For example, a partitioned table p_tbl
is defined with three subpartitions that are append-optimized tables p_tbl_ao1
, p_tbl_ao2
, and p_tbl_ao3
. An INSERT
or UPDATE
command against the partitioned table p_tbl
that changes append-optimized tables p_tbl_ao1
and p_tbl_ao2
is counted as two transactions.
Increasing the limit allocates more shared memory on the master host at server start.
Value Range | Default | Set Classifications |
---|---|---|
integer > 0 | 10000 | master, system, restart |
The maximum number of concurrent connections to the database server. In a Greenplum Database system, user client connections go through the Greenplum master instance only. Segment instances should allow 3-10 times the number of connections allowed for the master. When you increase this parameter, max_prepared_transactions must be increased as well. For more information about limiting concurrent connections, see "Configuring Client Authentication" in the Greenplum Database Administrator Guide.
Increasing this parameter may cause Greenplum Database to request more shared memory. See shared_buffers for information about Greenplum server instance shared memory buffers.
Value Range | Default | Set Classifications |
---|---|---|
10 - 8388607 | 250 on master 750 on segments |
local, system, restart |
Sets the maximum number of simultaneously open files allowed to each server subprocess. If the kernel is enforcing a safe per-process limit, you don't need to worry about this setting. On some platforms, such as BSD, the kernel will allow individual processes to open many more files than the system can really support.
Value Range | Default | Set Classifications |
---|---|---|
integer | 1000 | local, system, restart |
Reports the maximum number of function arguments.
Value Range | Default | Set Classifications |
---|---|---|
integer | 100 | read only |
Reports the maximum identifier length.
Value Range | Default | Set Classifications |
---|---|---|
integer | 63 | read only |
Reports the maximum number of index keys.
Value Range | Default | Set Classifications |
---|---|---|
integer | 32 | read only |
The shared lock table is created with room to describe locks on max_locks_per_transaction * (max_connections + max_prepared_transactions) objects, so no more than this many distinct objects can be locked at any one time. This is not a hard limit on the number of locks taken by any one transaction, but rather a maximum average value. You might need to raise this value if you have clients that touch many different tables in a single transaction.
Value Range | Default | Set Classifications |
---|---|---|
integer | 128 | local, system, restart |
Sets the maximum number of transactions that can be in the prepared state simultaneously. Greenplum uses prepared transactions internally to ensure data integrity across the segments. This value must be at least as large as the value of max_connections on the master. Segment instances should be set to the same value as the master.
Value Range | Default | Set Classifications |
---|---|---|
integer | 250 on master 250 on segments |
local, system, restart |
Note: The
max_resource_portals_per_transaction
server configuration parameter is enforced only when resource queue-based resource management is active.
Sets the maximum number of simultaneously open user-declared cursors allowed per transaction. Note that an open cursor will hold an active query slot in a resource queue. Used for resource management.
Value Range | Default | Set Classifications |
---|---|---|
integer | 64 | master, system, restart |
Note: The
max_resource_queues
server configuration parameter is enforced only when resource queue-based resource management is active.
Sets the maximum number of resource queues that can be created in a Greenplum Database system. Note that resource queues are system-wide (as are roles) so they apply to all databases in the system.
Value Range | Default | Set Classifications |
---|---|---|
integer | 9 | master, system, restart |
Sets the maximum size in megabytes of Write-Ahead Logging (WAL) files on disk per segment instance that can be reserved when Greenplum streams data to the mirror segment instance or standby master to keep it synchronized with the corresponding primary segment instance or master. The default is -1, which allows Greenplum to retain an unlimited amount of WAL files on disk.
If the file size exceeds the maximum size, the files are released and are available for deletion. A mirror or standby may no longer be able to continue replication due to removal of required WAL files.
Caution: If
max_slot_wal_keep_size
is set to a non-default value for acting primaries, full and incremental recovery of their mirrors may not be possible. Depending on the workload on the primary running concurrently with a full recovery, the recovery may fail with a missing WAL error. Therefore, you must ensure thatmax_slot_wal_keep_size
is set to the default of-1
or a high enough value before running full recovery. Similarly, depending on how behind the downed mirror is, an incremental recovery of it may fail with a missing WAL complaint. In this case, full recovery would be the only recourse.
Value Range | Default | Set Classifications |
---|---|---|
Integer | -1 | local, system, reload |
Specifies the maximum safe depth of the server's execution stack. The ideal setting for this parameter is the actual stack size limit enforced by the kernel (as set by ulimit -s or local equivalent), less a safety margin of a megabyte or so. Setting the parameter higher than the actual kernel limit will mean that a runaway recursive function can crash an individual backend process.
Value Range | Default | Set Classifications |
---|---|---|
number of kilobytes | 2MB | local, session, reload |
Sets the maximum memory limit for a query. Helps avoid out-of-memory errors on a segment host during query processing as a result of setting statement_mem too high.
Taking into account the configuration of a single segment host, calculate max_statement_mem
as follows:
(seghost_physical_memory) / (average_number_concurrent_queries)
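As a worked illustration only (assumed values), for a segment host with 256GB of physical memory that typically runs about 32 concurrent queries:
max_statement_mem = 256GB / 32 = 8GB, so you could set the parameter to 8GB (8388608kB)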
When changing both max_statement_mem
and statement_mem
, max_statement_mem
must be changed first, or listed first in the postgresql.conf file.
Value Range | Default | Set Classifications |
---|---|---|
number of kilobytes | 2000MB | master, session, reload, superuser |
Note: The
memory_spill_ratio
server configuration parameter is enforced only when resource group-based resource management is active.
Sets the memory usage threshold percentage for memory-intensive operators in a transaction. When a transaction reaches this threshold, it spills to disk.
The default memory_spill_ratio
percentage is the value defined for the resource group assigned to the currently active role. You can set memory_spill_ratio
at the session level to selectively set this limit on a per-query basis. For example, if you have a specific query that spills to disk and requires more memory, you may choose to set a larger memory_spill_ratio
to increase the initial memory allocation.
You can specify an integer percentage value from 0 to 100 inclusive. If you specify a value of 0, Greenplum Database uses the statement_mem
server configuration parameter value to control the initial query operator memory amount.
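For example (a minimal sketch), a session that is about to run a memory-intensive query could raise its spill threshold before running the query:
SET memory_spill_ratio = 30;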
Value Range | Default | Set Classifications |
---|---|---|
0 - 100 | 20 | master, session, reload |
Activates or deactivates GPORCA when running SQL queries. The default is on
. If you deactivate GPORCA, Greenplum Database uses only the Postgres Planner.
GPORCA co-exists with the Postgres Planner. With GPORCA enabled, Greenplum Database uses GPORCA to generate an execution plan for a query when possible. If GPORCA cannot be used, then the Postgres Planner is used.
The optimizer parameter can be set for a database system, an individual database, or a session or query.
For information about the Postgres Planner and GPORCA, see Querying Data in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
For a partitioned table, controls whether the ROOTPARTITION
keyword is required to collect root partition statistics when the ANALYZE
command is run on the table. GPORCA uses the root partition statistics when generating a query plan. The Postgres Planner does not use these statistics.
The default setting for the parameter is on
, the ANALYZE
command can collect root partition statistics without the ROOTPARTITION
keyword. Root partition statistics are collected when you run ANALYZE
on the root partition, or when you run ANALYZE
on a child leaf partition of the partitioned table and the other child leaf partitions have statistics. When the value is off
, you must run ANALYZE ROOTPARTITION
to collect root partition statistics.
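For example (the table name sales is hypothetical), the following command collects root partition statistics explicitly, which is required when this parameter is off:
ANALYZE ROOTPARTITION sales;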
When the value of the server configuration parameter optimizer is on
(the default), the value of this parameter should also be on
. For information about collecting table statistics on partitioned tables, see ANALYZE.
For information about the Postgres Planner and GPORCA, see Querying Data in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
When GPORCA is enabled (the default) and is processing a query that contains a predicate with a constant array, the optimizer_array_expansion_threshold
parameter limits the optimization process based on the number of constants in the array. If the array in the query predicate contains more than the number of elements specified by the parameter, GPORCA deactivates the transformation of the predicate into its disjunctive normal form during query optimization.
The default value is 100.
For example, when GPORCA is running a query that contains an IN
clause with more than 100 elements, GPORCA does not transform the predicate into its disjunctive normal form during query optimization, in order to reduce optimization time and consume less memory. The difference in query processing can be seen in the filter condition for the IN
clause of the query EXPLAIN
plan.
Changing the value of this parameter changes the trade-off between a shorter optimization time and lower memory consumption, and the potential benefits from constraint derivation during query optimization, for example conflict detection and partition elimination.
The parameter can be set for a database system, an individual database, or a session or query.
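For example, to allow larger IN lists to be transformed into disjunctive normal form during optimization for the current session (a minimal illustration; the threshold value is arbitrary):
SET optimizer_array_expansion_threshold = 500;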
Value Range | Default | Set Classifications |
---|---|---|
Integer > 0 | 100 | master, session, reload |
Controls whether the server configuration parameter optimizer can be changed with SET, the RESET command, or the Greenplum Database utility gpconfig. If the optimizer_control
parameter value is on
, users can set the optimizer parameter. If the optimizer_control
parameter value is off
, the optimizer parameter cannot be changed.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload, superuser |
When GPORCA is enabled (the default), this parameter controls the cost model that GPORCA chooses for bitmap scans used with bitmap indexes or with btree indexes on AO tables.
- legacy - preserves the calibrated cost model used by GPORCA in Greenplum Database releases 6.13 and earlier
- calibrated - improves cost estimates for indexes
- experimental - reserved for future experimental cost models; currently equivalent to the calibrated model
The default cost model, calibrated, is more likely to choose a faster bitmap index with nested loop joins instead of hash joins.
Value Range | Default | Set Classifications |
---|---|---|
legacy calibrated experimental |
calibrated | master, session, reload |
When GPORCA is enabled (the default), this parameter controls the amount of inlining performed for common table expression (CTE) queries (queries that contain a WITH clause). The default value, 0, deactivates inlining.
The parameter can be set for a database system, an individual database, or a session or query.
Value Range | Default | Set Classifications |
---|---|---|
Decimal >= 0 | 0 | master, session, reload |
When GPORCA is enabled (the default) and this parameter is true
(the default), GPORCA derives statistics that allow it to more accurately estimate the number of rows to be scanned during dynamic partition elimination.
The parameter can be set for a database system, an individual database, or a session or query.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
When GPORCA is enabled (the default), this parameter specifies whether the Query Optimizer should eliminate plans that include a HashJoin operator with a Redistribute Motion child. Eliminating such plans can improve performance in cases where the data being joined exhibits high skewness in the join keys.
The default setting is off
, GPORCA considers all plan alternatives, including those with a Redistribute Motion child, in the HashJoin operator. If you observe performance issues with queries that use a HashJoin with highly skewed data, you may want to consider setting optimizer_discard_redistribute_hashjoin
to on
to instruct GPORCA to discard such plans.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
When GPORCA is enabled (the default), this parameter controls whether the join associativity transform is enabled during query optimization. The transform analyzes join orders. For the default value off
, only the GPORCA dynamic programming algorithm for analyzing join orders is enabled. The join associativity transform largely duplicates the functionality of the newer dynamic programming algorithm.
If the value is on
, GPORCA can use the associativity transform during query optimization.
The parameter can be set for a database system, an individual database, or a session or query.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
When GPORCA is enabled (the default) and this parameter is true
(the default), GPORCA attempts to run DML commands such as INSERT
, UPDATE
, and DELETE
. If GPORCA cannot run the command, Greenplum Database falls back to the Postgres Planner.
When set to false
, Greenplum Database always falls back to the Postgres Planner when performing DML commands.
The parameter can be set for a database system, an individual database, or a session or query.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
When GPORCA is enabled (the default) and this parameter is true
(the default), GPORCA can generate all hash join plan types except for full hash join. Those may include inner, left, right, left semi, and left anti semi hash joins. This provides a query execution performance improvement, particularly when the table is large and is not indexed.
When deactivated (false
), GPORCA does not generate hash join plan types.
The parameter can be set for a database system, an individual database, or a session or query.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
When GPORCA is enabled (the default) and this parameter is false
(the default), GPORCA cannot generate the full hash join plan type.
When activated (true
), GPORCA generates the full hash join plan type. This provides a query execution performance improvement, particularly when the table is large and is not sorted.
The parameter can be set for a database system, an individual database, or a session or query.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload |
When GPORCA is enabled (the default) and this parameter is true
(the default), GPORCA can generate index-only scan plan types for B-tree indexes. GPORCA accesses the index values only, not the data blocks of the relation. This provides a query execution performance improvement, particularly when the table has been vacuumed, has wide columns, and GPORCA does not need to fetch any data blocks (for example, they are visible).
When deactivated (false
), GPORCA does not generate index-only scan plan types.
The parameter can be set for a database system, an individual database, or a session or query.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
When GPORCA is enabled (the default), this parameter allows GPORCA to run catalog queries that run only on the Greenplum Database master. For the default value off
, only the Postgres Planner can run catalog queries that run only on the Greenplum Database master.
The parameter can be set for a database system, an individual database, or a session or query.
Note: Enabling this parameter decreases performance of short running catalog queries. To avoid this issue, set this parameter only for a session or a query.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
When GPORCA is enabled (the default), this parameter allows GPORCA to support Multiple Distinct Qualified Aggregates, such as SELECT count(DISTINCT a),sum(DISTINCT b) FROM foo
. This parameter is deactivated by default because its plan is generally suboptimal in comparison to the plan generated by the Postgres Planner.
The parameter can be set for a database system, an individual database, or a session or query.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
When GPORCA is enabled (the default), this parameter determines whether GPORCA generates a query plan for ordered aggregates. This parameter is deactivated by default; GPORCA does not generate a plan for a query that includes an ordered aggregate, and the query falls back to the Postgres Planner.
You can set this parameter for a database system, an individual database, or a session or query.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
When GPORCA is enabled (the default), this parameter controls GPORCA's behavior when it encounters DML operations on a replicated table.
The default value is on
: GPORCA attempts to plan and execute operations on replicated tables. When set to off
, GPORCA immediately falls back to the Postgres Planner when it detects replicated table operations.
The parameter can be set for a database system, an individual database, or a session or query.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
When GPORCA is enabled (the default), this parameter affects the query plan alternatives that GPORCA considers when three-stage aggregate plans are generated. When the value is true
(the default), GPORCA considers only three-stage aggregate plans where the intermediate aggregation uses the GROUP BY
and DISTINCT
columns for distribution to reduce the effects of processing skew.
If the value is false
, GPORCA can also consider a plan that uses GROUP BY
columns for distribution. These plans might perform poorly when processing skew is present.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
When GPORCA is enabled (the default), this parameter affects its consideration of nested loop join and hash join alternatives.
The default value is false
: GPORCA does not consider nested loop join alternatives when a hash join is available, which significantly improves optimization performance for most queries. When set to true
, GPORCA explores nested loop join alternatives even when a hash join is possible.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload |
With the default settings (GPORCA enabled and this parameter set to false
), GPORCA makes a cost-based choice between a one-stage and a two-stage aggregate plan for a scalar distinct-qualified aggregate. When true
, GPORCA chooses a multi-stage aggregate plan when such a plan alternative is generated.
The parameter can be set for a database system, an individual database, or a session or query.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload |
With the default settings (GPORCA enabled and this parameter set to true
), GPORCA chooses a plan with multistage aggregates when such a plan alternative is generated. When the value is false
, GPORCA makes a cost-based choice rather than a heuristic choice.
The parameter can be set for a database system, an individual database, or a session or query.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
The value is an optimization hint to limit the number of join associativity and join commutativity transformations explored during query optimization. The limit controls the alternative plans that GPORCA considers during query optimization. For example, the default value of 18 is an optimization hint for GPORCA to stop exploring join associativity and join commutativity transformations when an n-ary join operator has more than 18 children during optimization.
For a query with a large number of joins, specifying a lower value improves query performance by limiting the number of alternate query plans that GPORCA evaluates. However, setting the value too low might cause GPORCA to generate a query plan that performs sub-optimally.
This parameter has no effect when the optimizer_join_order
parameter is set to query
or greedy
.
This parameter can be set for a database system or a session.
Value Range | Default | Set Classifications |
---|---|---|
integer > 0 | 18 | local, system, reload |
When GPORCA is enabled (the default), this parameter sets the join enumeration algorithm:
- query - Uses the join order specified in the query.
- greedy - Evaluates the join order specified in the query and alternatives based on minimum cardinalities of the relations in the joins.
- exhaustive - Applies transformation rules to find and evaluate up to a configurable threshold number (optimizer_join_order_threshold, default 10) of n-way inner joins, and then changes to and uses the greedy method beyond that. While planning time drops significantly at that point, plan quality and execution time may get worse.
- exhaustive2 - Operates with an emphasis on generating join orders that are suitable for dynamic partition elimination. This algorithm applies transformation rules to find and evaluate n-way inner and outer joins. When evaluating very large joins with more than optimizer_join_order_threshold (default 10) tables, this algorithm employs a gradual transition to the greedy method; planning time goes up smoothly as the query gets more complicated, and plan quality and execution time only gradually degrade. exhaustive2 provides a good trade-off between planning time and execution time for many queries.

Setting this parameter to query or greedy can generate a suboptimal query plan. However, if the administrator is confident that a satisfactory plan is generated with the query or greedy setting, query optimization time may be improved by setting the parameter to the lower optimization level.
When you set this parameter to query
or greedy
, GPORCA ignores the optimizer_join_order_threshold
parameter.
This parameter can be set for an individual database, a session, or a query.
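For example, a minimal session-level sketch (the threshold value shown is arbitrary):
SET optimizer_join_order = exhaustive2;
SET optimizer_join_order_threshold = 8;  -- ignored when optimizer_join_order is query or greedy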
Value Range | Default | Set Classifications |
---|---|---|
query greedy exhaustive exhaustive2 |
exhaustive | master, session, reload |
When GPORCA is enabled (the default), this parameter sets the maximum number of join children for which GPORCA will use the dynamic programming-based join ordering algorithm. This threshold restricts the search effort for a join plan to reasonable limits.
GPORCA examines the optimizer_join_order_threshold
parameter when optimizer_join_order
is set to exhaustive
or exhaustive2
. GPORCA ignores this parameter when optimizer_join_order
is set to query
or greedy
.
You can set this value for a single query or for an entire session.
Value Range | Default | Set Classifications |
---|---|---|
0 - 12 | 10 | master, session, reload |
Sets the maximum amount of memory on the Greenplum Database master that GPORCA uses to cache query metadata (optimization data) during query optimization. The memory limit is session based. With the default settings (GPORCA enabled and optimizer_metadata_caching set to on
), GPORCA caches query metadata during query optimization.
The default value is 16384 (16MB). This is an optimal value that has been determined through performance analysis.
You can specify a value in KB, MB, or GB. The default unit is KB. For example, a value of 16384 is 16384KB. A value of 1GB is the same as 1024MB or 1048576KB. If the value is 0, the size of the cache is not limited.
This parameter can be set for a database system, an individual database, or a session or query.
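For example, to raise the cache limit for the current session (an illustrative value; 64MB is equivalent to 65536 with the default KB unit):
SET optimizer_mdcache_size = '64MB';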
Value Range | Default | Set Classifications |
---|---|---|
Integer >= 0 | 16384 | master, session, reload |
When GPORCA is enabled (the default), this parameter specifies whether GPORCA caches query metadata (optimization data) in memory on the Greenplum Database master during query optimization. The default for this parameter is on
, which enables caching. The cache is session based. When a session ends, the cache is released. If the amount of query metadata exceeds the cache size, then old, unused metadata is evicted from the cache.
If the value is off
, GPORCA does not cache metadata during query optimization.
This parameter can be set for a database system, an individual database, or a session or query.
The server configuration parameter optimizer_mdcache_size controls the size of the query metadata cache.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
GPORCA generates minidump files to describe the optimization context for a given query. The information in the file is not in a format that can be easily used for debugging or troubleshooting. The minidump file is located under the master data directory and uses the following naming format:
Minidump_date_time.mdp
The minidump file contains query-related information that describes the optimization context for the query.
Setting this parameter to ALWAYS
generates a minidump for all queries. Set this parameter to ONERROR
to minimize total optimization time.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
ONERROR ALWAYS |
ONERROR | master, session, reload |
This parameter adds a costing factor to GPORCA to prioritize hash joins instead of nested loop joins during query optimization. The default value of 1024 was chosen after evaluating numerous workloads with uniformly distributed data. 1024 should be treated as the practical upper bound setting for this parameter. If you find that GPORCA selects hash joins more often than it should, reduce the value to shift the costing factor in favor of nested loop joins.
The parameter can be set for a database system, an individual database, or a session or query.
Value Range | Default | Set Classifications |
---|---|---|
INT_MAX > 1 | 1024 | master, session, reload |
When GPORCA is enabled (the default), optimizer_parallel_union
controls the amount of parallelization that occurs for queries that contain a UNION
or UNION ALL
clause.
When the value is off
, the default, GPORCA generates a query plan where each child of an APPEND(UNION) operator is in the same slice as the APPEND operator. During query execution, the children are run in a sequential manner.
When the value is on
, GPORCA generates a query plan where a redistribution motion node is under an APPEND(UNION) operator. During query execution, the children and the parent APPEND operator are on different slices, allowing the children of the APPEND(UNION) operator to run in parallel on segment instances.
The parameter can be set for a database system, an individual database, or a session or query.
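For example, a minimal session-level sketch:
SET optimizer_parallel_union = on;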
Value Range | Default | Set Classifications |
---|---|---|
boolean | off | master, session, reload |
When GPORCA is enabled (the default), during query optimization GPORCA penalizes the cost of plans that attempt to broadcast more than the value specified by optimizer_penalize_broadcast_threshold
. For example, if this parameter is set to 100K rows (the default), any broadcast of more than 100K rows is heavily penalized.
When this parameter is set to 0
, GPORCA sets this broadcast threshold to unlimited and never penalizes a broadcast motion.
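For example, to penalize broadcasts of more than one million rows in the current session (an illustrative value):
SET optimizer_penalize_broadcast_threshold = 1000000;  -- 0 would remove the penalty entirely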
Value Range | Default | Set Classifications |
---|---|---|
integer >= 0 | 100K rows | master, session, reload |
When GPORCA is enabled (the default), this parameter allows GPORCA to penalize the local cost of a HashJoin with a skewed Redistribute Motion as child to favor a Broadcast Motion during query optimization. The default value is true
.
GPORCA determines there is skew for a Redistribute Motion when the NDV (number of distinct values) is less than the number of segments.
The parameter can be set for a database system, an individual database, or a session or query.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
When GPORCA is enabled (the default), this parameter controls the display of table column information about columns with missing statistics for a query. The default value is true
: the column information is displayed to the client. When the value is false
, the information is not sent to the client.
The information is displayed during query execution, or with the EXPLAIN
or EXPLAIN ANALYZE
commands.
The parameter can be set for a database system, an individual database, or a session.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
When GPORCA is enabled (the default), this parameter enables logging of GPORCA query optimization statistics for various optimization stages for a query. The default value is off: optimization statistics are not logged. To log the optimization statistics, this parameter must be set to on and the parameter client_min_messages must be set to log:
set optimizer_print_optimization_stats = on;
set client_min_messages = 'log';
The information is logged during query execution, or with the EXPLAIN
or EXPLAIN ANALYZE
commands.
This parameter can be set for a database system, an individual database, or a session or query.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
When GPORCA is enabled (the default), optimizer_skew_factor
controls skew ratio computation.
The default value is 0
, which turns off skew computation for GPORCA. To enable skew computation, set optimizer_skew_factor
to a value between 1
and 100
, inclusive.
The larger the optimizer_skew_factor
, the larger the cost that GPORCA assigns to a redistributed hash join, so that GPORCA more strongly favors a broadcast hash join.
The parameter can be set for a database system, an individual database, or a session or query.
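For example, to enable skew computation for the current session (the factor shown is arbitrary):
SET optimizer_skew_factor = 50;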
Value Range | Default | Set Classifications |
---|---|---|
integer 0-100 | 0 | master, session, reload |
When GPORCA is enabled (the default), optimizer_sort_factor
controls the cost factor to apply to sorting operations during query optimization. The default value 1
specifies the default sort cost factor. The value is a ratio of increase or decrease from the default factor. For example, a value of 2.0
sets the cost factor at twice the default, and a value of 0.5
sets the factor at half the default.
The parameter can be set for a database system, an individual database, or a session or query.
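For example, to double the cost assigned to sort operations for the current session:
SET optimizer_sort_factor = 2.0;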
Value Range | Default | Set Classifications |
---|---|---|
Decimal > 0 | 1 | master, session, reload |
When GPORCA is enabled (the default) and this parameter is true
(the default), GPORCA uses Greenplum Database memory management when running queries. When set to false
, GPORCA uses GPORCA-specific memory management. Greenplum Database memory management allows for faster optimization, reduced memory usage during optimization, and improved GPORCA support of vmem limits when compared to GPORCA-specific memory management.
For information about GPORCA, see About GPORCA in the Greenplum Database Administrator Guide.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, system, restart |
When GPORCA is enabled (the default), this parameter controls the maximum number of bindings per transform that GPORCA produces per group expression. Setting this parameter limits the number of alternatives that GPORCA creates, in many cases reducing the optimization time and overall memory usage of queries that include deeply nested expressions.
The default value is 0
, which means GPORCA produces an unlimited set of bindings.
Value Range | Default | Set Classifications |
---|---|---|
0 - INT_MAX | 0 | master, session, reload |
When a password is specified in CREATE USER or ALTER USER without writing either ENCRYPTED or UNENCRYPTED, this option determines whether the password is to be encrypted.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Specifies the cryptographic hash algorithm that is used when storing an encrypted Greenplum Database user password. The default algorithm is MD5.
For information about setting the password hash algorithm to protect user passwords, see Protecting Passwords in Greenplum Database.
Value Range | Default | Set Classifications |
---|---|---|
MD5 SHA-256 SCRAM-SHA-256 |
MD5 | master, session, reload, superuser |
Prepared statements (either explicitly prepared or implicitly generated, for example by PL/pgSQL) can be run using custom or generic plans. Custom plans are created for each execution using its specific set of parameter values, while generic plans do not rely on the parameter values and can be re-used across executions. The use of a generic plan saves planning time, but if the ideal plan depends strongly on the parameter values, then a generic plan might be inefficient. The choice between these options is normally made automatically, but it can be overridden by setting the plan_cache_mode
parameter. If the prepared statement has no parameters, a generic plan is always used.
The allowed values are auto
(the default), force_custom_plan
and force_generic_plan
. This setting is considered when a cached plan is to be run, not when it is prepared. For more information see PREPARE.
The parameter can be set for a database system, an individual database, a session, or a query.
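For example, the following sketch forces custom plans for a prepared statement in the current session; the table and statement names are hypothetical:
SET plan_cache_mode = force_custom_plan;
PREPARE get_order (int) AS SELECT * FROM orders WHERE order_id = $1;
EXECUTE get_order(42);  -- planned using the actual parameter value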
Value Range | Default | Set Classifications |
---|---|---|
auto force_custom_plan force_generic_plan |
auto | master, session, reload |
A colon (:
) separated list of jar files or directories containing jar files needed for PL/Java functions. The full path to the jar file or directory must be specified, except the path can be omitted for jar files in the $GPHOME/lib/postgresql/java
directory. The jar files must be installed in the same locations on all Greenplum hosts and readable by the gpadmin
user.
The pljava_classpath
parameter is used to assemble the PL/Java classpath at the beginning of each user session. Jar files added after a session has started are not available to that session.
If the full path to a jar file is specified in pljava_classpath
it is added to the PL/Java classpath. When a directory is specified, any jar files the directory contains are added to the PL/Java classpath. The search does not descend into subdirectories of the specified directories. If the name of a jar file is included in pljava_classpath
with no path, the jar file must be in the $GPHOME/lib/postgresql/java
directory.
Note: Performance can be affected if there are many directories to search or a large number of jar files.
If pljava_classpath_insecure is false
, setting the pljava_classpath
parameter requires superuser privilege. Setting the classpath in SQL code will fail when the code is run by a user without superuser privilege. The pljava_classpath
parameter must have been set previously by a superuser or in the postgresql.conf
file. Changing the classpath in the postgresql.conf
file requires a reload (gpstop -u
).
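For example, a superuser (or any user, if pljava_classpath_insecure is true) might set a session classpath as follows; the jar file name is hypothetical and must exist under $GPHOME/lib/postgresql/java:
SET pljava_classpath = 'analytics.jar';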
Value Range | Default | Set Classifications |
---|---|---|
string | master, session, reload, superuser |
Controls whether the server configuration parameter pljava_classpath can be set by a user without Greenplum Database superuser privileges. When true
, pljava_classpath
can be set by a regular user. Otherwise, pljava_classpath can be set only by a database superuser. The default is false
.
Caution: Enabling this parameter exposes a security risk by giving non-administrator database users the ability to run unauthorized Java methods.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | master, session, reload, superuser |
Sets the size in KB of the JRE MRU (Most Recently Used) cache for prepared statements.
Value Range | Default | Set Classifications |
---|---|---|
number of kilobytes | 10 | master, system, reload, superuser |
If true, lingering savepoints used in PL/Java functions will be released on function exit. If false, savepoints will be rolled back.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, system, reload, superuser |
Defines the startup options for the Java VM. The default value is an empty string ("").
Value Range | Default | Set Classifications |
---|---|---|
string | master, system, reload, superuser |
The database listener port for a Greenplum instance. The master and each segment has its own port. Port numbers for the Greenplum system must also be changed in the gp_segment_configuration
catalog. You must shut down your Greenplum Database system before changing port numbers.
Value Range | Default | Set Classifications |
---|---|---|
any valid port number | 5432 | local, system, restart |
Ensures that all identifiers are quoted, even if they are not keywords, when the database generates SQL. See also the --quote-all-identifiers
option of pg_dump and pg_dumpall.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | false | local, session, reload |
Sets the estimate of the cost of a nonsequentially fetched disk page for the Postgres Planner. This is measured as a multiple of the cost of a sequential page fetch. A higher value makes it more likely that a sequential scan will be used; a lower value makes it more likely that an index scan will be used.
Value Range | Default | Set Classifications |
---|---|---|
floating point | 100 | master, session, reload |
When an SQL query reads from an external table, the parameter value specifies the amount of time in seconds that Greenplum Database waits before cancelling the query when data stops being returned from the external table.
The default value of 0 specifies no timeout; Greenplum Database does not cancel the query.
If queries that use gpfdist run a long time and then return the error "intermittent network connectivity issues", you can specify a value for readable_external_table_timeout
. If no data is returned by gpfdist for the specified length of time, Greenplum Database cancels the query.
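For example, because this parameter carries the master, system, reload classification, a cluster-wide sketch using gpconfig might look like this (the 300-second value is illustrative):
gpconfig -c readable_external_table_timeout -v 300
gpstop -u   # reload the configuration files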
Value Range | Default | Set Classifications |
---|---|---|
integer >= 0 | 0 | master, system, reload |
For Greenplum Database master mirroring, controls updates to the active master. If the number of WAL segment files that have not been processed by the walsender
exceeds this value, Greenplum Database updates the active master.
If the number of segment files does not exceed the value, Greenplum Database blocks updates to the active master to allow the walsender
to process the files. If all WAL segments have been processed, the active master is updated.
Value Range | Default | Set Classifications |
---|---|---|
0 - 64 | 1 | master, system, reload, superuser |
For Greenplum Database master mirroring, sets the maximum time in milliseconds that the walsender
process on the active master waits for a status message from the walreceiver
process on the standby master. If a message is not received, the walsender
logs an error message.
The wal_receiver_status_interval controls the interval between walreceiver
status messages.
Value Range | Default | Set Classifications |
---|---|---|
0 - INT_MAX | 60000 ms (60 seconds) | master, system, reload, superuser |
Note: The resource_cleanup_gangs_on_wait server configuration parameter is enforced only when resource queue-based resource management is active.
If a statement is submitted through a resource queue, clean up any idle query executor worker processes before taking a lock on the resource queue.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Note: The resource_select_only server configuration parameter is enforced only when resource queue-based resource management is active.
Sets the types of queries managed by resource queues. If set to on, then SELECT
, SELECT INTO
, CREATE TABLE AS SELECT
, and DECLARE CURSOR
commands are evaluated. If set to off, INSERT
, UPDATE
, and DELETE
commands will be evaluated as well.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, system, restart |
For queries that are managed by resource queues or resource groups, this parameter determines when Greenplum Database terminates running queries based on the amount of memory the queries are using. A value of 100 deactivates the automatic termination of queries based on the percentage of memory that is utilized.
Either the resource queue or the resource group management scheme can be active in Greenplum Database; both schemes cannot be active at the same time. The server configuration parameter gp_resource_manager controls which scheme is active.
When resource queues are enabled - This parameter sets the percent of utilized Greenplum Database vmem memory that triggers the termination of queries. If the percentage of vmem memory that is utilized for a Greenplum Database segment exceeds the specified value, Greenplum Database terminates queries managed by resource queues based on memory usage, starting with the query consuming the largest amount of memory. Queries are terminated until the percentage of utilized vmem is below the specified percentage.
Specify the maximum vmem value for active Greenplum Database segment instances with the server configuration parameter gp_vmem_protect_limit.
For example, if vmem memory is set to 10GB, and this parameter is 90 (90%), Greenplum Database starts terminating queries when the utilized vmem memory exceeds 9 GB.
For information about resource queues, see Using Resource Queues.
When resource groups are enabled - This parameter sets the percent of utilized resource group global shared memory that triggers the termination of queries that are managed by resource groups that are configured to use the vmtracker
memory auditor, such as admin_group
and default_group
. For information about memory auditors, see Memory Auditor.
Resource groups have a global shared memory pool when the sum of the MEMORY_LIMIT
attribute values configured for all resource groups is less than 100. For example, if you have 3 resource groups configured with memory_limit
values of 10, 20, and 30, then global shared memory is 40% = 100% - (10% + 20% + 30%). See Global Shared Memory.
If the percentage of utilized global shared memory exceeds the specified value, Greenplum Database terminates queries based on memory usage, selecting from queries managed by the resource groups that are configured to use the vmtracker
memory auditor. Greenplum Database starts with the query consuming the largest amount of memory. Queries are terminated until the percentage of utilized global shared memory is below the specified percentage.
For example, if global shared memory is 10GB, and this parameter is 90 (90%), Greenplum Database starts terminating queries when the utilized global shared memory exceeds 9 GB.
For information about resource groups, see Using Resource Groups.
Value Range | Default | Set Classifications |
---|---|---|
percentage (integer) | 90 | local, system, restart |
Specifies the order in which schemas are searched when an object is referenced by a simple name with no schema component. When there are objects of identical names in different schemas, the one found first in the search path is used. The system catalog schema, pg_catalog, is always searched, whether it is mentioned in the path or not. When objects are created without specifying a particular target schema, they are placed in the first schema listed in the search path. The current effective value of the search path can be examined via the SQL function current_schema(), which returns the first valid schema in the search path.
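For example (the schema name is hypothetical and must already exist):
SET search_path TO sales, public;
SELECT current_schema();  -- returns sales when that schema exists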
Value Range | Default | Set Classifications |
---|---|---|
a comma-separated list of schema names | $user,public | master, session, reload |
For the Postgres Planner, sets the estimate of the cost of a disk page fetch that is part of a series of sequential fetches.
Value Range | Default | Set Classifications |
---|---|---|
floating point | 1 | master, session, reload |
Reports the database encoding (character set). It is determined when the Greenplum Database array is initialized. Ordinarily, clients need only be concerned with the value of client_encoding.
Value Range | Default | Set Classifications |
---|---|---|
<system dependent> | UTF8 | read only |
Reports the version of PostgreSQL that this release of Greenplum Database is based on.
Value Range | Default | Set Classifications |
---|---|---|
string | 9.4.20 | read only |
Reports the version of PostgreSQL that this release of Greenplum Database is based on as an integer.
Value Range | Default | Set Classifications |
---|---|---|
integer | 90420 | read only |
Sets the amount of memory a Greenplum Database segment instance uses for shared memory buffers. This setting must be at least 128KB and at least 16KB times max_connections.
Each Greenplum Database segment instance calculates and attempts to allocate a certain amount of shared memory based on the segment configuration. The value of shared_buffers
is a significant portion of this shared memory calculation, but it is not all of it. When setting shared_buffers
, the values for the operating system parameters SHMMAX
or SHMALL
might also need to be adjusted.
The operating system parameter SHMMAX
specifies the maximum size of a single shared memory allocation. The value of SHMMAX
must be greater than this value:
`shared_buffers` + <other_seg_shmem>
The value of other_seg_shmem
is the portion of the Greenplum Database shared memory calculation that is not accounted for by the shared_buffers
value. The other_seg_shmem
value will vary based on the segment configuration.
With the default Greenplum Database parameter values, the value for other_seg_shmem
is approximately 111MB for Greenplum Database segments and approximately 79MB for the Greenplum Database master.
The operating system parameter SHMALL
specifies the maximum amount of shared memory on the host. The value of SHMALL
must be greater than this value:
(<num_instances_per_host> * ( `shared_buffers` + <other_seg_shmem> )) + <other_app_shared_mem>
The value of other_app_shared_mem
is the amount of shared memory that is used by other applications and processes on the host.
When shared memory allocation errors occur, possible ways to resolve shared memory allocation issues are to increase SHMMAX
or SHMALL
, or decrease shared_buffers
or max_connections
.
See the Greenplum Database Installation Guide for information about the Greenplum Database values for the parameters SHMMAX
and SHMALL
.
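Because shared_buffers has the local, system, restart classification, a cluster-wide change might look like the following sketch (the 200MB value is illustrative only, and a full restart is required):
gpconfig -c shared_buffers -v 200MB
gpstop -r   # restart Greenplum Database so the new value takes effect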
Value Range | Default | Set Classifications |
---|---|---|
integer > 16K * max_connections | 125MB | local, system, restart |
A comma-separated list of shared libraries that are to be preloaded at server start. PostgreSQL procedural language libraries can be preloaded in this way, typically by using the syntax '$libdir/plXXX
' where XXX is pgsql, perl, tcl, or python. By preloading a shared library, the library startup time is avoided when the library is first used. If a specified library is not found, the server will fail to start.
Note: When you add a library to shared_preload_libraries, be sure to retain any previous setting of the parameter.
Value Range | Default | Set Classifications |
---|---|---|
local, system, restart |
Enables SSL connections.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, system, restart |
Specifies a list of SSL ciphers that are allowed to be used on secure connections. ssl_ciphers
overrides any ciphers string specified in /etc/openssl.cnf
. The default value ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH
enables all ciphers except for ADH, LOW, EXP, and MD5 ciphers, and prioritizes ciphers by their strength.
Note: With TLS 1.2, some ciphers in MEDIUM and HIGH strength still use NULL encryption (no encryption for transport), which the default ssl_ciphers string allows. To bypass NULL ciphers with TLS 1.2, use a string such as TLSv1.2:!eNULL:!aNULL.
See the openssl manual page for a list of supported ciphers.
Value Range | Default | Set Classifications |
---|---|---|
string | ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH | master, system, restart |
Determines whether ordinary string literals ('...') treat backslashes literally, as specified in the SQL standard. The default value is on. Turn this parameter off to treat backslashes in string literals as escape characters instead of literal backslashes. Applications may check this parameter to determine how string literals are processed. The presence of this parameter can also be taken as an indication that the escape string syntax (E'...') is supported.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | master, session, reload |
Allocates segment host memory per query. The amount of memory allocated with this parameter cannot exceed max_statement_mem or the memory limit on the resource queue or resource group through which the query was submitted. If additional memory is required for a query, temporary spill files on disk are used.
If you are using resource groups to control resource allocation in your Greenplum Database cluster:
Greenplum Database uses statement_mem
to control query memory usage when the resource group MEMORY_SPILL_RATIO
is set to 0.
You can use the following calculation to estimate a reasonable statement_mem
value:
rg_perseg_mem = ((RAM * (vm.overcommit_ratio / 100) + SWAP) * gp_resource_group_memory_limit) / num_active_primary_segments
statement_mem = rg_perseg_mem / max_expected_concurrent_queries
If you are using resource queues to control resource allocation in your Greenplum Database cluster:
When gp_resqueue_memory_policy = auto, statement_mem
and resource queue memory limits control query memory usage.
You can use the following calculation to estimate a reasonable statement_mem
value for a wide variety of situations:
( <gp_vmem_protect_limit>GB * .9 ) / <max_expected_concurrent_queries>
For example, with a gp_vmem_protect_limit
set to 8192MB (8GB) and assuming a maximum of 40 concurrent queries with a 10% buffer, you would use the following calculation to determine the statement_mem
value:
(8GB * .9) / 40 = .18GB = 184MB
When changing both max_statement_mem
and statement_mem
, max_statement_mem
must be changed first, or listed first in the postgresql.conf file.
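For example, applying the 184MB value calculated above cluster-wide (illustrative values; max_statement_mem is set first, as required):
gpconfig -c max_statement_mem -v 2GB
gpconfig -c statement_mem -v 184MB
gpstop -u   # reload the configuration files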
Value Range | Default | Set Classifications |
---|---|---|
number of kilobytes | 128MB | master, session, reload |
Abort any statement that takes over the specified number of milliseconds. 0 turns off the limitation.
Value Range | Default | Set Classifications |
---|---|---|
number of milliseconds | 0 | master, session, reload |
Note: The stats_queue_level server configuration parameter is enforced only when resource queue-based resource management is active.
Collects resource queue statistics on database activity.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Determines the number of connection slots that are reserved for Greenplum Database superusers.
Value Range | Default | Set Classifications |
---|---|---|
integer < max_connections | 10 | local, system, restart |
How many keepalives may be lost before the connection is considered dead. A value of 0 uses the system default. If TCP_KEEPCNT is not supported, this parameter must be 0.
Use this parameter for all connections that are not between a primary and mirror segment.
Value Range | Default | Set Classifications |
---|---|---|
number of lost keepalives | 0 | local, system, restart |
Number of seconds between sending keepalives on an otherwise idle connection. A value of 0 uses the system default. If TCP_KEEPIDLE is not supported, this parameter must be 0.
Use this parameter for all connections that are not between a primary and mirror segment.
Value Range | Default | Set Classifications |
---|---|---|
number of seconds | 0 | local, system, restart |
How many seconds to wait for a response to a keepalive before retransmitting. A value of 0 uses the system default. If TCP_KEEPINTVL is not supported, this parameter must be 0.
Use this parameter for all connections that are not between a primary and mirror segment.
Value Range | Default | Set Classifications |
---|---|---|
number of seconds | 0 | local, system, restart |
Sets the maximum memory, in blocks, to allow for temporary buffers by each database session. These are session-local buffers used only for access to temporary tables. The setting can be changed within individual sessions, but only up until the first use of temporary tables within a session. The cost of setting a large value in sessions that do not actually need a lot of temporary buffers is only a buffer descriptor for each block, or about 64 bytes, per increment. However, if a buffer is actually used, an additional 32768 bytes will be consumed.
You can set this parameter to the number of 32K blocks (for example, 1024
to allow 32MB for buffers), or specify the maximum amount of memory to allow (for example '48MB'
for 1536 blocks). The gpconfig
utility and SHOW
command report the maximum amount of memory allowed for temporary buffers.
Value Range | Default | Set Classifications |
---|---|---|
integer | 1024 (32MB) | master, session, reload |
Specifies tablespaces in which to create temporary objects (temp tables and indexes on temp tables) when a CREATE
command does not explicitly specify a tablespace. These tablespaces can also include temporary files for purposes such as large data set sorting.
The value is a comma-separated list of tablespace names. When the list contains more than one tablespace name, Greenplum chooses a random list member each time it creates a temporary object. An exception applies within a transaction, where successively created temporary objects are placed in successive tablespaces from the list. If the selected element of the list is an empty string, Greenplum automatically uses the default tablespace of the current database instead.
When setting temp_tablespaces
interactively, avoid specifying a nonexistent tablespace, or a tablespace for which the user does not have CREATE
privileges. For non-superusers, a superuser must GRANT
them the CREATE
privilege on the temp tablespace. When using a previously set value (for example a value in postgresql.conf
), nonexistent tablespaces are ignored, as are tablespaces for which the user lacks CREATE
privilege.
The default value is an empty string, which results in all temporary objects being created in the default tablespace of the current database.
See also default_tablespace.
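For example (the tablespace name is hypothetical; it must exist and the user must hold CREATE privilege on it):
SET temp_tablespaces = 'fastspace';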
Value Range | Default | Set Classifications |
---|---|---|
one or more tablespace names | unset | master, session, reload |
Sets the time zone for displaying and interpreting time stamps. The default is to use whatever the system environment specifies as the time zone. See Date/Time Keywords in the PostgreSQL documentation.
Value Range | Default | Set Classifications |
---|---|---|
time zone abbreviation | local, restart |
Sets the collection of time zone abbreviations that will be accepted by the server for date time input. The default is Default
, which is a collection that works in most of the world. Australia
and India
collections are also provided, and other collections can be defined for a particular installation. Possible values are names of configuration files stored in $GPHOME/share/postgresql/timezonesets/
.
To configure Greenplum Database to use a custom collection of timezones, copy the file that contains the timezone definitions to the directory $GPHOME/share/postgresql/timezonesets/ on the Greenplum Database master and segment hosts. Then set the value of the server configuration parameter timezone_abbreviations to the file name. For example, to use a file custom that contains the default timezones and the WIB (Waktu Indonesia Barat) timezone:
1. Copy the file Default from the directory $GPHOME/share/postgresql/timezonesets/ to the file custom. Add the WIB timezone information from the file Asia.txt to the custom file.
2. Copy the file custom to the directory $GPHOME/share/postgresql/timezonesets/ on the Greenplum Database master and segment hosts.
3. Set the value of the server configuration parameter timezone_abbreviations to custom.
4. Reload the server configuration file (gpstop -u).
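As a sketch of steps 3 and 4, the setting can be applied cluster-wide with gpconfig and the configuration reloaded:
gpconfig -c timezone_abbreviations -v custom
gpstop -u   # reload the server configuration files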
Value Range | Default | Set Classifications |
---|---|---|
string | Default | master, session, reload |
Enables the collection of information on the currently executing command of each session, along with the time when that command began execution. The default value is true
. Only superusers can change this setting. See the pg_stat_activity view.
Note: Even when enabled, this information is not visible to all users, only to superusers and the user owning the session being reported on, so it should not represent a security risk.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload, superuser |
Sets the maximum length limit for the query text stored in the query
column of the system catalog view pg_stat_activity. The minimum length is 1024 characters.
Value Range | Default | Set Classifications |
---|---|---|
integer | 1024 | local, system, restart |
Enables the collection of information on the currently executing command of each session, along with the time at which that command began execution.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload, superuser |
Sets the current transaction's isolation level.
Value Range | Default | Set Classifications |
---|---|---|
read committed serializable |
read committed | master, session, reload |
Sets the current transaction's read-only status.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
When on, expressions of the form expr = NULL (or NULL = expr) are treated as expr IS NULL, that is, they return true if expr evaluates to the null value, and false otherwise. The correct SQL-spec-compliant behavior of expr = NULL is to always return null (unknown).
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
Specifies the directory of the UNIX-domain socket on which the server is to listen for connections from client applications. Multiple sockets can be created by listing multiple directories separated by commas.
Important: Do not change the value of this parameter. The default location is required for Greenplum Database utilities.
Value Range | Default | Set Classifications |
---|---|---|
directory path | 0777 | local, system, restart |
Sets the owning group of the UNIX-domain socket. By default this is an empty string, which uses the default group for the current user.
Value Range | Default | Set Classifications |
---|---|---|
UNIX group name | unset | local, system, restart |
Sets the access permissions of the UNIX-domain socket. UNIX-domain sockets use the usual UNIX file system permission set. Note that for a UNIX-domain socket, only write permission matters.
Value Range | Default | Set Classifications |
---|---|---|
numeric UNIX file permission mode (as accepted by the chmod or umask commands) | 0777 | local, system, restart |
Enables updating of the process title every time a new SQL command is received by the server. The process title is typically viewed by the ps
command.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | on | local, session, reload |
The length of time that the process will sleep when the cost limit has been exceeded. 0 deactivates the cost-based vacuum delay feature.
Value Range | Default | Set Classifications |
---|---|---|
milliseconds >= 0 (in multiples of 10) | 0 | local, session, reload |
The accumulated cost that will cause the vacuuming process to sleep.
Value Range | Default | Set Classifications |
---|---|---|
integer > 0 | 200 | local, session, reload |
The estimated cost charged when vacuum modifies a block that was previously clean. It represents the extra I/O required to flush the dirty block out to disk again.
Value Range | Default | Set Classifications |
---|---|---|
integer > 0 | 20 | local, session, reload |
The estimated cost for vacuuming a buffer found in the shared buffer cache. It represents the cost to lock the buffer pool, look up the shared hash table, and scan the content of the page.
Value Range | Default | Set Classifications |
---|---|---|
integer > 0 | 1 | local, session, reload |
The estimated cost for vacuuming a buffer that has to be read from disk. This represents the effort to lock the buffer pool, look up the shared hash table, read the desired block in from the disk, and scan its content.
Value Range | Default | Set Classifications |
---|---|---|
integer > 0 | 10 | local, session, reload |
Specifies the cutoff age (in transactions) that VACUUM
should use to decide whether to replace transaction IDs with FrozenXID while scanning a table.
For information about VACUUM
and transaction ID management, see "Managing Data" in the Greenplum Database Administrator Guide and the PostgreSQL documentation.
Value Range | Default | Set Classifications |
---|---|---|
integer 0-100000000000 | 50000000 | local, session, reload |
Enables a test that validates the free tuple ID (TID) list. The list is maintained and used by Greenplum Database. Greenplum Database determines the validity of the free TID list by ensuring the previous free TID of the current free tuple is a valid free tuple. The default value is true
, which enables the test.
If Greenplum Database detects a corruption in the free TID list, the free TID list is rebuilt, a warning is logged, and a warning is returned by queries for which the check failed. Greenplum Database attempts to run the queries.
Note: If a warning is returned, please contact VMware Support.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
When a Greenplum Database external table is defined with the gpfdists
protocol to use SSL security, this parameter controls whether SSL certificate authentication is enabled.
Regardless of the setting of this server configuration parameter, Greenplum Database always encrypts data that you read from or write to an external table that specifies the gpfdists
protocol.
The default is true
: SSL authentication is enabled when Greenplum Database communicates with the gpfdist
utility to either read data from or write data to an external data source.
The value false
deactivates SSL certificate authentication. These SSL exceptions are ignored:
- The exception that the SSL certificate of the gpfdist
utility is not trusted by Greenplum Database.
- The exception that the host name in the SSL certificate does not match the host name that is running gpfdist
.
to deactivate authentication when testing the communication between the Greenplum Database external table and the gpfdist
utility that is serving the external data.
Caution: Deactivating SSL certificate authentication exposes a security risk by not validating the gpfdists SSL certificate.
For information about the gpfdists
protocol, see gpfdists:// Protocol. For information about running the gpfdist
utility, see gpfdist.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | true | master, session, reload |
Enables checking for interrupts before reserving vmem memory for a query during Greenplum Database query execution. Before reserving further vmem for a query, Greenplum Database checks whether the current session has a pending query cancellation or other pending interrupt. This ensures more responsive interrupt processing, including query cancellation requests. The default is off
.
Value Range | Default | Set Classifications |
---|---|---|
Boolean | off | master, session, reload |
When Greenplum Database segment mirroring is enabled, specifies the maximum amount of Write-Ahead Log (WAL) records (in KB) written by a transaction on the primary segment instance before the records are written to the mirror segment instance for replication. By default, Greenplum Database writes the records to the mirror segment instance when a checkpoint occurs or when the wait_for_replication_threshold
value is reached.
A value of 0 deactivates the check for the amount of records. The records are written to the mirror segment instance only after a checkpoint occurs.
If you set the value to 0, database performance issues might occur under heavy loads that perform long transactions that do not perform a checkpoint operation.
Value Range | Default | Set Classifications |
---|---|---|
0 - MAX-INT / 1024 | 1024 | master, system, reload |
For Greenplum Database master mirroring, sets the maximum number of processed WAL segment files that are saved by the active Greenplum Database master if a checkpoint operation occurs.
The segment files are used to synchronize the standby master with the active master.
Value Range | Default | Set Classifications |
---|---|---|
integer | 5 | master, system, reload, superuser |
For Greenplum Database master mirroring, sets the interval in seconds between walreceiver
process status messages that are sent to the active master. Under heavy loads, the time might be longer.
The value of wal_sender_timeout controls the time that the walsender
process waits for a walreceiver
message.
Value Range | Default | Set Classifications |
---|---|---|
integer 0- INT_MAX/1000 | 10 sec | master, system, reload, superuser |
When Greenplum Database segment mirroring and archiving is enabled, specifies the interval in milliseconds at which the walsender
process on the primary segment sends archival status messages to the walreceiver
process of its corresponding mirror segment.
A value of 0 deactivates this feature.
Value Range | Default | Set Classifications |
---|---|---|
0 - INT_MAX |
10000 ms (10 seconds) | local, system, reload |
Sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files. If this value is specified without units, it is taken as kilobytes. The default value is 32 MB. Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files. In addition, several running sessions may be performing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem
; keep this fact in mind when choosing the value for this parameter. Sort operations are used for ORDER BY
, DISTINCT
, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN
subqueries. Apart from sorting and hashing, bitmap index scans also rely on work_mem
. Operations relying on tuplestores such as function scans, CTEs, PL/pgSQL and administration UDFs also rely on work_mem
.
Apart from assigning memory to specific execution operators, setting work_mem
also influences certain query plans over others, when the Postgres-based planner is used as the optimizer.
work_mem
is a distinct memory management concept that does not interact with resource queue or resource group memory controls, which are imposed at the query level.
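For example, to give sort and hash operations in the current session more memory (an illustrative value; several concurrent operations can each use this much):
SET work_mem = '128MB';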
Value Range | Default | Set Classifications |
---|---|---|
number of kilobytes | 32MB | master, session, reload |
Size of the buffer that Greenplum Database uses for network communication, such as the gpfdist
utility and external web tables (that use http). Valid units are KB
(as in 128KB
), MB
, GB
, and TB
. Greenplum Database stores data in the buffer before writing the data out. For information about gpfdist
, see the Greenplum Database Utility Guide.
Value Range | Default | Set Classifications |
---|---|---|
integer 32 - 131072 (32KB - 128MB) | 64 | local, session, reload |
The number of transaction IDs prior to the ID where transaction ID wraparound occurs. When this limit is reached, Greenplum Database stops creating new transactions to avoid data loss due to transaction ID wraparound.
Value Range | Default | Set Classifications |
---|---|---|
integer 10000000 - INT_MAX | 100000000 | local, system, restart |
The number of transaction IDs prior to the limit specified by xid_stop_limit. When Greenplum Database reaches this limit, it issues a warning to perform a VACUUM operation to avoid data loss due to transaction ID wraparound.
Value Range | Default | Set Classifications |
---|---|---|
integer 10000000 - INT_MAX | 500000000 | local, system, restart |
Specifies how binary values are encoded in XML data, for example, when bytea
values are converted to XML. The binary data can be converted to either base64 encoding or hexadecimal encoding. The default is base64.
The parameter can be set for a database system, an individual database, or a session.
Value Range | Default | Set Classifications |
---|---|---|
base64 hex |
base64 | master, session, reload |
Specifies whether XML data is to be considered as an XML document (document
) or XML content fragment (content
) for operations that perform implicit parsing and serialization. The default is content
.
This parameter affects the validation performed by xml_is_well_formed()
. If the value is document
, the function checks for a well-formed XML document. If the value is content
, the function checks for a well-formed XML content fragment.
Note: An XML document that contains a document type declaration (DTD) is not considered a valid XML content fragment. If xmloption is set to content, XML that contains a DTD is not considered valid XML.
To cast a character string that contains a DTD to the xml
data type, use the xmlparse
function with the document
keyword, or change the xmloption
value to document
.
The parameter can be set for a database system, an individual database, or a session. The SQL command to set this option for a session is also available in Greenplum Database.
SET XML OPTION { DOCUMENT | CONTENT }
Value Range | Default | Set Classifications |
---|---|---|
document content |
content | master, session, reload |