You use resource groups to set and enforce CPU, memory, and concurrent transaction limits in Greenplum Database. Once you define a resource group, you assign the group to one or more Greenplum Database roles in order to control the resources used by them.
When you assign a resource group to a role, the resource limits that you define for the group apply to all of the roles to which you assign the group. For example, the memory limit for a resource group identifies the maximum memory usage for all running transactions submitted by Greenplum Database users in all roles to which you assign the group.
Greenplum Database uses Linux-based control groups for CPU resource management, and the Runaway Detector for memory statistics, tracking, and management.
When using resource groups to control resources like CPU cores, review the Hyperthreading note in Hardware and Network.
Parent topic: Managing Resources
The most common application for resource groups is to manage the number of active queries that different roles may run concurrently in your Greenplum Database cluster. You can also manage the amount of CPU and memory resources that Greenplum allocates to each query.
When a user runs a query, Greenplum Database evaluates the query against a set of limits defined for the resource group. Greenplum Database runs the query immediately if the group's resource limits have not yet been reached and the query does not cause the group to exceed the concurrent transaction limit. If these conditions are not met, Greenplum Database queues the query. For example, if the maximum number of concurrent transactions for the resource group has already been reached, a subsequent query is queued and must wait until other queries complete before it runs. Greenplum Database may also run a pending query when the resource group's concurrency and memory limits are altered to large enough values.
Within a resource group for roles, transactions are evaluated on a first in, first out basis. Greenplum Database periodically assesses the active workload of the system, reallocating resources and starting/queuing jobs as necessary.
When you create a resource group, you provide a set of limits that determine the amount of CPU and memory resources available to the group. The following table lists the available limits for resource groups:
Limit Type | Description | Value Range | Default |
---|---|---|---|
CONCURRENCY | The maximum number of concurrent transactions, including active and idle transactions, that are permitted in the resource group. | 0-max_connections | 20 |
CPU_MAX_PERCENT | The maximum percentage of CPU resources the group can use. | [0-100] | -1 (not set) |
CPU_WEIGHT | The scheduling priority of the resource group. | [1-500] | 100 |
CPUSET | The specific CPU logical cores (or logical threads, with hyperthreading) reserved for this resource group. | Depends on the system core configuration | -1 |
MEMORY_LIMIT | The memory limit value specified for the resource group. | Integer (MB) | -1 (not set) |
MIN_COST | The minimum cost of a query plan to be included in the resource group. | Integer | 0 |
Note: Resource limits are not enforced on SET, RESET, and SHOW commands.
The CONCURRENCY limit controls the maximum number of concurrent transactions permitted for a resource group.

Each resource group is logically divided into a fixed number of slots equal to the CONCURRENCY limit. Greenplum Database allocates these slots an equal, fixed percentage of memory resources.

The default CONCURRENCY limit value for a resource group for roles is 20. A value of 0 means that no query is allowed to run in the resource group.

Greenplum Database queues any transactions submitted after the resource group reaches its CONCURRENCY limit. When a running transaction completes, Greenplum Database un-queues and runs the earliest queued transaction if sufficient memory resources exist. Note that an idle transaction still occupies a concurrency slot, even when no statement is running.
You can set the server configuration parameter gp_resource_group_queuing_timeout to specify the amount of time (in milliseconds) a transaction remains in the queue before Greenplum Database cancels it. The default timeout is zero, which means Greenplum queues transactions indefinitely.
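For example, assuming this parameter may be set at the session level, a session could limit queue wait time to 30 seconds:

```sql
-- Cancel any transaction in this session that waits in the resource
-- group queue for longer than 30000 milliseconds (30 seconds).
-- A value of 0 (the default) queues transactions indefinitely.
SET gp_resource_group_queuing_timeout = 30000;
```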
A query bypasses the resource group concurrency limit if you set the server configuration parameter gp_resource_group_bypass to true. This parameter deactivates the concurrent transaction limit of the resource group so that a query can run immediately. The default value is false, which enforces the CONCURRENCY limit. You may only set this parameter for a single session, not within a transaction or a function. If you set gp_resource_group_bypass to true, the query no longer observes the CPU or memory limits of its resource group. Instead, the memory quota assigned to the query is statement_mem per query. If there is not enough memory to satisfy the memory allocation request, the query fails.
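A minimal sketch of session-level bypass; the table name my_table is hypothetical:

```sql
-- Session level only; cannot be set inside a transaction or function.
SET gp_resource_group_bypass = true;

-- This query now runs immediately, outside the group's CPU and memory
-- limits, with a memory quota of statement_mem per query.
SELECT count(*) FROM my_table;

-- Restore normal resource group enforcement for the session.
SET gp_resource_group_bypass = false;
```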
You may also bypass queries that read only catalog tables; this is useful for database Graphical User Interface (GUI) clients, which run catalog queries to obtain metadata. If the server configuration parameter gp_resource_group_bypass_catalog_query is set to true (the default), the Greenplum Database resource group scheduler bypasses all queries that read exclusively from system catalogs, or whose query text references pg_catalog schema tables only. These queries are automatically unassigned from their current resource group; they do not observe the limits of the resource group and their resource usage is not accounted to the group. Such queries run immediately, with a memory quota of statement_mem per query.
You may bypass direct dispatch queries with the server configuration parameter gp_resource_group_bypass_direct_dispatch. A direct dispatch query is a special type of query that requires only a single segment to participate in execution. To improve efficiency, Greenplum optimizes this type of query using direct dispatch optimization: the system sends the query plan only to the single segment that needs to execute it, instead of to all segments. If you set gp_resource_group_bypass_direct_dispatch to true, such a query no longer observes the CPU or memory limits of its resource group, and it runs immediately. Instead, the memory quota assigned to the query is statement_mem per query. If there is not enough memory to satisfy the memory allocation request, the query fails. You may only set this parameter for a single session, not within a transaction or a function.
Queries whose plan cost is less than the MIN_COST limit are automatically unassigned from their resource group and do not observe any of its limits. The resources used by such a query are not accounted to the resource group. The query has a memory quota of statement_mem. The default value of MIN_COST is 0.
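For example, to stop short queries from consuming concurrency slots, you might raise MIN_COST on a group; the group name rg_short_queries is hypothetical:

```sql
-- Queries with an estimated plan cost below 50 bypass this group's
-- limits and run with a statement_mem memory quota.
ALTER RESOURCE GROUP rg_short_queries SET MIN_COST 50;
```

You can inspect a query's estimated plan cost with EXPLAIN to decide on a suitable threshold.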
Greenplum Database leverages Linux control groups to implement CPU resource management. Greenplum Database allocates CPU resources in two ways: by CPU core, using the CPUSET limit, or by percentage of CPU resources, using the CPU_MAX_PERCENT limit.
When you set one of the allocation modes for a resource group, Greenplum Database deactivates the other allocation mode. You may employ both modes of CPU resource allocation simultaneously for different resource groups on the same Greenplum Database cluster. You may also change the CPU resource allocation mode for a resource group at runtime.
Greenplum Database uses the server configuration parameter gp_resource_group_cpu_limit to identify the maximum percentage of system CPU resources to allocate to resource groups on each Greenplum Database segment node. The remaining unreserved CPU resources are used for the operating system kernel and Greenplum Database daemons. The amount of CPU available to Greenplum Database queries per host is then divided equally among each segment on the Greenplum node.
Note: The default gp_resource_group_cpu_limit value may not leave sufficient CPU resources if you are running other workloads on your Greenplum Database cluster nodes, so be sure to adjust this server configuration parameter accordingly.

Caution: Avoid setting gp_resource_group_cpu_limit to a value higher than 0.9. Doing so may result in high workload queries taking nearly all CPU resources, potentially starving Greenplum Database auxiliary processes.
You identify the CPU cores that you want to reserve for a resource group with the CPUSET property. The CPU cores that you specify must be available in the system and cannot overlap with any CPU cores that you reserved for other resource groups. Although Greenplum Database uses the cores that you assign to a resource group exclusively for that group, note that those CPU cores may also be used by non-Greenplum processes in the system. When you configure CPUSET for a resource group, Greenplum Database deactivates CPU_MAX_PERCENT and CPU_WEIGHT for the group and sets their values to -1.
Specify CPU cores separately for the coordinator host and segment hosts, separated by a semicolon. Use a comma-separated list of single core numbers or number intervals when you configure cores for CPUSET. You must enclose the core numbers/intervals in single quotes; for example, '1;1,3-4' uses core 1 on the coordinator host, and cores 1, 3, and 4 on segment hosts.
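For example, a sketch that reserves the cores from the text above for a hypothetical group named rg_dedicated:

```sql
-- Core 1 on the coordinator host; cores 1, 3, and 4 on segment hosts.
CREATE RESOURCE GROUP rg_dedicated WITH (CPUSET='1;1,3-4');
```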
When you assign CPU cores to CPUSET groups, consider the following:

- CPUSET uses the specified cores exclusively. If there are no running queries in the group, the reserved cores are idle and cannot be used by queries in other resource groups. Consider minimizing the number of CPUSET groups to avoid wasting system CPU resources.
- admin_group and default_group require at least one CPU core. When all CPU cores are reserved, Greenplum Database assigns CPU core 0 to these default groups. In this situation, the resource group to which you assigned CPU core 0 shares the core with admin_group and default_group.
- If all CPU cores are reserved by CPUSET resource groups, the default groups are automatically assigned CPU core 0 to avoid system start failure.

Resource groups that you configure with CPUSET have a higher priority on CPU resources. The maximum CPU resource usage percentage for all resource groups configured with CPUSET on a segment host is the number of reserved CPU cores divided by the total number of CPU cores, multiplied by 100.
Note: You must configure CPUSET for a resource group after you have enabled resource group-based resource management for your Greenplum Database cluster with the gp_resource_manager server configuration parameter.
You configure a resource group with CPU_MAX_PERCENT in order to assign CPU resources by percentage. When you configure CPU_MAX_PERCENT for a resource group, Greenplum Database deactivates CPUSET for the group.

The CPU_MAX_PERCENT parameter reserves the specified percentage of the segment CPU for resource management. The minimum CPU_MAX_PERCENT percentage you can specify for a resource group is 1; the maximum is 100. The sum of the CPU_MAX_PERCENT values specified for all resource groups in your Greenplum Database cluster can exceed 100. CPU_MAX_PERCENT specifies the total time ratio that all tasks in a resource group can run during one CPU cycle. Once the tasks in the resource group have used up the time specified by the quota, they are throttled for the remainder of that time period and are not allowed to run until the next time period.
You set the CPU_WEIGHT parameter to assign the scheduling priority of the current group. The default value is 100, and the range of values is 1 to 500. The value specifies the relative share of CPU time available to tasks in the resource group. For example, tasks in two resource groups with CPU_WEIGHT set to 100 get the same CPU time, while tasks in a resource group with CPU_WEIGHT set to 200 get twice the CPU time.
For example, for a high priority job that does not need much CPU, configure the resource group with CPU_MAX_PERCENT=10 and CPU_WEIGHT=500. For a low priority job that requires a large amount of CPU, configure the resource group with CPU_MAX_PERCENT=90 and CPU_WEIGHT=10.
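These two configurations can be sketched as follows; the group names are hypothetical:

```sql
-- High priority, modest CPU need: strong scheduling weight,
-- but capped at 10% of CPU.
CREATE RESOURCE GROUP rg_high_prio WITH (CPU_MAX_PERCENT=10, CPU_WEIGHT=500);

-- Low priority, CPU-hungry: weak scheduling weight,
-- but allowed up to 90% of CPU when the system is otherwise idle.
CREATE RESOURCE GROUP rg_low_prio WITH (CPU_MAX_PERCENT=90, CPU_WEIGHT=10);
```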
When tasks in a resource group are idle and not using any CPU time, the leftover time is collected in a global pool of unused CPU cycles. Other resource groups can borrow CPU cycles from this pool. The actual amount of CPU time available to a resource group may vary, depending on the number of resource groups present on the system.
For example, consider the following groups:
Group Name | CONCURRENCY | CPU_MAX_PERCENT | CPU_WEIGHT |
---|---|---|---|
default_group | 20 | 50 | 10 |
admin_group | 10 | 70 | 30 |
system_group | 10 | 30 | 10 |
test | 10 | 10 | 10 |
Roles in default_group have an available CPU ratio (determined by CPU_WEIGHT) of 10/(10+30+10+10) = 16%. This means that they can use at least 16% of the CPU when the system workload is high. When the system has idle CPU resources, they can use more, as the hard limit (set by CPU_MAX_PERCENT) is 50%.

Roles in admin_group have an available CPU ratio of 30/(10+30+10+10) = 50% when the system workload is high. When the system has idle CPU resources, they can use resources up to the hard limit of 70%.

Roles in test have a CPU ratio of 10/(10+30+10+10) = 16%. However, because the hard limit determined by CPU_MAX_PERCENT is 10%, they can only use up to 10% of the CPU even when the system is idle.
When you enable resource groups, memory usage is managed at the Greenplum Database segment and resource group levels. You can also manage memory at the transaction level. See Greenplum Database Memory Overview to estimate how much memory each Greenplum Database segment has available to use. This will help you estimate how much memory to assign to the resource groups.
The amount of memory allocated to a query is determined by the following parameters:
The MEMORY_LIMIT parameter of a resource group sets the maximum amount of memory reserved for the resource group on a segment. This determines the total amount of memory that all worker processes of a query can consume on a segment host during query execution. The amount of memory allotted to a query is the group memory limit divided by the group concurrency limit: MEMORY_LIMIT / CONCURRENCY.
If a query requires a large amount of memory, you may use the server configuration parameter gp_resgroup_memory_query_fixed_mem to set a fixed memory amount for the query at the session level. This parameter overrides the resource group memory allocation and may exceed it.
Greenplum allocates memory for an incoming query using the gp_resgroup_memory_query_fixed_mem value, if set, bypassing the resource group settings. Otherwise, it uses MEMORY_LIMIT / CONCURRENCY as the memory allocated to the query. If MEMORY_LIMIT is not set, the query memory allocation defaults to statement_mem.
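As a sketch, a group reserving 1500 MB across 3 slots gives each query 1500 / 3 = 500 MB by default; the group name rg_mem is hypothetical, and the exact value syntax for gp_resgroup_memory_query_fixed_mem may vary by Greenplum version:

```sql
-- Each of the 3 concurrency slots receives 1500 / 3 = 500 MB.
CREATE RESOURCE GROUP rg_mem WITH (CPU_MAX_PERCENT=20, MEMORY_LIMIT=1500, CONCURRENCY=3);

-- Override the per-query memory quota for the current session only.
SET gp_resgroup_memory_query_fixed_mem = '800MB';
```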
If there is not enough memory in the system for a query, it spills to disk. When the gp_workfile_limit_files_per_query limit is reached, Greenplum Database generates an out of memory (OOM) error.
For example, consider a resource group named adhoc with MEMORY_LIMIT set to 1.5 GB and CONCURRENCY set to 3. By default, each statement submitted to the group is allocated 500 MB of memory. Now consider the following series of events:
1. User ADHOC_1 submits query Q1, overriding gp_resgroup_memory_query_fixed_mem to 800MB. The Q1 statement is admitted into the system.
2. User ADHOC_2 submits query Q2, using the default 500MB.
3. With Q1 and Q2 still running, user ADHOC_3 submits query Q3, using the default 500MB.
4. Queries Q1 and Q2 have used 1300MB of the group's 1500MB. Therefore, Q3 must wait for Q1 or Q2 to complete before it can run.
5. User ADHOC_4 submits query Q4, with gp_resgroup_memory_query_fixed_mem set to 700 MB.
6. Query Q4 runs immediately, as it bypasses the resource group limits.
There are some special usage considerations regarding memory limits:

- When you set gp_resource_group_bypass or gp_resource_group_bypass_catalog_query to bypass the resource group limits, the memory limit for the query takes the value of statement_mem.
- When (MEMORY_LIMIT / CONCURRENCY) < statement_mem, Greenplum Database uses statement_mem as the fixed amount of memory allocated per query.
- statement_mem is capped at max_statement_mem.
- Queries with a plan cost smaller than MIN_COST use a memory quota of statement_mem.

Greenplum Database resource groups use Linux Control Groups (cgroups) to manage CPU resources. There are two versions of cgroups: cgroup v1 and cgroup v2, which differ in the virtual file hierarchy implemented by v2. Greenplum Database uses cgroup v2 by default. For detailed information about cgroups, refer to the Control Groups documentation for your Linux distribution.
Verify what version of cgroup is configured in your environment by checking what filesystem is mounted by default during system boot:
stat -fc %T /sys/fs/cgroup/
For cgroup v1, the output is tmpfs. For cgroup v2, the output is cgroup2fs.
Complete the following tasks on each node in your Greenplum Database cluster to set up cgroups v1 for use with resource groups:
If not already installed, install the Control Groups operating system package on each Greenplum Database node. The command that you run to perform this task differs based on the operating system installed on the node. You must be the superuser or have sudo access to run the command:
Redhat/Oracle/Rocky 8.x systems:
sudo yum install libcgroup-tools
Locate the cgroups configuration file /etc/cgconfig.conf. You must be the superuser or have sudo access to edit this file:
sudo vi /etc/cgconfig.conf
Add the following configuration information to the file:

group gpdb {
    perm {
        task {
            uid = gpadmin;
            gid = gpadmin;
        }
        admin {
            uid = gpadmin;
            gid = gpadmin;
        }
    }
    cpu {
    }
    cpuacct {
    }
    cpuset {
    }
    memory {
    }
}
This content configures CPU, CPU accounting, CPU core set, and memory control groups managed by the gpadmin user. Greenplum Database uses the memory control group only for those resource groups created with the cgroup MEMORY_AUDITOR.
Start the cgroups service on each Greenplum Database node. The command that you run to perform this task differs based on the operating system installed on the node. You must be the superuser or have sudo access to run the command:
Redhat/Oracle/Rocky 8.x systems:
sudo cgconfigparser -l /etc/cgconfig.conf
Identify the cgroup directory mount point for the node:

grep cgroup /proc/mounts

The first line of output identifies the cgroup mount point.
Verify that you set up the Greenplum Database cgroups configuration correctly by running the following commands. Replace <cgroup_mount_point> with the mount point that you identified in the previous step:
ls -l <cgroup_mount_point>/cpu/gpdb
ls -l <cgroup_mount_point>/cpuacct/gpdb
ls -l <cgroup_mount_point>/cpuset/gpdb
ls -l <cgroup_mount_point>/memory/gpdb
If these directories exist and are owned by gpadmin:gpadmin, you have successfully configured cgroups for Greenplum Database CPU resource management.
To automatically recreate the cgroup hierarchies and parameters that Greenplum Database requires when your system is restarted, configure your system to enable the Linux cgroup service daemon cgconfig.service (Redhat/Oracle/Rocky 8.x) at node start-up. For example, configure one of the following cgroup service commands in your preferred service auto-start tool:
Redhat/Oracle/Rocky 8.x systems:
sudo systemctl enable cgconfig.service
To start the service immediately (without having to reboot) enter:
sudo systemctl start cgconfig.service
You may choose a different method to recreate the Greenplum Database resource group cgroup hierarchies.
Configure the system to mount cgroups-v2 by default during system boot using the systemd system and service manager:

sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"
Reboot the system for the changes to take effect.
Create the directory /sys/fs/cgroup/gpdb and ensure that the gpadmin user has read and write permission on it.
mkdir -p /sys/fs/cgroup/gpdb
chmod +rw /sys/fs/cgroup/gpdb
Ensure that gpadmin has read and write permission on /sys/fs/cgroup/cgroup.procs.
chmod +rw /sys/fs/cgroup/cgroup.procs
When you install Greenplum Database, no resource management policy is enabled by default. To use resource groups, set the gp_resource_manager server configuration parameter.
Set the gp_resource_manager server configuration parameter to the value "group-v1" or "group-v2", depending on the version of cgroup configured on your Linux distribution. For example:
gpconfig -c gp_resource_manager -v "group-v2"
Restart Greenplum Database:
gpstop
gpstart
Once enabled, any transaction submitted by a role is directed to the resource group assigned to the role, and is governed by that resource group's concurrency, memory, and CPU limits.
Greenplum Database creates three default resource groups for roles, named admin_group, default_group, and system_group. When you enable resource groups, any role that was not explicitly assigned a resource group is assigned the default group for the role's capability: SUPERUSER roles are assigned the admin_group; non-admin roles are assigned default_group. The resources of the Greenplum Database system processes are assigned to the system_group. You cannot manually assign any roles to the system_group.
The default resource groups admin_group, default_group, and system_group are created with the following resource limits:
Limit Type | admin_group | default_group | system_group |
---|---|---|---|
CONCURRENCY | 10 | 5 | 0 |
CPU_MAX_PERCENT | 10 | 20 | 10 |
CPU_WEIGHT | 100 | 100 | 100 |
CPUSET | -1 | -1 | -1 |
MEMORY_LIMIT | -1 | -1 | -1 |
MIN_COST | 0 | 0 | 0 |
When you create a resource group for a role, you provide a name and a CPU resource allocation mode (core or percentage). You can optionally provide a concurrent transaction limit, a memory limit, a CPU soft priority, and a minimum cost. Use the CREATE RESOURCE GROUP command to create a new resource group.
When you create a resource group for a role, you must provide a CPU_MAX_PERCENT or CPUSET limit value. These limits identify the share of Greenplum Database CPU resources to allocate to the resource group. You may specify a MEMORY_LIMIT to reserve a fixed amount of memory for the resource group.
For example, to create a resource group named rgroup1 with a CPU limit of 20, a memory limit of 25, a CPU soft priority of 500 and a minimum cost of 50:
CREATE RESOURCE GROUP rgroup1 WITH (CPU_MAX_PERCENT=20, MEMORY_LIMIT=25, CPU_WEIGHT=500, MIN_COST=50);
The CPU limit of 20 is shared by every role to which rgroup1 is assigned. Similarly, the memory limit of 25 is shared by every role to which rgroup1 is assigned. rgroup1 uses the default CONCURRENCY setting of 20.
The ALTER RESOURCE GROUP command updates the limits of a resource group. To change the limits of a resource group, specify the new values that you want for the group. For example:
ALTER RESOURCE GROUP rg_role_light SET CONCURRENCY 7;
ALTER RESOURCE GROUP exec SET MEMORY_LIMIT 30;
ALTER RESOURCE GROUP rgroup1 SET CPUSET '1;2,4';
Note: You cannot set or alter the CONCURRENCY value for the admin_group to zero (0).
The DROP RESOURCE GROUP command drops a resource group. To drop a resource group for a role, the group cannot be assigned to any role, nor can there be any transactions active or waiting in the resource group.
To drop a resource group:
DROP RESOURCE GROUP exec;
Greenplum Database supports the Runaway Detector, which automatically terminates queries based on the amount of memory they use. For resource group-managed queries, Greenplum Database terminates a running query based on the amount of memory used by the query. The relevant configuration parameters are:

- gp_vmem_protect_limit sets the amount of memory that all postgres processes of an active segment instance can consume. If a query causes this limit to be exceeded, no memory is allocated and the query fails.
- runaway_detector_activation_percent: when resource groups are enabled and the memory used exceeds gp_vmem_protect_limit * runaway_detector_activation_percent, Greenplum Database terminates queries based on memory usage, selecting from the queries managed by user resource groups (excluding those in the system_group resource group) and starting with the query that consumes the largest amount of memory. Greenplum Database continues terminating queries until the percentage of memory used falls below the specified percentage.
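For example, assuming gpconfig accepts this parameter on your version, you might set the activation threshold to 90% and then restart the cluster, mirroring the restart sequence used elsewhere in this document:

```shell
# Terminate the largest memory consumers once usage exceeds
# 90% of gp_vmem_protect_limit.
gpconfig -c runaway_detector_activation_percent -v 90
gpstop
gpstart
```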
You assign a resource group to a database role using the RESOURCE GROUP clause of the CREATE ROLE or ALTER ROLE commands. If you do not specify a resource group for a role, the role is assigned the default group for the role's capability: SUPERUSER roles are assigned the admin_group; non-admin roles are assigned default_group.
Use the ALTER ROLE or CREATE ROLE commands to assign a resource group to a role. For example:
ALTER ROLE bill RESOURCE GROUP rg_light;
CREATE ROLE mary RESOURCE GROUP exec;
You can assign a resource group to one or more roles. If you have defined a role hierarchy, assigning a resource group to a parent role does not propagate down to the members of that role group.
If you wish to remove a resource group assignment from a role and assign the role the default group, change the role's group name assignment to NONE. For example:
ALTER ROLE mary RESOURCE GROUP NONE;
Monitoring the status of your resource groups and queries may involve the following tasks.
The gp_toolkit.gp_resgroup_config system view displays the current limits for a resource group. To view the limits of all resource groups:
SELECT * FROM gp_toolkit.gp_resgroup_config;
The gp_toolkit.gp_resgroup_status system view enables you to view the status and activity of a resource group. The view displays the number of running and queued transactions. To view this information:
SELECT * FROM gp_toolkit.gp_resgroup_status;
The gp_toolkit.gp_resgroup_status_per_host system view enables you to view the real-time CPU and memory usage of a resource group on a per-host basis. To view this information:
SELECT * FROM gp_toolkit.gp_resgroup_status_per_host;
To view the resource group-to-role assignments, perform the following query on the pg_roles and pg_resgroup system catalog tables:
SELECT rolname, rsgname FROM pg_roles, pg_resgroup
WHERE pg_roles.rolresgroup=pg_resgroup.oid;
To view a resource group's running queries, pending queries, and how long the pending queries have been queued, examine the pg_stat_activity system catalog table:
SELECT query, waiting, rsgname, rsgqueueduration
FROM pg_stat_activity;
There may be cases when you want to cancel a running or queued transaction in a resource group. For example, you may want to remove a query that is waiting in the resource group queue but has not yet been run. Or, you may want to stop a running query that is taking too long to run, or one that is sitting idle in a transaction and taking up resource group transaction slots that are needed by other users.
By default, transactions can remain queued in a resource group indefinitely. If you want Greenplum Database to cancel a queued transaction after a specific amount of time, set the server configuration parameter gp_resource_group_queuing_timeout. When this parameter is set to a value (milliseconds) greater than 0, Greenplum cancels any queued transaction when it has waited longer than the configured timeout.
To manually cancel a running or queued transaction, you must first determine the process id (pid) associated with the transaction. Once you have obtained the process id, you can invoke pg_cancel_backend() to end that process, as shown below.
For example, to view the process information associated with all statements currently active or waiting in all resource groups, run the following query. If the query returns no results, then there are no running or queued transactions in any resource group.
SELECT rolname, g.rsgname, pid, waiting, state, query, datname
FROM pg_roles, gp_toolkit.gp_resgroup_status g, pg_stat_activity
WHERE pg_roles.rolresgroup=g.groupid
AND pg_stat_activity.usename=pg_roles.rolname;
Sample partial query output:

 rolname | rsgname  |   pid   | waiting | state  |          query           | datname
---------+----------+---------+---------+--------+--------------------------+---------
 sammy   | rg_light | 31861   | f       | idle   | SELECT * FROM mytesttbl; | testdb
 billy   | rg_light | 31905   | t       | active | SELECT * FROM topten;    | testdb
Use this output to identify the process id (pid) of the transaction you want to cancel, and then cancel the process. For example, to cancel the pending query identified in the sample output above:
SELECT pg_cancel_backend(31905);
You can provide an optional message in a second argument to pg_cancel_backend() to indicate to the user why the process was cancelled.
Note: Do not use an operating system KILL command to cancel any Greenplum Database process.
A user with Greenplum Database superuser privileges can run the gp_toolkit.pg_resgroup_move_query() function to move a running query from one resource group to another, without stopping the query. Use this function to expedite a long-running query by moving it to a resource group with a higher resource allotment or availability.
Note: You can move only an active or running query to a new resource group. You cannot move a queued or pending query that is in an idle state due to concurrency or memory limits.
pg_resgroup_move_query() requires the process id (pid) of the running query, as well as the name of the resource group to which you want to move the query. The signature of the function follows:

pg_resgroup_move_query( pid int4, group_name text );

You can obtain the pid of a running query from the pg_stat_activity system view as described in Cancelling a Running or Queued Transaction in a Resource Group. Use the gp_toolkit.gp_resgroup_status view to list the name, id, and status of each resource group.
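For example, to move a running query into a group with a larger resource allotment; the pid 31861 and the group name rg_heavy are hypothetical:

```sql
-- Move the running query with pid 31861 into resource group rg_heavy.
SELECT gp_toolkit.pg_resgroup_move_query(31861, 'rg_heavy');
```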
When you invoke pg_resgroup_move_query(), the query is subject to the limits configured for the destination resource group:

- If the destination group has already reached its concurrent transaction limit, Greenplum Database queues the query, for up to gp_resource_group_queuing_timeout milliseconds if that parameter is set.
- pg_resgroup_move_query() tries to hand slot control to the target process for up to gp_resource_group_move_timeout milliseconds. If the target process cannot handle the movement request before gp_resource_group_move_timeout expires, Greenplum Database returns the error: target process failed to move to a new group.
- If pg_resgroup_move_query() is cancelled but the target process has already received all slot controls, the segment processes are not moved to the new group and the target process keeps the slot. This inconsistent state is resolved at the end of the transaction, or by the next command that the target process dispatches within the same transaction.
- If the destination group does not have enough memory for the query, Greenplum Database returns the error: group <group_name> doesn't have enough memory .... In this situation, you may choose to increase the group shared memory allotted to the destination resource group, or wait for running queries to complete and then invoke the function again.

After Greenplum moves the query, there is no way to guarantee that a query currently running in the destination resource group does not exceed the group memory quota. In this situation, one or more running queries in the destination group may fail, including the moved query. Reserve enough resource group global shared memory to minimize the potential for this scenario to occur.
pg_resgroup_move_query() moves only the specified query to the destination resource group. Greenplum Database assigns subsequent queries that you submit in the session to the original resource group.
Using VMware Greenplum Command Center, an administrator can create and manage resource groups, change roles' resource groups, and create workload management rules. Workload management assignment rules assign transactions to different resource groups based on user-defined criteria. If no assignment rule is matched, Greenplum Database assigns the transaction to the role's default resource group. Refer to the Greenplum Command Center documentation for more information about creating and managing resource groups and workload management rules.
Why is CPU usage lower than the CPU_MAX_PERCENT
configured for the resource group?
You may run into this situation when a low number of queries and slices are running in the resource group, and these processes are not utilizing all of the cores on the system.
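To confirm this, you can compare a group's configured limits against its observed activity using the gp_toolkit views. This is a sketch; view and column names may vary slightly between Greenplum versions:

```sql
-- How many transactions are running and queueing per resource group.
SELECT rsgname, num_running, num_queueing
FROM gp_toolkit.gp_resgroup_status;

-- Per-host CPU and memory usage for each resource group.
SELECT rsgname, hostname, cpu_usage, memory_usage
FROM gp_toolkit.gp_resgroup_status_per_host;
```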
My resource group has a CPU_WEIGHT equivalent to 40%. Why is the CPU usage never reaching this limit?
The value of CPU_MAX_PERCENT might be lower than 40, in which case it caps CPU usage even when resources are idle.
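If that is the cause, one option is to raise the cap so it matches the group's intended share. A sketch, where the group name rg_analytics is illustrative:

```sql
-- Allow the group to use up to 40% of CPU, matching its relative
-- CPU_WEIGHT share; previously a lower CPU_MAX_PERCENT was the ceiling.
ALTER RESOURCE GROUP rg_analytics SET CPU_MAX_PERCENT 40;
```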
Why is the number of running transactions lower than the CONCURRENCY
limit configured for the resource group?
Greenplum Database considers memory availability before running a transaction, and will queue the transaction if there is not enough memory available to serve it. If you use ALTER RESOURCE GROUP
to increase the CONCURRENCY
limit for a resource group but do not also adjust memory limits, currently running transactions may be consuming all allotted memory resources for the group. When in this state, Greenplum Database queues subsequent transactions in the resource group.
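When raising the concurrency limit, it helps to review the group's other limits at the same time. A sketch, where the group name rg_etl and the value are illustrative:

```sql
-- Raise the concurrent-transaction limit for the group.
ALTER RESOURCE GROUP rg_etl SET CONCURRENCY 20;

-- Review the group's current configuration to confirm memory is sized
-- for the additional slots.
SELECT * FROM gp_toolkit.gp_resgroup_config
WHERE groupname = 'rg_etl';
```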
Why is the number of running transactions in the resource group higher than the configured CONCURRENCY limit?
This behaviour is expected. There are several reasons why this may happen:
- SET, RESET, and SHOW commands are not subject to the limit.
- Setting the gp_resource_group_bypass configuration parameter disables the concurrent transaction limit for the resource group, so a query can run immediately.
- When gp_resource_group_bypass_catalog_query is set to true (the default), queries that read exclusively from system catalogs, or that contain only pg_catalog schema tables in their query text, do not enforce the limits of the resource group.
- Queries with a plan cost lower than the MIN_COST resource group attribute are automatically unassigned from their resource group and do not enforce any of its limits.
Why did my query return a "memory limit reached" error?
Greenplum Database automatically adjusts transaction and group memory to the new settings when you use ALTER RESOURCE GROUP
to change a resource group's memory and/or concurrency limits. An "out of memory" error may occur if you recently altered resource group attributes and there is no longer a sufficient amount of memory available for a currently running query.
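For sessions that must not queue behind a group's limits (for example, short administrative queries), the bypass parameter mentioned above can be enabled per session. A sketch:

```sql
-- Let this session's queries skip the resource group's concurrency limit
-- (use sparingly; bypassed queries are not governed by the group's limits).
SET gp_resource_group_bypass = on;

-- ... run the administrative query ...

SET gp_resource_group_bypass = off;
```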
My query cannot run due to insufficient memory, resulting in an Out of Memory (OOM) error.
First, ensure that the resource group allocates enough memory for the query by tuning resource group parameters such as CONCURRENCY and MEMORY_LIMIT. Then analyze the type of query, and whether it produces many intermediate results that consume memory. If it does, you can set a reasonable gp_resgroup_memory_query_fixed_mem to allocate more memory at the session level for this specific query.
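A sketch of the session-level setting; the value shown is illustrative, and you should confirm the unit and valid range for your Greenplum release:

```sql
-- Reserve a fixed memory amount for queries in this session instead of the
-- group-derived per-query quota (value assumed to be in MB; verify the unit
-- in your version's documentation).
SET gp_resgroup_memory_query_fixed_mem = 2000;
```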
After an OOM event, the system has a high concurrent load.
When the system starts to clean up the sessions left over after the OOM, the concurrent load on the system is high, and the OOM error message may reappear. Due to the current design, the cleanup process for runaway sessions cannot be expedited. The workaround is to lower runaway_detector_activation_percent to 85 or 80 percent, or even lower, to increase the memory headroom available on the segment host.
Some transaction requests only run during a certain period of time, and do not run at other times.
You can change the configuration of resource groups dynamically at regular intervals to match the requirements of your workload, customizing resource allocation at different times for higher efficiency. For example, change the resource settings within a group, or add and delete resource groups.
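For example, a scheduled job could shift CPU between an interactive group and a batch group at the start and end of business hours. The group names and values below are illustrative:

```sql
-- Daytime: favor interactive workloads.
ALTER RESOURCE GROUP rg_interactive SET CPU_MAX_PERCENT 60;
ALTER RESOURCE GROUP rg_batch SET CPU_MAX_PERCENT 20;

-- Nighttime: hand the CPU headroom to batch processing.
ALTER RESOURCE GROUP rg_interactive SET CPU_MAX_PERCENT 20;
ALTER RESOURCE GROUP rg_batch SET CPU_MAX_PERCENT 60;
```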
After upgrading Greenplum Database, performance seems to be degraded.
Many factors can contribute to performance degradation. One possible cause is that after upgrading to Greenplum Database 7, SWAP is enabled by default, which can affect performance. We recommend disabling SWAP and using RAM instead, in order to improve running speed and efficiency. If your memory configuration is sufficient, there is no need to use SWAP space. If you decide to use SWAP, be sure you understand how it factors into the calculation of memory management allocation. See Greenplum Database Memory Overview for more information.