The pg_cron module is a cron-based job scheduler that runs inside the database.
The pg_cron module is installed when you install Greenplum Database. Before you can use it, you must perform the following steps:
You can only install the pg_cron module in one database per Greenplum cluster. First, set the database where you want pg_cron to create its metadata tables; by default, it uses the postgres database. To change it, run the following command and restart Greenplum Database:
gpconfig -c cron.database_name -v 'db_name' --skipvalidation
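Once Greenplum has been restarted, you can optionally confirm that the value took effect by displaying the parameter, mirroring the gpconfig -s usage shown in the next step:
gpconfig -s cron.database_name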
Enable the extension as a preloaded library. First, check whether any shared libraries are already preloaded by running the gpconfig command. The following example shows that the auto_explain library is already enabled for the cluster:
gpconfig -s shared_preload_libraries
Values on all segments are consistent
GUC : shared_preload_libraries
Coordinator value: auto_explain
Segment value: auto_explain
Use the output of the above command to enable the pg_cron module, along with any other shared libraries, and restart Greenplum Database:
gpconfig -c shared_preload_libraries -v 'auto_explain,pg_cron'
gpstop -ar
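After the restart, a quick sanity check (not part of the documented procedure) is to confirm from any database session that pg_cron is now preloaded:
SHOW shared_preload_libraries;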
Register the pg_cron extension in the desired database. You can only install pg_cron in one database per cluster.
CREATE EXTENSION pg_cron;
Optionally, grant usage to specific users:
GRANT USAGE ON SCHEMA cron TO user1;
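As an optional verification, you can confirm that the extension is registered in the current database and check the installed version by querying the pg_extension catalog:
SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_cron';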
Refer to Installing Additional Supplied Modules for more information.
The pg_cron module is installed when you install or upgrade Greenplum Database. A previous version of the extension continues to work in existing databases after you upgrade Greenplum. To upgrade to the most recent version of the extension, run:
ALTER EXTENSION pg_cron UPDATE TO '1.6.2';
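If you are unsure which version is currently installed or which version ships with your Greenplum release, you can check both before running the ALTER EXTENSION command:
SELECT name, default_version, installed_version FROM pg_available_extensions WHERE name = 'pg_cron';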
Use the user-defined functions (UDFs) in the cron schema to schedule cron jobs in your default database. For example, configure a job that deletes old database data every Saturday at 3:30am (GMT):
SELECT cron.schedule('30 3 * * 6', $$DELETE FROM events WHERE event_time < now() - interval '1 week'$$);
schedule
----------
42
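The returned value (42 in this example) is the job ID. You can review the jobs you have scheduled by querying the cron.job metadata table; the jobname column shown here assumes a recent pg_cron version:
SELECT jobid, jobname, schedule, command, database, active FROM cron.job;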
Optionally, you can update the database column of the job that you just created so that it runs in a different database within your Greenplum cluster. Run the following commands as a user with the superuser role:
UPDATE cron.job SET database = 'database1' WHERE jobid = 106;
SELECT cron.reload_job();
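You can then verify that the job now targets the new database (106 is the example job ID used above):
SELECT jobid, database FROM cron.job WHERE jobid = 106;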
You can use the UDF cron.schedule_in_database() to schedule jobs in multiple databases. The function arguments are:
cron.schedule_in_database(job_name text,
schedule text,
command text,
database text,
username text default null,
active boolean default 'true')
For example, run VACUUM every Sunday at 4:00am (GMT) in a database other than the one in which pg_cron is installed:
SELECT cron.schedule_in_database('weekly-vacuum', '0 4 * * 0', 'VACUUM', 'some_other_database');
schedule
----------
44
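To remove a job that is no longer needed, call cron.unschedule() with either the job ID or, on recent pg_cron versions, the job name used when scheduling. If your installed version also provides the cron.job_run_details table, you can review past runs there. Both statements below are illustrative examples, not part of the documented procedure:
-- remove the job by name
SELECT cron.unschedule('weekly-vacuum');
-- inspect recent run history (requires the cron.job_run_details table)
SELECT jobid, status, return_message, start_time, end_time FROM cron.job_run_details ORDER BY start_time DESC LIMIT 10;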
Important: Since the TRIGGER statement is not supported in Greenplum, if a user runs an UPDATE or INSERT statement against the cron.job table, or manually deletes any of the cron jobs instead of using the UDFs in the cron schema, you must run the following UDF to update the pg_cron cache:
SELECT cron.reload_job();
Note: The server configuration parameter cron.use_background_workers is not supported on VMware Greenplum 6.
Refer to the pg_cron GitHub documentation for more information about using the module.