Use this procedure to migrate a VMware Greenplum Database installation from Enterprise Linux (EL) version 7 to Enterprise Linux 8 or Enterprise Linux 9, while maintaining your existing version of Greenplum Database.

Enterprise Linux includes CentOS, Rocky Linux, Red Hat Enterprise Linux (RHEL), and Oracle Linux (OEL) as the variants supported by Greenplum. See Platform Requirements for a list of the supported operating systems.

Major version upgrades of Linux operating systems are always a complex task in a Greenplum environment. You must weigh the risks of the different upgrade methods, as well as consider the impact of the required downtime.

Important Upgrade Considerations

The GNU C Library, commonly known as glibc, is the GNU Project's implementation of the C standard library. Between EL 7 and 8, the version of glibc changes from 2.17 to 2.28, and between EL 7 and EL 9, the version of glibc changes from 2.17 to 2.34. These are major changes that impact many languages and their collations. The collation of a database specifies how to sort and compare strings of character data. A change in sorting for common languages can have a significant impact on PostgreSQL and Greenplum databases.
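You can verify the glibc version on each host before and after the upgrade, for example by running `rpm -q glibc` on the command line, or from Python's standard library:

```python
import platform

# Report the C library the current host runs. On an EL 7 host this
# reports glibc 2.17; on EL 8 it reports 2.28, and on EL 9, 2.34.
# (On non-glibc systems, platform.libc_ver() returns empty strings.)
libc, version = platform.libc_ver()
print(libc, version)
```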

PostgreSQL and Greenplum databases use locale data provided by the operating system’s C library for sorting text. Sorting happens in a variety of contexts, including for user output, merge joins, B-tree indexes, and range partitions. In the latter two cases, sorted data is persisted to disk. If the locale data in the C library changes during the lifetime of a database, the persisted data may become inconsistent with the expected sort order, which could lead to erroneous query results and other incorrect behavior.
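This failure mode can be sketched with a toy model. The comparison rules below are illustrative stand-ins, not the real glibc collation algorithms: data persisted in sorted order under one rule and then binary-searched under another can miss rows that are actually present:

```python
import bisect

# Toy stand-in for the "old" collation: punctuation is ignored.
# The "new" rule in this sketch is plain bytewise string comparison.
def old_key(s):
    return ''.join(ch for ch in s if ch.isalnum())

rows = ['"01"', '01', '"02"', '02']
persisted = sorted(rows, key=old_key)      # on-disk order under the old rule

# After the upgrade, a lookup assuming bytewise order binary-searches
# data that was persisted in the old order:
pos = bisect.bisect_left(persisted, '"02"')
found = pos < len(persisted) and persisted[pos] == '"02"'
print(found)  # False: the row exists, but the search misses it
```

A B-tree index scan or a partition-boundary lookup fails in the same way when the on-disk order no longer matches the order the C library now produces.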

If an index is not sorted in the way that an index scan expects, a query could fail to find data, and an update could insert duplicate data. Similarly, in a partitioned table, a query could look in the wrong partition and an update could write to the wrong partition. It is essential to the correct operation of a database that you are aware of and understand any locale definition changes. The following examples show the impact of locale changes in an EL 7 to EL 8 or EL 9 upgrade:

Example 1: A range-partitioned table using default partitions displays rows in an incorrect order after an upgrade:

CREATE TABLE partition_range_test_3(id int, date text) DISTRIBUTED BY (id)
PARTITION BY RANGE (date)
      (
        PARTITION jan START ('01') INCLUSIVE,
        PARTITION feb START ('"02"') INCLUSIVE,
        PARTITION mar START ('"03"') INCLUSIVE );

INSERT INTO partition_range_test_3 VALUES (1, '01'), (1, '"01"'), (1, '"02"'), (1, '02'), (1, '03'), (1, '"03"'), (1, '04'), (1, '"04"');

Results for EL 7:

# SELECT * FROM partition_range_test_3 ORDER BY date;
 id | date
----+------
  1 | "01"
  1 | 01
  1 | "02"
  1 | 02
  1 | "03"
  1 | 03
  1 | "04"
  1 | 04
(8 rows)

# SELECT * FROM partition_range_test_3_1_prt_jan;
 id | date
----+------
  1 | 01
  1 | "01"
  1 | 02
(3 rows)

# SELECT * FROM partition_range_test_3_1_prt_feb;
 id | date
----+------
  1 | "02"
  1 | 03
(2 rows)

After upgrading to EL 8:

# SELECT * FROM partition_range_test_3 WHERE date='03';
 id | date
----+------
(0 rows)

# EXPLAIN SELECT * FROM partition_range_test_3 WHERE date='03';
                                           QUERY PLAN
------------------------------------------------------------------------------------------------
 Gather Motion 4:1  (slice1; segments: 4)  (cost=0.00..720.00 rows=50 width=36)
   ->  Append  (cost=0.00..720.00 rows=13 width=36)
         ->  Seq Scan on partition_range_test_3_1_prt_mar  (cost=0.00..720.00 rows=13 width=36)
               Filter: (date = '03'::text)
 Optimizer: Postgres query optimizer
(5 rows)

# SELECT * FROM partition_range_test_3_1_prt_feb;
 id | date
----+------
  1 | "02"
  1 | 03
(2 rows)

Example 2: A range-partitioned table that does not use a default partition encounters errors after the upgrade:

CREATE TABLE partition_range_test_2 (id int, date text) DISTRIBUTED BY (id)
PARTITION BY RANGE (date)
      (PARTITION Jan START ('01') INCLUSIVE,
      PARTITION Feb START ('02') INCLUSIVE,
      PARTITION Mar START ('03') INCLUSIVE
      END ('04') EXCLUSIVE);

INSERT INTO partition_range_test_2 VALUES (1, '01'), (1, '"01"'), (2, '"02"'), (2, '02'), (3, '03'), (3, '"03"');

Results for EL 7:

# SELECT * FROM partition_range_test_2 ORDER BY date;
 id | date
----+------
  1 | 01
  1 | "01"
  2 | 02
  2 | "02"
  3 | 03
  3 | "03"
(6 rows)

After upgrading to EL 8:

# SELECT * FROM partition_range_test_2 ORDER BY date;
 id | date
----+------
  1 | 01
  2 | "02"
  2 | 02
  3 | "03"
  3 | 03
(5 rows)

# INSERT INTO partition_range_test_2 VALUES (1, '"01"');
ERROR:  no partition of relation "partition_range_test_2" found for row  (seg1 10.80.0.2:7003 pid=40499)
DETAIL:  Partition key of the failing row contains (date) = ("01").

You must take the following into consideration when planning an upgrade from EL 7 to EL 8 or EL 9:

  • When using an in-place upgrade method, all indexes involving columns of collatable data type, such as text, varchar, char, and citext, must be reindexed before the database instance is put into production.
  • When using an in-place upgrade method, range-partitioned tables using collatable data types in the partition key should be checked to verify that all rows are still in the correct partitions.
  • To avoid downtime due to reindexing or repartitioning, consider upgrading using Greenplum Copy or Greenplum Backup and Restore instead of an in-place upgrade.
  • When using an in-place upgrade method, databases or table columns using the C or POSIX locales are not affected. All other locales are potentially affected.

Upgrade Methods

The following methods are the currently supported options to perform a major version upgrade from EL 7 to EL 8 or EL 9 with Greenplum Database.

  • Using Greenplum Copy Utility to copy from Greenplum on EL 7 to a separate Greenplum on EL 8 or EL 9.
  • Using Greenplum Backup and Restore to restore a backup taken from Greenplum on EL 7 to a separate Greenplum on EL 8 or EL 9.
  • Using operating system vendor-supported utilities, such as Leapp, to perform an in-place, simultaneous upgrade of EL 7 to EL 8 or EL 9 on all Greenplum hosts in a cluster, and then following the required post-upgrade steps.
Note

Greenplum does not support a rolling upgrade in which some Greenplum Segment Hosts run EL 7 while others run EL 8 or EL 9. You must upgrade all Segment Hosts together, before Greenplum is started and the workload resumes after the upgrade.

Greenplum Copy Utility

The Greenplum Copy Utility, gpcopy, transfers data between databases in different Greenplum Database systems.

This utility supports source and destination Greenplum Database clusters running different operating systems, including EL 7 and EL 8 or EL 9. The glibc changes do not affect this migration method because the data is rewritten when it is copied to the target cluster, which absorbs any locale sorting changes. However, Greenplum Copy enables the option --parallelize-leaf-partitions by default, which copies the leaf partitions of a partitioned table in parallel and may therefore place data in an incorrect partition because of the glibc changes. You must disable this option so that each partitioned table is copied as a single table based on its root partition.

As part of the overall process of this upgrade method, you:

  • Create a new Greenplum cluster using EL 8 or EL 9 with no data.
  • Address any Operating System Configuration Differences.
  • Use gpcopy to migrate data from the source Greenplum cluster on EL 7 to the destination Greenplum cluster on EL 8 or EL 9. You must disable the option --parallelize-leaf-partitions to ensure that each partitioned table is copied as a single table based on the root partition.
  • Remove the source Greenplum cluster from the EL 7 systems.

The advantages of this method are optimized performance, no impact on the source cluster if migration issues occur, and no table locks. The disadvantage is that it requires two separate Greenplum clusters during the migration.

Greenplum Backup and Restore

Greenplum Backup and Restore supports parallel and non-parallel methods for backing up and restoring databases.

The utility supports source and destination Greenplum Database clusters running different operating systems, including EL 7 and EL 8 or EL 9. The glibc changes do not affect this migration method because the data is rewritten on the new cluster, which absorbs any locale sorting changes. However, if the backup command includes the option --leaf-partition-data, it creates one data file per leaf partition instead of one data file for the entire table. In this situation, when you restore the partition data to the upgraded cluster, the utility copies the data directly into the leaf partitions, which may place data in an incorrect partition because of the glibc changes. Therefore, you must ensure that the backup command does not use the option --leaf-partition-data, so that each partitioned table is backed up as a single data file.

Greenplum Backup and Restore supports many different storage locations, including local storage, public cloud storage such as S3, and Dell EMC Data Domain through the gpbackup storage plugins. Any of the supported storage locations may be used for the data transfer in an EL 7 to EL 8 or EL 9 upgrade.

As part of the overall process of this upgrade method, you:

  • Create a new Greenplum cluster on the EL 8 or EL 9 systems with no data.
  • Address any Operating System Configuration Differences.
  • Use gpbackup to take a full backup of the source Greenplum cluster on EL 7. Ensure that you are not using the option --leaf-partition-data.
  • Restore the backup with gprestore to the destination Greenplum cluster on EL 8 or EL 9.
  • Remove the source Greenplum cluster on the EL 7 systems.

The advantages of this method are the range of supported storage locations and no impact on the source cluster if migration issues occur. The disadvantages are that it requires two separate Greenplum clusters during the migration, is generally slower than Greenplum Copy, and requires table locks to perform a full backup.

Simultaneous, In-Place Upgrade

Red Hat Enterprise Linux and Oracle Linux both support in-place upgrades of the operating system using the Leapp utility.

Note

In-place upgrades with the Leapp utility are not supported on Rocky Linux or CentOS. You must use Greenplum Copy or Greenplum Backup and Restore instead.

Greenplum Database includes the el8_migrate_locale utility, which helps you identify and address the main challenges of an in-place upgrade from EL 7 to EL 8 or EL 9 caused by the glibc GNU C library changes.

As part of the overall process of this upgrade method, you:

  • Run the el8_migrate_locale utility to identify any objects whose data the upgrade might affect.

  • Stop the Greenplum cluster and use Leapp to run an in-place upgrade of the operating system.

  • Reinstall the Greenplum and plugin installation packages that are compatible with the upgraded system.

    Important

    You must manually perform this step by downloading the package from Broadcom Support Portal and then running an explicit yum remove followed by a yum install.

  • Address any required operating system configuration differences and start the Greenplum cluster.

  • Follow the required steps given by the el8_migrate_locale utility for fixing the data that is impacted by the glibc locale sorting changes.

The advantage of this method is that it does not require two separate Greenplum clusters. The disadvantages are the inherent risk of an in-place operating system upgrade, the lack of a downgrade path if problems occur, the possibility of issues that leave your cluster in a non-operational state, and the additional steps required after the upgrade to address the glibc changes. You must also plan downtime of your Greenplum database for the entire process.

Continue reading for a detailed list of steps to upgrade your cluster using this method.

Important

We recommend that you take a backup of your cluster before proceeding with this method, because you cannot recover the database if the upgrade does not complete successfully. You should also be prepared to contact your operating system vendor about any issues encountered with the Leapp utility.

Before you begin the upgrade, run the following commands:

Identify Impacted Objects

  • Identify Both Index and Partitioned Tables:

    el8_migrate_locale identify
    

    Outputs information about impacted indexes and partitioned tables to STDOUT.

    Sample output:

    2024-04-25 18:20:57,085 - INFO - There are 2 catalog indexes that needs reindex when doing OS upgrade from EL7 to EL8.
    indexrelid|indexname                 |tablename    |collname|pg_get_indexdef                                                                                                          
    ----------+--------------------------+-------------+--------+-------------------------------------------------------------------------------------------------------------------------
    3597      |pg_seclabel_object_index  |pg_seclabel  |default |CREATE UNIQUE INDEX pg_seclabel_object_index ON pg_catalog.pg_seclabel USING btree (objoid, classoid, objsubid, provider)
    3593      |pg_shseclabel_object_index|pg_shseclabel|default |CREATE UNIQUE INDEX pg_shseclabel_object_index ON pg_catalog.pg_shseclabel USING btree (objoid, classoid, provider)      
    (2 rows)
    
    
    2024-04-25 18:20:57,213 - INFO - There are 5 user indexes in database test that needs reindex when doing OS upgrade from EL7 to EL8.
    indexrelid|indexname        |tablename          |collname|pg_get_indexdef                                                                      
    ----------+-----------------+-------------------+--------+-------------------------------------------------------------------------------------
    16392     |"test_id2 \ $ \\"|test_character_type|default |CREATE INDEX "test_id2 \ $ \\" ON public.test_character_type USING btree (varchar_10)
    16391     |"test_id1 's "   |test_character_type|default |CREATE INDEX "test_id1 's " ON public.test_character_type USING btree (char_1)       
    16486     |test_idx_citext  |test_citext        |default |CREATE INDEX test_idx_citext ON public.test_citext USING btree (nick)                
    16484     |test_citext_pkey |test_citext        |default |CREATE UNIQUE INDEX test_citext_pkey ON public.test_citext USING btree (nick)        
    16393     |" test_id "" 3 " |test_character_type|default |CREATE INDEX " test_id "" 3 " ON public.test_character_type USING btree (txt)        
    (5 rows)
    
    
    2024-04-25 18:20:57,465 - WARNING - There are 1 tables in database test that the distribution key is using custom operator class, should be checked when doing OS upgrade from EL7 to EL8.
    tablename  |distclass
    -----------+---------
    test_citext|16435    
    (1 row)
    
    
    2024-04-25 18:20:57,521 - WARNING - There are 7 range partitioning tables with partition key in collate types(like varchar, char, text) in database test, these tables might be affected due to Glibc upgrade and should be checked when doing OS upgrade from EL7 to EL8.
    parrelid|tablename             |collation|attname|hasdefaultpartition
    --------+----------------------+---------+-------+-------------------
    16487   |partition_range_test_3|100      |date   |f                  
    16515   |partition_range_test_4|100      |date   |t                  
    16631   |partition_range_test_1|100      |date   |t                  
    16658   |partition_range_test_2|100      |date   |f                  
    16576   |testddlwithnodata     |100      |date   |f                  
    16597   |testddlwithdata       |100      |date   |t                  
    16549   |testddl               |100      |date   |t                  
    (7 rows)
    
  • Identify Impacted Indexes Only:

    el8_migrate_locale identify --index
    

    Outputs information about impacted indexes only to STDOUT.

    Sample output:

    2024-04-25 18:40:50,288 - INFO - There are 2 catalog indexes that needs reindex when doing OS upgrade from EL7 to EL8.
    indexrelid|indexname                 |tablename    |collname|pg_get_indexdef                                                                                                          
    ----------+--------------------------+-------------+--------+-------------------------------------------------------------------------------------------------------------------------
    3597      |pg_seclabel_object_index  |pg_seclabel  |default |CREATE UNIQUE INDEX pg_seclabel_object_index ON pg_catalog.pg_seclabel USING btree (objoid, classoid, objsubid, provider)
    3593      |pg_shseclabel_object_index|pg_shseclabel|default |CREATE UNIQUE INDEX pg_shseclabel_object_index ON pg_catalog.pg_shseclabel USING btree (objoid, classoid, provider)      
    (2 rows)
    
    
    2024-04-25 18:40:50,416 - INFO - There are 5 user indexes in database test that needs reindex when doing OS upgrade from EL7 to EL8.
    indexrelid|indexname        |tablename          |collname|pg_get_indexdef                                                                      
    ----------+-----------------+-------------------+--------+-------------------------------------------------------------------------------------
    16392     |"test_id2 \ $ \\"|test_character_type|default |CREATE INDEX "test_id2 \ $ \\" ON public.test_character_type USING btree (varchar_10)
    16391     |"test_id1 's "   |test_character_type|default |CREATE INDEX "test_id1 's " ON public.test_character_type USING btree (char_1)       
    16486     |test_idx_citext  |test_citext        |default |CREATE INDEX test_idx_citext ON public.test_citext USING btree (nick)                
    16484     |test_citext_pkey |test_citext        |default |CREATE UNIQUE INDEX test_citext_pkey ON public.test_citext USING btree (nick)        
    16393     |" test_id "" 3 " |test_character_type|default |CREATE INDEX " test_id "" 3 " ON public.test_character_type USING btree (txt)        
    (5 rows)
    
  • Identify Impacted Tables Only:

    el8_migrate_locale identify --table
    

    Outputs information about impacted partitioned tables only to STDOUT.

    Sample output:

    2024-04-25 18:41:47,151 - WARNING - There are 1 tables in database test that the distribution key is using custom operator class, should be checked when doing OS upgrade from EL7 to EL8.
    tablename  |distclass
    -----------+---------
    test_citext|16435    
    (1 row)
    
    
    2024-04-25 18:41:47,207 - WARNING - There are 7 range partitioning tables with partition key in collate types(like varchar, char, text) in database test, these tables might be affected due to Glibc upgrade and should be checked when doing OS upgrade from EL7 to EL8.
    parrelid|tablename             |collation|attname|hasdefaultpartition
    --------+----------------------+---------+-------+-------------------
    16487   |partition_range_test_3|100      |date   |f                  
    16515   |partition_range_test_4|100      |date   |t                  
    16631   |partition_range_test_1|100      |date   |t                  
    16658   |partition_range_test_2|100      |date   |f                  
    16576   |testddlwithnodata     |100      |date   |f                  
    16597   |testddlwithdata       |100      |date   |t                  
    16549   |testddl               |100      |date   |t                  
    (7 rows)
    

Prepare for Upgrade

  • Prepare Both Index and Partitioned Tables:

    el8_migrate_locale prepare --output prepare.out
    

    Outputs commands for migrating or recreating impacted indexes and partitioned tables to the specified output file prepare.out.

    Sample output:

    \c  postgres
    -- catalog indexrelid: 3597 | index name: pg_seclabel_object_index | table name: pg_seclabel | collname: default | indexdef:  CREATE UNIQUE INDEX pg_seclabel_object_index ON pg_catalog.pg_seclabel USING btree (objoid, classoid, objsubid, provider)
    reindex index pg_seclabel_object_index;
    
    -- catalog indexrelid: 3593 | index name: pg_shseclabel_object_index | table name: pg_shseclabel | collname: default | indexdef:  CREATE UNIQUE INDEX pg_shseclabel_object_index ON pg_catalog.pg_shseclabel USING btree (objoid, classoid, provider)
    reindex index pg_shseclabel_object_index;
    
    \c  test
    -- indexrelid: 16392 | index name: "test_id2 \ $ \\" | table name: test_character_type | collname: default | indexdef:  CREATE INDEX "test_id2 \ $ \\" ON public.test_character_type USING btree (varchar_10)
    reindex index "test_id2 \ $ \\";
    
    -- indexrelid: 16391 | index name: "test_id1 's " | table name: test_character_type | collname: default | indexdef:  CREATE INDEX "test_id1 's " ON public.test_character_type USING btree (char_1)
    reindex index "test_id1 's ";
    
    -- indexrelid: 16486 | index name: test_idx_citext | table name: test_citext | collname: default | indexdef:  CREATE INDEX test_idx_citext ON public.test_citext USING btree (nick)
    reindex index test_idx_citext;
    
    -- indexrelid: 16484 | index name: test_citext_pkey | table name: test_citext | collname: default | indexdef:  CREATE UNIQUE INDEX test_citext_pkey ON public.test_citext USING btree (nick)
    reindex index test_citext_pkey;
    
    -- indexrelid: 16393 | index name: " test_id "" 3 " | table name: test_character_type | collname: default | indexdef:  CREATE INDEX " test_id "" 3 " ON public.test_character_type USING btree (txt)
    reindex index " test_id "" 3 ";
    
    -- order table by size in descending order
    \c  test
    
    -- parrelid: 16487 | coll: 100 | attname: date | msg: partition table, 3 leafs, size 98304
    begin; create temp table partition_range_test_3_bak as select * from partition_range_test_3; truncate partition_range_test_3; insert into partition_range_test_3 select * from partition_range_test_3_bak; commit;
    
    -- parrelid: 16515 | coll: 100 | attname: date | msg: partition table, 4 leafs, size 98304
    begin; create temp table partition_range_test_4_bak as select * from partition_range_test_4; truncate partition_range_test_4; insert into partition_range_test_4 select * from partition_range_test_4_bak; commit;
    
    -- parrelid: 16631 | coll: 100 | attname: date | msg: partition table, 3 leafs, size 98304
    begin; create temp table partition_range_test_1_bak as select * from partition_range_test_1; truncate partition_range_test_1; insert into partition_range_test_1 select * from partition_range_test_1_bak; commit;
    
    -- parrelid: 16658 | coll: 100 | attname: date | msg: partition table, 3 leafs, size 98304
    begin; create temp table partition_range_test_2_bak as select * from partition_range_test_2; truncate partition_range_test_2; insert into partition_range_test_2 select * from partition_range_test_2_bak; commit;
    
    -- parrelid: 16597 | coll: 100 | attname: date | msg: partition table, 4 leafs, size 65536
    begin; create temp table testddlwithdata_bak as select * from testddlwithdata; truncate testddlwithdata; insert into testddlwithdata select * from testddlwithdata_bak; commit;
    
    -- parrelid: 16549 | coll: 100 | attname: date | msg: partition table, 3 leafs, size 32768
    begin; create temp table testddl_bak as select * from testddl; truncate testddl; insert into testddl select * from testddl_bak; commit;
    
    -- parrelid: 16576 | coll: 100 | attname: date | msg: partition table, 2 leafs, size 0
    begin; create temp table testddlwithnodata_bak as select * from testddlwithnodata; truncate testddlwithnodata; insert into testddlwithnodata select * from testddlwithnodata_bak; commit;
    
  • Prepare Indexes Only:

    el8_migrate_locale prepare --output prepare_index.out --index
    

    Outputs commands for migrating or recreating only impacted indexes to the specified output file prepare_index.out.

    Sample output:

    \c  postgres
    -- catalog indexrelid: 3597 | index name: pg_seclabel_object_index | table name: pg_seclabel | collname: default | indexdef:  CREATE UNIQUE INDEX pg_seclabel_object_index ON pg_catalog.pg_seclabel USING btree (objoid, classoid, objsubid, provider)
    reindex index pg_seclabel_object_index;
    
    -- catalog indexrelid: 3593 | index name: pg_shseclabel_object_index | table name: pg_shseclabel | collname: default | indexdef:  CREATE UNIQUE INDEX pg_shseclabel_object_index ON pg_catalog.pg_shseclabel USING btree (objoid, classoid, provider)
    reindex index pg_shseclabel_object_index;
    
    \c  test
    -- indexrelid: 16392 | index name: "test_id2 \ $ \\" | table name: test_character_type | collname: default | indexdef:  CREATE INDEX "test_id2 \ $ \\" ON public.test_character_type USING btree (varchar_10)
    reindex index "test_id2 \ $ \\";
    
    -- indexrelid: 16391 | index name: "test_id1 's " | table name: test_character_type | collname: default | indexdef:  CREATE INDEX "test_id1 's " ON public.test_character_type USING btree (char_1)
    reindex index "test_id1 's ";
    
    -- indexrelid: 16486 | index name: test_idx_citext | table name: test_citext | collname: default | indexdef:  CREATE INDEX test_idx_citext ON public.test_citext USING btree (nick)
    reindex index test_idx_citext;
    
    -- indexrelid: 16484 | index name: test_citext_pkey | table name: test_citext | collname: default | indexdef:  CREATE UNIQUE INDEX test_citext_pkey ON public.test_citext USING btree (nick)
    reindex index test_citext_pkey;
    
    -- indexrelid: 16393 | index name: " test_id "" 3 " | table name: test_character_type | collname: default | indexdef:  CREATE INDEX " test_id "" 3 " ON public.test_character_type USING btree (txt)
    reindex index " test_id "" 3 ";
    
  • Prepare Tables Only:

    el8_migrate_locale prepare --output prepare_table.out --table
    

    Outputs commands for migrating or recreating only impacted partitioned tables to the specified output file prepare_table.out.

    Sample output:

    -- order table by size in descending order
    \c  test
    
    -- parrelid: 16487 | coll: 100 | attname: date | msg: partition table, 3 leafs, size 98304
    begin; create temp table partition_range_test_3_bak as select * from partition_range_test_3; truncate partition_range_test_3; insert into partition_range_test_3 select * from partition_range_test_3_bak; commit;
    
    -- parrelid: 16515 | coll: 100 | attname: date | msg: partition table, 4 leafs, size 98304
    begin; create temp table partition_range_test_4_bak as select * from partition_range_test_4; truncate partition_range_test_4; insert into partition_range_test_4 select * from partition_range_test_4_bak; commit;
    
    -- parrelid: 16631 | coll: 100 | attname: date | msg: partition table, 3 leafs, size 98304
    begin; create temp table partition_range_test_1_bak as select * from partition_range_test_1; truncate partition_range_test_1; insert into partition_range_test_1 select * from partition_range_test_1_bak; commit;
    
    -- parrelid: 16658 | coll: 100 | attname: date | msg: partition table, 3 leafs, size 98304
    begin; create temp table partition_range_test_2_bak as select * from partition_range_test_2; truncate partition_range_test_2; insert into partition_range_test_2 select * from partition_range_test_2_bak; commit;
    
    -- parrelid: 16597 | coll: 100 | attname: date | msg: partition table, 4 leafs, size 65536
    begin; create temp table testddlwithdata_bak as select * from testddlwithdata; truncate testddlwithdata; insert into testddlwithdata select * from testddlwithdata_bak; commit;
    
    -- parrelid: 16549 | coll: 100 | attname: date | msg: partition table, 3 leafs, size 32768
    begin; create temp table testddl_bak as select * from testddl; truncate testddl; insert into testddl select * from testddl_bak; commit;
    
    -- parrelid: 16576 | coll: 100 | attname: date | msg: partition table, 2 leafs, size 0
    begin; create temp table testddlwithnodata_bak as select * from testddlwithnodata; truncate testddlwithnodata; insert into testddlwithnodata select * from testddlwithnodata_bak; commit;
    

Examine the output files to identify which indexes and range-partitioned tables may be affected by the glibc GNU C library changes. This information helps you estimate, before you perform the OS upgrade, the amount of work the upgrade process requires. To address the issues affecting range-partitioned tables, the utility rebuilds the affected tables in a later step. This rebuild can require additional database space, so you must account for the additional space reported by these commands.

Perform the Upgrade

Stop the Greenplum Database cluster and use the Leapp utility to run the in-place upgrade of your operating system. See the Red Hat documentation and the Oracle documentation for your target EL version for more information about using the utility.

Once the upgrade is complete, address any Operating System Configuration Differences, and start the Greenplum Database cluster.

Fix the Impacted Data

Indexes

You must reindex all indexes involving columns of collatable data types (text, varchar, char, and citext) before the database instance is put into production.

Run the utility with the migrate subcommand to reindex the necessary indexes.

el8_migrate_locale migrate --input prepare_index.out

Range-Partitioned Tables

You must check range-partitioned tables that use collatable data types in the partition key to verify that all rows are still in the correct partitions.

First, run the utility with the validate subcommand to verify that the rows are still in the correct partitions after the operating system upgrade.

el8_migrate_locale validate

The utility validates the data correctness of partitioned tables using gp_detect_data_correctness.

Sample output:

2024-04-25 18:44:41,191 - INFO - worker[0]: begin: 
2024-04-25 18:44:41,191 - INFO - worker[0]: connect to <template1> ...
2024-04-25 18:44:41,197 - INFO - worker[0]: finish.
2024-04-25 18:44:41,249 - INFO - worker[0]: begin: 
2024-04-25 18:44:41,249 - INFO - worker[0]: connect to <postgres> ...
2024-04-25 18:44:41,256 - INFO - worker[0]: finish.
2024-04-25 18:44:41,312 - WARNING - There are 7 range partitioning tables with partition key in collate types(like varchar, char, text) in database test, these tables might be affected due to Glibc upgrade and should be checked when doing OS upgrade from EL7 to EL8.
parrelid|tablename             |collation|attname|hasdefaultpartition
--------+----------------------+---------+-------+-------------------
16487   |partition_range_test_3|100      |date   |f                  
16515   |partition_range_test_4|100      |date   |t                  
16631   |partition_range_test_1|100      |date   |t                  
16658   |partition_range_test_2|100      |date   |f                  
16576   |testddlwithnodata     |100      |date   |f                  
16597   |testddlwithdata       |100      |date   |t                  
16549   |testddl               |100      |date   |t                  
(7 rows)


2024-04-25 18:44:41,312 - INFO - worker[0]: begin: 
2024-04-25 18:44:41,312 - INFO - worker[0]: connect to <test> ...
2024-04-25 18:44:41,355 - INFO - start checking table partition_range_test_3_1_prt_mar ...
2024-04-25 18:44:41,406 - INFO - check table partition_range_test_3_1_prt_mar OK.
2024-04-25 18:44:41,406 - INFO - start checking table partition_range_test_3_1_prt_feb ...
2024-04-25 18:44:41,434 - INFO - check table partition_range_test_3_1_prt_feb OK.
2024-04-25 18:44:41,434 - INFO - start checking table partition_range_test_3_1_prt_jan ...
2024-04-25 18:44:41,456 - INFO - check table partition_range_test_3_1_prt_jan OK.
2024-04-25 18:44:41,500 - INFO - Current progress: have 6 remaining, 0.19 seconds passed.
2024-04-25 18:44:41,511 - INFO - start checking table partition_range_test_4_1_prt_mar ...
2024-04-25 18:44:41,546 - INFO - check table partition_range_test_4_1_prt_mar OK.
2024-04-25 18:44:41,546 - INFO - start checking table partition_range_test_4_1_prt_feb ...
2024-04-25 18:44:41,572 - INFO - check table partition_range_test_4_1_prt_feb OK.
2024-04-25 18:44:41,572 - INFO - start checking table partition_range_test_4_1_prt_jan ...
2024-04-25 18:44:41,594 - INFO - check table partition_range_test_4_1_prt_jan OK.
2024-04-25 18:44:41,594 - INFO - start checking table partition_range_test_4_1_prt_others ...
2024-04-25 18:44:41,611 - INFO - check table partition_range_test_4_1_prt_others OK.
2024-04-25 18:44:41,656 - INFO - Current progress: have 5 remaining, 0.34 seconds passed.
2024-04-25 18:44:41,667 - INFO - start checking table partition_range_test_1_1_prt_mar ...
2024-04-25 18:44:41,703 - INFO - check table partition_range_test_1_1_prt_mar OK.
2024-04-25 18:44:41,703 - INFO - start checking table partition_range_test_1_1_prt_feb ...
2024-04-25 18:44:41,729 - INFO - check table partition_range_test_1_1_prt_feb OK.
2024-04-25 18:44:41,729 - INFO - start checking table partition_range_test_1_1_prt_others ...
2024-04-25 18:44:41,746 - INFO - check table partition_range_test_1_1_prt_others OK.
2024-04-25 18:44:41,791 - INFO - Current progress: have 4 remaining, 0.48 seconds passed.
2024-04-25 18:44:41,802 - INFO - start checking table partition_range_test_2_1_prt_mar ...
2024-04-25 18:44:41,844 - INFO - check table partition_range_test_2_1_prt_mar OK.
2024-04-25 18:44:41,844 - INFO - start checking table partition_range_test_2_1_prt_feb ...
2024-04-25 18:44:41,865 - INFO - check table partition_range_test_2_1_prt_feb OK.
2024-04-25 18:44:41,865 - INFO - start checking table partition_range_test_2_1_prt_jan ...
2024-04-25 18:44:41,887 - INFO - check table partition_range_test_2_1_prt_jan OK.
2024-04-25 18:44:41,931 - INFO - Current progress: have 3 remaining, 0.62 seconds passed.
2024-04-25 18:44:41,943 - INFO - start checking table testddlwithnodata_1_prt_feb ...
2024-04-25 18:44:41,983 - INFO - check table testddlwithnodata_1_prt_feb OK.
2024-04-25 18:44:41,983 - INFO - start checking table testddlwithnodata_1_prt_jan ...
2024-04-25 18:44:42,005 - INFO - check table testddlwithnodata_1_prt_jan OK.
2024-04-25 18:44:42,049 - INFO - Current progress: have 2 remaining, 0.74 seconds passed.
2024-04-25 18:44:42,060 - INFO - start checking table testddlwithdata_1_prt_mar ...
2024-04-25 18:44:42,101 - INFO - check table testddlwithdata_1_prt_mar OK.
2024-04-25 18:44:42,101 - INFO - start checking table testddlwithdata_1_prt_feb ...
2024-04-25 18:44:42,122 - INFO - check table testddlwithdata_1_prt_feb OK.
2024-04-25 18:44:42,122 - INFO - start checking table testddlwithdata_1_prt_jan ...
2024-04-25 18:44:42,143 - INFO - check table testddlwithdata_1_prt_jan OK.
2024-04-25 18:44:42,143 - INFO - start checking table testddlwithdata_1_prt_others ...
2024-04-25 18:44:42,161 - INFO - check table testddlwithdata_1_prt_others OK.
2024-04-25 18:44:42,206 - INFO - Current progress: have 1 remaining, 0.89 seconds passed.
2024-04-25 18:44:42,217 - INFO - start checking table testddl_1_prt_feb ...
2024-04-25 18:44:42,258 - INFO - check table testddl_1_prt_feb OK.
2024-04-25 18:44:42,258 - INFO - start checking table testddl_1_prt_jan ...
2024-04-25 18:44:42,279 - INFO - check table testddl_1_prt_jan OK.
2024-04-25 18:44:42,279 - INFO - start checking table testddl_1_prt_others ...
2024-04-25 18:44:42,296 - INFO - check table testddl_1_prt_others OK.
2024-04-25 18:44:42,340 - INFO - Current progress: have 0 remaining, 1.03 seconds passed.
2024-04-25 18:44:42,340 - INFO - worker[0]: finish.
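
As a rough illustration, the per-table check that the utility automates can be approximated manually with the gp_detect_data_correctness parameter. This is only a conceptual sketch; the database name (test) and table name are examples taken from the sample output above.

```shell
# Manual spot-check of one partitioned table (conceptual sketch; the
# utility automates this across all affected tables).
psql -d test <<'SQL'
SET gp_detect_data_correctness = on;
-- With the parameter enabled, the INSERT only verifies that each
-- existing row still routes to the partition it currently resides in;
-- no data is written, and misplaced rows raise an error.
INSERT INTO partition_range_test_3 SELECT * FROM partition_range_test_3;
RESET gp_detect_data_correctness;
SQL
```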

Next, run the utility with the migrate subcommand to rebuild the partitioned tables whose rows are no longer in the correct partitions after the upgrade.

el8_migrate_locale migrate --input prepare_table.out
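
Conceptually, and only as a sketch rather than the utility's actual implementation, fixing a table with misplaced rows amounts to re-routing every row through the root partition so that tuple routing under the new glibc collation places each row correctly. The database and table names below are examples.

```shell
# Hypothetical manual equivalent for a single table; the migrate
# subcommand automates and batches this work.
psql -d test <<'SQL'
BEGIN;
CREATE TEMP TABLE fix_tmp AS SELECT * FROM partition_range_test_3;
TRUNCATE partition_range_test_3;
-- Re-inserting through the root lets tuple routing choose the
-- correct partition under the new locale definitions.
INSERT INTO partition_range_test_3 SELECT * FROM fix_tmp;
COMMIT;
SQL
```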

Verify the Changes

Check the output of the validate and migrate subcommands for any errors.

Then run the validate subcommand again to confirm that the affected tables have been fixed:

el8_migrate_locale validate --output validate.out

If there are no tables listed in the output, the issue has been resolved.

Operating System Configuration Differences

When you prepare your operating system environment for Greenplum Database software installation, there are different configuration options depending on the version of your operating system. See Configuring Your Systems and Using Resource Groups for detailed documentation. This section summarizes the main differences to take into consideration when you upgrade from EL 7 to EL 8 or EL 9 regardless of the upgrade method you use.

XFS Mount Options

XFS is the preferred file system for data storage on Linux platforms. Mount XFS file systems with the options rw,nodev,noatime,inode64. Note that the nobarrier option, previously recommended on EL 7, is not supported on EL 8/9 or Ubuntu systems; use only the options listed.
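
For example, an /etc/fstab entry for an XFS data file system might look like the following; the device and mount point are placeholders you should adapt to your environment.

```shell
# Example /etc/fstab entry (device and mount point are placeholders):
#   /dev/sdb1  /data  xfs  rw,nodev,noatime,inode64  0 0
# Equivalent one-off mount command:
mount -t xfs -o rw,nodev,noatime,inode64 /dev/sdb1 /data
```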

Disk I/O Settings

The Linux disk scheduler orders the I/O requests submitted to a storage device, controlling the way the kernel commits reads and writes to disk. A typical Linux disk I/O scheduler supports multiple access policies. The optimal policy selection depends on the underlying storage infrastructure. For EL 8/9, use the following recommended scheduler policy:

Storage Device Type                |Recommended Scheduler Policy
-----------------------------------+----------------------------
Non-Volatile Memory Express (NVMe) |none
Solid-State Drives (SSD)           |none
Other                              |mq-deadline

To specify the I/O scheduler at boot time on EL 8 or EL 9, you must use either TuneD or udev rules. See the Red Hat 8 documentation or Red Hat 9 documentation for full details.
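
As one possible approach, a udev rule can apply the recommended policies when devices are discovered. The rule file path and device match patterns below are assumptions; adapt them to your hardware.

```shell
# Assumed rule file path; adjust device matches for your hardware.
cat > /etc/udev/rules.d/60-io-scheduler.rules <<'EOF'
# NVMe devices: no scheduler
ACTION=="add|change", KERNEL=="nvme[0-9]*n[0-9]*", ATTR{queue/scheduler}="none"
# Non-rotational (SSD) block devices: no scheduler
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
# Rotational disks: mq-deadline
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="mq-deadline"
EOF
udevadm control --reload-rules && udevadm trigger
```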

Synchronizing System Clocks

You must use NTP (Network Time Protocol) to synchronize the system clocks on all hosts that comprise your Greenplum Database system. Accurate timekeeping is essential for reliable database operations and data integrity. You may either configure the master host as the NTP primary source, with the other hosts in the cluster synchronizing to it, or configure an external NTP primary source to which all hosts in the cluster connect. For EL 8/9, use the Chrony service to configure NTP.
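
For example, on a segment host you might point chronyd at the master host and enable the service; the hostname mdw below is a placeholder for your NTP primary source.

```shell
# "mdw" is a placeholder for your NTP primary source.
echo "server mdw iburst" >> /etc/chrony.conf
systemctl enable --now chronyd
chronyc sources    # confirm the source is reachable and selected
```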

Configuring and Using Resource Groups

Greenplum Database resource groups use Linux Control Groups (cgroups) to manage CPU resources. Greenplum Database also uses cgroups to manage memory for resource groups for external components. With cgroups, Greenplum isolates the CPU and external component memory usage of your Greenplum processes from other processes on the node. This allows Greenplum to support CPU and external component memory usage restrictions on a per-resource-group basis.

If you are using Red Hat 8.x or 9.x, make sure that you have configured the system to mount the cgroups-v1 file system by default during system boot. See Using Resource Groups for more details.
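
On EL 9, and on EL 8 systems that boot with cgroup v2, one way to switch the default back to cgroups-v1 is a kernel boot argument; a reboot is required for it to take effect.

```shell
# Force the legacy cgroups-v1 hierarchy at boot, then reboot.
grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
```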
