
This topic describes the workflow of the Greenplum Database upgrade process. Learn what each gpupgrade phase involves so that you can estimate how long the upgrade takes and understand the requirements your environment must meet for a successful upgrade.

Note

In this documentation, we refer to the existing Greenplum Database 5 installation as the source cluster and to the new Greenplum Database 6 installation as the target cluster. Both reside on the same set of hosts.

High Level Workflow

  • You upgrade the source cluster to the latest Greenplum Database 5.x version.
  • You install the latest gpupgrade and Greenplum Database 6.x version for the target cluster.
  • gpupgrade initializes a fresh target cluster in the same set of hosts as the source cluster.
  • gpupgrade uses pg_upgrade to upgrade the data and catalog in-place into the target cluster.
  • Once finalized, the target cluster replaces the source cluster as your running cluster. The overall command flow is sketched below.
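
At its core, the upgrade reduces to three gpupgrade commands run in order. The following is a minimal sketch; the flags and config file path are illustrative and appear in full in the Example section below:

    gpupgrade initialize --file gpupgrade_config
    gpupgrade execute
    gpupgrade finalize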

gpupgrade Workflow

The following diagram shows the different phases of the Greenplum upgrade process flow:

[Figure: gpupgrade process]

Important

The upgrade process requires downtime. We refer to the downtime required to perform the upgrade as the upgrade window. gpupgrade allows you to perform some of the steps to prepare your cluster for the upgrade before entering the upgrade window. Read the documentation carefully to understand how the process works.

Pre-upgrade

  • Perform the pre-upgrade preparatory actions a few weeks before the upgrade date.
  • During this phase you review the relevant documentation pages, upgrade the source cluster to the latest Greenplum 5.x version, and download and install the latest Greenplum 6.x version and the latest gpupgrade utility software (a version check sketch follows this list).
  • For detailed steps in this phase, see Perform the Pre-Upgrade Actions.
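
For instance, verifying the source cluster version before you begin might look like this (the install path is an example for a 5.28.10 environment):

    source /usr/local/greenplum-db-5.28.10/greenplum_path.sh
    gpstate -i    # reports the Greenplum Database version on each host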

Initialize

  • During this phase the source cluster is still running.
  • You run the command gpupgrade initialize, which generates scripts that collect statistics and check for catalog inconsistencies between the source and target clusters.
  • Executing these scripts requires downtime: you may schedule maintenance windows for this purpose before you continue with the upgrade during the upgrade window.
  • You may cancel the process during this phase by reverting the upgrade.
  • The gpupgrade initialize command then starts the hub and agent processes, initializes the target cluster, and runs checks against the source cluster, including pg_upgrade --check (a quick process check follows this list).
  • For information about this phase, see Initialize the Upgrade (gpupgrade initialize).
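
As a quick sanity check after initialize, you can confirm that the hub and agent processes are running across the hosts. This is a generic process listing, not a gpupgrade subcommand, and the exact process names may vary by gpupgrade version:

    gpssh -f all_hosts -e 'ps -ef | grep -i [g]pupgrade'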

Execute

  • You must perform the tasks in this phase within the upgrade window.
  • You run the command gpupgrade execute, which stops the source cluster and upgrades the master and primary segments using pg_upgrade.
  • You verify the target cluster before you choose to finalize or revert the upgrade (a minimal smoke test follows this list).
  • For information about this phase, see Run the Upgrade (gpupgrade execute).
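
A minimal smoke test against the upgraded master might look like the following; the port and database here are assumptions, so use the values gpupgrade reports for your target cluster:

    psql -p 5432 -d postgres -c 'SELECT version();'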

Finalize

  • You must perform the tasks in this phase within the upgrade window.
  • You run the gpupgrade finalize command, which upgrades the segment mirrors and standby master.
  • You cannot revert the upgrade once you enter this phase.
  • For information about this phase, see Finalize the Upgrade (gpupgrade finalize).

Post-upgrade

  • Once the upgrade is complete, you edit configuration and user-defined files and perform other cleanup tasks (one common example follows this list).
  • For information about this phase, see Perform the Post-Upgrade Actions.
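
For example, a typical cleanup task is pointing the gpadmin login environment at the new installation. This is only a sketch; the file and path are assumptions for this example environment:

    # In ~/.bashrc (or equivalent), replace the 5.x line with the 6.x environment script
    source /usr/local/greenplum-db-6.24.0/greenplum_path.sh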

Revert

  • You must perform the tasks in this phase within the upgrade window.
  • You run the command gpupgrade revert, which restores the cluster to the state it was in before you ran gpupgrade (see the sketch after this list).
  • You may run gpupgrade revert after the Initialize or Execute phases, but not after Finalize.
  • How long the revert takes depends on when you run the command and on the upgrade mode you chose.
  • If the source cluster has no standby master and no segment mirrors, you cannot revert after the Execute phase when using link mode.
  • For information about this phase, see Reverting the Upgrade (gpupgrade revert).
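
Like the other phases, reverting is a single command (the --verbose flag is optional, as in the Example section):

    gpupgrade revert --verbose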

Example

The following example illustrates using the gpupgrade utility in an environment with a minimal setup. It assumes that you have Greenplum 5.28.10 on the source cluster and that you have no extensions installed.

  1. Install the latest target Greenplum version on all hosts.

    gpscp -f all_hosts greenplum-db-6.24.0*.rpm =:/tmp
    gpssh -f all_hosts -v -e 'sudo yum install -y /tmp/greenplum-db-6.24.0*.rpm'
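
     Optionally, confirm that the package installed on every host. The package name below is an assumption based on the rpm file name:

    gpssh -f all_hosts -e 'rpm -q greenplum-db-6'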
    
  2. Install gpupgrade on all hosts.

    gpscp -f all_hosts gpupgrade*.rpm =:/tmp
    gpssh -f all_hosts -v -e 'sudo yum install -y /tmp/gpupgrade*.rpm'
    
  3. Copy the example config file to $HOME/gpupgrade/ (you must create this directory first) and update the required parameters source_gphome, target_gphome, and source_master_port. A sketch of these entries follows the command.

    cp /usr/local/bin/greenplum/gpupgrade/gpupgrade_config  $HOME/gpupgrade/
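
     The updated entries might look like this; the paths and port are examples, so match them to your environment:

    source_gphome = /usr/local/greenplum-db-5.28.10
    target_gphome = /usr/local/greenplum-db-6.24.0
    source_master_port = 5432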
    
  4. Run gpupgrade initialize:

    gpupgrade initialize --verbose --file $HOME/gpupgrade/gpupgrade_config
    
  5. Run gpupgrade execute:

    gpupgrade execute --verbose
    
  6. Run gpupgrade finalize:

    gpupgrade finalize --verbose
    
  7. Update the Greenplum symlink so it points to the new installation:

    gpssh -f all_hosts -v -e 'sudo rm /usr/local/greenplum-db && sudo ln -s /usr/local/greenplum-db-6.24.0 /usr/local/greenplum-db'
    
  8. Start the target cluster (in a new shell):

    source /usr/local/greenplum-db-6.24.0/greenplum_path.sh
    gpstart -a
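
     Optionally, verify that the new cluster is up and healthy:

    gpstate    # prints a brief status summary of the Greenplum array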
    