The gpupgrade execute command transforms the source Greenplum Database system to be compatible with the target Greenplum Database software. It upgrades the master instance, copies data and configuration files to the target cluster, and upgrades the primary segment instances. When gpupgrade execute completes, the target cluster is running and available for you to test.

The source standby master and mirror segment instances are unchanged until you run the gpupgrade finalize command.

Perform the execute phase during a scheduled downtime. Users should receive sufficient notice that the Greenplum Database cluster will be offline for an extended period. Send a maintenance notice a week or more before you plan to start the execute phase, and then a reminder notice before you begin.

The following table summarizes the cluster state before and after gpupgrade execute:

                Before Execute                     After Execute
                Source   Target                    Source   Target
  Master        UP       Initialized but DOWN      DOWN     UP and populated
  Standby       UP       Non-existent              DOWN     Non-existent
  Primaries     UP       Initialized but DOWN      DOWN     UP and populated
  Mirrors       UP       Non-existent              DOWN     Non-existent

Execute Workflow Summary

The gpupgrade execute command performs the following steps:

  1. Stops the source Greenplum cluster.
  2. Upgrades the master instance on the target cluster.
  3. Copies the upgraded master catalog to all primary segments of the target cluster.
  4. Upgrades the primary segments.
  5. Starts the target Greenplum Database cluster.

Preparing for Execute

You can run gpupgrade execute after the gpupgrade Initialize Phase has finished.

  • Ensure that you are within a scheduled downtime window. While gpupgrade execute runs, the source Greenplum Database cluster is unavailable, and the execute phase can take a long time to complete.
  • Check for open connections to the source Greenplum Database cluster and close any applications that might try to access it. gpupgrade execute checks for open connections and reports an error if it finds any.
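As an illustration, a check along these lines can confirm that no other sessions are connected before you start. The pg_stat_activity view is standard, but the exact column names depend on the source Greenplum major version, so treat this as a sketch rather than the documented procedure:

```shell
# Sketch only: count sessions other than our own on the source master.
# Assumes psql on the PATH reaches the source cluster; on older Greenplum
# versions (PostgreSQL 8.3 based) the column is procpid rather than pid.
count_other_sessions() {
  psql -At -d postgres -c \
    "SELECT count(*) FROM pg_stat_activity WHERE pid <> pg_backend_pid();"
}

# Usage, on the master host during the downtime window:
#   [ "$(count_other_sessions)" -eq 0 ] || echo "Close open connections first."
```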

WARNING: If you are using link mode and the source Greenplum cluster does not have a standby master and mirror segments, gpupgrade generates a warning:
The source cluster does not have standby and/or mirror segments.
After “gpupgrade execute” has been run, there will be no way to
return the cluster to its original state using “gpupgrade revert”.

Running the gpupgrade execute Command

Log in to the master host as the gpadmin user and run the gpupgrade execute command.

$ gpupgrade execute

The utility displays a summary message and waits for user confirmation before proceeding:

You are about to run the "execute" command for a major-version upgrade of Greenplum.
This should be done only during a downtime window.

...

You will still have the opportunity to revert the cluster to its original state
after this step.

WARNING: Do not perform operations on the source cluster until gpupgrade is
finalized or reverted.

Continue with gpupgrade execute?  Yy|Nn:

gpupgrade displays progress as it executes the upgrade tasks:

Execute in progress.

Stopping source cluster...                                         [COMPLETE]   
Upgrading master...                                                [COMPLETE]   
Copying master catalog to primary segments...                      [COMPLETE]   
.......

The status of each step can be COMPLETE, FAILED, SKIPPED, or IN PROGRESS. SKIPPED indicates that the command has been run before and the step has already been executed.

When gpupgrade execute has completed successfully, gpupgrade reports on the state of the source and target clusters and their master listen ports and data directories.

In summary, gpupgrade execute performs the following tasks:

  • Stops the source cluster.
  • Runs pg_upgrade to upgrade the master instance on the target.
  • Re-runs the pg_upgrade consistency checks. To see the pg_upgrade output, run execute in verbose mode; to generate additional debugging output, use gpupgrade execute --pg-upgrade-verbose --verbose.

    $ gpupgrade execute --verbose 
    
    ...
    Upgrading master...                                                [IN PROGRESS]
    Performing Consistency Checks
    -----------------------------
    Checking cluster versions                                   ok
    Checking database user is a superuser                       ok
    Checking database connection settings                       ok
    Checking for prepared transactions                          ok
    Checking for reg* system OID user data types                ok
    Checking for contrib/isn with bigint-passing mismatch       ok
    ..........
    

    See pg_upgrade Consistency Checks for more information.

  • Copies the master catalog to all primary segments on the target cluster.
  • Runs pg_upgrade to upgrade the primary segment instances in parallel.
  • Starts the target cluster.

Connecting to the Target Cluster

The target Greenplum Database cluster is running with new, temporary connection parameters, which you must specify when you connect to the cluster. The output of the gpupgrade execute command shows the values for the MASTER_DATA_DIRECTORY and PGPORT environment variables.

The MASTER_DATA_DIRECTORY directory name is the target cluster master directory name modified by inserting a hash code. See Target Cluster Directories for more information about the target cluster directory names.

The default master listen port for the target cluster is 50432. If you changed the temp_port_range from the default 50432-65535 in the gpupgrade initialize configuration file, the master port will be the first port in the range you specified.
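For example, a gpupgrade initialize configuration file that narrows the temporary port range might contain a line like the following (the parameter name temp_port_range is as described above; the range shown is illustrative):

```
# Illustrative excerpt from the gpupgrade initialize configuration file.
# With this setting, the target master's temporary listen port is 50000.
temp_port_range = 50000-50100
```

In that case you would connect to the target master on port 50000 rather than 50432.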

Source the greenplum_path.sh file in the target Greenplum Database installation directory to set the path and other environment variables.

This example sets the environment to connect to the target cluster and runs the gpstate utility:

$ export MASTER_DATA_DIRECTORY="/data/master/gpseg.AAAAAAAAAA.-1"
$ export PGPORT=50432
$ source <target-gpdb-install-dir>/greenplum_path.sh
$ gpstate

Running gpstate against the target cluster at this point should show active master and primary segments, but no standby master or mirror segments, because they are not yet configured. The standby master and mirror segments are upgraded when you run gpupgrade finalize.

To access the source cluster again after changing these environment variables, either reset the variables to their original values, or log out and log back in so that the start-up scripts restore the source cluster settings.
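One way to switch back without logging out is to save the source cluster's values before overriding them. A minimal sketch:

```shell
# Save the source cluster's settings before pointing this shell at the target.
SOURCE_MASTER_DATA_DIRECTORY="${MASTER_DATA_DIRECTORY:-}"
SOURCE_PGPORT="${PGPORT:-}"

# ... set MASTER_DATA_DIRECTORY and PGPORT for the target cluster and test ...

# Restore the source cluster's values in this shell:
export MASTER_DATA_DIRECTORY="$SOURCE_MASTER_DATA_DIRECTORY"
export PGPORT="$SOURCE_PGPORT"
```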

Troubleshooting the Execute Phase

The gpupgrade execute hub process runs on the Greenplum Database master host and logs messages to the gpAdminLogs/gpupgrade/hub.log file in the gpadmin user’s home directory.

Message: “Failed to connect to the upgrade hub”

  • You must run gpupgrade initialize before you can run gpupgrade execute. If you already ran gpupgrade initialize, try running gpupgrade restart-services to restart the hub and agent processes.

  • Verify that gpupgrade is installed in the same path on all hosts in the cluster.

Message: “Could not create the $HOME/gpAdminLogs/gpupgrade/hub.log file”

  • Verify that you are logged in as gpadmin and that all files in the .gpupgrade and gpAdminLogs directories are owned by gpadmin and are writable by gpadmin.

Message: “Failed to start an agent on a segment host”

  • Verify that the segment hosts are running and the gpadmin user can log in with SSH.

  • Verify that gpupgrade is installed in the same location on all hosts in the Greenplum Database cluster.

  • The gpupgrade agent processes listen on port 6416 by default. Stop any application using port 6416 on any host in the cluster.
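To see whether another process already holds a port, any socket-inspection tool works (for example, ss -lntp or lsof -i :6416 will show the owning process). As a dependency-free sketch, bash can also probe a local TCP port itself; the function below is an illustration, not part of gpupgrade:

```shell
# Sketch: return 0 (success) if something accepts TCP connections on the
# given port on this host. Uses bash's /dev/tcp pseudo-device.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

port_in_use 6416 && echo "port 6416 is in use" || echo "port 6416 is free"
```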

Symptom: The gpupgrade_hub process fails to start or crashes

  • Check the $HOME/gpAdminLogs/gpupgrade/hub.log file for messages that identify the problem causing the failure.

  • The gpupgrade hub process listens on port 7527. Stop any other application that is using port 7527 on the master host.

Message: “Disk full on master or segment host”

  • Delete unneeded files on affected hosts and run gpupgrade execute again.

Symptom: Target cluster fails to start

  • Check the gpinitsystem and gpstart log files in the ~/gpAdminLogs directory.

Next Steps

Once the gpupgrade execute command has completed successfully, test the upgraded cluster and decide whether to finalize the upgrade (see gpupgrade Finalize Phase) or return to the source Greenplum Database version (see gpupgrade Revert).

While you test the cluster upgrade, do not change the database: any changes made during the execute phase persist in the target cluster and can cause inconsistencies if you decide to finalize the upgrade.
