This topic describes how to upgrade VMware Postgres from a previous major version to the latest 12.x.

Upgrading from an Older Major Version

VMware Postgres 12 supports the ability to upgrade from any prior VMware Postgres major version to the latest 12.x. The steps rely on the open source Postgres pg_upgrade functionality.

Prerequisites

  • Download the VMware Postgres Server RPM distribution from VMware Tanzu Network.
  • Any extensions installed on the older Postgres server must be manually added to the new release. Alternatively, remove them from the older Postgres cluster before starting the upgrade.
  • Confirm you have enough disk space to run pg_upgrade. By default, pg_upgrade copies all data files from the old server to the new one. Alternatively, review the --link option, which reduces upgrade time and disk space requirements but complicates reverting to the old cluster. For more details see Reverting an old cluster. For a list of all the pg_upgrade options see Options in the pg_upgrade Open Source Postgres documentation.
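As a rough pre-flight check for the default copy mode, you can compare the old cluster's on-disk size against the free space available to the new data directory. The paths below are illustrative; substitute your own:

```shell
# Illustrative paths; substitute your own data directories.
OLD_DATA=/var/lib/pgsql/11/data
NEW_DATA=/var/lib/pgsql/12/data

# Size of the old cluster vs. free space on the new cluster's filesystem (in KB).
need_kb=$(du -sk "$OLD_DATA" | awk '{print $1}')
free_kb=$(df -Pk "$NEW_DATA" | awk 'NR==2 {print $4}')

if [ "$need_kb" -gt "$free_kb" ]; then
  echo "Insufficient free space for a copy-mode pg_upgrade"
else
  echo "Sufficient free space for a copy-mode pg_upgrade"
fi
```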

Procedure

  1. Download the target VMware Postgres Server RPM distribution from VMware Tanzu Network. The Postgres Server download filename has the format: vmware-postgres-<vmware-version>.<platform>.<architecture>.zip, for example vmware-postgres-12.16.0.el7.x86_64.zip.

  2. Log in to your system as root, or use sudo in the next step to acquire root privileges.

  3. Start from step 3 in the Usage topic of the pg_upgrade Open Source Postgres documentation.

    Replace the commands in step 3, Install the new PostgreSQL binaries, with the following command that installs the new VMware Postgres binaries:

    cd ~/Downloads
    sudo yum install vmware-postgres-12.16-0.el7.x86_64.rpm
    
  4. Follow the remaining steps as outlined in pg_upgrade.
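One pg_upgrade step worth calling out is the consistency check, which you can run before committing to the upgrade. A minimal sketch, run as the postgres user; the binary and data directory paths shown are assumptions, so adjust them to your installation:

```shell
# --check performs compatibility checks only and modifies no data,
# so it is safe to run ahead of the real upgrade. Paths are illustrative.
su - postgres
/usr/local/pgsql/bin/pg_upgrade \
  --old-bindir  /usr/local/pgsql-old/bin \
  --new-bindir  /usr/local/pgsql/bin \
  --old-datadir /var/lib/pgsql/old-data \
  --new-datadir /var/lib/pgsql/data \
  --check
```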

Upgrading from an Older Minor Version

If you are upgrading a VMware Postgres installation to a later minor release of the same major version (for example 11.2 to 11.6), use the yum update command to install the new RPM:

cd ~/Downloads
sudo yum update vmware-postgres<version>-<postgres-version>.el7.x86_64.rpm

Restart the Postgres server:

su postgres
pg_ctl restart

Note that the Postgres JDBC driver is not installed as an RPM. Extract the newer JAR file and add it to your CLASSPATH.
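For example, assuming you have extracted the driver JAR to /opt/postgres-jdbc (both the path and the JAR name here are hypothetical), prepending it to CLASSPATH looks like:

```shell
# Hypothetical location and name of the extracted driver JAR.
JDBC_JAR=/opt/postgres-jdbc/postgresql-jdbc.jar

# Prepend the new driver so it takes precedence over any older copy;
# the :+ expansion avoids a trailing colon when CLASSPATH is unset or empty.
export CLASSPATH="$JDBC_JAR${CLASSPATH:+:$CLASSPATH}"
```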

Upgrading VMware Postgres Minor Version in a Patroni Cluster

Use this procedure if you have an HA architecture using Patroni, and you wish to upgrade the VMware Postgres minor version, for example 12.7 to 12.8. Major version upgrades, for example 12.7 to 13.0, are not covered in this document.

Prerequisites

  • This document provides instructions for RHEL-based distributions, such as Red Hat, CentOS, and Oracle Enterprise Linux. This document does not cover Debian-based distributions like Ubuntu.
  • The instructions assume that etcd is the DCS, and the etcd cluster uses the default ports.

The instructions are tested with a VMware Postgres HA cluster configured per Setting Up a Patroni Cluster with VMware Postgres.
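Before starting, you may also want to confirm that the DCS is healthy. A quick sketch, assuming etcd v3 on the default client port (the endpoint shown is illustrative; list all cluster members in production):

```shell
# Check etcd health on the default client port (2379).
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 endpoint health
```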

PostgreSQL minor updates are binary updates only. Before any minor upgrade, review the release notes for the new Postgres version, especially if you are skipping minor versions, such as 12.3 to 12.16.

Best Practices for Upgrading VMware Postgres

Restrict Updates

When updating Postgres packages and dependencies, restrict the update to the necessary packages. Running yum -y update is not recommended because there may be many available updates, lengthening the upgrade process. Yum updates may also include a new kernel, requiring a reboot.
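For example, rather than running a blanket update, name the package explicitly. The version numbers below are illustrative:

```shell
# Update only the VMware Postgres package, not every pending update.
PG_MAJOR=12
PG_VERSION=12.8-0   # illustrative target version
RPM="vmware-postgres${PG_MAJOR}-${PG_VERSION}.el7.x86_64.rpm"
sudo yum update "./${RPM}"
```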

pgbouncer

If pgbouncer is running on a single Postgres host, follow these steps:

  1. Connect to pgbouncer and execute the PAUSE command to suspend all future client connections.

    Note: You must maintain the connection to pgbouncer after issuing PAUSE to ensure that pgbouncer does not un-pause.

  2. Start a second terminal and wait for all existing client queries to complete.

  3. Create a backup of the cluster using your preferred method.
  4. Upgrade the Patroni Postgres cluster, as described in Upgrade VMware Postgres with Patroni.
  5. Back in the first terminal, execute the pgbouncer RESUME command.
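The sequence above can be sketched with the pgbouncer admin console. This assumes pgbouncer listens on its default port 6432 and that your user appears in admin_users in pgbouncer.ini; admin console prompts are shown as comments:

```shell
# Terminal 1: open the pgbouncer admin console and keep this session
# open after issuing PAUSE (port 6432 is pgbouncer's default).
psql -p 6432 -U pgbouncer pgbouncer
#   pgbouncer=# PAUSE;

# Terminal 2: watch active client queries drain before upgrading.
psql -p 6432 -U pgbouncer pgbouncer -c 'SHOW POOLS;'

# Terminal 1, once the upgrade is complete:
#   pgbouncer=# RESUME;
```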

HAProxy

If you have set up HAProxy, it is important to validate your HAProxy configuration and operation prior to starting the VMware Postgres upgrade. If the read-write pool for HAProxy is not functioning properly, clients will lose connectivity to the primary database at some point during the upgrade.
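One simple validation, assuming HAProxy fronts the read-write pool on port 5000 of haproxy-host (both the host and port are assumptions; use your own endpoint), is to confirm that a connection through the pool reaches a node that is not in recovery:

```shell
# Connect through the HAProxy read-write endpoint (host and port are
# illustrative) and ask whether the backend is a standby.
# pg_is_in_recovery() should print "f" when the pool routes to the primary.
psql -h haproxy-host -p 5000 -U postgres -tAc 'SELECT pg_is_in_recovery();'
```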

Upgrade VMware Postgres with Patroni

This is a high-level overview of the process for upgrading a Patroni/etcd Postgres cluster:

  1. Execute the upgrade steps for a single Postgres node on each standby Postgres node.
  2. Manually switch the Postgres primary to an upgraded standby node.
  3. Execute the upgrade steps for a single Postgres node on the former primary node.
  4. Optionally, switch the Postgres primary back to the original node.

Upgrade the Standby Nodes

When upgrading VMware Postgres, always begin with the standby nodes. Doing so helps you identify upgrade problems without affecting the primary node. Do not upgrade the primary node until the standby nodes are upgraded and validated. If you encounter any problems during the upgrade, stop and debug before continuing.

Procedure:

  1. Download the RPM for the new version of VMware Postgres from VMware Tanzu Network and copy it to all of the data nodes.

  2. Update or install any new Patroni dependencies:

    su - postgres
    pip3 install --user 'ydiff>=1.2.0'
    pip3 install --user 'six>=1.7'
    
  3. Stop the Patroni process (and Postgres) on the standby you are upgrading. If Patroni runs under a process manager, use it to stop the service (for example, systemctl stop patroni); otherwise, kill the top-level patroni process (for example, kill <pid>). When the Patroni cluster is running normally (not paused or in maintenance mode), stopping this process also stops Postgres.
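    For example (the systemd unit name patroni is an assumption; match the name used on your hosts):

    ```shell
    # If Patroni runs under systemd:
    sudo systemctl stop patroni

    # Otherwise, find the top-level patroni process and stop it.
    # The bracketed pattern keeps grep from matching itself.
    ps -ef | grep '[p]atroni'
    kill <pid>
    ```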

  4. Run the yum update command to upgrade VMware Postgres. Currently, Patroni is packaged in the same RPM as Postgres, so it is updated as well.

    sudo yum update -y vmware-postgres<version>-<postgres-version>.el7.x86_64.rpm
    
  5. Start Patroni with systemd or the following:

    patroni patroni.yml > patroni.log 2>&1 &
    

    Make sure the process is running; if it is not, check the patroni.log file.

    Note: In previous versions of this documentation, the Patroni configuration file was called postgresql0.yml. Replace the configuration file name in the commands with your own.

  6. Validate that the node has rejoined the cluster successfully:

    $ patronictl -c patroni.yml list
    +-----------------------------------+-----------+---------+---------+----+-----------+
    |               Member              |    Host   |  Role   |  State  | TL | Lag in MB |
    + Cluster: patroni_cluster (7093641312106653939) ---------+---------+----+-----------+
    | postgreshost1                     | 10.10.1.1 | Leader  | running |  1 |           |
    | postgreshost2                     | 10.10.1.2 | Replica | running |    |         0 |
    | postgreshost3                     | 10.10.1.3 | Replica | running |    |         0 |
    +-----------------------------------+-----------+---------+---------+----+-----------+
    
  7. Repeat steps 2 - 6 for the remaining standby nodes.

Upgrade the Primary Node

After the standby nodes are upgraded and verified, you can upgrade the current primary node.

  1. Use Patroni to switch the primary to one of the upgraded Postgres instances.

    sudo su - postgres
    
    patronictl -c patroni.yml switchover
    
    Master [postgreshost1]:
    Candidate ['postgreshost2', 'postgreshost3'] []: postgreshost2
    When should the switchover take place (e.g. 2020-07-30T15:57 )  [now]: now
    
    Current cluster topology
    +-----------------------------------+-----------+--------+---------+----+-----------+
    |               Member              |    Host   |  Role  |  State  | TL | Lag in MB |
    + Cluster: patroni_cluster (7093641312106653939) --------+---------+----+-----------+
    | postgreshost1                     | 10.10.1.1 | Leader | running |  1 |           |
    | postgreshost2                     | 10.10.1.2 |        | running |    |         0 |
    | postgreshost3                     | 10.10.1.3 |        | running |    |         0 |
    +-----------------------------------+-----------+--------+---------+----+-----------+
    Are you sure you want to switchover cluster patroni_cluster, demoting current master postgreshost1? [y/N]: y
    
    2020-07-30 14:58:11.56393 Successfully switched over to "postgreshost2"
    +-----------------------------------+-----------+--------+---------+----+-----------+
    |               Member              |    Host   |  Role  |  State  | TL | Lag in MB |
    + Cluster: patroni_cluster (7093641312106653939) --------+---------+----+-----------+
    | postgreshost1                     | 10.10.1.1 |        | stopped |    |   unknown |
    | postgreshost2                     | 10.10.1.2 | Leader | running |    |           |
    | postgreshost3                     | 10.10.1.3 |        | running |    |         0 |
    +-----------------------------------+-----------+--------+---------+----+-----------+
    
  2. Follow steps 2 - 6 for upgrading the standby nodes on the former primary node.

  3. (Optional) Switch the primary back to the original primary node.

    patronictl -c patroni.yml switchover
    

    The output is similar to:

    Master [postgreshost2]:
    Candidate ['postgreshost1', 'postgreshost3'] []: postgreshost1
    When should the switchover take place (e.g. 2020-07-31T10:14 )  [now]:
    Current cluster topology
    +-----------------------------------+-----------+---------+---------+----+-----------+
    |               Member              |    Host   |  Role   |  State  | TL | Lag in MB |
    + Cluster: patroni_cluster (7093641312106653939) ---------+---------+----+-----------+
    | postgreshost1                     | 10.10.1.1 | Replica | running |    |         0 |
    | postgreshost2                     | 10.10.1.2 | Leader  | running |  2 |           |
    | postgreshost3                     | 10.10.1.3 | Replica | running |    |         0 |
    +-----------------------------------+-----------+---------+---------+----+-----------+
    Are you sure you want to switchover cluster patroni_cluster, demoting current master postgreshost2? [y/N]: y
    2020-07-31 09:14:39.12718 Successfully switched over to "postgreshost1"
    +-----------------------------------+-----------+---------+---------+----+-----------+
    |               Member              |    Host   |  Role   |  State  | TL | Lag in MB |
    + Cluster: patroni_cluster (7093641312106653939) ---------+---------+----+-----------+
    | postgreshost1                     | 10.10.1.1 | Leader  | running |  2 |           |
    | postgreshost2                     | 10.10.1.2 | Replica | stopped |    |   unknown |
    | postgreshost3                     | 10.10.1.3 | Replica | running |  2 |         0 |
    +-----------------------------------+-----------+---------+---------+----+-----------+
    

    Validate the switch using:

    patronictl -c patroni.yml list
    
    +-----------------------------------+-----------+---------+---------+----+-----------+
    |               Member              |    Host   |  Role   |  State  | TL | Lag in MB |
    + Cluster: patroni_cluster (7093641312106653939) ---------+---------+----+-----------+
    | postgreshost1                     | 10.10.1.1 | Leader  | running |  3 |           |
    | postgreshost2                     | 10.10.1.2 | Replica | running |  3 |         0 |
    | postgreshost3                     | 10.10.1.3 | Replica | running |  3 |         0 |
    +-----------------------------------+-----------+---------+---------+----+-----------+
    

    When all nodes are running, the VMware Postgres upgrade is complete.
