This topic describes how to upgrade VMware Postgres from a previous major version to v11.18.0.
VMware Postgres 11.18 supports upgrading from any prior VMware Postgres major version to 11.18.0. The steps rely on the open source Postgres pg_upgrade functionality. The default pg_upgrade setting copies all data files from the old server to the new server. Alternatively, review the --link option, which reduces upgrade time and disk space usage but affects the revert process; for more details, see Reverting an old cluster. For a list of all the pg_upgrade options, see Options in the pg_upgrade Open Source Postgres documentation.
Start from step 3 in the Usage topic of the pg_upgrade Open Source Postgres documentation.
Replace the commands in step 3, Install the new PostgreSQL binaries, with the following command that installs the new VMware Postgres binaries:
cd ~/Downloads
sudo yum install vmware-postgres11-11.18-0.el7.x86_64.rpm
Follow the remaining steps as outlined in pg_upgrade.
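As a point of reference, a typical pg_upgrade invocation, run as the postgres user with both the old and new servers stopped, looks like the sketch below. The binary and data directory paths are examples only; substitute the locations from your old and new installations.
# Paths below are examples only; adjust them to your installation.
# Run with --check first to verify compatibility without changing any data.
/usr/pgsql-11/bin/pg_upgrade \
  --old-bindir=/usr/pgsql-10/bin \
  --new-bindir=/usr/pgsql-11/bin \
  --old-datadir=/var/lib/pgsql/10/data \
  --new-datadir=/var/lib/pgsql/11/data \
  --check
# Re-run without --check to perform the upgrade; add --link to use
# hard links instead of copying data files.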
If you are upgrading an existing VMware Postgres installation (for example 11.0) to a later minor release, use the yum update command to install the new RPM:
cd ~/Downloads
sudo yum update vmware-postgres<version>-<postgres-version>.el7.x86_64.rpm
Restart the Postgres server (pg_ctl uses the PGDATA environment variable when -D is not specified):
su postgres
pg_ctl restart
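To confirm that the restarted server reports the new version, you can run a quick check as the postgres user; this assumes a local server on the default port:
psql -c "SELECT version();"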
Note that the Postgres JDBC driver is not installed as an RPM; extract the newer JAR file and use it in your CLASSPATH.
This topic provides best practices and steps to perform a VMware Postgres upgrade in a Patroni cluster. The instructions help you perform a minor version upgrade, for example from 11.7 to 11.18, by the fastest means possible, and with minimal to no downtime. Major version upgrades, for example 11.7 to 12.11, are not covered in this document.
The instructions are tested with a VMware Postgres HA cluster configured per Setting Up a Patroni Cluster with VMware Postgres.
PostgreSQL minor updates, such as 11.7 to 11.8, are binary updates only. The PostgreSQL community has an excellent reputation for making minor upgrades as painless as possible. However, always read the release notes for the new Postgres version, especially if you are skipping minor versions, such as 11.5 to 11.8.
When updating Postgres packages and dependencies, update only the necessary packages. Running yum -y update is not recommended: there may be many available updates, which lengthens the upgrade process, and yum updates may also include a new kernel, requiring a reboot.
If you have set up pgbouncer running on a single Postgres host, pause pgbouncer before you upgrade VMware Postgres.
Following is the recommended upgrade process when you are using pgbouncer.
Connect to pgbouncer and execute the PAUSE command to suspend all future client connections, as shown in the sketch after this list.
Note: You must maintain the connection to pgbouncer after issuing PAUSE to ensure that pgbouncer does not un-pause.
Start a second terminal and wait for all existing client queries to complete.
Create a backup of the cluster using your preferred method.
Upgrade the Patroni Postgres cluster, as described in Upgrading VMware Postgres with Patroni.
Back in the first terminal, execute the pgbouncer RESUME command.
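The following sketch shows the pgbouncer admin console commands in context. It assumes pgbouncer listens on its default port 6432 and that your user is listed in the admin_users setting in pgbouncer.ini; adjust the host, port, and user for your environment.
# Terminal 1: connect to the pgbouncer admin console and keep the session open.
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer
# At the pgbouncer=# prompt, suspend new client activity:
PAUSE;
# Terminal 2: wait for active queries to finish, take the backup,
# and upgrade the cluster as described below.
# Terminal 1, after the upgrade completes:
RESUME;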
If you have set up HAProxy, validate your HAProxy configuration and operation before starting the VMware Postgres upgrade. If the HAProxy read-write pool is not functioning properly, clients will lose connectivity to the primary database at some point during the upgrade.
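One quick way to validate the read-write pool is to connect through HAProxy and confirm that the target server is not in recovery, which means the pool is routing to the primary. The host and port below are placeholders for the read-write front end you configured:
# Returns f (false) when the connection is routed to the primary.
psql -h <haproxy-host> -p <haproxy-rw-port> -U postgres -c "SELECT pg_is_in_recovery();"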
This is a high-level overview of the process for upgrading a Patroni/etcd Postgres cluster:
When upgrading VMware Postgres, always begin with the standby nodes. This helps you identify problems with the upgrade without affecting the primary node. Do not upgrade the primary node until the standby nodes are upgraded and validated. If you encounter any problems while performing the upgrade, stop and debug.
Procedure:
Download the RPM for the new version of VMware Postgres from VMware Tanzu Network and copy it to all of the data nodes.
Update or install any new Patroni dependencies:
# su - postgres
$ pip3 install --user 'ydiff>=1.2.0'
$ pip3 install --user 'six>=1.7'
Stop the Patroni process (and Postgres) on the standby you are upgrading. Use your process manager to stop Patroni if you used one (for example, systemctl stop patroni), or kill the top-level patroni process (for example, kill <pid>). When the Patroni cluster is running (not paused or in maintenance mode), killing this process also stops Postgres for you.
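For example, with systemd you might stop Patroni and then confirm that Postgres on that node is no longer accepting connections; the port below assumes the default 5432:
sudo systemctl stop patroni
# Reports "no response" once the local server has shut down.
pg_isready -h localhost -p 5432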
Run the yum update command to upgrade VMware Postgres. Currently, Patroni is packaged in the same RPM as Postgres, so it is updated as well.
# sudo yum update -y vmware-postgres<version>-<postgres-version>.el7.x86_64.rpm
Start Patroni with systemd, or run the following command:
$ patroni patroni.yml > patroni.log 2>&1 &
Make sure the process is running; if it is not, check the patroni.log log file.
Note: In previous versions of this documentation, the Patroni configuration file was called postgresql0.yml. Replace the configuration file name in the commands with the name of your own file.
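In addition to the log file, you can query the Patroni REST API on the node to confirm that the process is up; this assumes the REST API is listening on the default port 8008:
# Returns the member's current state as JSON when Patroni is running.
curl -s http://localhost:8008/patroni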
Validate that the node has rejoined the cluster successfully:
$ patronictl -c patroni.yml list
+-----------------------------------+-----------+---------+---------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+ Cluster: patroni_cluster (7093641312106653939) ---------+---------+----+-----------+
| postgreshost1 | 10.10.1.1 | Leader | running | 1 | |
| postgreshost2 | 10.10.1.2 | Replica | running | | 0 |
| postgreshost3 | 10.10.1.3 | Replica | running | | 0 |
+-----------------------------------+-----------+---------+---------+----+-----------+
Repeat steps 2 - 6 for the remaining standby nodes.
After the standby nodes are upgraded and verified, you can upgrade the current primary node.
Use Patroni to switch the primary to one of the upgraded Postgres instances.
# sudo su - postgres
$ patronictl -c patroni.yml switchover
Master [postgreshost1]:
Candidate ['postgreshost2', 'postgreshost3'] []: postgreshost2
When should the switchover take place (e.g. 2020-07-30T15:57 ) [now]: now
Current cluster topology
+-----------------------------------+-----------+--------+---------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+ Cluster: patroni_cluster (7093641312106653939) --------+---------+----+-----------+
| postgreshost1 | 10.10.1.1 | Leader | running | 1 | |
| postgreshost2 | 10.10.1.2 | | running | | 0 |
| postgreshost3 | 10.10.1.3 | | running | | 0 |
+-----------------------------------+-----------+--------+---------+----+-----------+
Are you sure you want to switchover cluster patroni_cluster, demoting current master postgreshost1? [y/N]: y
2020-07-30 14:58:11.56393 Successfully switched over to "postgreshost2"
+-----------------------------------+-----------+--------+---------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+ Cluster: patroni_cluster (7093641312106653939) --------+---------+----+-----------+
| postgreshost1 | 10.10.1.1 | | stopped | | unknown |
| postgreshost2 | 10.10.1.2 | Leader | running | | |
| postgreshost3 | 10.10.1.3 | | running | | 0 |
+-----------------------------------+-----------+--------+---------+----+-----------+
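After the switchover you can optionally confirm, on the new leader, that the other members are streaming from it again (Patroni restarts the demoted node as a replica shortly after the switchover). A minimal check, run as the postgres user on the new leader:
psql -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"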
Repeat steps 2 - 6 (the standby node upgrade steps) on the former primary node.
(Optional) Switch the primary back to the original primary node.
$ patronictl -c patroni.yml switchover
Master [postgreshost2]:
Candidate ['postgreshost1', 'postgreshost3'] []: postgreshost1
When should the switchover take place (e.g. 2020-07-31T10:14 ) [now]:
Current cluster topology
+-----------------------------------+-----------+---------+---------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+ Cluster: patroni_cluster (7093641312106653939) ---------+---------+----+-----------+
| postgreshost1 | 10.10.1.1 | Replica | running | | 0 |
| postgreshost2 | 10.10.1.2 | Leader | running | 2 | |
| postgreshost3 | 10.10.1.3 | Replica | running | | 0 |
+-----------------------------------+-----------+---------+---------+----+-----------+
Are you sure you want to switchover cluster patroni_cluster, demoting current master postgreshost2? [y/N]: y
2020-07-31 09:14:39.12718 Successfully switched over to "postgreshost1"
+-----------------------------------+-----------+---------+---------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+ Cluster: patroni_cluster (7093641312106653939) ---------+---------+----+-----------+
| postgreshost1 | 10.10.1.1 | Leader | running | 2 | |
| postgreshost2 | 10.10.1.2 | Replica | stopped | | unknown |
| postgreshost3 | 10.10.1.3 | Replica | running | 2 | 0 |
+-----------------------------------+-----------+---------+---------+----+-----------+
$ patronictl -c patroni.yml list
+-----------------------------------+-----------+---------+---------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+ Cluster: patroni_cluster (7093641312106653939) ---------+---------+----+-----------+
| postgreshost1 | 10.10.1.1 | Leader | running | 3 | |
| postgreshost2 | 10.10.1.2 | Replica | running | 3 | 0 |
| postgreshost3 | 10.10.1.3 | Replica | running | 3 | 0 |
+-----------------------------------+-----------+---------+---------+----+-----------+
When all nodes are running, the VMware Postgres upgrade is complete.
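As a final check, you can confirm that each node now reports the expected minor version; run the following as the postgres user on every host:
psql -c "SHOW server_version;"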