This topic describes how to upgrade VMware Postgres from an older major or minor version.

If you are upgrading from a previous major release (10.x, 11.x or 12.x), follow the steps in Upgrading from an Older Major Version.

If you are upgrading from an older minor release (for example 13.2 to 13.13), follow the steps in Upgrading from an Older Minor Version.

Upgrading from an Older Major Version

VMware Postgres 13 supports upgrading from prior VMware Postgres major versions to 13.x. The steps rely on the open source Postgres pg_upgrade functionality.

Important: Customers using the PostGIS extension cannot upgrade to 13.x from prior major versions unless they first upgrade their Postgres 10.x, 11.x, or 12.x installation to the latest version, and upgrade PostGIS to a 3.x version.

Prerequisites

  • Download the VMware Postgres Server RPM distribution from Broadcom Support Portal.

    Note

    For more information about download prerequisites, troubleshooting, and instructions, see Download Broadcom products and software.

  • Any extensions installed on the older Postgres server must be manually added to the new release. Alternatively, remove them from the older Postgres cluster before starting the upgrade.
  • Confirm you have enough disk space to run pg_upgrade. The default setting copies all data files from the old server to the new server. Alternatively, review the --link option, which reduces upgrade time and disk space requirements but impacts the revert process; see Reverting to old cluster. For a list of all the pg_upgrade options, see Options in the pg_upgrade Open Source Postgres documentation. A disk space check is sketched after this list.
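
For example, a quick way to gauge whether the default copy mode will fit is to compare the size of the existing cluster against the free space on the target filesystem. The following is a minimal sketch; the /data/postgresql/12 path is only an assumption and must be replaced with your old data directory:

    # Approximate size of the old cluster; copy mode needs roughly this much additional space
    du -sh /data/postgresql/12

    # Free space on the filesystem that will hold the new cluster's data directory
    df -h /data/postgresql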

Procedure

  1. Start from step 3 in the Usage topic of the pg_upgrade Open Source Postgres documentation.

    Replace the commands in step 3, Install the new PostgreSQL binaries, with the following command that installs the new VMware Postgres binaries:

    cd ~/Downloads
    
    sudo yum install vmware-postgres13-13.12.0.el7.x86_64.rpm
    
  2. Follow the remaining steps as outlined in pg_upgrade.
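
For reference, a hypothetical end-to-end pg_upgrade invocation, run as the postgres user after the new cluster has been initialized with initdb and with both servers stopped, might look like the following. The old binary directory (/usr/bin), the data directory paths, and the version numbers are assumptions; adjust them to your environment. Running with --check first performs a dry run without modifying either cluster, and adding --link switches from the default copy mode to link mode:

    # Dry run: verify the old and new clusters are compatible without changing them
    /opt/vmware/postgres/13/bin/pg_upgrade \
        --old-bindir=/usr/bin \
        --new-bindir=/opt/vmware/postgres/13/bin \
        --old-datadir=/data/postgresql/12 \
        --new-datadir=/data/postgresql/13 \
        --check

    # Perform the upgrade; append --link here if you chose link mode
    /opt/vmware/postgres/13/bin/pg_upgrade \
        --old-bindir=/usr/bin \
        --new-bindir=/opt/vmware/postgres/13/bin \
        --old-datadir=/data/postgresql/12 \
        --new-datadir=/data/postgresql/13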

Upgrading from an Older Minor Version

To upgrade to VMware Postgres 13.x from an older minor version, use the following steps:

Minor to minor upgrade without HA

VMware Postgres 13.4 changed the installation location from /usr to /opt/vmware/postgres/13, and the installed package name from vmware-postgres to vmware-postgres<version>. When upgrading from a version earlier than 13.4, ensure that the new PATH is accounted for.
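
If you are not sure which layout your host is currently using, a check similar to the following shows the installed VMware Postgres package and the binary directory in use:

    # Installed package name (vmware-postgres before 13.4, vmware-postgres13 afterwards)
    rpm -qa | grep vmware-postgres

    # Binary directory of the installation currently on the PATH
    pg_config --bindir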

Follow these steps to upgrade from a release before 13.4:

  1. Download the RPM distribution for the VMware Postgres minor version you are upgrading to from VMware Tanzu Network.

  2. Install the new RPM:

    cd ~/Downloads/vmware-postgres-13
    

    Use your desired minor version with yum install, for example:

    sudo yum -y install ./vmware-postgres13-13.12-0.el7.x86_64.rpm
    
  3. Update any absolute paths to the Postgres executables that reference the old installation path of /usr. These references might be in Postgres server configuration files or in custom scripts you have written; one way to locate them is sketched after this procedure. For example, if you have an archive_cleanup_command set up in postgresql.conf with an absolute path:

    archive_cleanup_command = '/usr/bin/pg_archivecleanup ... %r'
    

    remove the absolute path, or update it using pg_config --bindir, which always points to the latest binary directory:

    archive_cleanup_command = '$(pg_config --bindir)/pg_archivecleanup ... %r'
    
  4. In your upgrade window, update the Postgres user PATH to point to the new installation location. For example, for Postgres 13, add the following line to the postgres user's ~/.bash_profile:

    export PATH=/opt/vmware/postgres/13/bin:$PATH
    

    Verify that it has taken effect:

    su --login postgres
    
    which pg_ctl
    
    /opt/vmware/postgres/13/bin/pg_ctl
    
  5. Switch to the new installation by stopping and starting the server. Do not use the restart shortcut, because it will reuse the old installation:

    su --login postgres
    
    pg_ctl stop -l data/logfile
    
    pg_ctl start -l data/logfile
    

    Verify that the correct version is in use:

    psql
    
    psql (13.12 (VMware Postgres 13.12.0))
    Type "help" for help.
    
    postgres=# show server_version;
              server_version         
    ---------------------------------
     13.12 (VMware Postgres 13.12.0) 
    (1 row)
    
  6. After confirming that the new server is working correctly, remove the old Postgres package:

    sudo yum remove vmware-postgres  
    
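As referenced in step 3, one way to locate leftover references to the old /usr binaries is a search similar to the following sketch. It assumes PGDATA points at your data directory and that /home/postgres/scripts is where your custom scripts live; adjust both to your environment:

    # Hard-coded /usr binary paths in the server configuration files
    grep -n '/usr/bin/pg_' $PGDATA/*.conf

    # Repeat for any custom maintenance or backup scripts you maintain
    grep -rn '/usr/bin/pg_' /home/postgres/scripts/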

Minor to minor upgrade with pg_auto_failover HA

To upgrade VMware Postgres to a later minor version while using pg_auto_failover for HA, follow the steps below. This process also upgrades pg_auto_failover to the version packaged in the minor release you are upgrading to.

Important: Upgrading VMware Postgres sets the HA cluster into maintenance mode. Perform this operation during an agreed maintenance window.

The following example uses systemd as the service manager that starts pg_auto_failover. The example configuration uses one monitor and two data nodes: a primary (node 1) and a secondary (node 2).
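
Before you begin, it can be useful to record the current pg_auto_failover version and cluster state on each host as a baseline to compare against after the upgrade. A minimal sketch, assuming PGDATA is set to the node's data directory and pg_autoctl is on the PATH:

    pg_autoctl version

    pg_autoctl show state --pgdata $PGDATA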

  1. Download the RPM distribution for the VMware Postgres minor version you are upgrading to from VMware Tanzu Network.

  2. Install the new RPM on all HA hosts:

    cd ~/Downloads/vmware-postgres-13
    

    Issue yum install with the minor version you’re upgrading to, for example:

    sudo yum -y install vmware-postgres13-13.12-1.el7.x86_64.rpm
    
  3. Set the cluster into maintenance mode:

    pg_autoctl enable maintenance
    
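You can confirm that maintenance is in effect before changing any binaries, for example with the following (assuming PGDATA is set on the node where you run it); the state columns in the output should reflect that the cluster is in maintenance:

    pg_autoctl show state --pgdata $PGDATA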

On the monitor node:

  1. Stop the monitor:

    sudo systemctl stop pgautofailover
    
  2. Update the systemd unit file /etc/systemd/system/pgautofailover.service, and set ExecStart to the new binary path:

    ssh <monitor_node> 
    
    vi /etc/systemd/system/pgautofailover.service
    

    Update the field ExecStart in the file:

    [Unit]
    Description = pg_auto_failover
    
    [Service]
    WorkingDirectory = /home/postgres
    Environment = 'PGDATA=/data/postgresql/monitor'
    User = postgres
    ExecStart = /opt/vmware/postgres/13/bin/pg_autoctl run
    Restart = always
    StartLimitBurst = 0
    
    [Install]
    WantedBy = multi-user.target
    
  3. Create the monitor with the new pg_ctl binary:

    export PGDATA=/data/postgresql/monitor
    
    pg_autoctl create monitor \
        --auth trust \
        --ssl-self-signed \
        --pgdata $PGDATA \
        --hostname monitor \
        --pgctl /opt/vmware/postgres/13/bin/pg_ctl
    
  4. Reload systemd, start pg_auto_failover, and confirm the process runs from the new binary location:

    sudo systemctl daemon-reload
    
    sudo systemctl start pgautofailover
    
    ps -ef | grep pg_autoctl | grep run
    

    With an output similar to:

    postgres  94437      1  0 12:22 ?        00:00:00 /opt/vmware/postgres/13/bin/pg_autoctl run
    
  5. Check that the upgrade updated the pg_auto_failover version:

    /opt/vmware/postgres/13/bin/pg_autoctl version
    

    For VMware Postgres 13.12, the output would be similar to:

    pg_autoctl version 1.6.3
    pg_autoctl extension version 1.6
    compiled with PostgreSQL 13.12 (VMware Postgres 13.12.0) on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit
    compatible with Postgres 10, 11, 12, 13, and 14
    
  6. Show the system state:

    /opt/vmware/postgres/13/bin/pg_autoctl show state
    

    For this example, the output is similar to:

      Name |  Node |  Host:Port |       TLI: LSN |   Connection |      Reported State |      Assigned State
    -------+-------+------------+----------------+--------------+---------------------+--------------------
    node_1 |     1 | node1:5432 |   1: 0/3359120 | read-write ! |             primary |             primary
    node_8 |     8 | node2:5432 |   1: 0/3359120 |  read-only ! |           secondary |           secondary
    

    Important: At this point, both node 1 and node 2 will be offline.

On the primary node (node 1):

  1. Log in to the node:

    ssh <node1>
    
  2. Stop pg_auto_failover:

    sudo systemctl stop pgautofailover
    
  3. Update systemd with the new binary location:

    vi /etc/systemd/system/pgautofailover.service
    

    and update ExecStart:

    ExecStart = /opt/vmware/postgres/13/bin/pg_autoctl run 
    
  4. Reload systemd, and restart pg_auto_failover:

    sudo systemctl daemon-reload
    
    sudo systemctl start pgautofailover
    
  5. Verify the correct PATH and version of pg_auto_failover:

    ps -ef | grep pg_auto
    
    postgres 121620      1  0 12:34 ?        00:00:00 /opt/vmware/postgres/13/bin/pg_autoctl run
    
  6. Update the pg_ctl binary path in the pg_auto_failover configuration, and restart pg_auto_failover; you can confirm the new setting with the command sketched after this procedure:

    /opt/vmware/postgres/13/bin/pg_autoctl config set postgresql.pg_ctl '/opt/vmware/postgres/13/bin/pg_ctl'
    
    sudo systemctl restart pgautofailover
    
  7. Verify the operation:

    ps -ef | grep postgres | grep 5432 | grep -v grep
    

    With an output similar to:

    postgres 118371 118362  0 12:51 ?        00:00:00 /opt/vmware/postgres/13/bin/postgres -D /data/postgresql/node1 -p 5432 -h *
    
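As noted in step 6, you can confirm that the new pg_ctl path was recorded before moving on to the secondary, for example:

    /opt/vmware/postgres/13/bin/pg_autoctl config get postgresql.pg_ctl

This should return /opt/vmware/postgres/13/bin/pg_ctl.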

On the secondary node (node 2):

  1. Log in to the node:

    ssh <node2>
    
  2. Stop pg_auto_failover:

    sudo systemctl stop pgautofailover
    
  3. Update systemd with the new binary location:

    vi /etc/systemd/system/pgautofailover.service
    

    and update ExecStart:

    ExecStart = /opt/vmware/postgres/13/bin/pg_autoctl run 
    
  4. Reload systemd, and restart pg_auto_failover:

    sudo systemctl daemon-reload
    
    sudo systemctl start pgautofailover
    
  5. Update the pg_ctl binary path in the pg_auto_failover configuration, and restart pg_auto_failover:

    /opt/vmware/postgres/13/bin/pg_autoctl config set postgresql.pg_ctl '/opt/vmware/postgres/13/bin/pg_ctl'
    
    sudo systemctl restart pgautofailover
    
  6. Verify the operation:

    ps -ef | grep postgres | grep 5432 | grep -v grep
    

    The output is similar to:

    postgres 118372 118363  0 12:51 ?    00:00:00 /opt/vmware/postgres/13/bin/postgres -D /data/postgresql/node2 -p 5432 -h *
    

On the monitor node:

SSH back to the monitor node, and confirm all the nodes have the desired status:

/opt/vmware/postgres/13/bin/pg_autoctl show state

Confirm the node 1 and node 2 versions by running a command similar to:

ssh node1 "psql -c 'select version()'"

The output would be similar to:

                                                               version
--------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 13.12 (VMware Postgres 13.12.0) on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit
(1 row)

Take the cluster out of maintenance mode:

pg_autoctl disable maintenance
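
To confirm the cluster has resumed normal operation, re-run the state check and verify that node 1 and node 2 report their primary and secondary roles again:

/opt/vmware/postgres/13/bin/pg_autoctl show state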