This topic describes how to back up and restore Tanzu Postgres.

Overview

Tanzu Postgres allows you to back up instances on demand, schedule automated backups, restore in place, and restore from a backup to a new Postgres instance.

The supported locations for uploading and retrieving backup artifacts are Amazon S3 and other S3-compatible data stores such as Minio.

Tanzu Postgres backup and restore uses four Custom Resource Definitions (CRDs):

  • PostgresBackup: References a Postgres backup artifact that exists in an external blobstore such as S3 or Minio. Every time you generate an on-demand or scheduled backup, Tanzu Postgres creates a new PostgresBackup resource.

  • PostgresBackupLocation: References an external blobstore and the necessary credentials for blobstore access.

  • PostgresBackupSchedule: Represents a CronJob schedule specifying when to perform backups.

  • PostgresRestore: References a PostgresBackup resource and restores the data from that backup to a new Postgres instance or to the same Postgres instance (an in-place restore).

For detailed information about the CRDs, see Property Reference for Backup and Restore.

Prerequisites

Before creating a Tanzu Postgres backup you need:

  • the kubectl command line tool installed on your local client, with access permissions to the Kubernetes cluster.
  • access permissions to a preconfigured S3 bucket where the pgdata persistent volume (PV) backups will be stored.
  • the access credentials that will populate the accessKeyId and secretAccessKey fields of the S3 backup secret.
  • the instance namespace, if the Postgres instance is already created. Use kubectl get namespaces for a list of available namespaces.
  • (optional) a pre-agreed backup schedule to be used to configure scheduled backups.

Backing Up Tanzu Postgres

Create on-demand or scheduled backups by first configuring the PostgresBackupLocation CRD, which specifies the location of, and the access credentials for, the external S3 blobstore.

Configure the Backup Location

To take a backup to an external S3 location, create a PostgresBackupLocation resource:

  1. Locate the backuplocation.yaml deployment YAML file in the ./samples directory of your downloaded release, and create a copy with a unique name. For example:

    $ cp ~/Downloads/postgres-for-kubernetes-v1.3.0/samples/backuplocation.yaml testbackuplocation.yaml
    
  2. Edit the file using the configuration details of your external S3 bucket. The same file contains the properties of your backup credentials secret. For example:

    ---
    apiVersion: sql.tanzu.vmware.com/v1
    kind: PostgresBackupLocation
    metadata:
      name: backuplocation-sample
    spec:
      storage:
        s3:
          bucket: "name-of-bucket"
          bucketPath: "/my-bucket-path"
          region: "us-east-1"
          endpoint: "custom-endpoint"
          forcePathStyle: false
          enableSSL: true
          secret:
            name: backuplocation-creds-sample
    
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: backuplocation-creds-sample
    type: generic
    stringData:
      # Credentials
      accessKeyId: "my-access-key-id"
      secretAccessKey: "my-secret-access-key"
    

    For details on the various properties see PostgresBackupLocation Resource Properties and Secret Properties.
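The forcePathStyle property controls how a client typically addresses the bucket: path-style URLs put the bucket name in the URL path, while virtual-hosted-style URLs put it in the hostname. The following is a rough, illustrative sketch of that difference, not part of the product; the endpoint and key names are made up:

```python
def object_url(endpoint: str, bucket: str, key: str,
               force_path_style: bool, ssl: bool = True) -> str:
    """Build the URL an S3-compatible client would request for one object."""
    scheme = "https" if ssl else "http"
    if force_path_style:
        # Path-style: bucket name in the path. Often required for
        # S3-compatible stores such as Minio.
        return f"{scheme}://{endpoint}/{bucket}/{key}"
    # Virtual-hosted style: bucket name in the hostname (the AWS default).
    return f"{scheme}://{bucket}.{endpoint}/{key}"

print(object_url("s3.amazonaws.com", "name-of-bucket", "backup.tar", False))
# https://name-of-bucket.s3.amazonaws.com/backup.tar
print(object_url("minio.example.com:9000", "name-of-bucket", "backup.tar",
                 True, ssl=False))
# http://minio.example.com:9000/name-of-bucket/backup.tar
```

Set forcePathStyle to true when your S3-compatible endpoint does not support bucket-name subdomains.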

  3. Create the PostgresBackupLocation resource in the Postgres instance namespace:

    $ kubectl apply -f FILENAME -n DEVELOPMENT-NAMESPACE
    

    where:

    • FILENAME is the name of the configuration file you created in Step 2 above.
    • DEVELOPMENT-NAMESPACE is the namespace of the Postgres instance you intend to back up.

    For example:

    $ kubectl apply -f testbackuplocation.yaml -n my-namespace
    
    postgresbackuplocation.sql.tanzu.vmware.com/backuplocation-sample created
    secret/backuplocation-creds-sample configured
    
  4. View the created PostgresBackupLocation by running:

    $ kubectl get postgresbackuplocation BACKUP-LOCATION-NAME \
    -o jsonpath={.spec} -n DEVELOPMENT-NAMESPACE
    

    For example:

    $ kubectl get postgresbackuplocation backuplocation-sample -o jsonpath={.spec} -n my-namespace
    

    which returns output similar to:

    {
        "storage":{
            "s3":{
                "bucket":"name-of-bucket",
                "bucketPath":"/my-bucket-path",
                "enableSSL":true,
                "endpoint":"custom-endpoint",
                "forcePathStyle":false,
                "region":"us-east-1",
                "secret":{
                    "name":"backuplocation-creds-sample"
                }
            }
        }
    }
    
  5. Update the Postgres instance manifest with the backupLocation field. Go to the directory where you have stored the Tanzu Postgres instance manifest file. For example:

    $ cd ./postgres-for-kubernetes-v<version>
    
  6. Edit the manifest yaml file you used to deploy the instance; in this example the file is called postgres.yaml. Provide a value for the backupLocation attribute.

    For example:

    apiVersion: sql.tanzu.vmware.com/v1
    kind: Postgres
    metadata:
      name: postgres-sample
    spec:
      storageClassName: standard
      storageSize: 800M
      cpu: "0.8"
      memory: 800Mi
      monitorStorageClassName: standard
      monitorStorageSize: 1G
      pgConfig:
         dbname: postgres-sample
         username: pgadmin
      serviceType: ClusterIP
      highAvailability:
         enabled: false
      backupLocation:
         name: backuplocation-sample
    
  7. Execute the kubectl apply command, specifying the manifest file you edited. For example:

    $ kubectl apply -f ./postgres.yaml --wait=false
    
    postgres.sql.tanzu.vmware.com "postgres-sample" configured
    

    If the manifest file contains any incorrectly formatted values or unrecognized field names, an error message is displayed identifying the issue. Edit the manifest to correct the error and run the command again.

  8. Verify the updated configuration by viewing the backupLocation fields of the instance object:

    $ kubectl get postgres/postgres-sample -o jsonpath='{.spec.backupLocation}'
    
    {"name":"backuplocation-sample"}
    

Perform an On-Demand Backup

To take a backup:

  1. Locate the backup.yaml deployment template in the ./samples directory of the downloaded release file.

  2. Create a copy of the backup.yaml file and give it a unique name. For example:

    $ cp ~/Downloads/postgres-for-kubernetes-v1.3.0/samples/backup.yaml testbackup.yaml
    
  3. Edit the file according to your environment. For details on the properties of the PostgresBackup resource, see Properties for the PostgresBackup Resource.

  4. Trigger the backup by creating the PostgresBackup resource in the instance namespace, by running:

    $ kubectl apply -f FILENAME -n DEVELOPMENT-NAMESPACE
    

    where FILENAME is the name of the configuration file you created in Step 2 and edited in Step 3 above.

    For example:

    $ kubectl apply -f testbackup.yaml -n my-namespace
    
    postgresbackup.sql.tanzu.vmware.com/backup-sample created
    
  5. Verify that the backup has been generated, and track its progress by using:

    $ kubectl get postgresbackup backup-sample -n DEVELOPMENT-NAMESPACE
    

    For example:

    $ kubectl get postgresbackup backup-sample -n my-namespace
    
    NAME            STATUS      SOURCE INSTANCE     TYPE   TIME STARTED           TIME COMPLETED
    backup-sample   Succeeded   postgres-sample     full   2021-08-31T14:29:14Z   2021-08-31T14:29:14Z
    

    For further details on the above output, see List Existing PostgresBackup Resources below.

Create Scheduled Backups

To create scheduled backups, create a PostgresBackupSchedule resource:

  1. Locate the backupschedule.yaml template in the ./samples directory of the release download, and copy to a new file. For example:

    $ cp ~/Downloads/postgres-for-kubernetes-v1.3.0/samples/backupschedule.yaml testbackupschedule.yaml
    
  2. Edit the file with the name of the Postgres instance you want to back up. For example:

    apiVersion: sql.tanzu.vmware.com/v1
    kind: PostgresBackupSchedule
    metadata:
      name: backupschedule-sample
    spec:
      backupTemplate:
        spec:
           sourceInstance:
              name: postgres-sample
           type: full
      schedule: "0 0 * * SAT"
    

    where:

    • postgres-sample is the instance you plan to back up.
    • type is full.
    • schedule is a cron schedule; in this example, backups run every Saturday at 00:00:00.

    For an explanation of the PostgresBackupSchedule properties, see Properties for the PostgresBackupSchedule Resource.
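To sanity-check a cron expression such as "0 0 * * SAT" before applying the schedule, the evaluation can be sketched in a few lines. This is an illustrative, minimal matcher (numbers, day names, "*", and comma lists only; no ranges or steps), not the scheduler Tanzu Postgres uses:

```python
from datetime import datetime

DAY_NAMES = {"SUN": 0, "MON": 1, "TUE": 2, "WED": 3,
             "THU": 4, "FRI": 5, "SAT": 6}

def _to_int(token, names):
    # Resolve either a day-name token ("SAT") or a plain number ("0").
    return names[token.upper()] if token.upper() in names else int(token)

def _matches(field, value, names={}):
    if field == "*":
        return True
    return any(_to_int(tok, names) == value for tok in field.split(","))

def cron_matches(expr: str, when: datetime) -> bool:
    """Return True if `when` matches the five-field cron expression."""
    minute, hour, dom, month, dow = expr.split()
    return (_matches(minute, when.minute)
            and _matches(hour, when.hour)
            and _matches(dom, when.day)
            and _matches(month, when.month)
            # cron counts Sunday as 0; datetime.weekday() counts Monday as 0.
            and _matches(dow, (when.weekday() + 1) % 7, DAY_NAMES))

# 2021-01-02 was a Saturday, so midnight that day matches the sample schedule:
print(cron_matches("0 0 * * SAT", datetime(2021, 1, 2, 0, 0)))  # True
```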

  3. Create the PostgresBackupSchedule resource in the same namespace as the Postgres instance that you referenced in the PostgresBackupSchedule manifest file.

    $ kubectl apply -f FILENAME -n DEVELOPMENT-NAMESPACE
    

    where:

    • FILENAME is the name of the configuration file you created in Step 1 and edited in Step 2.
    • DEVELOPMENT-NAMESPACE is the namespace for the Postgres instance you intend to backup.

    For example:

    $ kubectl apply -f testbackupschedule.yaml -n my-namespace
    
    postgresbackupschedule.sql.tanzu.vmware.com/backupschedule-sample created
    
  4. Verify that the PostgresBackupSchedule has been created by running:

    $ kubectl get postgresbackupschedule backupschedule-sample -o jsonpath={.spec} -n DEVELOPMENT-NAMESPACE
    

    For example:

    $ kubectl get postgresbackupschedule backupschedule-sample -o jsonpath={.spec} -n my-namespace
    
    {
        "backupTemplate": {
            "spec": {
                "sourceInstance": {
                    "name": "postgres-sample"
                },
                "type": "full"
            }
        },
        "schedule": "0 0 * * SAT"
    }
    

    After configuring the PostgresBackupLocation resource and the PostgresBackupSchedule resource for an existing Postgres instance, backups will be generated and uploaded to the external blobstore at the scheduled time.

    The PostgresBackupSchedule generates PostgresBackup resources named in the format SCHEDULE-NAME-TIMESTAMP. For example, if the PostgresBackupSchedule resource on the Kubernetes cluster is named pgbackupschedule-sample, and a backup was taken on Thursday, December 10, 2020 at 8:51:03 PM GMT, the PostgresBackup resource name is pgbackupschedule-sample-20201210-205103.
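The naming pattern can be reproduced with a short sketch; the helper function below is illustrative, not part of the product:

```python
from datetime import datetime, timezone

def scheduled_backup_name(schedule_name: str, taken_at: datetime) -> str:
    """Reproduce the SCHEDULE-NAME-TIMESTAMP naming pattern described above."""
    return f"{schedule_name}-{taken_at.strftime('%Y%m%d-%H%M%S')}"

name = scheduled_backup_name(
    "pgbackupschedule-sample",
    datetime(2020, 12, 10, 20, 51, 3, tzinfo=timezone.utc))
print(name)  # pgbackupschedule-sample-20201210-205103
```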

Backup Schedule Status

Check the status.message field of the PostgresBackupSchedule CR to diagnose issues with the PostgresBackupSchedule spec, backup scheduling, or invalid cron schedule syntax, or errors such as backupLocation.name not being configured on the sourceInstance.

For example:

$ kubectl get postgresbackupschedule backupschedule-sample -o jsonpath={.status.message}

could return an output similar to:

Instance my-postgres-1 does not exist in the default namespace

if the wrong Postgres instance name (my-postgres-1 instead of postgres-sample) was entered in the PostgresBackupSchedule's spec.backupTemplate.spec.sourceInstance.name field.

If the backup was successfully scheduled but the backup itself failed, see Troubleshooting Backup and Restore.

Listing Backup Resources

You might want to list existing PostgresBackup resources for various reasons, for example:

  • To select a backup to restore. For steps to restore a backup, see Restoring Tanzu Postgres.
  • To see the last successful backup.
  • To verify that scheduled backups are running as expected.
  • To find old backups that need to be cleaned up. For steps to delete backups, see Deleting Old Backups.

List existing PostgresBackup resources by running:

$ kubectl get postgresbackup
NAME            STATUS      SOURCE INSTANCE   TYPE   TIME STARTED           TIME COMPLETED
backup-sample   Succeeded   postgres-sample   full   2021-08-31T14:29:14Z   2021-08-31T14:29:14Z

Where:

  • STATUS is the current status of the backup. Allowed values are:
    • Pending: The backup has been received but not scheduled on a Postgres Pod.
    • Running: The backup is being generated and streamed to the external blobstore.
    • Succeeded: The backup has completed successfully.
    • Failed: The backup has failed to complete. To troubleshoot a failed backup, see Troubleshoot Backup and Restore.
  • SOURCE INSTANCE is the Postgres instance the backup was taken from.
  • TYPE is the type of Postgres backup that was executed.
  • TIME STARTED is the time that the backup process started.
  • TIME COMPLETED is the time that the backup process finished. If the backup fails, this value is empty.
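The "last successful backup" check above can be sketched against records shaped like this listing; the dictionary field names are illustrative:

```python
# Pick the most recent successful backup from records modeled on the
# `kubectl get postgresbackup` listing above.
backups = [
    {"name": "backup-sample",   "status": "Succeeded", "completed": "2021-08-31T14:29:14Z"},
    {"name": "backup-sample-1", "status": "Failed",    "completed": ""},
    {"name": "backup-sample-2", "status": "Succeeded", "completed": "2021-09-02T09:10:00Z"},
]

def latest_successful(records):
    done = [b for b in records if b["status"] == "Succeeded"]
    # RFC 3339 timestamps with a fixed Z suffix sort correctly as strings.
    return max(done, key=lambda b: b["completed"])["name"] if done else None

print(latest_successful(backups))  # backup-sample-2
```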

Note: Users with version 1.2.0 backups can still list the earlier backup information, along with the new 1.3.0 backups. Use the pgbackrest command directly on the primary pod to review all existing backups, independent of version. For example, log in to the pod and run:

postgres@postgres-sample-0:/$ pgbackrest info --stanza=${BACKUP_STANZA_NAME}

If the BACKUP_STANZA_NAME is default-postgres-sample, the output would be similar to:

stanza: default-postgres-sample
    status: ok
    cipher: aes-256-cbc

    db (current)
        wal archive min/max (11): 000000010000000000000004/000000010000000000000009

        full backup: 20210915-140558F
            timestamp start/stop: 2021-09-15 14:05:58 / 2021-09-15 14:06:04
            wal start/stop: 000000010000000000000004 / 000000010000000000000004
            database size: 31.0MB, database backup size: 31.0MB
            repo1: backup set size: 3.7MB, backup size: 3.7MB

        full backup: 20210916-143321F
            timestamp start/stop: 2021-09-16 14:33:21 / 2021-09-16 14:33:41
            wal start/stop: 000000010000000000000009 / 000000010000000000000009
            database size: 31MB, database backup size: 31MB
            repo1: backup set size: 3.7MB, backup size: 3.7MB

To list backups related to a specific Postgres instance in the cluster, use:

$ kubectl get postgresbackups -l postgres-instance=postgres-sample

with output similar to:

NAME              STATUS      SOURCE INSTANCE   TYPE           TIME STARTED           TIME COMPLETED
backup-sample     Succeeded   postgres-sample   full           2021-10-05T21:17:34Z   2021-10-05T21:17:41Z
backup-sample-1   Succeeded   postgres-sample   full           2021-10-05T21:28:46Z   2021-10-05T21:28:54Z
backup-sample-2   Succeeded   postgres-sample   full           2021-10-05T21:29:44Z   2021-10-05T21:29:51Z
backup-sample-3   Succeeded   postgres-sample   differential   2021-10-05T21:36:43Z   2021-10-05T21:36:49Z
backup-sample-4   Succeeded   postgres-sample   differential   2021-10-05T21:37:20Z   2021-10-05T21:37:26Z
backup-sample-5   Succeeded   postgres-sample   differential   2021-10-05T21:37:39Z   2021-10-05T21:37:45Z
backup-sample-6   Succeeded   postgres-sample   full           2021-10-05T21:43:35Z   2021-10-05T21:43:42Z
backup-sample-7   Succeeded   postgres-sample   full           2021-10-05T21:49:33Z   2021-10-05T21:49:41Z
backup-sample-8   Succeeded   postgres-sample   full           2021-10-05T22:07:43Z   2021-10-05T22:07:50Z

Deleting Old Backups

Tanzu Postgres for Kubernetes does not natively support retention policies for backup artifacts. To create backup retention plans, configure retention policies on your external blobstore, and ensure you also delete the associated PostgresBackup resources in the Kubernetes cluster. Tanzu Postgres does not automatically delete backups to match your S3 retention plans.
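A client-side retention check along these lines can be sketched as follows; the record shape and retention window are illustrative, and you still need to delete both the blobstore artifact and the PostgresBackup resource yourself, as described in the steps that follow:

```python
from datetime import datetime, timedelta, timezone

def backups_to_delete(records, retention_days, now=None):
    """Return names of backups whose completion time is past the window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    stale = []
    for rec in records:
        completed = datetime.strptime(
            rec["completed"], "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
        if completed < cutoff:
            stale.append(rec["name"])
    return stale

records = [
    {"name": "backup-old", "completed": "2021-08-01T00:00:00Z"},
    {"name": "backup-new", "completed": "2021-10-01T00:00:00Z"},
]
print(backups_to_delete(records, retention_days=30,
                        now=datetime(2021, 10, 5, tzinfo=timezone.utc)))
# ['backup-old']
```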

To delete a backup:

  1. Delete the backup in the external blobstore.

  2. On your Kubernetes cluster, delete the PostgresBackup resource by running:

    $ kubectl delete postgresbackup BACKUP-NAME -n DEVELOPMENT-NAMESPACE
    

    For example:

    $ kubectl delete postgresbackup backup-sample -n my-namespace
    

Restoring Tanzu Postgres

Tanzu Postgres allows you to perform two types of data restores:

Restore In-place

In this scenario, you use a previous backup to overwrite the data in an existing instance.

Prerequisites

Before you restore from a backup, you must have:

  • An existing PostgresBackup in your current namespace. To list the existing PostgresBackup resources, see Listing Backup Resources. You can restore from any kind of backup (full, differential, incremental) provided it follows the pgbackrest guidelines.
  • A PostgresBackupLocation that represents the bucket where the existing backup artifact is stored. See Configure the Backup Location above.

Procedure

To restore from a full backup:

  1. Locate the restore.yaml deployment yaml in the ./samples directory of your downloaded release, and create a copy with a unique name. For example:

    $ cp ~/Downloads/postgres-for-kubernetes-v1.3.0/samples/restore.yaml testrestore.yaml
    
  2. Locate all backups of your existing instance. For example, to list all the backups executed against a Postgres instance called postgres-sample, use:

    $ kubectl get postgresbackups -n NAMESPACE -l postgres-instance=postgres-sample
    
    NAME              STATUS      SOURCE INSTANCE   TYPE   TIME STARTED           TIME COMPLETED
    backup-sample-4   Succeeded   postgres-sample   full   2021-09-24T19:40:17Z   2021-09-24T19:40:24Z
    backup-sample-5   Succeeded   postgres-sample   full   2021-09-25T16:54:55Z   2021-09-25T16:55:02Z
    backup-sample-6   Succeeded   postgres-sample   full   2021-09-24T19:48:41Z   2021-09-24T19:48:48Z
    backup-sample-7   Succeeded   postgres-sample   full   2021-09-27T23:04:06Z   2021-09-27T23:04:13Z
    backup-sample-8   Succeeded   postgres-sample   full   2021-09-27T23:19:51Z   2021-09-27T23:19:58Z
    
  3. Locate the backup you'd like to restore from, and edit restore.yaml with your information. For information about the PostgresRestore resource properties, see Property Reference for Backup and Restore.

      apiVersion: sql.tanzu.vmware.com/v1
      kind: PostgresRestore
      metadata:
        name: restore-sample
      spec:
        sourceBackup:
          name: backup-sample
        targetInstance:
          name: postgres-sample
    
  4. For an in-place restore, ensure that the sourceBackup was taken from the targetInstance. Refer to Step 2 for validation.

  5. Trigger the restore by creating the PostgresRestore resource in the same namespace as the PostgresBackup and PostgresBackupLocation. Run:

    $ kubectl apply -f FILENAME -n DEVELOPMENT-NAMESPACE
    

    where FILENAME is the name of the configuration file you created in Step 1 and edited in Step 3 above.

    For example:

    $ kubectl apply -f testrestore.yaml -n my-namespace
    
    postgresrestores.sql.tanzu.vmware.com/restore-sample created
    
  6. Verify that a restore has been triggered and track the progress of your restore by running:

    $ kubectl get postgresrestore restore-sample -n DEVELOPMENT-NAMESPACE
    

    For example:

    $ kubectl get postgresrestore restore-sample -n my-namespace
    
    NAME             STATUS      SOURCE BACKUP     TARGET INSTANCE   TIME STARTED           TIME COMPLETED
    restore-sample   Succeeded   backup-sample     postgres-sample   2021-09-27T23:34:13Z   2021-09-27T23:34:26Z
    

    The columns in the output are:

    • STATUS is the current status of the restore process. Allowed values are:
      • Running: The restore is in progress.
      • RecreateNodes: The Postgres nodes are being restarted as part of the restore workflow.
      • RecreatePrimary: For an HA target instance, the primary pod is being restarted.
      • WaitForPrimary: Waiting for the primary pod to be up and running.
      • RecreateSecondary: For an HA target instance, the secondary pod is being restarted.
      • Finalizing: The restore is nearly complete, waiting for the restart to finish and the target instance to come up.
      • Succeeded: The restore has completed successfully.
      • Failed: The restore failed. To troubleshoot, see Troubleshooting Backup and Restore.
    • SOURCE BACKUP is the name of the backup being restored.
    • TARGET INSTANCE is the name of the Postgres instance to be restored with the backup contents.
    • TIME STARTED is the time that the restore process started.
    • TIME COMPLETED is the time that the restore process finished.

Restore to a Different Instance

Prerequisites

  • Ensure the new target instance exists in the same namespace as the Postgres instance you're restoring from.

Procedure

  1. Create a new Postgres instance, if your target instance doesn't already exist.

  2. Locate the restore.yaml deployment yaml in the ./samples directory of your downloaded release, and create a copy with a unique name. For example:

    $ cp ~/Downloads/postgres-for-kubernetes-v1.3.0/samples/restore.yaml testrestore.yaml
    
  3. Locate all the backups for the existing instance. For example, to list all backups executed against a Postgres instance called postgres-sample, use a command similar to:

    $ kubectl get postgresbackups -n NAMESPACE -l postgres-instance=postgres-sample
    
    NAME              STATUS      SOURCE INSTANCE   TYPE   TIME STARTED           TIME COMPLETED
    backup-sample-4   Succeeded   postgres-sample   full   2021-09-24T19:40:17Z   2021-09-24T19:40:24Z
    backup-sample-5   Succeeded   postgres-sample   full   2021-09-25T16:54:55Z   2021-09-25T16:55:02Z
    backup-sample-6   Succeeded   postgres-sample   full   2021-09-24T19:48:41Z   2021-09-24T19:48:48Z
    backup-sample-7   Succeeded   postgres-sample   full   2021-09-27T23:04:06Z   2021-09-27T23:04:13Z
    backup-sample-8   Succeeded   postgres-sample   full   2021-09-27T23:19:51Z   2021-09-27T23:19:58Z
    
  4. Once you've located the backup you'd like to restore from, edit restore.yaml. For information about the properties that you can set for the PostgresRestore resource, see Property Reference for Backup and Restore.

      apiVersion: sql.tanzu.vmware.com/v1
      kind: PostgresRestore
      metadata:
        name: restore-sample
      spec:
        sourceBackup:
          name: backup-sample
        targetInstance:
          name: postgres-sample
    
  5. For a restore to a new instance, ensure that the sourceBackup was NOT taken from the target instance. Refer to Step 3 for validation.

  6. Trigger the restore by creating the PostgresRestore resource in the same namespace as the PostgresBackup and PostgresBackupLocation by running:

    $ kubectl apply -f FILENAME -n DEVELOPMENT-NAMESPACE
    

    Where FILENAME is the name of the configuration file you created in Step 2 above.

    For example:

    $ kubectl apply -f testrestore.yaml -n my-namespace
    
    postgresrestores.sql.tanzu.vmware.com/restore-sample created
    
  7. Verify that a restore has been triggered and track the progress of your restore by running:

    $ kubectl get postgresrestore restore-sample -n DEVELOPMENT-NAMESPACE
    

    For example:

    $ kubectl get postgresrestore restore-sample -n my-namespace
    NAME             STATUS      SOURCE BACKUP     TARGET INSTANCE   TIME STARTED           TIME COMPLETED
    restore-sample   Succeeded   backup-sample     postgres-sample   2021-09-27T23:34:13Z   2021-09-27T23:34:26Z
    
  8. The columns in the output are:

    • STATUS is the current status of the restore process. Allowed values are:
      • Running: The restore is in progress.
      • RecreateNodes: The Postgres nodes are being restarted as part of the restore workflow.
      • RecreatePrimary: For an HA target instance, the primary pod is being restarted.
      • WaitForPrimary: Waiting for the primary pod to be up and running.
      • RecreateSecondary: For an HA target instance, the secondary pod is being restarted.
      • Finalizing: The restore is nearly complete, waiting for the restart to finish and the target instance to come up.
      • Succeeded: The restore has completed successfully.
      • Failed: The restore failed. To troubleshoot, see Troubleshooting Backup and Restore below.
    • SOURCE BACKUP is the name of the backup being restored.
    • TARGET INSTANCE is the name of the new Postgres instance to be restored with the backup contents.
    • TIME STARTED is the time that the restore process started.
    • TIME COMPLETED is the time that the restore process finished.

Validating a Successful Restore

Validate that the Postgres instance has a status of Running:

$ kubectl get postgres.sql.tanzu.vmware.com/postgres-sample
NAME             STATUS    BACKUP LOCATION         AGE
postgres-sample  Running   backuplocation-sample   43h

Migrating to Tanzu Postgres 1.3.0 Backup and Restore

This topic applies to customers who already use Tanzu Postgres 1.2.0 or earlier.

Tanzu Postgres 1.3.0 deprecates the backupLocationSecret field on the Postgres instance spec which was used in versions 1.2.0 and earlier. Version 1.3.0 introduces three new backup and restore CRDs. Users can still view and restore from earlier backups, but Tanzu Postgres 1.3.0 requires migration to the new backup strategy. Perform the following steps to migrate:

  1. Confirm that the existing Postgres instance references a backupLocationSecret:

    $ kubectl get postgres.sql.tanzu.vmware.com/postgres-sample -o jsonpath='{.spec.backupLocationSecret.name}'
    
    my-postgres-s3-secret
    

    where my-postgres-s3-secret is an example of a secret referenced in the postgres-sample manifest file.

  2. Remove the backupLocationSecret from the instance spec:

    $ kubectl patch postgres.sql.tanzu.vmware.com/postgres-sample --type='json' -p='[{"op": "remove", "path":"/spec/backupLocationSecret"}]'
    
    postgres.sql.tanzu.vmware.com/postgres-sample patched
    
  3. Wait until all the changes have been successfully applied:

    $ kubectl rollout status statefulset.apps/postgres-sample
    
    partitioned roll out complete: 1 new pods have been updated...
    
  4. Upgrade the Tanzu Postgres Operator. For details see Upgrading the Tanzu Postgres Operator and Instances.

  5. Create a new backup resource and reference it in the instance spec. See Configure the Backup Location.

  6. Confirm that all the instances previously using backupLocationSecret have been updated to use the PostgresBackupLocation CR.

  7. Delete the secret that is no longer necessary.

  8. Perform an on-demand backup or create a schedule for scheduled backups. See Perform an On-Demand Backup and Create Scheduled Backups.

Troubleshooting Backup and Restore

To troubleshoot issues, review the resource status and read any messages associated with the resource events. Monitor the STATUS column of the relevant custom resource (for example, using kubectl get postgresbackup) to see whether the status is Failed, or is stuck in Pending, Scheduled, or Running. Then investigate:

  • Misconfiguration issues
  • Problems with the external blobstore
  • Issues with the Postgres Operator

Investigating a FAILED status due to missing pgbackrest.conf file

In this example, the kubectl get command outputs a Failed status:

$ kubectl get postgresbackup
NAME            STATUS   SOURCE INSTANCE   TYPE   TIME STARTED           TIME COMPLETED
backup-sample   Failed   postgres-sample   full   2021-08-31T14:29:14Z   2021-08-31T14:29:14Z

Diagnose the issue by inspecting the Kubernetes events for the resource. For example:

$ kubectl get events --field-selector involvedObject.name=backup-sample
LAST SEEN   TYPE      REASON   OBJECT                         MESSAGE
5s          Warning   Failed   postgresbackup/backup-sample   ERROR: [055]: unable to open missing file '/etc/pgbackrest/pgbackrest.conf' for read

Read the message in the MESSAGE column to understand why the failure occurred.

In the example above, the backup-sample expected a file called /etc/pgbackrest/pgbackrest.conf to exist. Fix this problem by creating and attaching a PostgresBackupLocation CR to the Postgres instance.

Investigating a FAILED PostgresRestore

In this example, the kubectl get command outputs a Failed status:

$ kubectl get postgresrestore
NAME             STATUS      SOURCE BACKUP          TARGET INSTANCE   TIME STARTED           TIME COMPLETED
restore-test     Failed      sample-source-backup   postgres-sample   2021-09-29T19:00:26Z   2021-09-29T19:00:26Z

Diagnose the issue by inspecting the Kubernetes events for the resource. For example:

$ kubectl get events --field-selector involvedObject.name=restore-test
LAST SEEN   TYPE      REASON                         OBJECT                         MESSAGE
3m21s       Warning   ReferencedBackupDoesNotExist   postgresrestore/restore-test   PostgresBackup.sql.tanzu.vmware.com "sample-source-backup" not found

In the example above, the restore failed because the backup specified in the sourceBackup field does not exist.
