
Backup and Restore RabbitMQ Deployments on Kubernetes

Introduction

RabbitMQ is a highly scalable and reliable open source message broker. It supports a number of different messaging protocols, as well as message queuing and plugins for additional customization.

VMware Tanzu Application Catalog's (Tanzu Application Catalog) RabbitMQ Helm chart makes it easy to deploy a scalable RabbitMQ cluster on Kubernetes. This Helm chart is compliant with current best practices and can also be easily upgraded to ensure that you always have the latest fixes and security updates.

Once the RabbitMQ cluster is operational, backing up the data held within it becomes an important and ongoing administrative task. A data backup/restore strategy is required not only for data security and disaster recovery planning, but also for other tasks like off-site data analysis or application load testing.

This guide explains how to back up and restore a RabbitMQ deployment on Kubernetes using Velero, an open-source Kubernetes backup/restore tool.

Assumptions and prerequisites

This guide makes the following assumptions:

  • You have two separate Kubernetes clusters - a source cluster and a destination cluster - with kubectl and Helm v3 installed. This guide uses Google Kubernetes Engine (GKE) clusters but you can also use any other Kubernetes provider. Learn how to install kubectl and Helm v3.x.
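
    To confirm that both tools are available, you can check their versions (a quick sanity check, not part of the original steps):

    kubectl version --client
    helm version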

  • You have configured Helm to use the Tanzu Application Catalog chart repository following the instructions for Tanzu Application Catalog or the instructions for VMware Tanzu Application Catalog for Tanzu Advanced.
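
    As an illustrative sketch, configuring the repository typically involves adding it under a local name and refreshing the index. The REPOSITORY and REPOSITORY-URL values below are placeholders; use the values from the instructions linked above:

    helm repo add REPOSITORY REPOSITORY-URL
    helm repo update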

  • You have previously deployed the Tanzu Application Catalog RabbitMQ Helm chart on the source cluster and added some data to it. Example command sequences to perform these tasks are shown below, where the PASSWORD placeholder refers to the administrator password (used with the chart's default administrator username, user) and the cluster is deployed with a single node. Replace the REPOSITORY placeholder with a reference to your Tanzu Application Catalog chart repository.

    helm install rabbitmq REPOSITORY/rabbitmq --set auth.password=PASSWORD --set service.type=LoadBalancer --set replicaCount=1
    export SERVICE_IP=$(kubectl get svc --namespace default rabbitmq --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
    rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD declare queue name=my-queue durable=true
    rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD publish routing_key=my-queue payload="message 1" properties="{\"delivery_mode\":2}"
    rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD publish routing_key=my-queue payload="message 2" properties="{\"delivery_mode\":2}"
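
    Optionally, verify that the queue and messages were created before proceeding (an extra sanity check, using the same placeholders as above):

    rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD list queues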
    
  • The Kubernetes provider is supported by Velero.

  • Both clusters are on the same Kubernetes provider, as this is a requirement of Velero's native support for migrating persistent volumes.

  • The restored deployment on the destination cluster will have the same name, namespace and credentials as the original deployment on the source cluster.

NOTE

The procedure outlined in this guide can only be used to back up and restore persistent messages in the source RabbitMQ cluster. Transient messages will not be backed up or restored.

NOTE

For persistent volume migration across cloud providers with Velero, you have the option of using Velero's Restic integration. This integration is not covered in this guide.

Step 1: Install Velero on the source cluster

Velero is an open source tool that makes it easy to back up and restore Kubernetes resources. It can be used to back up an entire cluster or specific resources such as persistent volumes.

  1. Switch your kubectl context to the source cluster (if not already done).
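
    For example, assuming a context named SOURCE-CLUSTER-CONTEXT (a placeholder; list the available contexts with kubectl config get-contexts):

    kubectl config use-context SOURCE-CLUSTER-CONTEXT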

  2. Follow the Velero plugin setup instructions for your cloud provider. For example, if you are using Google Cloud Platform (as this guide does), follow the GCP plugin setup instructions to create a service account and storage bucket and obtain a credentials file.
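
    As an abbreviated sketch of those instructions (PROJECT-ID is a placeholder, and the step in which the plugin documentation grants the service account the permissions Velero requires is omitted here):

    gsutil mb gs://BUCKET-NAME/
    gcloud iam service-accounts create velero --display-name "Velero service account"
    gcloud iam service-accounts keys create SECRET-FILENAME --iam-account velero@PROJECT-ID.iam.gserviceaccount.com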

  3. Install Velero on the source cluster by executing the command below. Replace the BUCKET-NAME placeholder with the name of your storage bucket and the SECRET-FILENAME placeholder with the path to your credentials file:

    velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.2.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME
    

    You should see output confirming the creation of Velero resources as the installation proceeds.

  4. Confirm that the Velero deployment is successful by checking for a running pod using the command below:

    kubectl get pods -n velero
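
    You can also confirm that Velero can reach the storage bucket. Depending on your Velero version, the backup storage location should be reported as Available:

    velero backup-location get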
    

Step 2: Back up the RabbitMQ deployment on the source cluster

The next step involves using Velero to copy the persistent data volumes for the RabbitMQ pods. These copied data volumes can then be reused in a new deployment.

  1. Create a backup of the volumes in the running RabbitMQ deployment on the source cluster. This backup will contain the persistent volumes of all the nodes in the cluster.

    velero backup create rabbitmq-backup --include-resources=pvc,pv --selector app.kubernetes.io/instance=rabbitmq
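
    The backup runs asynchronously. You can watch its status with the command below and wait for it to report Completed:

    velero backup get rabbitmq-backup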
    
  2. Execute the command below to view the contents of the backup and confirm that it contains all the required resources:

    velero backup describe rabbitmq-backup --details
    
  3. To avoid the backup data being overwritten, switch the bucket to read-only access:

    kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'
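
    Once the restore in Step 3 is complete, you can make the location writable again so that future backups succeed. This mirrors the patch above and is an optional addition to the original sequence:

    kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadWrite"}}'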
    

Step 3: Restore the RabbitMQ deployment on the destination cluster

You can now restore the persistent volumes and integrate them with a new RabbitMQ deployment on the destination cluster.

  1. Switch your kubectl context to the destination cluster.

  2. Install Velero on the destination cluster as described in Step 1. Remember to use the same values for the BUCKET-NAME and SECRET-FILENAME placeholders as you did originally, so that Velero is able to access the previously saved backups.

    velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.2.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME
    
  3. Confirm that the Velero deployment is successful by checking for a running pod using the command below:

    kubectl get pods -n velero
    
  4. Restore the persistent volumes in the same namespace as the source cluster using Velero.

    velero restore create --from-backup rabbitmq-backup
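
    Restores also run asynchronously. Check progress with the command below and wait for a Completed status; the restore name is generated from the backup name plus a timestamp:

    velero restore get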
    
  5. Confirm that the persistent volumes have been restored:

    kubectl get pvc
    
  6. Create a new RabbitMQ deployment. Use the same name, namespace and cluster topology as the original deployment. Replace the PASSWORD placeholder with the same administrator password used in the original deployment and the REPOSITORY placeholder with a reference to your Tanzu Application Catalog chart repository.

    helm install rabbitmq REPOSITORY/rabbitmq --set auth.password=PASSWORD --set service.type=LoadBalancer --set replicaCount=1
    

    NOTE: If using Tanzu Application Catalog for Tanzu Advanced, install the chart following the steps described in the VMware Tanzu Application Catalog for Tanzu Advanced documentation instead.

    NOTE: The deployment command shown above is only an example. It is important to create the new deployment on the destination cluster using the same namespace, deployment name, credentials and cluster topology as the original deployment on the source cluster.

    This will create a new deployment that uses the original pod volumes (and hence the original data).

  7. Connect to the new deployment and confirm that your original queues and messages are intact using a query like the example shown below. Replace the PASSWORD placeholder with the administrator password.

    export SERVICE_IP=$(kubectl get svc --namespace default rabbitmq --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
    rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD list queues
    rabbitmqadmin -H $SERVICE_IP -u user -p PASSWORD get queue=my-queue ackmode=ack_requeue_true count=10
    

    Confirm that your original data is intact.
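
    For illustration only (exact formatting varies by rabbitmqadmin version), the queue listing should report my-queue holding the two persistent messages published on the source cluster:

    +----------+----------+
    |   name   | messages |
    +----------+----------+
    | my-queue | 2        |
    +----------+----------+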
