This topic describes how to use Velero to back up and restore a stateless application using Velero's label selector feature.
To test Velero backup and restore, use the stateless Guestbook application. Because this example demonstrates backup and restore with label selectors, the Guestbook app is deployed with labels applied to its objects.
Install and configure MinIO, Velero, and restic.
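If Velero is not yet installed, an installation against a MinIO backend generally takes the following shape; the plugin version, credentials file, and MinIO endpoint are placeholders for your environment, and the bucket name matches the tkgi-velero bucket used later in this topic:
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.1.0 \
    --bucket tkgi-velero \
    --secret-file ./credentials-minio \
    --use-volume-snapshots=false \
    --use-restic \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<minio-server-ip>:9000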
Download the Guestbook app YAML files to a local known directory:
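For example, the standard Guestbook manifests can be fetched from the Kubernetes examples site; the source URLs and target directory here are assumptions, and any copy of the Guestbook YAML files works:
mkdir -p ~/K8s-Apps/guestbook && cd ~/K8s-Apps/guestbook
curl -sLO https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml
curl -sLO https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
curl -sLO https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml
curl -sLO https://k8s.io/examples/application/guestbook/redis-follower-service.yaml
curl -sLO https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
curl -sLO https://k8s.io/examples/application/guestbook/frontend-service.yaml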
Create the Guestbook namespace:
kubectl create ns guestbook
namespace/guestbook created
Deploy the Guestbook app:
kubectl apply -f . -n guestbook
deployment.apps/frontend created
service/frontend created
deployment.apps/redis-leader created
service/redis-leader created
deployment.apps/redis-follower created
service/redis-follower created
Verify:
kubectl get all -n guestbook --show-labels
NAME                                  READY   STATUS    RESTARTS   AGE   LABELS
pod/frontend-6cb7f8bd65-57lmk         1/1     Running   0          18m   app=guestbook,pod-template-hash=6cb7f8bd65,tier=frontend
pod/frontend-6cb7f8bd65-92j8j         1/1     Running   0          18m   app=guestbook,pod-template-hash=6cb7f8bd65,tier=frontend
pod/frontend-6cb7f8bd65-mvrqg         1/1     Running   0          18m   app=guestbook,pod-template-hash=6cb7f8bd65,tier=frontend
pod/redis-leader-7b8487bf68-gmkgp     1/1     Running   0          18m   app=redis,pod-template-hash=7b8487bf68,role=leader,tier=backend
pod/redis-follower-5bdcfd74c7-7jjg6   1/1     Running   0          18m   app=redis,pod-template-hash=5bdcfd74c7,role=follower,tier=backend
pod/redis-follower-5bdcfd74c7-mbdcr   1/1     Running   0          18m   app=redis,pod-template-hash=5bdcfd74c7,role=follower,tier=backend

NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE   LABELS
service/frontend         LoadBalancer   10.100.200.64   10.199.41.15   80:30698/TCP   18m   app=guestbook,tier=frontend
service/redis-leader     ClusterIP      10.100.200.98   <none>         6379/TCP       18m   app=redis,role=leader,tier=backend
service/redis-follower   ClusterIP      10.100.200.47   <none>         6379/TCP       18m   app=redis,role=follower,tier=backend

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
deployment.apps/frontend         3/3     3            3           18m   <none>
deployment.apps/redis-leader     1/1     1            1           18m   <none>
deployment.apps/redis-follower   2/2     2            2           18m   <none>

NAME                                        DESIRED   CURRENT   READY   AGE   LABELS
replicaset.apps/frontend-6cb7f8bd65         3         3         3       18m   app=guestbook,pod-template-hash=6cb7f8bd65,tier=frontend
replicaset.apps/redis-leader-7b8487bf68     1         1         1       18m   app=redis,pod-template-hash=7b8487bf68,role=leader,tier=backend
replicaset.apps/redis-follower-5bdcfd74c7   2         2         2       18m   app=redis,pod-template-hash=5bdcfd74c7,role=follower,tier=backend
Access the Guestbook app at http://10.199.41.15/.
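If your client VM has no browser, a quick curl against the external IP confirms that the frontend is serving; the address is the EXTERNAL-IP of the frontend service shown above:
curl -s http://10.199.41.15/ | head -n 5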
By default, there is no common label that encompasses all of the Kubernetes objects that make up the application in the guestbook namespace. Apply a common label, application=guestbook, to these objects:
kubectl label -n guestbook pod --all application=guestbook
kubectl label -n guestbook service --all application=guestbook
kubectl label -n guestbook deployment --all application=guestbook
kubectl label -n guestbook replicaset --all application=guestbook
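Equivalently, because kubectl accepts a comma-separated list of resource types, the four commands above can be combined into one:
kubectl label -n guestbook pod,service,deployment,replicaset --all application=guestbook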
Verify: you should see all of the application's objects carrying the same application=guestbook label:
kubectl get all -n guestbook --show-labels
When backing up an application using Velero with the label option, you can specify only one label in the --selector field. This means the label must be present on all of the Kubernetes objects that belong to the application. Create the backup using the label selector:
velero backup create guestbook-backup-label --selector application=guestbook
Backup request "guestbook-backup-label" submitted successfully.
Run `velero backup describe guestbook-backup-label` or `velero backup logs guestbook-backup-label` for more details.
Verify:
velero backup describe guestbook-backup-label --details
Expected result:
Name:         guestbook-backup-label
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.17.8+vmware.1
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=17

Phase:  Completed

Errors:    0
Warnings:  0

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  application=guestbook

Storage Location:  default

Velero-Native Snapshot PVs:  auto

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1

Started:    2020-07-22 14:46:41 -0700 PDT
Completed:  2020-07-22 14:46:43 -0700 PDT

Expiration:  2020-08-21 14:46:41 -0700 PDT

Total items to be backed up:  18
Items backed up:              18

Resource List:
  apps/v1/Deployment:
    - guestbook/frontend
    - guestbook/redis-leader
    - guestbook/redis-follower
  apps/v1/ReplicaSet:
    - guestbook/frontend-6cb7f8bd65
    - guestbook/redis-leader-7b8487bf68
    - guestbook/redis-follower-5bdcfd74c7
  v1/Endpoints:
    - guestbook/frontend
    - guestbook/redis-leader
    - guestbook/redis-follower
  v1/Pod:
    - guestbook/frontend-6cb7f8bd65-57lmk
    - guestbook/frontend-6cb7f8bd65-92j8j
    - guestbook/frontend-6cb7f8bd65-mvrqg
    - guestbook/redis-leader-7b8487bf68-gmkgp
    - guestbook/redis-follower-5bdcfd74c7-7jjg6
    - guestbook/redis-follower-5bdcfd74c7-mbdcr
  v1/Service:
    - guestbook/frontend
    - guestbook/redis-leader
    - guestbook/redis-follower

Velero-Native Snapshots: <none included>
Verify the backup that was created:
velero backup get
NAME               STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
guestbook-backup   Completed   0        0          2020-07-21 14:45:51 -0700 PDT   29d       default            <none>
Also, check the tkgi-velero bucket on the MinIO server.
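One way to inspect the bucket is with the MinIO client, mc; the alias name, endpoint, and credentials below are placeholders for your environment. Velero stores backup data under backups/<backup-name>/:
mc alias set minio http://<minio-server-ip>:9000 <access-key> <secret-key>
mc ls minio/tkgi-velero/backups/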
Velero writes some of its metadata to Kubernetes custom resources, which are defined by CustomResourceDefinitions (CRDs). List the CRDs:
kubectl get crd
The Velero CRDs let you run certain commands, such as the following:
kubectl get backups.velero.io -n velero
NAME AGE
guestbook-backup 24h
guestbook-backup-label 2m16s
kubectl describe backups.velero.io guestbook-backup-label -n velero
Name:         guestbook-backup-label
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion: v1.17.8+vmware.1
              velero.io/source-cluster-k8s-major-version: 1
              velero.io/source-cluster-k8s-minor-version: 17
API Version:  velero.io/v1
Kind:         Backup
Metadata:
  Creation Timestamp:  2020-07-22T21:46:41Z
  Generation:          5
  Resource Version:    3597347
  Self Link:           /apis/velero.io/v1/namespaces/velero/backups/guestbook-backup-label
  UID:                 0a88adde-1c1a-4988-828f-886aa14fda17
Spec:
  Hooks:
  Included Namespaces:
    *
  Label Selector:
    Match Labels:
      Application:  guestbook
  Storage Location:  default
  Ttl:               720h0m0s
Status:
  Completion Timestamp:  2020-07-22T21:46:43Z
  Expiration:            2020-08-21T21:46:41Z
  Format Version:        1.1.0
  Phase:                 Completed
  Progress:
    Items Backed Up:  18
    Total Items:      18
  Start Timestamp:  2020-07-22T21:46:41Z
  Version:          1
Events:  <none>
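The same pattern applies to Velero's other CRDs. For example, to inspect the backup storage location registered during installation (backupstoragelocations.velero.io is a standard Velero resource type; the output depends on your configuration):
kubectl get backupstoragelocations.velero.io -n velero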
To test the restoration of the Guestbook app, delete the namespace:
kubectl delete ns guestbook
namespace "guestbook" deleted
Restore the Guestbook app:
velero restore create --from-backup guestbook-backup-label
Restore request "guestbook-backup-label-20200722145118" submitted successfully.
Run `velero restore describe guestbook-backup-label-20200722145118` or `velero restore logs guestbook-backup-label-20200722145118` for more details.
Verify that the app is restored:
velero restore describe guestbook-backup-label-20200722145118
Name:         guestbook-backup-label-20200722145118
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Phase:  Completed

Backup:  guestbook-backup-label

Namespaces:
  Included:  all namespaces found in the backup
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  auto
Run the following commands to verify:
velero restore get
NAME                                    BACKUP                   STATUS      ERRORS   WARNINGS   CREATED                         SELECTOR
guestbook-backup-20200721145620         guestbook-backup         Completed   0        0          2020-07-21 14:56:20 -0700 PDT   <none>
guestbook-backup-label-20200722145118   guestbook-backup-label   Completed   0        0          2020-07-22 14:51:18 -0700 PDT   <none>
kubectl get ns
NAME              STATUS   AGE
default           Active   13d
guestbook         Active   19s
kube-node-lease   Active   13d
kube-public       Active   13d
kube-system       Active   13d
pks-system        Active   13d
velero            Active   50m
kubectl get all -n guestbook --show-labels
Access the Guestbook app at http://10.199.41.14/.
Note the following:
The guestbook namespace is automatically re-created.
The external IP address of the frontend LoadBalancer service changed from 10.199.41.10 to 10.199.41.14.
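To confirm the new external IP address after a restore, you can read it directly from the restored service (a standard kubectl JSONPath query against the frontend service restored above):
kubectl get service frontend -n guestbook -o jsonpath='{.status.loadBalancer.ingress[0].ip}'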