This topic describes how to install Velero for backing up and restoring Tanzu Kubernetes Grid Integrated Edition (TKGI)-provisioned Kubernetes workloads. This topic also describes how to install MinIO for Velero.
Ensure the following before installing Velero for backing up and restoring TKGI:
To deploy and configure a MinIO Server on a Linux Ubuntu VM as the Velero backend object store:
For more information about MinIO, see the MinIO Quick Start Guide.
To install MinIO:
Install the MinIO app:
wget https://dl.min.io/server/minio/release/linux-amd64/minio
Grant execute permissions to the MinIO app:
chmod +x minio
Create a directory where MinIO data will be stored:
mkdir /DATA-MINIO
To prepare the MinIO server:
Start the MinIO server:
./minio server /DATA-MINIO
After the MinIO server starts, it displays the datastore instance endpoint URL, AccessKey, and SecretKey. Record this information; you need it in later steps.
To enable MinIO as a service, configure MinIO for automatic startup:
Download the minio.service script:
curl -O https://raw.githubusercontent.com/minio/minio-service/master/linux-systemd/minio.service
Edit the minio.service script and add the following value for ExecStart:
ExecStart=/usr/local/bin/minio server /DATA-MINIO
Save the revised script.
Configure the MinIO service by running the following commands:
cp minio.service /etc/systemd/system
cp minio /usr/local/bin/
systemctl daemon-reload
systemctl start minio
systemctl status minio
systemctl enable minio
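For reference, a minimal sketch of what the edited minio.service unit might look like. The upstream unit in the minio-service repository also sets User, Group, and environment options, so treat this as illustrative only:

```ini
[Unit]
Description=MinIO object storage
After=network-online.target
Wants=network-online.target

[Service]
# Serve the data directory created earlier.
ExecStart=/usr/local/bin/minio server /DATA-MINIO
Restart=always

[Install]
WantedBy=multi-user.target
```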
To create a MinIO bucket for TKGI workload backup and restore:
Browse to the MinIO datastore by opening a browser to the MinIO server endpoint URL recorded from the minio server output. For example: http://10.199.17.63:9000/minio/login/.
Log in to the MinIO server and provide the AccessKey and SecretKey. These are the username and password as described in User Management in the MinIO documentation.
Create a bucket for storing TKGI workload backups. For example: tkgi-velero.
To install the Velero CLI on your workstation:
To download the Velero CLI Binary:
Download the supported version of the signed Velero binary for your version of TKGI from the TKGI product downloads page at Broadcom Support. For more information about the currently supported Velero versions, see the Product Snapshot section of the Release Notes.
Note: You must use the Velero binary signed by VMware to be eligible for support from VMware.
To install the Velero CLI on the TKGI client or on your local machine:
Unzip the downloaded file:
gunzip velero-linux-v1.12.1+vmware.1.gz
To check for the Velero binary:
ls -l
For example:
$ ls -l
-rwxrwxr-x 1 kubo kubo 69985692 Nov 14 02:55 velero-linux-v1.12.1+vmware.1
Grant execute permissions to the Velero CLI:
chmod +x velero-linux-v1.12.1+vmware.1
Make the Velero CLI globally available by moving it to the system path:
cp velero-linux-v1.12.1+vmware.1 /usr/local/bin/velero
Verify the installation:
velero version
For example:
$ velero version
Client:
Version: v1.12.1
To install the Velero pods on each Kubernetes cluster whose workloads you intend to back up, complete the following:
The following steps require that:
- The MinIO bucket tkgi-velero exists, with the AccessKey and SecretKey that you recorded. For example, AccessKey: 0XXNO8JCCGV41QZBV0RQ and SecretKey: clZ1bf8Ljkvkmq7fHucrKCkxV39BRbcycGeXQDfx.
- kubectl works against the cluster. If needed, use tkgi get-credentials.
The Velero CLI context automatically follows the kubectl context. Before running Velero CLI commands to install Velero on the target cluster, set the kubectl context.
Set the context for the target Kubernetes cluster so that the Velero CLI knows which cluster to work on by running:
tkgi get-credentials CLUSTER-NAME
Where CLUSTER-NAME is the name of the cluster. For example:
$ tkgi get-credentials cluster-1
Fetching credentials for cluster cluster-1.
Password: ********
Context set for cluster cluster-1.
You can now switch between clusters by using:
$kubectl config use-context <cluster-name>
You can also run kubectl config use-context CLUSTER-NAME to set the context.
To create a secrets file, create a file named credentials-minio. Update the file with the MinIO server access credentials that you collected above:
[default]
aws_access_key_id = ACCESS-KEY
aws_secret_access_key = SECRET-KEY
Where:
- ACCESS-KEY is the AccessKey that you collected above.
- SECRET-KEY is the SecretKey that you collected above.
For example:
[default]
aws_access_key_id = 0XXNO8JCCGV41QZBV0RQ
aws_secret_access_key = clZ1bf8Ljkvkmq7fHucrKCkxV39BRbcycGeXQDfx
Save the file.
Verify that the file is in place:
ls
For example:
$ ls
credentials-minio
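The secrets-file steps above can also be scripted. A minimal sketch that writes the file with placeholder keys (substitute your real AccessKey and SecretKey) and restricts its permissions, since it holds credentials:

```shell
# Write the Velero secrets file; ACCESS-KEY and SECRET-KEY are placeholders.
cat > credentials-minio <<'EOF'
[default]
aws_access_key_id = ACCESS-KEY
aws_secret_access_key = SECRET-KEY
EOF

# Limit the file to the current user.
chmod 600 credentials-minio

# Confirm both key lines are present.
grep -c '^aws_' credentials-minio   # → 2
```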
To install Velero:
Run the following command to install Velero on the target Kubernetes cluster:
velero install --image projects.registry.vmware.com/tkg/velero/velero:v1.12.1_vmware.1 \
--provider aws \
--plugins projects.registry.vmware.com/tkg/velero/velero-plugin-for-aws:v1.7.1_vmware.1 \
--bucket tkgi-velero \
--secret-file ./credentials-minio \
--use-volume-snapshots=false \
--default-volumes-to-fs-backup \
--use-node-agent \
--backup-location-config \
region=minio,s3ForcePathStyle="true",s3Url=http://10.199.17.63:9000,publicUrl=http://10.199.17.63:9000
For example:
$ velero install --image projects.registry.vmware.com/tkg/velero/velero:v1.12.1_vmware.1 --provider aws --plugins projects.registry.vmware.com/tkg/velero/velero-plugin-for-aws:v1.7.1_vmware.1 \
--bucket tkgi-velero --secret-file ./credentials-minio --use-volume-snapshots=false \
--default-volumes-to-fs-backup \
--use-node-agent \
--backup-location-config \
region=minio,s3ForcePathStyle="true",s3Url=http://10.199.17.63:9000,publicUrl=http://10.199.17.63:9000
CustomResourceDefinition/backups.velero.io: created
...
Waiting for resources to be ready in cluster...
...
DaemonSet/node-agent: created
Velero is installed! Use 'kubectl logs deployment/velero -n velero' to view the status.
Verify the installation of Velero:
kubectl logs deployment/velero -n velero
Verify the velero namespace:
kubectl get ns
For example:
$ kubectl get ns
NAME STATUS AGE
default Active 13d
kube-node-lease Active 13d
kube-public Active 13d
kube-system Active 13d
pks-system Active 13d
velero Active 2m38s
Verify the velero and node-agent pods:
kubectl get all -n velero
For example:
$ kubectl get all -n velero
NAME READY STATUS RESTARTS AGE
pod/node-agent-96zjb 0/1 CrashLoopBackOff 4 (21s ago) 2m5s
pod/node-agent-9r7tn 0/1 CrashLoopBackOff 4 (29s ago) 2m5s
pod/node-agent-bw5pf 0/1 CrashLoopBackOff 4 (27s ago) 2m5s
pod/velero-7d459ffc95-44sps 1/1 Running 0 2m5s
To run the three-pod node-agent DaemonSet on a Kubernetes cluster in TKGI, you must modify the node-agent DaemonSet spec and change the hostPath property.
To modify the node-agent DaemonSet:
Verify the three-pod node-agent DaemonSet:
kubectl get pod -n velero
For example:
$ kubectl get pod -n velero
NAME READY STATUS RESTARTS AGE
pod/node-agent-p5bdz 0/1 CrashLoopBackOff 4 3m8s
pod/node-agent-rbmnd 0/1 CrashLoopBackOff 4 3m8s
pod/node-agent-vcpjm 0/1 CrashLoopBackOff 4 3m8s
pod/velero-68f47744f5-lb5df 1/1 Running 0 3m8s
Run the following command:
kubectl edit daemonset node-agent -n velero
Change hostPath from /var/lib/kubelet/pods to /var/vcap/data/kubelet/pods:
- hostPath:
path: /var/vcap/data/kubelet/pods
Save the file.
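In context, the edited entry is one item in the DaemonSet's volumes list. A sketch of the surrounding fragment, assuming the upstream Velero volume name host-pods:

```yaml
volumes:
- name: host-pods
  hostPath:
    path: /var/vcap/data/kubelet/pods
```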
To verify the three-pod node-agent DaemonSet:
kubectl get pod -n velero
For example:
kubectl get pod -n velero
NAME READY STATUS RESTARTS AGE
pod/node-agent-6ljm5 1/1 Running 0 23s
pod/node-agent-94cfd 1/1 Running 0 23s
pod/node-agent-brv77 1/1 Running 0 22s
pod/velero-7d459ffc95-44sps 1/1 Running 0 4m24s
If your Velero backup returns status=InProgress for many hours, increase the limits and requests memory settings.
To increase limits and requests memory settings:
Run the following command:
kubectl edit deployment/velero -n velero
Change the limits and requests memory settings from the defaults of 256Mi and 128Mi to 512Mi and 256Mi:
ports:
- containerPort: 8085
name: metrics
protocol: TCP
resources:
limits:
cpu: "1"
memory: 512Mi
requests:
cpu: 500m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
If you are working in an air-gapped environment, you can install Velero using an internal registry. For more information, see Air-gapped deployments in the Velero documentation.
These steps require that the kubectl context is set for the target cluster and that the credentials-minio file exists. For more information, see Set Up the kubectl Context above.
Download the Velero CLI and Velero with restic Docker images for your version of TKGI:
Note: You must use the container images signed by VMware to be eligible for support from VMware.
Push the Docker images into the internal registry. Adjust the variables as needed for your registry instance and preferences.
docker login harbor.example.com
docker load -i velero-plugin-for-aws-v1.7.1_vmware.1.tar.gz
docker tag vmware.io/velero-plugin-for-aws:v1.7.1_vmware.1 harbor.example.com/vmware-tanzu/velero-plugin-for-aws:v1.7.1_vmware.1
docker load -i velero-restic-restore-helper-v1.12.1+vmware.1.tar.gz
docker tag projects.registry.vmware.com/tkg/velero/velero-restic-restore-helper:v1.12.1_vmware.1 harbor.example.com/vmware-tanzu/velero-restic-restore-helper:v1.12.1_vmware.1
docker load -i velero-v1.12.1+vmware.1.tar.gz
docker tag projects.registry.vmware.com/tkg/velero/velero:v1.12.1_vmware.1 harbor.example.com/vmware-tanzu/velero:v1.12.1_vmware.1
docker push harbor.example.com/vmware-tanzu/velero-plugin-for-aws:v1.7.1_vmware.1
docker push harbor.example.com/vmware-tanzu/velero-restic-restore-helper:v1.12.1_vmware.1
docker push harbor.example.com/vmware-tanzu/velero:v1.12.1_vmware.1
Install Velero:
velero install --image harbor.example.com/vmware-tanzu/velero:v1.12.1_vmware.1 \
--plugins harbor.example.com/vmware-tanzu/velero-plugin-for-aws:v1.7.1_vmware.1 \
--provider aws --bucket tkgi-velero --secret-file ./credentials-minio \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://20.20.224.27:9000,publicUrl=http://20.20.224.27:9000 --use-node-agent --default-volumes-to-fs-backup
For example:
$ velero install --image harbor.example.com/vmware-tanzu/velero:v1.12.1_vmware.1 --plugins harbor.example.com/vmware-tanzu/velero-plugin-for-aws:v1.7.1_vmware.1 --provider aws --bucket tkgi-velero --secret-file ./credentials-minio --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://20.20.224.27:9000,publicUrl=http://20.20.224.27:9000 --use-node-agent --default-volumes-to-fs-backup
Velero is installed! Use 'kubectl logs deployment/velero -n velero' to view the status.
For more information about installing Velero, see On-Premises Environments in the Velero documentation.
After installing, configure the restic post-installation settings. Velero v1.10 sets its node-agent to restic by default:
- Modify the host path: replace /var/lib/kubelet/pods with /var/vcap/data/kubelet/pods, then verify that the restic pods are running. For more information, see Modify the Host Path above.
- Add '- --restic-timeout=900m' to spec.template.spec.containers.
- (Optional) Adjust your restic node-agent Pod CPU and memory reserves: Depending on your requirements, you can adjust the CPU and memory reserves and limits for your Velero and restic Pods. For more information, see Adjust Velero Memory Limits (if necessary) above.
restic pod
resources:
limits:
cpu: "1"
memory: 1Gi
requests:
cpu: 500m
memory: 512Mi
velero pod
resources:
limits:
cpu: "1"
memory: 256Mi
requests:
cpu: 500m
memory: 128Mi
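The restic timeout setting mentioned above lands under the Velero Deployment's container args. A sketch of the relevant fragment, with other args elided:

```yaml
spec:
  template:
    spec:
      containers:
      - name: velero
        args:
        - server
        - --restic-timeout=900m
```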