You can back up and restore workloads running on a TKGS cluster by installing the Velero Plugin for vSphere on that cluster.
Overview
Prerequisite: Install the Velero Plugin for vSphere on Supervisor
Installing the Velero Plugin for vSphere on a TKGS cluster requires that the Supervisor has the Velero Plugin for vSphere installed. In addition, the Supervisor must be configured with NSX networking. See Install the Velero Plugin for vSphere on Supervisor.
Storage Requirement
To take a TKG Service cluster backup, you need a storage backend as described here. If you are backing up multiple clusters, do not use the same storage backend for different cluster backups. If you share a storage backend, backup objects from one cluster are synced to every other cluster using that backend, which can leak data across clusters. Use a separate storage backend for each cluster.
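The per-cluster requirement above can be sketched as a naming convention: derive one distinct bucket per cluster. The cluster names and the `velero-` prefix below are hypothetical examples, not a required scheme.

```shell
#!/bin/sh
# Sketch: one unique storage bucket per cluster, so backup objects are
# never synced between clusters. Cluster names are examples only.
for cluster in tkg-cluster-01 tkg-cluster-02; do
  bucket="velero-${cluster}"   # unique bucket per cluster
  echo "cluster=${cluster} -> bucket=${bucket}"
done
```

Any scheme works as long as no two clusters resolve to the same bucket.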
Step 1: Install the Velero CLI on a Linux Workstation
The Velero CLI is the standard tool for interfacing with Velero. It provides more functionality than the Velero Plugin for vSphere CLI (velero-vsphere) and is required for backing up and restoring Tanzu Kubernetes cluster workloads.
Install the Velero CLI on a Linux workstation. Ideally this is the same jump host where you run the associated CLIs for your vSphere IaaS control plane environment, including kubectl, kubectl-vsphere, and velero-vsphere.
The Velero version numbers are presented as X.Y.Z. Refer to the Velero Compatibility Matrix for the specific versions to use, and substitute accordingly when running the commands.
- Run the following commands:
$ wget https://github.com/vmware-tanzu/velero/releases/download/vX.Y.Z/velero-vX.Y.Z-linux-amd64.tar.gz
$ gzip -d velero-vX.Y.Z-linux-amd64.tar.gz && tar -xvf velero-vX.Y.Z-linux-amd64.tar
$ export PATH="$(pwd)/velero-vX.Y.Z-linux-amd64:$PATH"
$ which velero
/root/velero-vX.Y.Z-linux-amd64/velero
- Verify the installation of the Velero CLI.
velero version
Client:
    Version: vX.Y.Z
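The download steps above can be parameterized so the X.Y.Z placeholder is substituted in one place. The default version below is an assumption taken from the outputs later in this page; use the version from the Velero Compatibility Matrix. This sketch only prints the URL and PATH entry instead of downloading, so it is safe to run anywhere.

```shell
#!/bin/sh
# Sketch: parameterize the Velero CLI download by version (dry run).
# VELERO_VERSION default is an assumption; check the compatibility matrix.
VELERO_VERSION="${VELERO_VERSION:-1.11.1}"
tarball="velero-v${VELERO_VERSION}-linux-amd64.tar.gz"
url="https://github.com/vmware-tanzu/velero/releases/download/v${VELERO_VERSION}/${tarball}"
echo "fetch:    ${url}"
echo "add PATH: \$(pwd)/velero-v${VELERO_VERSION}-linux-amd64"
```

Replace the echo statements with the wget/tar/export commands from the step above once the printed values look right.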
Step 2: Get the S3-Compatible Bucket Details
For convenience, the steps assume that you are using the same S3-compatible object store that you configured when you installed the Velero Plugin for vSphere on the Supervisor. In production you may want to create a separate object store.
Data Item | Example Value
---|---
s3Url | http://my-s3-store.example.com
aws_access_key_id | ACCESS-KEY-ID-STRING
aws_secret_access_key | SECRET-ACCESS-KEY-STRING
Create a file named s3-credentials with the following information. You will reference this file when you install the Velero Plugin for vSphere.
[default]
aws_access_key_id = ACCESS-KEY-ID-STRING
aws_secret_access_key = SECRET-ACCESS-KEY-STRING
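The credentials file above can be generated from environment variables so real keys never land in shell history. This is a minimal sketch; the defaults below are the documentation's placeholders, so export your real values before running.

```shell
#!/bin/sh
# Sketch: write the s3-credentials file from environment variables.
# The defaults are placeholders; export real keys before running.
: "${AWS_ACCESS_KEY_ID:=ACCESS-KEY-ID-STRING}"
: "${AWS_SECRET_ACCESS_KEY:=SECRET-ACCESS-KEY-STRING}"
cat > ./s3-credentials <<EOF
[default]
aws_access_key_id = ${AWS_ACCESS_KEY_ID}
aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
EOF
chmod 600 ./s3-credentials   # keep the credentials readable by owner only
echo "wrote ./s3-credentials"
```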
Step 3 Option A: Install the Velero Plugin for vSphere on the TKG Cluster Using a Label (New Method)
- Verify that the Velero vSphere Operator Core Supervisor Service is activated.
kubectl get ns | grep velero
svc-velero-domain-c9   Active   18d
- Verify that a Kubernetes namespace named velero is created on the Supervisor.
kubectl get ns | grep velero
svc-velero-domain-c9   Active   18d
velero                 Active   1s
- Verify that the Velero Plugin for vSphere Supervisor Service is enabled on the Supervisor.
kubectl get veleroservice -A
NAMESPACE   NAME      AGE
velero      default   53m
- Verify that the backup storage location is accessible.
velero backup-location get
NAME      PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        velero          Available   2023-11-20 14:10:57 -0800 PST   ReadWrite     true
- Enable Velero for the target TKG cluster by adding the velero label to the cluster.
kubectl label cluster CLUSTER-NAME --namespace CLUSTER-NS velero.vsphere.vmware.com/enabled=true
Note: This is done from the vSphere Namespace where the cluster is provisioned.
- Verify that Velero is installed and ready for the cluster.
kubectl get ns
NAME                                 STATUS   AGE
...
velero                               Active   2m   <--
velero-vsphere-plugin-backupdriver   Active   2d23h
kubectl get all -n velero
NAME                                 READY   STATUS    RESTARTS   AGE
pod/backup-driver-5945d6bcd4-gtw9d   1/1     Running   0          17h
pod/velero-6b9b49449-pq6b4           1/1     Running   0          18h

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/backup-driver   1/1     1            1           17h
deployment.apps/velero          1/1     1            1           18h

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/backup-driver-5945d6bcd4   1         1         1       17h
replicaset.apps/velero-6b9b49449           1         1         1       18h
velero version
Client:
    Version: v1.11.1
    Git commit: bdbe7eb242b0f64d5b04a7fea86d1edbb3a3587c
Server:
    Version: v1.11.1
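If you enable Velero for several clusters, the label step above repeats once per cluster. This sketch prints the label command for each cluster/namespace pair rather than running it; the pairs are hypothetical, so review the output and pipe it to sh to apply.

```shell
#!/bin/sh
# Sketch: print the Option A enable-label command per target cluster
# (dry run). Cluster and namespace names are hypothetical examples.
for entry in "tkg-cluster-01:tkg-ns-01" "tkg-cluster-02:tkg-ns-02"; do
  cluster="${entry%%:*}"   # part before the colon
  ns="${entry##*:}"        # part after the colon
  echo "kubectl label cluster ${cluster} --namespace ${ns} velero.vsphere.vmware.com/enabled=true"
done
```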
Step 3 Option B: Install the Velero Plugin for vSphere on the TKG Cluster Manually (Legacy Method)
You are going to use the Velero CLI to install the Velero Plugin for vSphere on the target TKG cluster that you want to back up and restore.
Note: Velero CLI commands run against the current kubectl context. Before running Velero CLI commands to install Velero and the Velero Plugin for vSphere on the target cluster, be sure to set the kubectl context to the target cluster.
- Using the vSphere Plugin for kubectl, authenticate with the Supervisor.
- Set the kubectl context to the target TKG cluster.
kubectl config use-context TARGET-TANZU-KUBERNETES-CLUSTER
- On the TKG cluster, create a file named velero-vsphere-plugin-config.yaml that defines a configmap for the Velero plugin.
apiVersion: v1
kind: ConfigMap
metadata:
  name: velero-vsphere-plugin-config
data:
  cluster_flavor: GUEST
Apply the configmap on the TKG cluster.
kubectl apply -n <velero-namespace> -f velero-vsphere-plugin-config.yaml
If you do not install the configmap, you receive the following error when you try to install the Velero Plugin for vSphere.
Error received while retrieving cluster flavor from config, err: configmaps "velero-vsphere-plugin-config" not found
Falling back to retrieving cluster flavor from vSphere CSI Driver Deployment
- Run the following Velero CLI command to install Velero on the target cluster.
Replace the placeholder values for the BUCKET-NAME, REGION (two instances), and s3Url fields with the appropriate values. If you deviated from any of the preceding instructions, adjust those values as well, such as the name or location of the secrets file or the name of the manually created velero namespace.
./velero install --provider aws \
    --bucket BUCKET-NAME \
    --secret-file ./s3-credentials \
    --features=EnableVSphereItemActionPlugin \
    --plugins velero/velero-plugin-for-aws:vX.Y.Z \
    --snapshot-location-config region=REGION \
    --backup-location-config region=REGION,s3ForcePathStyle="true",s3Url=http://my-s3-store.example.com
- Install the Velero Plugin for vSphere on the target cluster. The installed Velero instance communicates with the Kubernetes API server to install the plugin.
velero plugin add vsphereveleroplugin/velero-plugin-for-vsphere:vX.Y.Z
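The Option B install command has several placeholders; assembling it from variables keeps the substitutions in one place. This is a dry-run sketch that only prints the command: the BUCKET, REGION, S3_URL, and plugin version values below are examples, not required values.

```shell
#!/bin/sh
# Sketch: assemble the Option B install command from variables (dry run).
# All values below are examples; take plugin versions from the
# Velero Compatibility Matrix.
BUCKET="velero"
REGION="us-east-1"
S3_URL="http://my-s3-store.example.com"
AWS_PLUGIN_VERSION="X.Y.Z"   # placeholder, as in the doc
cmd="./velero install --provider aws \
  --bucket ${BUCKET} \
  --secret-file ./s3-credentials \
  --features=EnableVSphereItemActionPlugin \
  --plugins velero/velero-plugin-for-aws:v${AWS_PLUGIN_VERSION} \
  --snapshot-location-config region=${REGION} \
  --backup-location-config region=${REGION},s3ForcePathStyle=\"true\",s3Url=${S3_URL}"
echo "$cmd"
```

Once the printed command matches your environment, run it directly (or eval "$cmd").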
Addendum: Uninstall the Velero Plugin for vSphere from the TKG Cluster
- Set the kubectl context to the target Tanzu Kubernetes cluster.
kubectl config use-context TARGET-TANZU-KUBERNETES-CLUSTER
- To uninstall the plugin, run the following command to remove the velero-plugin-for-vsphere init container from the Velero deployment.
velero plugin remove vsphereveleroplugin/velero-plugin-for-vsphere:vX.Y.Z
- To complete the process, delete the Backup Driver deployment and related CRDs.
kubectl -n velero delete deployment.apps/backup-driver
kubectl delete crds \
    backuprepositories.backupdriver.cnsdp.vmware.com \
    backuprepositoryclaims.backupdriver.cnsdp.vmware.com \
    clonefromsnapshots.backupdriver.cnsdp.vmware.com \
    deletesnapshots.backupdriver.cnsdp.vmware.com \
    snapshots.backupdriver.cnsdp.vmware.com
kubectl delete crds uploads.datamover.cnsdp.vmware.com downloads.datamover.cnsdp.vmware.com
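The CRD cleanup above can be expressed as a single loop over both the backupdriver and datamover groups. This sketch prints each delete command (dry run); drop the echo to execute for real.

```shell
#!/bin/sh
# Sketch: CRD cleanup for the backup-driver uninstall as one loop
# (dry run). Remove the echo to actually delete the CRDs.
for crd in \
  backuprepositories.backupdriver.cnsdp.vmware.com \
  backuprepositoryclaims.backupdriver.cnsdp.vmware.com \
  clonefromsnapshots.backupdriver.cnsdp.vmware.com \
  deletesnapshots.backupdriver.cnsdp.vmware.com \
  snapshots.backupdriver.cnsdp.vmware.com \
  uploads.datamover.cnsdp.vmware.com \
  downloads.datamover.cnsdp.vmware.com; do
  echo "kubectl delete crd ${crd}"
done
```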