This topic describes how to install Velero for backing up and restoring Tanzu Kubernetes Grid Integrated Edition (TKGI)-provisioned Kubernetes workloads on vSphere.

Prerequisites

Ensure the following before installing Velero for backing up and restoring TKGI on vSphere:

  • Your clusters use the automatically installed vSphere CSI Driver. For more information, see Deploying and Managing Cloud Native Storage (CNS) on vSphere. A quick check for this prerequisite appears after this list.
  • Allow Privileged is enabled in the plan for the cluster being backed up. For more information, see Plans in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere.
  • You have read: Tanzu Kubernetes Workload Back Up and Restore Requirements in Backing Up and Restoring Tanzu Kubernetes Workloads Using Velero with Restic.
  • You have a Linux VM with sufficient storage to store several workload backups. You will install MinIO on this VM. For more information, see Quick start evaluation install with MinIO in the Velero documentation.
  • You have a TKGI Client VM (Linux) where CLI tools are installed, such as the TKGI CLI, kubectl, and others. You will install the Velero CLI on this client VM. If you do not have such a VM, you can install the Velero CLI locally but adjust the following installation steps to match your configuration.
  • The Kubernetes environment has internet access and can be reached by the client VM. If the environment does not have internet access, refer to Install Velero in an Air-Gapped Environment below.
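
As a quick check for the first prerequisite above, you can confirm that the vSphere CSI Driver is registered on a target cluster and that a CSI-backed StorageClass exists. This optional check assumes kubectl access to the cluster; csi.vsphere.vmware.com is the standard vSphere CSI driver name:

    kubectl get csidriver csi.vsphere.vmware.com
    kubectl get storageclass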

Deploy an Object Store

The Velero backup procedure requires an object store as the destination for workload backups. Deploy and configure a MinIO Server on a Linux Ubuntu VM as the Velero backend object store. For more information, see Deploy an Object Store.
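
The following is a minimal sketch of what the object store setup involves, assuming a Docker-capable Ubuntu VM, the MinIO client (mc), and the example bucket name tkgi-velero used later in this topic. The credentials are placeholders; replace them and record the values for later steps. Follow Deploy an Object Store for the complete procedure.

    # Run a MinIO server in a container, storing objects under /data/minio on the VM.
    docker run -d --name minio \
      -p 9000:9000 -p 9001:9001 \
      -e MINIO_ROOT_USER=ACCESS-KEY \
      -e MINIO_ROOT_PASSWORD=SECRET-KEY \
      -v /data/minio:/data \
      minio/minio server /data --console-address ":9001"

    # Create the bucket that Velero uses as its backup destination.
    mc alias set tkgi-minio http://localhost:9000 ACCESS-KEY SECRET-KEY
    mc mb tkgi-minio/tkgi-velero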

Install the Velero CLI on Your Workstation

To install the Velero CLI on your workstation:

  1. Download the Velero CLI Binary
  2. Install the Velero CLI

Download the Velero CLI Binary

To download the Velero CLI Binary:

  1. Download the supported version of the signed Velero binary for your version of TKGI from the TKGI product downloads page at myVMware. For more information about the currently supported Velero versions, see the Product Snapshot section of the Release Notes.

    Note: You must use the Velero binary signed by VMware to be eligible for support from VMware.

Install the Velero CLI

To install the Velero CLI on the TKGI client or on your local machine:

  1. Open a command line and change directory to the Velero CLI download.
  2. Unzip the downloaded file:

    gunzip velero-linux-v1.8.1_vmware.1.gz
    
  3. Grant execute permissions to the Velero CLI:

    chmod +x velero-linux-v1.8.1_vmware.1
    
  4. Make the Velero CLI globally available by copying it to the system path:

    cp velero-linux-v1.8.1_vmware.1 /usr/local/bin/velero
    
  5. Verify the installation:

    velero version
    

    For example:

    $ velero version
    
    Client:
        Version: v1.8.1
    

Install Velero on the Target Kubernetes Cluster

To install the Velero pod on each Kubernetes cluster whose workloads you intend to back up, complete the following:

  1. Prerequisites
  2. Set Up the kubectl Context
  3. Install Velero
  4. Create a Velero vSphere Credential Secret
  5. Create the Velero vSphere Plugin Configuration File
  6. Install Velero vSphere Plugin
  7. Back up the VCP Volumes Migrated to vSphere CSI Driver
  8. Adjust Velero Memory Limits If Necessary

Prerequisites

The following steps require that:

  • You have installed MinIO as your backup object store. For more information, see Deploy an Object Store above.
  • Your Kubernetes cluster has internet access.

Set Up the kubectl Context

The Velero CLI context will automatically follow the kubectl context. Before running Velero CLI commands to install Velero on the target cluster, set the kubectl context:

  1. Retrieve the name of the MinIO bucket. For example, tkgi-velero.
  2. Get the AccessKey and SecretKey for the MinIO bucket. For example, AccessKey: 0XXNO8JCCGV41QZBV0RQ and SecretKey: clZ1bf8Ljkvkmq7fHucrKCkxV39BRbcycGeXQDfx.
  3. Verify kubectl works against the cluster. If needed, use tkgi get-credentials.
  4. Set the context for the target Kubernetes cluster so that the Velero CLI knows which cluster to work with:

    tkgi get-credentials CLUSTER-NAME
    

    Where CLUSTER-NAME is the name of the cluster.

    For example:

    $ tkgi get-credentials cluster-1
    
    Fetching credentials for cluster cluster-1.
    Password: ********
    Context set for cluster cluster-1.
    
    You can now switch between clusters by using:
    $kubectl config use-context <cluster-name>
    

    You can also run kubectl config use-context CLUSTER-NAME to set context.

  5. Create a secrets file named credentials-minio and update it with the MinIO server access credentials that you collected above:

    [default]
    aws_access_key_id = ACCESS-KEY
    aws_secret_access_key = SECRET-KEY
    

    Where:

    • ACCESS-KEY is the AccessKey that you collected above.
    • SECRET-KEY is the SecretKey that you collected above.

    For example:

    [default]
    aws_access_key_id = 0XXNO8JCCGV41QZBV0RQ
    aws_secret_access_key = clZ1bf8Ljkvkmq7fHucrKCkxV39BRbcycGeXQDfx
    
  6. Save the file.
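
Optionally, before installing Velero, confirm that the kubectl context points at the intended cluster and that the MinIO credentials work. This sketch assumes the MinIO client (mc) is installed on the client VM; substitute your MinIO address, port, and keys:

    kubectl config current-context
    mc alias set tkgi-minio http://IP-ADDRESS:PORT ACCESS-KEY SECRET-KEY
    mc ls tkgi-minio/tkgi-velero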

Install Velero

  1. Install Velero on the target Kubernetes cluster:

    velero install \
    --image projects.registry.vmware.com/tkg/velero/velero:v1.8.1_vmware.1 \
    --provider aws --bucket tkgi-velero \
    --secret-file ./credentials-minio \
    --plugins "projects.registry.vmware.com/tkg/velero/velero-plugin-for-aws:v1.4.1_vmware.1" \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://IP-ADDRESS:PORT,publicUrl=http://IP-ADDRESS:PORT \
    --snapshot-location-config region=minio
    

    Where:

    • IP-ADDRESS is the IP address that is used to connect to the MinIO server.
    • PORT is the number of the port that is used to connect to the MinIO server.

    For example:

    $ velero install --image projects.registry.vmware.com/tkg/velero/velero:v1.8.1_vmware.1 --provider aws --bucket tkgi-velero --secret-file ./credentials-minio --plugins "projects.registry.vmware.com/tkg/velero/velero-plugin-for-aws:v1.4.1_vmware.1" --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://20.20.233.44:9000,publicUrl=http://20.20.233.44:9000 --snapshot-location-config region=minio
    CustomResourceDefinition/backups.velero.io: created
    ...
    Waiting for resources to be ready in cluster...
    ...
    Velero is installed! Use 'kubectl logs deployment/velero -n velero' to view the status.
    

    Note: You must include the --snapshot-location-config region configuration parameter.

  2. Verify the installation of Velero:

    kubectl logs deployment/velero -n velero
    
  3. Verify the velero namespace:

    kubectl get ns
    

    For example:

    $ kubectl get ns
    
    NAME              STATUS   AGE
    default           Active   13d
    kube-node-lease   Active   13d
    kube-public       Active   13d
    kube-system       Active   13d
    pks-system        Active   13d
    velero            Active   2m38s
    
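
Optionally, confirm that the backup storage location created by the install can reach the MinIO bucket; the default location should report an Available phase:

    velero backup-location get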

Create a Velero vSphere Credential Secret

  1. Create the csi-vsphere.conf file with the following details:

    [Global]
    cluster-id = "CLUSTER-NAME"                
    [VirtualCenter "IP-ADDRESS"]    
    user = "USERNAME"       
    password = "PASSWORD" 
    port = "443" 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • IP-ADDRESS is the IP address of the vCenter Server.
    • USERNAME is the vCenter Server user name that you want to use.
    • PASSWORD is the password for that user.
  2. Create the secret:

    kubectl -n NAMESPACE create secret generic velero-vsphere-config-secret --from-file=csi-vsphere.conf
    

    Where NAMESPACE is the Velero namespace.
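
To confirm that the secret was created with the expected key, you can inspect it. The following assumes the velero namespace and the secret name used above:

    kubectl -n velero describe secret velero-vsphere-config-secret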

Create the Velero vSphere Plugin Configuration File

  1. Create a ConfigMap YAML file. For example configmap.yaml.
  2. Modify the ConfigMap file with the following:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: velero-vsphere-plugin-config
    data:
      cluster_flavor:           "VANILLA"
      vsphere_secret_name:      "SECRET-NAME"
      vsphere_secret_namespace: "SECRET-NAMESPACE"  # optional, default is velero
    

    Where:

    • SECRET-NAME is the name you applied to your Velero secret.
    • SECRET-NAMESPACE is the secret namespace. For example velero.
  3. Save the ConfigMap file.

  4. Apply the ConfigMap:

    kubectl apply -f CONFIGMAP-FILE -n SECRET-NAMESPACE
    

    Where:

    • CONFIGMAP-FILE is the name of your ConfigMap file. For example configmap.yaml.
    • SECRET-NAMESPACE is the secret namespace. For example velero.
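
To confirm that the plugin configuration was applied, you can read the ConfigMap back. The following assumes the velero namespace:

    kubectl -n velero get configmap velero-vsphere-plugin-config -o yaml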

Install Velero vSphere Plugin

  1. Install the Velero plugin for vSphere:

    velero plugin add projects.registry.vmware.com/tkg/velero/velero-plugin-for-vsphere:v1.4.0_vmware.1
    
  2. Configure the Velero snapshot location:

    velero snapshot-location create vsl-vsphere --provider velero.io/vsphere
    
  3. Verify the velero pod:

    kubectl get all -n velero
    

    For example:

    $ kubectl get all -n velero
    
    NAME                         READY   STATUS             RESTARTS   AGE
    pod/velero-8dc7498d9-9v7x4   1/1     Running            0          30s
    
  4. Verify the snapshot plugin:

    velero plugin get
    

    Confirm the vsphere VolumeSnapshotter plugin is included in the returned list.

    For example:

    $ velero plugin get
    
    NAME                               KIND
    velero.io/crd-remap-version        BackupItemAction
    velero.io/pod                      BackupItemAction
    velero.io/pv                       BackupItemAction
    velero.io/service-account          BackupItemAction
    velero.io/vsphere-pvc-backupper    BackupItemAction
    velero.io/vsphere-pvc-deleter      DeleteItemAction
    velero.io/aws                      ObjectStore
    velero.io/add-pv-from-pvc          RestoreItemAction
    velero.io/add-pvc-from-pod         RestoreItemAction
    velero.io/change-pvc-node-selector RestoreItemAction
    velero.io/change-storage-class     RestoreItemAction
    velero.io/cluster-role-bindings    RestoreItemAction
    velero.io/crd-preserve-fields      RestoreItemAction
    velero.io/init-restore-hook        RestoreItemAction
    velero.io/job                      RestoreItemAction
    velero.io/pod                      RestoreItemAction
    velero.io/role-bindings            RestoreItemAction
    velero.io/service                  RestoreItemAction
    velero.io/service-account          RestoreItemAction
    velero.io/vsphere-pvc-restorer     RestoreItemAction
    velero.io/aws                      VolumeSnapshotter
    velero.io/vsphere                  VolumeSnapshotter
    
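
You can also confirm that the snapshot location created in step 2 is registered; the list should include vsl-vsphere with the velero.io/vsphere provider:

    velero snapshot-location get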

Back up the VCP Volumes Migrated to vSphere CSI Driver

Follow these steps if you intend to back up VCP volumes that were migrated to the vSphere CSI Driver.

  1. Create the velero-vsphere-plugin-feature-states.yaml ConfigMap file.
  2. Modify the ConfigMap file with the following:

    apiVersion: v1
    data:
      csi-migrated-volume-support: "true"
      decouple-vsphere-csi-driver: "true"
      local-mode: "false"
    kind: ConfigMap
    metadata:
      name: velero-vsphere-plugin-feature-states
    
  3. Save the ConfigMap file.

  4. Apply the ConfigMap:

    kubectl apply -f velero-vsphere-plugin-feature-states.yaml -n velero
    

Adjust Velero Memory Limits If Necessary

If your Velero backup returns status=InProgress for many hours, increase the memory limits and requests settings. To do this:

  1. Run the following command:

    kubectl edit deployment/velero -n velero
    
  2. Change the limits and requests memory settings from the defaults of 256Mi and 128Mi to 512Mi and 256Mi:

    ports:
    - containerPort: 8085
      name: metrics
      protocol: TCP
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
      requests:
        cpu: 500m
        memory: 256Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    
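
If you prefer a non-interactive change, the same values can be set with a strategic merge patch. This is a sketch that assumes the container in the Velero Deployment is named velero, which is the default:

    kubectl -n velero patch deployment velero -p \
      '{"spec":{"template":{"spec":{"containers":[{"name":"velero","resources":{"limits":{"memory":"512Mi"},"requests":{"memory":"256Mi"}}}]}}}}'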

Install Velero in an Air-Gapped Environment

If you are working in an air-gapped environment, you can install Velero using an internal registry. For more information, see Air-gapped deployments in the Velero documentation.

Prerequisites

Ensure the following before installing Velero in an air-gapped environment:

  • A private container registry is installed and configured. The procedure below uses a VMware Harbor Container Registry.
  • Docker is installed on the workstation or TKGI jump host.
  • The kubectl context has been set and the MinIO credentials-minio file exists. For more information, see Set Up the kubectl Context above.

Procedure

  1. Open the VMware Velero downloads page for your version of TKGI, linked from the Product Snapshot section of the Release Notes.
  2. Download the Velero CLI and Velero Plugin for vSphere images for your version of TKGI:

    backup-driver-v1.4.0_vmware.1.tar.gz
    data-manager-for-plugin-v1.4.0_vmware.1.tar.gz
    velero-plugin-for-vsphere-v1.4.0_vmware.1.tar.gz
    

    Note: You must use the container images signed by VMware to be eligible for support from VMware.

  3. Push the Docker images to the internal registry. Adjust the registry URL and image paths as needed for your registry instance and preferences.

    docker login harbor.example.com
    docker load -i backup-driver-v1.4.0_vmware.1.tar.gz
    docker tag vmware-tanzu/backup-driver:v1.4.0_vmware.1  harbor.example.com/vmware-tanzu/backup-driver:v1.4.0_vmware.1
    docker load -i velero-plugin-for-vsphere-v1.4.0_vmware.1.tar.gz
    docker tag vmware-tanzu/velero-plugin-for-vsphere:v1.4.0_vmware.1  harbor.example.com/vmware-tanzu/velero-plugin-for-vsphere:v1.4.0_vmware.1
    docker load -i data-manager-for-plugin-v1.4.0_vmware.1.tar.gz
    docker tag vmware-tanzu/data-manager-for-plugin:v1.4.0_vmware.1  harbor.example.com/vmware-tanzu/data-manager-for-plugin:v1.4.0_vmware.1
    docker push harbor.example.com/vmware-tanzu/backup-driver:v1.4.0_vmware.1
    docker push harbor.example.com/vmware-tanzu/velero-plugin-for-vsphere:v1.4.0_vmware.1
    docker push harbor.example.com/vmware-tanzu/data-manager-for-plugin:v1.4.0_vmware.1
    
  4. Install Velero:

    velero install --image harbor.example.com/vmware-tanzu/velero:v1.8.1_vmware.1 \
    --plugins harbor.example.com/vmware-tanzu/velero-plugin-for-aws:v1.4.1_vmware.1 \
    --provider aws --bucket tkgi-velero --secret-file ./credentials-minio \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://IP-ADDRESS:PORT,publicUrl=http://IP-ADDRESS:PORT --snapshot-location-config region=minio
    

    Where:

    • IP-ADDRESS is the IP address that is used to connect to the MinIO server.
    • PORT is the number of the port that is used to connect to the MinIO server.

    For example:

    $ velero install --image harbor.example.com/vmware-tanzu/velero:v1.8.1_vmware.1 --plugins harbor.example.com/vmware-tanzu/velero-plugin-for-aws:v1.4.1_vmware.1 --provider aws --bucket tkgi-velero --secret-file ./credentials-minio --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://20.20.224.27:9000,publicUrl=http://20.20.224.27:9000 --snapshot-location-config region=minio
    Velero is installed! Use 'kubectl logs deployment/velero -n velero' to view the status.
    

    For more information about installing Velero, see On-Premises Environments in the Velero documentation.

  5. Complete the steps in Create the Velero vSphere Plugin Configuration File above. You must create the Velero vSphere plugin configuration file before installing the Velero plugin for vSphere.

  6. Install the Velero plugin for vSphere:

    velero plugin add harbor.example.com/vmware-tanzu/velero-plugin-for-vsphere:v1.4.0_vmware.1
    
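
As in the online installation, you can verify that the plugin registered successfully by listing the plugins and confirming that the velero.io/vsphere VolumeSnapshotter is included:

    velero plugin get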