You can back up and restore across clusters using the NFS File Server.

To perform a backup and restore across clusters, set acrossClusters to true on the source cluster. This setting makes the backup visible to other clusters. For example:
apiVersion: tcx.vmware.com/v1
kind: Backup
metadata:
  name: group-backup-tps
  namespace: tps-system
spec:
  pauseIntegrityCheck: false
  storage:
    minio:
      bucket: vmware-tcsa-backup
      endpoint: minio.tcsa-system.svc.cluster.local:9000
      secretRef:
        name: minio-secrets
        namespace: tcsa-system
        accessKey:
          key: root-user
        secretKey:
          key: root-password
#      tls:
#        secretName: minio-selfsigned-crt
#        namespace: tcsa-system
#        tlsCrt:
#          key: tls.crt
#        caCrt:
#          key: ca.crt
  acrossClusters:
    enabled: true
  cluster:
    name: tcsa2.4.0
  components:
    postgres:
      timeout: 10m
      config:
        endpoint:
          host: postgres-cluster.tps-system.svc.cluster.local
          port: 5432
        adminSecret:
          name: postgres-db-secret
          namespace: tps-system
      dbs:
        - adminservice
        - airflow
        - alarmservice
        - analyticsservice
        - collector
        - grafana
        - keycloak
        - kpiservice
        - remediation
        - spe
        - svix_server
        - dm_upgrade
        - enrichment
#  Uncomment only if you have enabled the Grafana scheduled export feature
#        - grafana_scheduler

  retentionPolicy:
    numberOfDaysToKeep: 45

---
apiVersion: tcx.vmware.com/v1
kind: Backup
metadata:
  name: group-backup-tcsa
  namespace: tcsa-system
spec:
  pauseIntegrityCheck: false
  storage:
    minio:
      bucket: vmware-tcsa-backup
      endpoint: minio.tcsa-system.svc.cluster.local:9000
      secretRef:
        name: minio-secrets
        namespace: tcsa-system
        accessKey:
          key: root-user
        secretKey:
          key: root-password
#      tls:
#        secretName: minio-selfsigned-crt
#        namespace: tcsa-system
#        tlsCrt:
#          key: tls.crt
#        caCrt:
#          key: ca.crt
  acrossClusters:
    enabled: true
  cluster:
    name: tcsa2.4.0
  components:
    collectors:
      timeout: 10m
      config:
        endpoint:
          scheme: http
          host: collector-manager.tcsa-system.svc.cluster.local
          port: 12375
          basePath: /dcc/v1/
        authenticationSecret:
          name: collectors-secrets
          namespace: tcsa-system
          usernameKey:
            key: COLLECTORS_USERNAME
          passwordKey:
            key: COLLECTORS_PASSWORD
    elastic:
      timeout: 30m
      config:
        endpoint:
          host: elasticsearch.tcsa-system.svc.cluster.local
          port: 9200
          scheme: https
        region: ap-south-1
      tls:
        secretName: elasticsearch-cert
        namespace: tcsa-system
        tlsCrt:
          key: tls.crt
        caCrt:
          key: ca.crt
      authentication:
        name: elasticsearch-secret-credentials
        namespace: tcsa-system
        usernameKey:
          key: ES_USER_NAME
        passwordKey:
          key: ES_PASSWORD
      indexList:
        - vsa_chaining_history-*
        - vsa_events_history-*
        - vsa_audit-*
        - audit-*
        - vsarole,policy,userpreference,vsa_catalog
# Uncomment these indexes if you want to include them in the backup
#        - vsametrics-*
#        - gateway-mappings
#        - mapping-metadata,mnr-metadata
# Set 'removeAndAddRepository: true' when performing backup/restore to clean up the repository.
#      removeAndAddRepository: true
    kubernetesResources:
      timeout: 10m
      resources:
        - groupVersionResource:
            group: ""
            version: "v1"
            resource: "secrets"
          nameList:
            - name: "spe-pguser"
              namespace: "tcsa-system"
        - groupVersionResource:
            group: ""
            version: "v1"
            resource: "configmaps"
          nameList:
            - name: "product-info"
              namespace: "tcsa-system"
    zookeeper:
      timeout: 10m
      endpoint:
        host: zookeeper.tcsa-system.svc.cluster.local
        port: 2181
      paths:
        - path: /vmware/vsa/gateway
        - path: /vmware/vsa/smarts
#  Uncomment the zookeeper path for NCM backup
#        - path: /vmware/vsa/ncm
  retentionPolicy:
    numberOfDaysToKeep: 45
Note: If you enable the Schedule Export Report feature, you must add or uncomment - grafana_scheduler in the backup configuration file so that the Schedule Export Report configurations are backed up and restored.
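After you apply the Backup manifests, you can track the backup from the command line. A minimal sketch, assuming both Backup resources above are saved in a file named backup.yaml (a hypothetical file name):

# Create the Backup resources on the source cluster
kubectl apply -f backup.yaml

# Watch the backup status across both namespaces
kubectl get backup.tcx.vmware.com -A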

If you did not set acrossClusters to true when you created the backup, you can enable it after backup creation by patching the backup with the following command:
kubectl patch backup <backup-name> -n <namespace> --type='json' -p='[{"op": "replace", "path": "/spec/acrossClusters/enabled", "value": true}]'
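To verify that the patch took effect, you can read the flag back with jsonpath. A sketch, using the group-backup-tps backup from the example above:

kubectl get backup group-backup-tps -n tps-system -o jsonpath='{.spec.acrossClusters.enabled}'

The command prints true once the backup is visible to other clusters.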
After successfully creating the backup, configure MinIO on the target cluster with the same settings as the source cluster, so that both clusters point to the same bucket and credentials.
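One way to keep the credentials identical is to copy the minio-secrets secret from the source cluster to the target cluster. A minimal sketch, assuming kubectl contexts named source and target (hypothetical context names):

# Export the secret from the source cluster; remove cluster-specific
# metadata (resourceVersion, uid, creationTimestamp) before applying
kubectl --context source get secret minio-secrets -n tcsa-system -o yaml > minio-secrets.yaml
kubectl --context target apply -f minio-secrets.yaml

Then, initiate a sync operation using the following example configuration.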
apiVersion: tcx.vmware.com/v1
kind: SyncBackup
metadata:
  name: sync-backup-tps
  namespace: tps-system
spec:
  overrideExisting: false       # set to true if a backup with the same name already exists in the cluster; setting this to true does not delete the existing data
  filter:
    componentList:
      - postgres
    backupList:
      - group-backup-tps
  pauseIntegrityCheck: true
  overrideNamespace:
    targetNamespace: tps-system

  #Uncomment the two lines below ONLY if the backup was taken on version 2.3.1 or 2.4.0. Set "name" to tcsa2.3.1 if the backup was taken on 2.3.1, or to tcsa2.4.0 if taken on 2.4.0
  #cluster:
  #  name: tcsa2.4.0

  storage:
    minio:
      bucket: vmware-tcsa-backup
      endpoint: minio.tcsa-system.svc.cluster.local:9000
      secretRef:
        name: minio-secrets
        namespace: tcsa-system
        accessKey:
          key: root-user
        secretKey:
          key: root-password

---
apiVersion: tcx.vmware.com/v1
kind: SyncBackup
metadata:
  name: sync-backup-tcsa
  namespace: tcsa-system
spec:
  overrideExisting: false      # set to true if a backup with the same name already exists in the cluster; setting this to true does not delete the existing data
  filter:
    componentList:
      - elasticsearch
      - collectors
      - zookeeper
      - kubernetesResources
    backupList:
      - group-backup-tcsa
  pauseIntegrityCheck: true
  overrideNamespace:
    targetNamespace: tcsa-system
  #Uncomment the two lines below ONLY if the backup was taken on version 2.3.1 or 2.4.0. Set "name" to tcsa2.3.1 if the backup was taken on 2.3.1, or to tcsa2.4.0 if taken on 2.4.0
  #cluster:
  #  name: tcsa2.4.0
  storage:
    minio:
      bucket: vmware-tcsa-backup
      endpoint: minio.tcsa-system.svc.cluster.local:9000
      secretRef:
        name: minio-secrets
        namespace: tcsa-system
        accessKey:
          key: root-user
        secretKey:
          key: root-password
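Apply the SyncBackup resources on the target cluster to start the sync. A minimal sketch, assuming both manifests above are saved in a file named sync-backup.yaml (a hypothetical file name):

kubectl apply -f sync-backup.yaml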
After the sync operation completes, the status shows SUCCESSFUL, and the backups stored in the NFS File Server become accessible in the new cluster. You must perform the sync backup process before initiating the restore operation. For more information, see the Sync Backup section.
kubectl get syncbackup.tcx.vmware.com -A
NAMESPACE     NAME               STATUS       CURRENT STATE   READY   AGE     MESSAGE
tcsa-system   sync-backup-tcsa   SUCCESSFUL   syncBackup      True    4h51m   synced: 1, skipped: 0, failed: 0
tps-system    sync-backup-tps    SUCCESSFUL   syncBackup      True    4h51m   synced: 1, skipped: 0, failed: 0

If a sync fails, the MESSAGE field is populated with the corresponding error message. Backups are skipped if the cluster already contains a backup with the same name.
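To inspect a failed or skipped sync in more detail, you can describe the resource and review its events and status conditions. A sketch, using the sync-backup-tcsa resource from the example above:

kubectl describe syncbackup.tcx.vmware.com sync-backup-tcsa -n tcsa-system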