velero: Restoring from AWS S3 in a different region is not working (File System Backup)

What steps did you take and what happened:

I have two EKS clusters in the regions eu-west-1 (primary) and eu-central-1 (backup). I created a backup using Velero with Restic (I’m using EFS storage) and stored it in an S3 bucket in the eu-west-1 region. Now I am trying to restore resources, including PVs and PVCs, into the eu-central-1 cluster from the backups taken in eu-west-1. However, the restore operation fails with the following error:
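
For context, a minimal sketch of the flow described above, assuming the velero CLI on both clusters; the backup name "test" is only an assumption inferred from the restore name in the log below, and "my-test-backups" is the namespace taken from the Restic repository path:

# On the eu-west-1 (primary) cluster: file system (Restic) backup of the namespace.
velero backup create test --include-namespaces my-test-backups --default-volumes-to-fs-backup

# On the eu-central-1 (backup) cluster, pointed at the same S3 bucket:
velero restore create --from-backup test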

time="2023-08-30T20:13:43Z" level=error msg="unable to successfully complete pod volume restores of pod's volumes" error="backup repository is not ready: error running command=restic init --repo=s3:s3-eu-west-1.amazonaws.com/velero-tc-eks-backups-dev/restic/my-test-backups --password-file=/tmp/credentials/velero/velero-repo-credentials-repository-password --cache-dir=/scratch/.cache/restic, stdout=, stderr=Fatal: create repository at s3:s3-eu-west-1.amazonaws.com/velero-tc-eks-backups-dev/restic/my-test-backups failed: client.BucketExists: 301 Moved Permanently\n\n: exit status 1" logSource="pkg/restore/restore.go:1699" restore=velero/test-20230830221335
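
For what it's worth, an S3 "301 Moved Permanently" usually means the request was sent to an endpoint in a region other than the bucket's actual region. A hedged diagnostic sketch, run from a pod or node in the eu-central-1 cluster; it assumes the AWS CLI and restic are available, that credentials are present in the environment, and that restic's S3 backend honors AWS_DEFAULT_REGION (an assumption worth verifying):

# Confirm which region S3 itself reports for the bucket (should be eu-west-1 here):
aws s3api get-bucket-location --bucket velero-tc-eks-backups-dev

# Exercise the same repository manually with the region pinned via the environment,
# to see whether the 301 only appears when the client has to guess the region:
AWS_DEFAULT_REGION=eu-west-1 restic snapshots \
  --repo=s3:s3-eu-west-1.amazonaws.com/velero-tc-eks-backups-dev/restic/my-test-backups \
  --password-file=/tmp/credentials/velero/velero-repo-credentials-repository-password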

backupstoragelocations.velero.io:

➜ k -n velero get backupstoragelocations.velero.io default -o yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  annotations:
    meta.helm.sh/release-name: velero
    meta.helm.sh/release-namespace: velero
  creationTimestamp: "2023-08-30T18:33:01Z"
  generation: 250
  labels:
    app.kubernetes.io/instance: velero
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: velero
    helm.sh/chart: velero-5.0.2
  name: default
  namespace: velero
  resourceVersion: "14228053"
  uid: 1483aea5-0a73-4edf-a58e-791e8eba6083
spec:
  accessMode: ReadWrite
  config:
    region: eu-west-1
  default: true
  objectStorage:
    bucket: velero-tc-eks-backups-dev
  provider: aws
status:
  lastSyncedTime: "2023-08-30T20:54:53Z"
  lastValidationTime: "2023-08-30T20:55:13Z"
  phase: Available
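
For reference, a hedged sketch of how an equivalent location could be registered on the eu-central-1 restore cluster with the velero CLI instead of Helm; the bucket and region come from the BSL above, everything else is illustrative:

# Sketch: pointing the restore cluster at the eu-west-1 bucket.
velero backup-location create default \
  --provider aws \
  --bucket velero-tc-eks-backups-dev \
  --config region=eu-west-1 \
  --access-mode ReadOnly   # ReadOnly is a common choice for a pure restore target; the BSL above uses ReadWrite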

What did you expect to happen: I expected the restore to succeed, just as it does for clusters within the same region.

The following information will help us better understand what’s going on: bundle-2023-08-31-11-14-04.tar.gz

Anything else you would like to add:

Environment:

  • Velero version (use velero version): Client v1.11.1, Server v1.11.1
  • Velero features (use velero client config get features): <NOT SET>
  • Kubernetes version (use kubectl version): Client v1.24.1 (Kustomize v4.5.4), Server v1.23.17-eks-2d98532
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Amazon Linux 2

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project’s top voted issues listed here.
Use the “reaction smiley face” up to the right of this comment to vote.

  • 👍 for “I would like to see this bug fixed as soon as possible”
  • 👎 for “There are more important bugs to focus on right now”

About this issue

  • Original URL
  • State: open
  • Created 10 months ago
  • Reactions: 3
  • Comments: 15 (6 by maintainers)

Most upvoted comments

Hi @natkondrashova and @s3than,

I just reproduced this issue using IRSA, as in your setups, on both Velero 1.11 and Velero main, so this is a bug; we will find the root cause and fix it.

Thanks for reporting this issue!

BTW, if you are not tied to Restic for file system backup, you can use Kopia in Velero 1.12-RC.1 or Velero main to back up pod volumes, since Kopia does not hit this issue in either Velero 1.12.x or main. Also, the default uploader has been Kopia instead of Restic since Velero 1.12.x.

Kopia still has an issue in Velero 1.11, because a fix that already landed in later versions has not been backported to 1.11.x; we will decide soon whether to fix it in 1.11.x.
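
A hedged sketch of the suggested workaround, switching the file system backup uploader to Kopia at install time (flag names are from the velero CLI; the plugin version is illustrative). Note that backups already taken with Restic are still restored with Restic, so the uploader type only affects new backups:

# Sketch: installing Velero 1.12 with Kopia as the file system backup uploader.
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.8.0 \
  --bucket velero-tc-eks-backups-dev \
  --backup-location-config region=eu-west-1 \
  --use-node-agent \
  --uploader-type kopia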

Thanks for your effort and the update @danfengliu.