velero: Velero backup create - Restic responds The specified key does not exist
What steps did you take and what happened:
I deployed Velero + restic with MinIO. I use Velero 1.0 and followed https://velero.io/docs/v1.0.0/get-started/ & https://velero.io/docs/v1.0.0/restic/
I've followed these steps on Minishift with success. Now, moving to an on-prem OpenShift cluster, I'm facing an error when trying to create a backup with `velero backup create …`.
My velero install looks like:

```
velero install \
  --provider aws \
  --bucket velero \
  --secret-file ./credentials \
  --use-volume-snapshots=false \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio:9000,publicUrl=https://minio.xx.net \
  --use-restic
```
I have deployed MinIO with the standard template https://github.com/heptio/velero/blob/master/examples/minio/00-minio-deployment.yaml. MinIO is accessible via an exposed route.
What did you expect to happen:
After annotating my pod with `backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1`, I ran `velero backup create httpd -l app=httpd-ex`. The pod has a vSphere PV, and I expected my objects + PV to be backed up to MinIO, but this is not the case. I get an error in the logs of one of my restic pods:
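For context, this is what the annotation looks like on the pod itself (a sketch; the volume name here is illustrative):

```yaml
# Pod metadata excerpt: Velero's restic integration backs up only the
# volumes listed in this annotation (comma-separate multiple volume names).
metadata:
  annotations:
    backup.velero.io/backup-volumes: httpd-storage
```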
```
oc logs restic-l9g7d
Updating certificates in /etc/ssl/certs...
6 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
time="2019-08-22T13:21:08Z" level=info msg="Setting log-level to INFO"
time="2019-08-22T13:21:08Z" level=info msg="Starting Velero restic server v1.0.0 (72f5cadc3a865019ab9dc043d4952c9bfd5f2ecb)" logSource="pkg/cmd/cli/restic/server.go:57"
time="2019-08-22T13:21:08Z" level=info msg="Starting controllers" logSource="pkg/cmd/cli/restic/server.go:145"
time="2019-08-22T13:21:08Z" level=info msg="Controllers started successfully" logSource="pkg/cmd/cli/restic/server.go:186"
time="2019-08-22T13:21:08Z" level=info msg="Starting controller" controller=pod-volume-backup logSource="pkg/controller/generic_controller.go:76"
time="2019-08-22T13:21:08Z" level=info msg="Waiting for caches to sync" controller=pod-volume-backup logSource="pkg/controller/generic_controller.go:79"
time="2019-08-22T13:21:08Z" level=info msg="Starting controller" controller=pod-volume-restore logSource="pkg/controller/generic_controller.go:76"
time="2019-08-22T13:21:08Z" level=info msg="Waiting for caches to sync" controller=pod-volume-restore logSource="pkg/controller/generic_controller.go:79"
time="2019-08-22T13:21:08Z" level=info msg="Caches are synced" controller=pod-volume-restore logSource="pkg/controller/generic_controller.go:83"
time="2019-08-22T13:21:08Z" level=info msg="Caches are synced" controller=pod-volume-backup logSource="pkg/controller/generic_controller.go:83"
time="2019-08-22T13:22:40Z" level=info msg="Backup starting" backup=velero/httpd-tobackup8 controller=pod-volume-backup logSource="pkg/controller/pod_volume_backup_controller.go:171" name=httpd-tobackup8-hmdgn namespace=velero
time="2019-08-22T13:22:40Z" level=error msg="Error running command=restic backup --repo=s3:https://minio.xx.net/velero/restic/velero --password-file=/tmp/velero-restic-credentials-velero198456678 --cache-dir=/scratch/.cache/restic . --tag=pod=httpd-tobackup-2-gpxx4 --tag=pod-uid=9aaa99a4-c4da-11e9-b2b7-005056bb85aa --tag=volume=httpd-storage --tag=backup=httpd-tobackup8 --tag=backup-uid=ea2afa29-c4df-11e9-bc6c-005056bb17e1 --tag=ns=velero --host=velero, stdout=open repository\n, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location?\ns3:https://minio.xx.net/velero/restic/velero\n" backup=velero/httpd-tobackup8 controller=pod-volume-backup error="exit status 1" error.file="/go/src/github.com/heptio/velero/pkg/controller/pod_volume_backup_controller.go:232" error.function="github.com/heptio/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:232" name=httpd-tobackup8-hmdgn namespace=velero
```
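Reading the error above, the restic repository identifier appears to be composed as `s3:<url>/<bucket>/restic/<namespace>` (an inference from this log, not official documentation) — and it is built from the `publicUrl` (`https://minio.xx.net`), not the `s3Url` passed at install time. A small sketch of the composition:

```shell
# Compose the restic repo identifier the way the error message suggests:
# s3:<url>/<bucket>/restic/<namespace>. The failing repo uses the publicUrl
# instead of this s3Url value.
s3Url="http://minio:9000"
bucket="velero"
namespace="velero"
repo="s3:${s3Url}/${bucket}/restic/${namespace}"
echo "${repo}"   # → s3:http://minio:9000/velero/restic/velero
```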
The `podvolumebackup` resource reports the same error message.
The output of the following commands will help us better understand what’s going on: (Pasting long output into a GitHub gist or other pastebin is fine.)
`velero backup logs <backupname>`:
https://gist.github.com/ludovicbonivert/5785452b9aec22430c179190432f1708
Anything else you would like to add:
I've added my enterprise certs to the restic pods since they were complaining about an unknown authority.
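One way to do that (a sketch, not the exact manifests used here; the ConfigMap name and mount path are assumptions) is to mount the CA bundle into the restic daemonset where the image's `update-ca-certificates` run at startup can pick it up:

```yaml
# Hypothetical excerpt from the restic DaemonSet pod spec: mount an
# enterprise CA (stored in a ConfigMap named "enterprise-ca") into the
# system trust directory so it is added to the trust store at startup.
spec:
  template:
    spec:
      containers:
      - name: restic
        volumeMounts:
        - name: enterprise-ca
          mountPath: /usr/local/share/ca-certificates/enterprise-ca.crt
          subPath: enterprise-ca.crt
      volumes:
      - name: enterprise-ca
        configMap:
          name: enterprise-ca
```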
Environment:
- Velero version (use `velero version`): v1.0
- Kubernetes version (use `kubectl version`): v1.11
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 15 (5 by maintainers)
I see from your `velero install` command that you have an `s3Url` of `http://minio:9000`, but it looks like your restic repo was initialized using the `publicUrl` value, so I'm wondering if you had a previous install where the `publicUrl` value was used for the `s3Url`?

The first thing I'd try is:
1. delete the contents of the `velero` bucket in MinIO (assuming you don't have anything critical in there)
2. `kubectl -n velero delete resticrepositories --all`
3. run `velero backup create`, which will re-create the restic repo.

I'd like to see if it initializes the repo with the `s3Url` value rather than the `publicUrl` value, and if that fixes things at all.

`error="fork/exec /usr/bin/restic: permission denied"`

This was linked to the restic pods and the velero pod being run under the restricted SCC. I had to add `securityContext: privileged: true` to the velero deployment & restic daemonset. Now it works flawlessly in any namespace!
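For anyone hitting the same permission error, the change described above would look roughly like this on the restic DaemonSet (a sketch; the same `securityContext` goes on the container in the velero Deployment):

```yaml
# Hypothetical excerpt from the restic DaemonSet pod spec: run the
# container privileged so /usr/bin/restic can be executed under the
# cluster's restrictive SCC.
spec:
  template:
    spec:
      containers:
      - name: restic
        securityContext:
          privileged: true
```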