velero: Restic Repo NotReady v1.1.0 "unable to open cache: MkdirAll: mkdir /nonexistent: permission denied\n"

What steps did you take and what happened:

The restic repositories seem to go to NotReady after upgrading to v1.1.0.

Error messages:

time="2019-10-01T18:20:14Z" level=debug msg="Running processQueueItem" controller=restic-repository key=velero/redacted-pod-default-2fjtj logSource="pkg/controller/restic_repository_controller.go:102"
time="2019-10-01T18:20:14Z" level=debug msg="Checking repository for stale locks" controller=restic-repository logSource="pkg/controller/restic_repository_controller.go:131" name=redacted-pod-default-2fjtj namespace=velero
time="2019-10-01T18:20:15Z" level=debug msg="Ran restic command" command="restic unlock --repo=s3:s3-us-east-1.amazonaws.com/redacted-bucket/restic/redacted-pod --password-file=/tmp/velero-restic-credentials-redacted-pod163491815" logSource="pkg/restic/repository_manager.go:276" repository=redacted-pod stderr="unable to open cache: MkdirAll: mkdir /nonexistent: permission denied\n" stdout="successfully removed locks\n"

What did you expect to happen:

The restic repositories should be in the Ready phase.

The output of the following commands will help us better understand what’s going on: (Pasting long output into a GitHub gist or other pastebin is fine.)

  • kubectl logs deployment/velero -n velero
  • velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
  • velero backup logs <backupname>
  • velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
  • velero restore logs <restorename>
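Beyond the server logs, the restic DaemonSet pods perform the actual per-volume backups, so their logs may show whether the same cache error appears there as well (a sketch, assuming velero was installed with --use-restic so a DaemonSet named restic exists in the velero namespace):

$ kubectl -n velero get daemonset restic
$ kubectl -n velero logs daemonset/restic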

Anything else you would like to add:

Environment:

  • Velero version (use velero version):
$ velero version
Client:
	Version: v1.1.0
	Git commit: a357f21aec6b39a8244dd23e469cc4519f1fe608
Server:
	Version: v1.1.0
  • Velero features (use velero client config get features):
$ velero client config get features
features: <NOT SET>
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration: EKS
  • OS (e.g. from /etc/os-release):

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 17 (8 by maintainers)

Most upvoted comments

I have set the memory requests to 2Gi and limits to 4Gi. I've not seen any pods evicted, but I see this was the resolution in https://github.com/vmware-tanzu/velero/issues/1857.

Will monitor
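For anyone landing here with the same symptom, this is a minimal sketch of how the resource settings mentioned above could be applied to the restic DaemonSet, assuming a default install in the velero namespace and a container named restic (only the memory values from the comment are set):

$ kubectl -n velero patch daemonset restic --type strategic --patch '
spec:
  template:
    spec:
      containers:
      - name: restic
        resources:
          requests:
            memory: 2Gi
          limits:
            memory: 4Gi
'

Patching the pod template triggers a rolling restart of the restic pods, so the new limits take effect as each pod is recreated.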