velero: Restore is PartiallyFailed: stderr=ignoring error for /.snapshot: UtimesNano: read-only file system

What steps did you take and what happened: I have a k8s cluster with Velero + restic. I took a backup of a namespace that contains a persistent volume claim; the backup completed properly with Completed status. Next, to restore into the same cluster, I deleted the namespace and tried to restore from S3 object storage. The restore ends up in PartiallyFailed state.

  • k delete ns <namespace>
  • velero restore create --from-backup data-1
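
For completeness, the end-to-end sequence was roughly the following (a sketch: the backup command and the --default-volumes-to-restic flag are assumptions, since only the delete and restore commands are shown above; the pod volume could also have been opted in via annotation):

  # back up the namespace, letting restic handle the pod volumes
  velero backup create data-1 --include-namespaces data --default-volumes-to-restic
  # simulate loss of the namespace, then restore from object storage
  kubectl delete ns data
  velero restore create --from-backup data-1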

restore_describe_data-1-20221108160411.txt

Name:         data-1-20221108160411
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Phase:  PartiallyFailed (run 'velero restore logs data-1-20221108160411' for more information)

Total items to be restored:  8
Items restored:              8

Started:    2022-11-08 16:04:11 +0100 CET
Completed:  2022-11-08 16:04:15 +0100 CET

What did you expect to happen:

The in-cluster NFS storage class is exported as RW.

Velero should restore the backup properly.

velero restore logs

cat restore_data-1-20221108160411.log
...for all restic restores to complete" logSource="pkg/restore/restore.go:551" restore=velero/data-1-20221108160411
time="2022-11-08T15:04:15Z" level=error msg="unable to successfully complete restic restores of pod's volumes" error="pod volume restore failed: error running restic restore, cmd=restic restore --repo=s3:s3-url:10443/s3-poc/restic/data --password-file=/tmp/credentials/velero/velero-restic-credentials-repository-password --cacert=/tmp/cacert-default3812448782 --cache-dir=/scratch/.cache/restic e95771eb --target=., stdout=restoring <Snapshot e95771eb of [/host_pods/605121c3-7da1-4e7d-846f-94e8f9228bad/volumes/kubernetes.io~nfs/pvc-9da69266-e025-4a64-abac-00a172106f29] at 2022-11-08 15:02:14.904234069 +0000 UTC by root@velero> to .\n, stderr=ignoring error for /.snapshot: UtimesNano: read-only file system\nFatal: There were 1 errors\n\n: exit status 1" logSource="pkg/restore/restore.go:1579" restore=velero/data-1-20221108160411


# kubectl logs deployment/velero -n velero

time="2022-11-09T11:57:36Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:131"
time="2022-11-09T11:57:36Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:116"
time="2022-11-09T11:58:36Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:131"
time="2022-11-09T11:58:36Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:116"
time="2022-11-09T11:59:36Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:131"
time="2022-11-09T11:59:36Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:116"
time="2022-11-09T12:00:36Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:131"
time="2022-11-09T12:00:36Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:116"
time="2022-11-09T12:01:36Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:131"
time="2022-11-09T12:01:36Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:116"
# velero backup describe data-1 --details 


Name:         data-1
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.21.14
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=21

Phase:  Completed

Errors:    0
Warnings:  0

Namespaces:
  Included:  data
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2022-11-08 16:02:01 +0100 CET
Completed:  2022-11-08 16:02:16 +0100 CET

Expiration:  2022-12-08 16:02:01 +0100 CET

Total items to be backed up:  24
Items backed up:              24

Resource List:
  v1/ConfigMap:
    - data/istio-ca-root-cert
    - data/kube-root-ca.crt
  v1/Event:
    - data/task-pv-claim.1725a3d09fefd735
    - data/task-pv-claim.1725a3e4b6cd9633
    - data/task-pv-claim.1725a3e4b7400472
    - data/task-pv-claim.1725a3e4b7cebed9
    - data/task-pv-pod.1725a3d8255f92ff
    - data/task-pv-pod.1725a3e198697c42
    - data/task-pv-pod.1725a3e730906a34
    - data/task-pv-pod.1725a3e76a4b693b
    - data/task-pv-pod.1725a3e8dcbc5221
    - data/task-pv-pod.1725a3e8dcbcb033
    - data/task-pv-pod.1725a3e8e28123e7
    - data/task-pv-pod.1725a3e8e2814efd
    - data/task-pv-pod.1725a3fa711d8ff4
    - data/task-pv-pod.1725a3fa7ed85e06
    - data/task-pv-pod.1725a3fa7fb66ba9
    - data/task-pv-pod.1725a3fa847bca60
  v1/Namespace:
    - data
  v1/PersistentVolume:
    - pvc-9da69266-e025-4a64-abac-00a172106f29
  v1/PersistentVolumeClaim:
    - data/task-pv-claim
  v1/Pod:
    - data/task-pv-pod
  v1/Secret:
    - data/default-token-tfp4k
  v1/ServiceAccount:
    - data/default

Velero-Native Snapshots: <none included>

Restic Backups:
  Completed:
    data/task-pv-pod: task-pv-storage
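
For context: the task-pv-storage volume appears under Restic Backups either because the backup ran with --default-volumes-to-restic or because the pod opted in via the per-pod annotation. The annotation variant would look like this (pod and volume names taken from the resource list above; whether this annotation was actually used here is an assumption):

  kubectl -n data annotate pod/task-pv-pod backup.velero.io/backup-volumes=task-pv-storage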

Anything else you would like to add:

The storage class is NFS, and the export is RW.
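
One way to double-check which NFS server and export back the affected PV (PV name taken from the backup's resource list above):

  kubectl get pv pvc-9da69266-e025-4a64-abac-00a172106f29 -o jsonpath='{.spec.nfs}'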

Environment:

  • velero version:
      Client: Version: v1.9.2, Git commit: 82a100981cc66d119cf9b1d121f45c5c9dcf99e1
      Server: Version: v1.9.2
  • velero client config get features: features: <NOT SET>
  • Kubernetes version:
      Client: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0"}
      Server: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.14"}

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.

  • 👍 for "I would like to see this bug fixed as soon as possible"
  • 👎 for "There are more important bugs to focus on right now"

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Comments: 15 (7 by maintainers)

Most upvoted comments

Considering the above, there are two solutions:

  1. Velero skips the .snapshot directory at backup time. At present, Velero doesn't support this.
  2. Hide the .snapshot directory from the NFS client. I found a NetApp doc for this topic (see the command sketch below).
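
For approach 2, on clustered ONTAP the snapshot directory can typically be hidden from NFS clients per volume (a sketch; the vserver and volume names are placeholders, and the exact syntax should be verified against the NetApp doc):

  volume modify -vserver <svm> -volume <volume> -snapdir-access false

On 7-mode systems the equivalent is reportedly vol options <volume> nosnapdir on.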

Approach 2 saved me, thx!!!
