velero: Velero+restic backup gets stuck when NFS persistent volume is included
What steps did you take and what happened: I am testing Velero to back up my application.
In a minikube deployment, I followed the basic mysql + wordpress tutorial here: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
I set up an NFS server in a VM and changed the example to use an NFS persistent volume instead.
The mysql pod config I used is the following:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-nfs-pv
  labels:
    app: wordpress
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.120.177
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-nfs-pvc
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-nfs-pvc
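Before involving Velero, it is worth confirming that the claim actually bound and that the pod can write through the NFS mount. A quick check, assuming the manifests above were applied in the default namespace (the write-test path is just an example file):

```shell
# Confirm the PV/PVC pair is Bound (a Pending PVC would explain scheduling warnings)
kubectl get pv mysql-nfs-pv
kubectl get pvc mysql-nfs-pvc -n default

# Exercise the NFS mount from inside the mysql container
kubectl exec deploy/wordpress-mysql -n default -- \
  sh -c 'touch /var/lib/mysql/.nfs-write-test && rm /var/lib/mysql/.nfs-write-test'
```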
The example also has wordpress-deployment.yaml and kustomization.yaml files, which I did not modify.
After deployment:
kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-node-7bf657c596-xbghl 1/1 Running 0 3d
wordpress-5bbd7fd785-b87zj 1/1 Running 1 118s
wordpress-mysql-5dcc45d9f9-l4fzs 1/1 Running 0 118s
$ kubectl describe pod wordpress-mysql-5dcc45d9f9-l4fzs
Name: wordpress-mysql-5dcc45d9f9-l4fzs
Namespace: default
Priority: 0
Node: minikube/192.168.120.166
Start Time: Fri, 17 Jul 2020 17:28:47 -0700
Labels: app=wordpress
pod-template-hash=5dcc45d9f9
tier=mysql
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/wordpress-mysql-5dcc45d9f9
Containers:
mysql:
Container ID: docker://60ca381a170fb123f0e3851382ff39bbae12fb76cd50dee8e4ff479c1793882a
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:19a164794d3cef15c9ac44754604fa079adb448f82d40e4b8be8381148c785fa
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 17 Jul 2020 17:28:48 -0700
Ready: True
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'password' in secret 'mysql-pass-c57bb4t7mf'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-q77zk (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-nfs-pvc
ReadOnly: false
default-token-q77zk:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-q77zk
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m33s (x2 over 2m33s) default-scheduler persistentvolumeclaim "mysql-nfs-pvc" not found
Normal Scheduled 2m31s default-scheduler Successfully assigned default/wordpress-mysql-5dcc45d9f9-l4fzs to minikube
Normal Pulled 2m30s kubelet, minikube Container image "mysql:5.6" already present on machine
Normal Created 2m30s kubelet, minikube Created container mysql
Normal Started 2m30s kubelet, minikube Started container mysql
The S3 endpoint is a vanilla Minio S3 server. I installed velero with:
$ velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.0.0 --bucket backup --secret-file ./credentials-velero --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://172.27.254.245:9000
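Note that this install command does not pass `--use-restic`, which is the flag that deploys the restic daemonset; without that daemonset, annotated pod volumes have nothing to process them. A sketch of sanity checks after install (resource names are the defaults `velero install` creates):

```shell
# The BackupStorageLocation should exist and point at the Minio endpoint
kubectl -n velero get backupstoragelocation default -o yaml

# The restic daemonset must be present for pod volume backups;
# it is only created when velero is installed with --use-restic
kubectl -n velero get daemonset restic
```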
I started a backup (without the restic annotation) and everything works perfectly for metadata:
$ velero backup create mysql-backup-no-nfs
$ velero backup describe mysql-backup-no-nfs
Name: mysql-backup-no-nfs
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/source-cluster-k8s-gitversion=v1.18.3
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=18
Phase: Completed
Errors: 0
Warnings: 0
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Storage Location: default
Velero-Native Snapshot PVs: auto
TTL: 720h0m0s
Hooks: <none>
Backup Format Version: 1
Started: 2020-07-17 17:30:02 -0700 PDT
Completed: 2020-07-17 17:30:09 -0700 PDT
Expiration: 2020-08-16 17:30:02 -0700 PDT
Total items to be backed up: 427
Items backed up: 427
Velero-Native Snapshots: <none included>
However, after adding the annotation, the backup gets stuck:
$ kubectl -n default annotate pod/wordpress-mysql-5dcc45d9f9-l4fzs backup.velero.io/backup-volumes=mysql-persistent-storage
pod/wordpress-mysql-5dcc45d9f9-l4fzs annotated
$ velero backup create mysql-backup-with-nfs
Backup request "mysql-backup-with-nfs" submitted successfully.
$ velero backup describe mysql-backup-with-nfs
Name: mysql-backup-with-nfs
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/source-cluster-k8s-gitversion=v1.18.3
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=18
Phase: InProgress
Errors: 0
Warnings: 0
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Storage Location: default
Velero-Native Snapshot PVs: auto
TTL: 720h0m0s
Hooks: <none>
Backup Format Version: 1
Started: 2020-07-17 17:33:38 -0700 PDT
Completed: <n/a>
Expiration: 2020-08-16 17:33:38 -0700 PDT
Estimated total items to be backed up: 410
Items backed up so far: 4
Velero-Native Snapshots: <none included>
Restic Backups (specify --details for more information):
New: 1
These are the last messages in the log:
$ kubectl logs deployment/velero -n velero
...
time="2020-07-18T00:33:40Z" level=info msg="Backed up 4 items out of an estimated total of 410 (estimate will change throughout the backup)" backup=velero/mysql-backup-with-nfs logSource="pkg/backup/backup.go:411" name=wordpress-5bbd7fd785-b87zj namespace=default progress= resource=pods
time="2020-07-18T00:33:40Z" level=info msg="Processing item" backup=velero/mysql-backup-with-nfs logSource="pkg/backup/backup.go:371" name=wordpress-mysql-5dcc45d9f9-l4fzs namespace=default progress= resource=pods
time="2020-07-18T00:33:40Z" level=info msg="Backing up item" backup=velero/mysql-backup-with-nfs logSource="pkg/backup/item_backupper.go:120" name=wordpress-mysql-5dcc45d9f9-l4fzs namespace=default resource=pods
time="2020-07-18T00:33:40Z" level=info msg="Executing custom action" backup=velero/mysql-backup-with-nfs logSource="pkg/backup/item_backupper.go:326" name=wordpress-mysql-5dcc45d9f9-l4fzs namespace=default resource=pods
time="2020-07-18T00:33:40Z" level=info msg="Executing podAction" backup=velero/mysql-backup-with-nfs cmd=/velero logSource="pkg/backup/pod_action.go:51" pluginName=velero
time="2020-07-18T00:33:40Z" level=info msg="Adding pvc mysql-nfs-pvc to additionalItems" backup=velero/mysql-backup-with-nfs cmd=/velero logSource="pkg/backup/pod_action.go:67" pluginName=velero
time="2020-07-18T00:33:40Z" level=info msg="Done executing podAction" backup=velero/mysql-backup-with-nfs cmd=/velero logSource="pkg/backup/pod_action.go:77" pluginName=velero
time="2020-07-18T00:33:40Z" level=info msg="Backing up item" backup=velero/mysql-backup-with-nfs logSource="pkg/backup/item_backupper.go:120" name=mysql-nfs-pvc namespace=default resource=persistentvolumeclaims
time="2020-07-18T00:33:40Z" level=info msg="Executing custom action" backup=velero/mysql-backup-with-nfs logSource="pkg/backup/item_backupper.go:326" name=mysql-nfs-pvc namespace=default resource=persistentvolumeclaims
time="2020-07-18T00:33:40Z" level=info msg="Executing PVCAction" backup=velero/mysql-backup-with-nfs cmd=/velero logSource="pkg/backup/backup_pv_action.go:49" pluginName=velero
time="2020-07-18T00:33:40Z" level=info msg="Backing up item" backup=velero/mysql-backup-with-nfs logSource="pkg/backup/item_backupper.go:120" name=mysql-nfs-pv namespace= resource=persistentvolumes
time="2020-07-18T00:33:40Z" level=info msg="Executing takePVSnapshot" backup=velero/mysql-backup-with-nfs logSource="pkg/backup/item_backupper.go:404" name=mysql-nfs-pv namespace= resource=persistentvolumes
time="2020-07-18T00:33:40Z" level=info msg="Skipping snapshot of persistent volume because volume is being backed up with restic." backup=velero/mysql-backup-with-nfs logSource="pkg/backup/item_backupper.go:422" name=mysql-nfs-pv namespace= persistentVolume=mysql-nfs-pv resource=persistentvolumes
time="2020-07-18T00:33:40Z" level=info msg="Initializing restic repository" controller=restic-repository logSource="pkg/controller/restic_repository_controller.go:155" name=default-default-8rhfs namespace=velero
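The log stops right after "Initializing restic repository", so the next place to look is the ResticRepository custom resource and the PodVolumeBackup for the annotated volume. A hedged diagnostic sketch (CRD names are those Velero 1.4 uses for its restic integration):

```shell
# The repository should reach phase Ready; a NotReady phase with a message
# usually explains why the backup hangs
kubectl -n velero get resticrepositories -o yaml

# Each annotated volume gets a PodVolumeBackup; if none exists,
# the restic daemonset never picked up the volume
kubectl -n velero get podvolumebackups -o yaml
```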
I enabled debug logs as described in https://velero.io/docs/master/troubleshooting/. I also re-deployed from a clean state, and the pod name changed to wordpress-mysql-5dcc45d9f9-2zgkn.
$ date; velero backup create mysql-backup-with-nfs2
Fri Jul 17 17:40:48 PDT 2020
Backup request "mysql-backup-with-nfs2" submitted successfully.
Run `velero backup describe mysql-backup-with-nfs2` or `velero backup logs mysql-backup-with-nfs2` for more details.
$ kubectl logs deployment/velero -n velero
...
time="2020-07-18T00:40:50Z" level=info msg="Skipping snapshot of persistent volume because volume is being backed up with restic." backup=velero/mysql-backup-with-nfs2 logSource="pkg/backup/item_backupper.go:422" name=mysql-nfs-pv namespace= persistentVolume=mysql-nfs-pv resource=persistentvolumes
time="2020-07-18T00:40:50Z" level=debug msg="Executing post hooks" backup=velero/mysql-backup-with-nfs2 logSource="pkg/backup/item_backupper.go:204" name=mysql-nfs-pv namespace= resource=persistentvolumes
time="2020-07-18T00:40:50Z" level=debug msg="Resource persistentvolumes/mysql-nfs-pv, version= v1, preferredVersion=v1" backup=velero/mysql-backup-with-nfs2 logSource="pkg/backup/item_backupper.go:255" name=mysql-nfs-pv namespace= resource=persistentvolumes
time="2020-07-18T00:40:50Z" level=debug msg="Skipping action because it does not apply to this resource" backup=velero/mysql-backup-with-nfs2 logSource="pkg/backup/item_backupper.go:307" name=mysql-nfs-pvc namespace=default resource=persistentvolumeclaims
time="2020-07-18T00:40:50Z" level=debug msg="Executing post hooks" backup=velero/mysql-backup-with-nfs2 logSource="pkg/backup/item_backupper.go:204" name=mysql-nfs-pvc namespace=default resource=persistentvolumeclaims
time="2020-07-18T00:40:50Z" level=debug msg="Resource persistentvolumeclaims/mysql-nfs-pvc, version= v1, preferredVersion=v1" backup=velero/mysql-backup-with-nfs2 logSource="pkg/backup/item_backupper.go:255" name=mysql-nfs-pvc namespace=default resource=persistentvolumeclaims
time="2020-07-18T00:40:50Z" level=debug msg="Skipping action because it does not apply to this resource" backup=velero/mysql-backup-with-nfs2 logSource="pkg/backup/item_backupper.go:307" name=wordpress-mysql-5dcc45d9f9-2zgkn namespace=default resource=pods
time="2020-07-18T00:40:50Z" level=debug msg="Skipping action because it does not apply to this resource" backup=velero/mysql-backup-with-nfs2 logSource="pkg/backup/item_backupper.go:307" name=wordpress-mysql-5dcc45d9f9-2zgkn namespace=default resource=pods
time="2020-07-18T00:40:50Z" level=debug msg="Acquiring lock" backupLocation=default logSource="pkg/restic/repository_ensurer.go:122" volumeNamespace=default
time="2020-07-18T00:40:50Z" level=debug msg="Acquired lock" backupLocation=default logSource="pkg/restic/repository_ensurer.go:131" volumeNamespace=default
time="2020-07-18T00:40:50Z" level=debug msg="No repository found, creating one" backupLocation=default logSource="pkg/restic/repository_ensurer.go:151" volumeNamespace=default
time="2020-07-18T00:40:50Z" level=debug msg="Running processQueueItem" controller=restic-repository key=velero/default-default-tbrht logSource="pkg/controller/restic_repository_controller.go:110"
time="2020-07-18T00:40:50Z" level=info msg="Initializing restic repository" controller=restic-repository logSource="pkg/controller/restic_repository_controller.go:155" name=default-default-tbrht namespace=velero
time="2020-07-18T00:40:51Z" level=debug msg="Backup has not expired yet, skipping" backup=velero/mysql-backup-with-nfs2 controller=gc-controller expiration="2020-08-17 00:40:49 +0000 UTC" logSource="pkg/controller/gc_controller.go:127"
time="2020-07-18T00:40:51Z" level=debug msg="Ran restic command" command="restic snapshots --repo=s3:http://172.27.254.245:9000/backup/restic/default --password-file=/tmp/velero-restic-credentials-default827864671 --cache-dir=/scratch/.cache/restic --last" logSource="pkg/restic/repository_manager.go:291" repository=default stderr= stdout="created new cache in /scratch/.cache/restic\n"
time="2020-07-18T00:40:51Z" level=debug msg="Released lock" backupLocation=default logSource="pkg/restic/repository_ensurer.go:128" volumeNamespace=default
time="2020-07-18T00:41:12Z" level=debug msg="Checking for existing backup storage locations to sync into cluster" controller=backup-sync logSource="pkg/controller/backup_sync_controller.go:124"
time="2020-07-18T00:41:12Z" level=debug msg="Checking if backups need to be synced at this time for this location" backupLocation=default controller=backup-sync logSource="pkg/controller/backup_sync_controller.go:156"
time="2020-07-18T00:41:12Z" level=debug msg="Checking backup location for backups to sync into cluster" backupLocation=default controller=backup-sync logSource="pkg/controller/backup_sync_controller.go:163"
time="2020-07-18T00:41:12Z" level=debug msg="looking for plugin in registry" controller=backup-sync kind=ObjectStore logSource="pkg/plugin/clientmgmt/manager.go:99" name=velero.io/aws
time="2020-07-18T00:41:12Z" level=debug msg="creating new restartable plugin process" command=/plugins/velero-plugin-for-aws controller=backup-sync kind=ObjectStore logSource="pkg/plugin/clientmgmt/manager.go:114" name=velero.io/aws
time="2020-07-18T00:41:12Z" level=debug msg="starting plugin" args="[/plugins/velero-plugin-for-aws --log-level debug --features ]" cmd=/plugins/velero-plugin-for-aws controller=backup-sync logSource="pkg/plugin/clientmgmt/logrus_adapter.go:74" path=/plugins/velero-plugin-for-aws
time="2020-07-18T00:41:12Z" level=debug msg="plugin started" cmd=/plugins/velero-plugin-for-aws controller=backup-sync logSource="pkg/plugin/clientmgmt/logrus_adapter.go:74" path=/plugins/velero-plugin-for-aws pid=88
time="2020-07-18T00:41:12Z" level=debug msg="waiting for RPC address" cmd=/plugins/velero-plugin-for-aws controller=backup-sync logSource="pkg/plugin/clientmgmt/logrus_adapter.go:74" path=/plugins/velero-plugin-for-aws
time="2020-07-18T00:41:12Z" level=debug msg="plugin address" address=/tmp/plugin150964435 cmd=/plugins/velero-plugin-for-aws controller=backup-sync logSource="pkg/plugin/clientmgmt/logrus_adapter.go:74" network=unix pluginName=velero-plugin-for-aws
...
Full output is here: velero.txt
What did you expect to happen: The backup should either fail or succeed, not stay InProgress indefinitely.
The output of the following commands will help us better understand what's going on: (Pasting long output into a GitHub gist or other pastebin is fine.)
- kubectl logs deployment/velero -n velero: velero.txt
- velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml:
$ velero backup describe mysql-backup-with-nfs2
Name: mysql-backup-with-nfs2
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/source-cluster-k8s-gitversion=v1.18.3
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=18
Phase: InProgress
Errors: 0
Warnings: 0
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Storage Location: default
Velero-Native Snapshot PVs: auto
TTL: 720h0m0s
Hooks: <none>
Backup Format Version: 1
Started: 2020-07-17 17:40:49 -0700 PDT
Completed: <n/a>
Expiration: 2020-08-16 17:40:49 -0700 PDT
Estimated total items to be backed up: 443
Items backed up so far: 4
Velero-Native Snapshots: <none included>
Restic Backups (specify --details for more information):
New: 1
- velero backup logs <backupname>:
$ velero backup logs mysql-backup-with-nfs2
Logs for backup "mysql-backup-with-nfs2" are not available until it's finished processing. Please wait until the backup has a phase of Completed or Failed and try again.
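While a backup is still InProgress the log bundle is not downloadable, but the per-volume restic state is visible through other commands; a sketch assuming the same backup name as above:

```shell
# --details shows per-volume restic progress even for an InProgress backup
velero backup describe mysql-backup-with-nfs2 --details

# PodVolumeBackups carry a label with the owning backup's name
kubectl -n velero get podvolumebackups \
  -l velero.io/backup-name=mysql-backup-with-nfs2 -o yaml
```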
- velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml: NA
- velero restore logs <restorename>: NA
Anything else you would like to add: NA
Environment:
- Velero version (use velero version):
$ velero version
Client:
Version: v1.4.2
Git commit: -
Server:
Version: v1.4.2
- Velero features (use velero client config get features):
$ velero client config get features
features: <NOT SET>
- Kubernetes version (use kubectl version):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes installer & version: minikube (vanilla)
- Cloud provider or hardware configuration: macOS
- OS (e.g. from /etc/os-release): macOS
THANKS!!
Vote on this issue!
This is an invitation to the Velero community to vote on issues, you can see the project’s top voted issues listed here.
Use the “reaction smiley face” up to the right of this comment to vote.
- 👍 for “I would like to see this bug fixed as soon as possible”
- 👎 for “There are more important bugs to focus on right now”
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 13
- Comments: 25 (4 by maintainers)
kubeadm + NFS server + Minio without cacert works fine. What I faced with cacert was a "restic repository is not ready" error.

Hello there, the same thing is happening on my OpenShift 4.3 cluster. Velero and restic are installed and running properly.
kubectl get pod -n velero
NAME                      READY   STATUS    RESTARTS   AGE
restic-2zvfr              1/1     Running   0          6d22h
restic-72lhw              1/1     Running   0          6d22h
restic-7vzsh              1/1     Running   0          6d22h
restic-8t24h              1/1     Running   0          6d22h
restic-f6m5r              1/1     Running   0          6d22h
restic-f9r47              1/1     Running   0          6d22h
restic-h7nnh              1/1     Running   0          6d22h
restic-lf65l              1/1     Running   0          6d22h
restic-mqfg9              1/1     Running   0          6d22h
restic-p6hwj              1/1     Running   0          6d22h
restic-sgjhj              1/1     Running   0          6d22h
velero-7d5db6bd4f-tsx28   1/1     Running   0          13d
I get this error when I check the restic pod logs:
kubectl logs restic-h7nnh -n velero | grep error
time="2020-08-03T10:24:39Z" level=error msg="Error running command=restic backup --repo=s3:s3-u-west-2.amazonaws.com/velero-backup/restic/test4 --password-file=/tmp/velero-restic-credentials-test5556019 --cache-dir=/scratch/.cache/restic . --tag=pod-uid=934c-433a-b810-05f1ffb8 --tag=volume=cfg --tag=backup=help --tag=backup-uid=47c2-b322-589a09f967df --tag=ns=test4 --tag=pod=jarvis-api-2466m --host=velero --json, stdout=, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location?\ns3:s3-u-west-2.amazonaws.com/velero-backup/restic/test4\n" backup=velero/help controller=pod-volume-backup error="unable to find summary in restic backup command output"
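That stderr ("Is there a repository at the following location?") means restic found no repository config object in the bucket. This can be checked directly, assuming the bucket and prefix from the log above and working S3 credentials (both commands are examples, not part of Velero itself):

```shell
# List what actually exists under the restic prefix for the namespace
aws s3 ls s3://velero-backup/restic/test4/ --recursive

# Or ask restic itself; it exits non-zero if the repository was never
# initialized (RESTIC_PASSWORD must be set to the repo password)
restic snapshots --repo=s3:s3-u-west-2.amazonaws.com/velero-backup/restic/test4
```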
velero backup describe 000-test4-okd-nprod --details
Name:         000-test4-okd-nprod
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.16.2
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=16+
Phase:  PartiallyFailed (run velero backup logs 000-test4-okd-nprod for more information)
Errors:    6
Warnings:  0
Namespaces:
  Included:  test4
  Excluded:  <none>
Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto
Label selector:  <none>
Storage Location:  default
Velero-Native Snapshot PVs:  auto
TTL:  720h0m0s
Hooks:  <none>
Backup Format Version:  1
Started:    2020-08-06 11:19:12 +0000 UTC
Completed:  2020-08-06 11:19:30 +0000 UTC
Expiration:  2020-09-05 11:19:12 +0000 UTC
Total items to be backed up:  176
Items backed up:              176
Resource List:
  apps/v1/ControllerRevision:
    - test4/logstash-65986dc4fb
    - test4/logstash-85894fbfc
    - test4/redis-6b475c8b5f
    - test4/redis-6f8c784ddf
  apps/v1/Deployment:
    - test4/api-node
    - test4/web-ui
    - test4/worker
  apps/v1/ReplicaSet:
    - test4/api-node-65d6b9dcb7
    - test4/web-ui-575ccf9b95
    - test4/worker-77fc978c87
  apps/v1/StatefulSet:
    - test4/postgres-postgresql
    - test4/redis
  authorization.openshift.io/v1/RoleBinding:
    - test4/admin
    - test4/rb-test4
  batch/v1/Job:
    - test4/reconcile-equity-1596676200
  batch/v1beta1/CronJob:
    - test4/merge-equity
  extensions/v1beta1/Ingress:
    - test4/test4-wpl
  networking.k8s.io/v1/NetworkPolicy:
    - test4/jvs-access
  networking.k8s.io/v1beta1/Ingress:
    - test4/test4-wpl
    - test4/test4-wpl-internal
  rbac.authorization.k8s.io/v1/RoleBinding:
    - test4/admin
    - test4/rb-test4
    - test4/system:deployers
  route.openshift.io/v1/Route:
    - test4/test4-wpl-6j6mz
    - test4/test4-wpl-6rp42
    - test4/test4-wpl-z7zvj
  v1/ConfigMap:
    - test4/cj-patch
    - test4/k8s-cfg
    - test4/logstash-patterns
  v1/Endpoints:
    - test4/api-node-svc
    - test4/tezos-sig
    - test4/wpl-svc
    - test4/wpl-ui
  v1/Namespace:
    - test4
  v1/PersistentVolume:
    - pvc-139781c1-b0e5-457-8e09-d32b800d434
    - pvc-4208bc0b-770-4a70-be62-11d484b3b2d
    - pvc-52ddce34-e4a-42f-bd63-8c35bfeb3ad
    - pvc-54e1f97-a4a-4ed9-b2e2-aba001cd7c
    - pvc-62cab2c-cce3-480-b3cd-dcddec2db
    - test4/data-logstash-0
    - test4/redis-data-redis-0
    - test4/shared-logs
    - test4/tezos-logs
    - test4/var-test4
  v1/Pod:
    - test4/console-ui-b9c4f5794-2xq55
    - test4/mongo-test4-mongodb-primary-0
    - test4/mongo-test4-mongodb-secondary-0
    - test4/postgres-postgresql-0
    - test4/worker-77fc978c87-zddgm
  v1/Secret:
    - test4/builder-dockercfg-bn97k
    - test4/builder-token-4jq6m
    - test4/builder-token-4ttrj
  v1/Service:
    - test4/api-node-svc
    - test4/tezos-api-svc
  v1/ServiceAccount:
    - test4/builder
    - test4/default
    - test4/deployer
Velero-Native Snapshots: <none included>
Restic Backups:
  Failed:
    test4/web-platform-5cf78f6b5-c6bm: certs, logs, run-sh, varfile, varshared
Can someone give me some guidance? It seems that Velero + restic cannot back up AWS EFS yet. Please comment here ASAP.