velero: When executing velero backup delete, object data under ${bucket_name}/kopia/${namespace} is not deleted from S3 (MinIO).

What steps did you take and what happened: When executing velero backup delete, the object data under ${bucket_name}/kopia/${namespace} is not deleted from S3 (MinIO). I confirmed that backup/20231002-1656-kamap2 was deleted from S3 (MinIO). The problem appears to be on the kopia side.

Yesterday, I opened a new discussion (https://github.com/vmware-tanzu/velero/discussions/6910) and shubham-pampattiwar said it is similar to https://github.com/vmware-tanzu/velero/issues/6575. However, it seems to me that that fix is already merged in Velero 1.12.0, so I opened this ticket.

What did you expect to happen: It should delete the PV/snapshot data under the kopia directory from S3 (MinIO).

The following information will help us better understand what's going on: I used the new parameter "--snapshot-move-data" for the backup.

$ velero backup create 20231002-1656-kamap2 --snapshot-move-data --selector velerobackup=enable --namespace kube-storage --include-namespaces kamap2

When I executed velero backup delete, I confirmed that only the BackupRepository remained.

$ velero repo get --namespace=kube-storage kamap2-default-kopia-5mmvt
NAME                         STATUS   LAST MAINTENANCE
kamap2-default-kopia-5mmvt   Ready    2023-10-04 10:00:40 +0900 JST
$ kubectl get backuprepository -n kube-storage kamap2-default-kopia-5mmvt -o yaml | kubectl neat
apiVersion: velero.io/v1
kind: BackupRepository
metadata:
  labels:
    velero.io/repository-type: kopia
    velero.io/storage-location: default
    velero.io/volume-namespace: kamap2
  name: kamap2-default-kopia-5mmvt
  namespace: kube-storage
spec:
  backupStorageLocation: default
  maintenanceFrequency: 1h0m0s
  repositoryType: kopia
  resticIdentifier: s3:https://ike-minio4500.xxx.xxx.net/sri-dev-primera/restic/kamap2
  volumeNamespace: kamap2

Anything else you would like to add:

  • How to reproduce
##### I used the new parameter "--snapshot-move-data" for the backup.
$ velero backup create 20231002-1656-kamap2 --snapshot-move-data --selector velerobackup=enable --namespace kube-storage --include-namespaces kamap2


##### I deleted the backup resource "20231002-1656-kamap2".
$ velero backup delete 20231002-1656-kamap2 -n kube-storage

##### I confirmed that it was deleted.
$ velero backup get --namespace=kube-storage
NAME                                               STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
20230929-163700-kongskgr                           Completed   0        0          2023-09-29 16:38:00 +0900 JST   25d       default            velerobackup=enable
20230929-171100-kongskgr                           Completed   0        0          2023-09-29 17:11:45 +0900 JST   25d       default            velerobackup=enable
kong-customplugin-backup                           Completed   0        0          2023-09-25 17:08:37 +0900 JST   21d       default            velerobackup=enable
test-schedule-kamada-202310021818-20231002093043   Completed   0        0          2023-10-02 18:30:43 +0900 JST   29d       default            velerobackup=enable


##### I confirmed that the BackupRepository resource "kamap2-default-kopia-5mmvt" still exists.
$ velero repo get --namespace=kube-storage
NAME                                    STATUS   LAST MAINTENANCE
kamada-default-kopia-gx92k              Ready    2023-10-03 16:55:40 +0900 JST
kamap2-default-kopia-5mmvt              Ready    2023-10-03 17:00:40 +0900 JST
kong-customplugin-default-kopia-zmgcs   Ready    2023-10-03 17:30:40 +0900 JST



##### I checked the Velero log and confirmed that 20231002-1656-kamap2 was deleted. However, some objects under the kopia/20231002-1656-kamap2 directory showed errors.
time="2023-10-02T08:03:30Z" level=info msg="Removing existing deletion requests for backup" backup=20231002-1656-kamap2 controller=backup-deletion deletebackuprequest=kube-storage/20231002-1656-kamap2-db5j7 logSource="pkg/controller/backup_deletion_controller.go:483"
time="2023-10-02T08:03:30Z" level=info msg="invoking DeleteItemAction plugins" item=20231002-1656-kamap2-7phnl logSource="internal/delete/delete_item_action_handler.go:116" namespace=kube-storage
time="2023-10-02T08:03:30Z" level=info msg="Executing DataUploadDeleteAction" backup=20231002-1656-kamap2 cmd=/velero controller=backup-deletion deletebackuprequest=kube-storage/20231002-1656-kamap2-db5j7 logSource="pkg/datamover/dataupload_delete_action.go:33" pluginName=velero
time="2023-10-02T08:03:31Z" level=info msg="Starting to check for items in namespace" logSource="internal/delete/delete_item_action_handler.go:100" namespace=kamap2
time="2023-10-02T08:03:31Z" level=info msg="invoking DeleteItemAction plugins" item=deployment-mount300 logSource="internal/delete/delete_item_action_handler.go:116" namespace=kamap2
time="2023-10-02T08:03:31Z" level=info msg="invoking DeleteItemAction plugins" item=kamap2 logSource="internal/delete/delete_item_action_handler.go:116" namespace=
time="2023-10-02T08:03:31Z" level=info msg="Starting to check for items in namespace" logSource="internal/delete/delete_item_action_handler.go:100" namespace=kamap2
time="2023-10-02T08:03:31Z" level=info msg="invoking DeleteItemAction plugins" item=my-first-pvc300-velero logSource="internal/delete/delete_item_action_handler.go:116" namespace=kamap2
time="2023-10-02T08:03:31Z" level=info msg="Removing PV snapshots" backup=20231002-1656-kamap2 controller=backup-deletion deletebackuprequest=kube-storage/20231002-1656-kamap2-db5j7 logSource="pkg/controller/backup_deletion_controller.go:293"
time="2023-10-02T08:03:31Z" level=info msg="Removing pod volume snapshots" backup=20231002-1656-kamap2 controller=backup-deletion deletebackuprequest=kube-storage/20231002-1656-kamap2-db5j7 logSource="pkg/controller/backup_deletion_controller.go:318"
time="2023-10-02T08:03:31Z" level=info msg="Removing snapshot data by data mover" backup=20231002-1656-kamap2 controller=backup-deletion deletebackuprequest=kube-storage/20231002-1656-kamap2-db5j7 logSource="pkg/controller/backup_deletion_controller.go:326"
time="2023-10-02T08:03:31Z" level=info msg="Founding existing repo" backupLocation=default logSource="pkg/repository/ensurer.go:85" repositoryType=kopia volumeNamespace=kamap2
time="2023-10-02T08:03:32Z" level=info msg="Deleted snapshot 90c17bf3125d2c1bfd7585beea09f8d8, namespace: kamap2, repo type: kopia" logSource="pkg/controller/backup_deletion_controller.go:555"
time="2023-10-02T08:03:32Z" level=info msg="Removing local datauploads" backup=20231002-1656-kamap2 controller=backup-deletion deletebackuprequest=kube-storage/20231002-1656-kamap2-db5j7 logSource="pkg/controller/backup_deletion_controller.go:333"
time="2023-10-02T08:03:32Z" level=info msg="Removing backup from backup storage" backup=20231002-1656-kamap2 controller=backup-deletion deletebackuprequest=kube-storage/20231002-1656-kamap2-db5j7 logSource="pkg/controller/backup_deletion_controller.go:353"
time="2023-10-02T08:03:33Z" level=info msg="Removing restores" backup=20231002-1656-kamap2 controller=backup-deletion deletebackuprequest=kube-storage/20231002-1656-kamap2-db5j7 logSource="pkg/controller/backup_deletion_controller.go:359"
time="2023-10-02T08:03:33Z" level=info msg="Reconciliation done" backup=20231002-1656-kamap2 controller=backup-deletion deletebackuprequest=kube-storage/20231002-1656-kamap2-db5j7 logSource="pkg/controller/backup_deletion_controller.go:446"


----- snip

time="2023-10-03T08:00:40Z" level=info msg="Running maintenance on backup repository" backupRepo=kube-storage/kamap2-default-kopia-5mmvt logSource="pkg/controller/backup_repository_controller.go:285"
time="2023-10-03T08:00:41Z" level=warning msg="active indexes [xn0_02dcbd0f786db8b67ca39513209f22a2-s20197d85c6bd7e85121-c1 xn0_72efdaeab14f9ea3f08c14d266a0cb49-sb8453497be0c47b5121-c1 xn0_b384096e694dc3db59aee2a7a71e2ba1-s7ddd096a2027a547121-c1] deletion watermark 0001-01-01 00:00:00 +0000 UTC" logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:101" sublevel=error
time="2023-10-03T08:00:41Z" level=warning msg="active indexes [xn0_02dcbd0f786db8b67ca39513209f22a2-s20197d85c6bd7e85121-c1 xn0_72efdaeab14f9ea3f08c14d266a0cb49-sb8453497be0c47b5121-c1 xn0_b384096e694dc3db59aee2a7a71e2ba1-s7ddd096a2027a547121-c1] deletion watermark 0001-01-01 00:00:00 +0000 UTC" logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:101" sublevel=error
time="2023-10-03T08:00:41Z" level=info msg="Running quick maintenance..." logModule=kopia/maintenance logSource="pkg/kopia/kopia_log.go:94"
time="2023-10-03T08:00:41Z" level=info msg="Running quick maintenance..." logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:94"
time="2023-10-03T08:00:41Z" level=warning msg="active indexes [xn0_02dcbd0f786db8b67ca39513209f22a2-s20197d85c6bd7e85121-c1 xn0_72efdaeab14f9ea3f08c14d266a0cb49-sb8453497be0c47b5121-c1 xn0_b384096e694dc3db59aee2a7a71e2ba1-s7ddd096a2027a547121-c1] deletion watermark 0001-01-01 00:00:00 +0000 UTC" logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:101" sublevel=error
time="2023-10-03T08:00:41Z" level=info msg="Finished quick maintenance." logModule=kopia/maintenance logSource="pkg/kopia/kopia_log.go:94"
time="2023-10-03T08:00:41Z" level=info msg="Finished quick maintenance." logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:94"
time="2023-10-03T08:01:37Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=kube-storage/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:152"
time="2023-10-03T08:01:38Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=kube-storage/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:137"
I1003 08:01:56.843944       1 request.go:690] Waited for 1.045119366s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/hnc.x-k8s.io/v1alpha2?timeout=32s
time="2023-10-03T08:02:47Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=kube-storage/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:152"

Kopia's objects cannot be deleted from S3 (MinIO). (Screenshot: 202310031815-cantdelete-kopia-object)

backup/20231002-1656-kamap2 is deleted correctly from S3 (MinIO). (Screenshot: 202310031815can-delete-kamap2backup-resource)

Environment:

  • Velero version: 1.12.0
  • Velero features: NOT SET
  • Kubernetes version: v1.26.6
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS: Ubuntu 20.04.1 LTS

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the “reaction smiley face” up to the right of this comment to vote.

  • 👍 for “I would like to see this bug fixed as soon as possible”
  • 👎 for “There are more important bugs to focus on right now”

About this issue

  • Original URL
  • State: closed
  • Created 9 months ago
  • Comments: 20 (7 by maintainers)

Most upvoted comments

@danfengliu @Lyndon-Li

I ran the commands below to confirm that the snapshot data was already deleted, and I understood what the comment meant.

The deletion of the backup won’t delete the backup repository, and even you delete the backup repository CR, Velero won’t empty the object store, the leftovers should be manually removed.

If we need to delete it, we will delete it manually.

But since @jack-nix is still commenting, I won't close this ticket yet.

$ ./kopia repository connect s3 --endpoint xxx-minio4500.xxx.xxx.net --bucket sri-dev-primera --access-key $ACCESS_KEY --secret-access-key $SECRET_KEY --disable-tls-verification --prefix=kopia/kamap2/
Enter password to open repository:

Connected to repository.

NOTICE: Kopia will check for updates on GitHub every 7 days, starting 24 hours after first use.
To disable this behavior, set environment variable KOPIA_CHECK_FOR_UPDATES=false
Alternatively you can remove the file "/home/kamada/.config/kopia/repository.config.update-info.json".
$ ./kopia manifest list
d9d992e9598792553cc28290567ca697        213 2023-10-02 16:57:49 JST type:maintenance
cba834e90d5f32912f903965822e6376        375 2023-10-02 16:57:51 JST type:policy hostname:default path:snapshot-data-upload-download/kopia/kamap2/my-first-pvc300-velero policyType:path username:default

$ ./kopia snapshot list

Regards, Aki

@aki-kamada The files in the object store are managed by the repository; their content may not be backup data but repository metadata, so they have their own lifecycle, separate from the backup data. Whether the deletion of the backup data affects the repository metadata is controlled entirely by the repository itself.

And the sublevel=error is expected; you can ignore it.

The deletion of the backup won’t delete the backup repository, and even you delete the backup repository CR, Velero won’t empty the object store, the leftovers should be manually removed.

This is what backup deletion does:

  1. Delete the items generated for the resource backup, that is, everything under the backups/<backup name> folder in the object store
  2. Delete the repo snapshot for the volume data backup

For 2, the repo snapshot is a root reference to the backed-up volume data. Velero deletes this reference only. Once the reference is deleted, the backed-up data becomes orphaned, and the repository then deletes it according to its own GC policies. This means the backed-up volume data is probably not deleted immediately after the Velero backup is deleted. How and when the data is deleted varies between repositories. At present, Restic and Kopia are the repositories used by Velero's Unified Repository. For Kopia, the data is deleted by full maintenance, which happens every 24 hours; for Restic, data deletion is done by the prune operation, which Velero runs every 7*24 hours (by default).
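If you don't want to wait for the 24-hour schedule, the full maintenance cycle described above can be triggered by hand with the kopia CLI. A minimal sketch, assuming the MinIO endpoint, bucket, and namespace from this report (substitute your own) and that S3 credentials are already in the environment; the repository password is Velero's, not yours:

```shell
# Sketch only: ENDPOINT/BUCKET/NAMESPACE are placeholders taken from this
# report; ACCESS_KEY and SECRET_KEY must already be set in the environment.
ENDPOINT="ike-minio4500.xxx.xxx.net"
BUCKET="sri-dev-primera"
NAMESPACE="kamap2"
PREFIX="kopia/${NAMESPACE}/"   # Velero keeps kopia data under kopia/<namespace>/

# Connect to the Velero-managed kopia repository. kopia prompts for the
# repository password (Velero uses a static default unless you changed it).
kopia repository connect s3 \
  --endpoint "${ENDPOINT}" \
  --bucket "${BUCKET}" \
  --prefix "${PREFIX}" \
  --access-key "${ACCESS_KEY}" \
  --secret-access-key "${SECRET_KEY}"

# Run a full maintenance cycle now instead of waiting for the schedule;
# orphaned blobs from deleted snapshots are dropped here. If kopia refuses
# because you are not the maintenance owner, see "kopia maintenance set --owner".
kopia maintenance run --full
```

Note that connecting a second client to a Velero-managed repository is safe for read-only inspection, but forcing maintenance while Velero is also running its own cycle is best avoided.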

Furthermore, even after the backed-up volume data is deleted, that doesn't mean you will see nothing in the object store. The items in the object store are files managed by the repositories; they are not raw data, and each repository has its own way of managing its files. What you will see is that the size of the object store is reduced.

@danfengliu I think I'm facing the same issue. Backup and restore with snapshots seem to work as intended; the problem is that after the backup is deleted, I would expect the snapshot files to be deleted as well.

If I look at the MinIO UI, though, I can see that the files under the kopia/<namespace> folder are still there. The folder under backup/<namespace> is also not deleted, although at least it is empty.
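Since the maintainers state above that Velero won't empty the object store and leftovers must be removed manually, one way to do that is with the MinIO client (mc). A hedged sketch: the alias name "myminio" is made up, and the endpoint, bucket, and namespace are the ones from this report. This permanently destroys the repository data, so only do it once no backups reference it anymore:

```shell
# Sketch only: alias "myminio" is hypothetical; endpoint/bucket/namespace
# come from this report. ACCESS_KEY/SECRET_KEY must be set in the environment.
TARGET="myminio/sri-dev-primera/kopia/kamap2/"

# Register the MinIO server under a local alias.
mc alias set myminio https://ike-minio4500.xxx.xxx.net "${ACCESS_KEY}" "${SECRET_KEY}"

# Review what would be removed first.
mc ls --recursive "${TARGET}"

# Then delete the leftover kopia repository objects.
mc rm --recursive --force "${TARGET}"
```

Deleting the BackupRepository CR in the cluster first (kubectl delete backuprepository ...) is advisable, so Velero's maintenance job doesn't try to reconnect to a repository whose objects are gone.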

Thanks for reporting this issue! Let me reproduce it first.