velero: Timeouts Waiting for Download URL

What steps did you take and what happened: Attempting to access the detailed status of a backup, or to review its logs.

Timeouts are issued.

What did you expect to happen: Both to be displayed as required.

The output of the following commands will help us better understand what’s going on:

 velero backup describe daily-backup-c006-20191112133437 --details
Name:         daily-backup-c006-20191112133437
Namespace:    velero
Labels:       velero.io/backup=daily-backup-c006-20191112133437
              velero.io/pv=pvc-5ce737ae-fd4c-11e9-8c44-42010a3ce057
              velero.io/schedule-name=daily-backup-c006
              velero.io/storage-location=default
Annotations:  <none>

Phase:  PartiallyFailed (run `velero backup logs daily-backup-c006-20191112133437` for more information)

Errors:    1
Warnings:  0

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Snapshot PVs:  auto

TTL:  120h0m0s

Hooks:  <none>

Backup Format Version:  1

Started:    2019-11-12 06:34:37 -0700 MST
Completed:  2019-11-12 06:35:07 -0700 MST

Expiration:  2019-11-17 06:34:37 -0700 MST

Resource List:  <error getting backup resource list: timed out waiting for download URL>

Persistent Volumes:  <error getting volume snapshot info: timed out waiting for download URL>

Restic Backups:
  Completed:
    database/postgresql-postgresql-0: data
  Failed:
    database/mariadb-0: data

And

 velero backup logs daily-backup-c006-20191112133437
An error occurred: timed out waiting for download URL

Anything else you would like to add: This is a private GKE cluster.

Environment:

velero version
Client:
        Version: v1.2.0
        Git commit: 5d008491bbf681658d3e372da1a9d3a21ca4c03c
Server:
        Version: v1.2.0
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.9", GitCommit:"6c1e92d07f5717440f751666d4aad6943015d3cb", GitTreeState:"clean", BuildDate:"2019-10-11T23:14:17Z", GoVersion:"go1.11.13b4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: GKE (Private)

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 28 (9 by maintainers)

Most upvoted comments

For future searchers: I had this issue on a GKE private cluster with workload identity enabled as well. I was able to use IAM audit logging to suss out that the Velero service account needed the Service Account Token Creator role, because it seems to use the service account’s signBlob API.
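A hedged sketch of that binding is below; the project ID and the Velero Google service account e-mail are placeholders, and the role could equally be granted at the project level:

 # Let the Velero GSA sign blobs as itself (names below are placeholders)
 gcloud iam service-accounts add-iam-policy-binding \
     velero@MY_PROJECT.iam.gserviceaccount.com \
     --member="serviceAccount:velero@MY_PROJECT.iam.gserviceaccount.com" \
     --role="roles/iam.serviceAccountTokenCreator" \
     --project=MY_PROJECT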

I’ve got the same problem on a GKE cluster (private nodes, master with an authorized network). I added this permission:

  • iam.serviceAccounts.signBlob

It works now.
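If you prefer to grant only that single permission rather than the full Service Account Token Creator role, a sketch using a custom role is below (the project ID, role ID, and service account e-mail are placeholders):

 # Custom role carrying only iam.serviceAccounts.signBlob (placeholder names)
 gcloud iam roles create veleroSignBlob --project=MY_PROJECT \
     --title="Velero signBlob" \
     --permissions=iam.serviceAccounts.signBlob
 gcloud projects add-iam-policy-binding MY_PROJECT \
     --member="serviceAccount:velero@MY_PROJECT.iam.gserviceaccount.com" \
     --role="projects/MY_PROJECT/roles/veleroSignBlob"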

Hmm, seems like maybe there are some orphaned invalid requests in there. Can you kubectl -n velero delete downloadrequests.velero.io --all, then try running a logs or describe --details command again?
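For reference, the check-and-retry sequence would look roughly like this (DownloadRequest is an ordinary CRD, so plain kubectl works against it; the backup name is the one from this issue):

 # Inspect any stuck download requests first
 kubectl -n velero get downloadrequests.velero.io
 # Clear them all, then retry the command that timed out
 kubectl -n velero delete downloadrequests.velero.io --all
 velero backup logs daily-backup-c006-20191112133437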