velero: kops master is unauthorized to attach ark-restored volumes
Hi, I’m trying to follow the nginx use-case, but using S3 for object storage. I’m trying to restore a backup created by running `ark backup create nginx-backup --selector app=nginx --snapshot-volumes`. The command used for restoring is `ark restore create nginx-backup --restore-volumes`.

The backup itself is created successfully: the backup files are uploaded to object storage and the snapshot is created. The issue I’m facing is that, when restoring into a different k8s cluster, the restore points at the same PV, and the pod that is supposed to be restored is stuck in STATUS `ContainerCreating`. Is there any way to have it create a new PV when restoring into a different cluster?
Output of `ark backup describe nginx-backup`:

```
Name:         nginx-backup
Namespace:    heptio-ark
Labels:       <none>
Annotations:  <none>

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  app=nginx

Snapshot PVs:  true

TTL:  720h0m0s

Hooks:  <none>

Phase:  Completed

Backup Format Version:  1

Expiration:  2018-04-07 19:10:11 +0000 UTC

Validation errors:  <none>

Persistent Volumes:
  pvc-2db45bb0-22f8-11e8-82f2-0e21f011a24c:
    Snapshot ID:        snap-0c9aad251b280516d
    Type:               gp2
    Availability Zone:  us-east-1a
    IOPS:               <N/A>
```
Output of `ark restore describe nginx-backup-20180308194129`:

```
Name:         nginx-backup-20180308194129
Namespace:    heptio-ark
Labels:       <none>
Annotations:  <none>

Backup:  nginx-backup

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  true

Phase:  Completed

Validation errors:  <none>

Warnings:  <none>
Errors:    <none>
```
YAML file used to create nginx-example:

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx-example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-logs
          persistentVolumeClaim:
            claimName: nginx-logs
      containers:
        - image: nginx:1.7.9
          name: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/var/log/nginx"
              name: nginx-logs
              readOnly: false
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: my-nginx
  namespace: nginx-example
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```
NOTE: I’m using the k8s nodes’ IAM instance profile to grant EC2 and S3 access to the Ark server, instead of a secret as described in the example use-case.
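For context, a minimal sketch of the kind of IAM policy statement involved here, assuming the permission set from the Ark AWS setup docs (the action list and the `<your-ark-bucket>` placeholder are illustrative, not the exact policy used in this cluster):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<your-ark-bucket>/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your-ark-bucket>"
    }
  ]
}
```

Because access here comes from the nodes’ instance profile rather than a dedicated secret, this policy would need to be attached to the node role that the Ark server pod runs under.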
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 34 (24 by maintainers)
If you’re using an IAM policy for Ark in AWS, make sure you add `ec2:DescribeSnapshots` to the policy.

Hey @ncdc, yes, the two clusters are in the same region (`us-east-1`) and the same account as well.

Output of `kubectl -n nginx-example describe pod`: