aws-ebs-csi-driver: Example IAM policy is insufficient
/kind bug
What happened? I was following the AWS guide linked below, and when my pod attempted to use a restored PVC (with a snapshot dataSource), I got the following error:
```
0s Warning ProvisioningFailed persistentvolumeclaim/mongo6-mongod-persistent-storage-claim-mongo6-mongod-0 failed to provision volume with StorageClass "ssd-xfs": rpc error: code = Internal desc = Could not create volume "pvc-29a86a12-d64c-4ffe-b799-a63209267737": failed to get an available volume in EC2: InvalidVolume.NotFound: The volume 'vol-04da06270c9fd721e' does not exist.
status code: 400, request id: 4f9dfe64-23dd-428f-8fbc-15b5a84bb444
```
What you expected to happen? I expected the PVC to be created and bound successfully.
How to reproduce it (as minimally and precisely as possible)? Just follow the AWS guide: https://aws.amazon.com/blogs/containers/using-ebs-snapshots-for-persistent-storage-with-your-eks-cluster/
Anything else we need to know?: I suspect this is an insufficient-permissions problem. I had been using the example IAM policy. After I added the entire universe to my permission list (as below), I was able to create and restore snapshots successfully:
```json
{
  "Effect": "Allow",
  "Action": ["*"],
  "Resource": "*"
}
```
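A wildcard grant works, but it is far broader than necessary. As a starting point for narrowing it down, a statement along these lines covers the EC2 calls involved in dynamic provisioning and snapshot restore (a sketch only; the exact set of actions depends on the driver version, and encrypted volumes additionally need the KMS permissions discussed in the comments below):

```json
{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateVolume",
    "ec2:DeleteVolume",
    "ec2:AttachVolume",
    "ec2:DetachVolume",
    "ec2:ModifyVolume",
    "ec2:DescribeVolumes",
    "ec2:DescribeVolumesModifications",
    "ec2:CreateSnapshot",
    "ec2:DeleteSnapshot",
    "ec2:DescribeSnapshots",
    "ec2:CreateTags",
    "ec2:DescribeInstances",
    "ec2:DescribeAvailabilityZones"
  ],
  "Resource": "*"
}
```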
Environment
- Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.4-eks-6b7464", GitCommit:"6b746440c04cb81db4426842b4ae65c3f7035e53", GitTreeState:"clean", BuildDate:"2021-03-19T19:33:03Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
```
- Driver version: 1.0.0 (Helm release)
About this issue
- State: closed
- Created 3 years ago
- Reactions: 2
- Comments: 17 (5 by maintainers)
I had the same issue, and it turned out to be related to KMS permissions: the driver was using the default KMS key, but the role had no access to it, so I added the missing permissions to the example policy. That also explains the misleading error above: when the key can't be used, EC2 deletes the newly created volume, so the driver's follow-up lookup reports InvalidVolume.NotFound rather than an access error.
Perhaps setting the `encrypted` parameter of the StorageClass to false might be enough for some. Note: I'm using Terraform to replace variables.

Helm values:

Policy:
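Since the policy itself is elided above, here is a hedged sketch of the kind of KMS statement that typically has to be added to the example policy when a custom key is involved (the key ARN is a placeholder, not the commenter's actual policy):

```json
{
  "Effect": "Allow",
  "Action": [
    "kms:CreateGrant",
    "kms:ListGrants",
    "kms:RevokeGrant",
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "arn:aws:kms:<region>:<account-id>:key/<key-id>"
}
```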
Can anyone point to what key actually gets selected by the provisioner when only `encrypted: true` is specified? I'm having a hard time finding documented behavior here, and my clusters seem to select KMS keys at random.

Same issue here: working out of the box with `encrypted: true` using the default KMS key, but not with a custom KMS key.
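When no key is specified, EC2 falls back to the account's default EBS encryption key (the `aws/ebs` alias unless the account default has been changed), which is likely the "random" selection observed above. The driver's StorageClass also accepts a `kmsKeyId` parameter, so the key can be pinned explicitly rather than relying on the default. A minimal sketch (class name, volume type, and key ARN are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-xfs-encrypted
provisioner: ebs.csi.aws.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: gp3
  encrypted: "true"
  # Pin the key explicitly; the IAM role used by the controller must be
  # allowed to use this key (see the KMS statement above).
  kmsKeyId: arn:aws:kms:<region>:<account-id>:key/<key-id>
volumeBindingMode: WaitForFirstConsumer
```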