kiam: unable to assign pod the IAM role
I checked the kiam-agent logs on the node where the pod (which was to be assigned the IAM role) was scheduled. They look like this:
```
{"addr":"10.2.40.2:34938","headers":{},"level":"info","method":"GET","msg":"processed request","path":"/latest/meta-data/iam/security-credentials/","status":200,"time":"2018-03-15T07:02:32Z"}
{"level":"warning","msg":"error finding role for pod: rpc error: code = Unknown desc = pod not found","pod.ip":"10.2.40.2","time":"2018-03-15T07:02:32Z"}
{"addr":"10.2.40.2:34940","level":"error","method":"GET","msg":"error processing request: error checking assume role permissions: rpc error: code = Unknown desc = pod not found","path":"/latest/meta-data/iam/security-credentials/my-app-iam-role","status":500,"time":"2018-03-15T07:02:32Z"}
{"addr":"10.2.40.2:34940","headers":{"Content-Type":["text/plain; charset=utf-8"],"X-Content-Type-Options":["nosniff"]},"level":"info","method":"GET","msg":"processed request","path":"/latest/meta-data/iam/security-credentials/my-app-iam-role","status":500,"time":"2018-03-15T07:02:32Z"}
{"addr":"10.2.40.2:34942","headers":{},"level":"info","method":"GET","msg":"processed request","path":"/latest/meta-data/iam/security-credentials/","status":200,"time":"2018-03-15T07:02:33Z"}
{"level":"warning","msg":"error finding role for pod: rpc error: code = Unknown desc = pod not found","pod.ip":"10.2.40.2","time":"2018-03-15T07:02:33Z"}
{"addr":"10.2.40.2:34944","level":"error","method":"GET","msg":"error processing request: error checking assume role permissions: rpc error: code = Unknown desc = pod not found","path":"/latest/meta-data/iam/security-credentials/my-app-iam-role","status":500,"time":"2018-03-15T07:02:33Z"}
{"addr":"10.2.40.2:34944","headers":{"Content-Type":["text/plain; charset=utf-8"],"X-Content-Type-Options":["nosniff"]},"level":"info","method":"GET","msg":"processed request","path":"/latest/meta-data/iam/security-credentials/my-app-iam-role","status":500,"time":"2018-03-15T07:02:33Z"}
{"addr":"10.2.40.2:34946","headers":{},"level":"info","method":"GET","msg":"processed request","path":"/latest/meta-data/iam/security-credentials/","status":200,"time":"2018-03-15T07:02:33Z"}
{"level":"warning","msg":"error finding role for pod: rpc error: code = Unknown desc = pod not found","pod.ip":"10.2.40.2","time":"2018-03-15T07:02:33Z"}
{"addr":"10.2.40.2:34948","level":"error","method":"GET","msg":"error processing request: error checking assume role permissions: rpc error: code = Unknown desc = pod not found","path":"/latest/meta-data/iam/security-credentials/my-app-iam-role","status":500,"time":"2018-03-15T07:02:33Z"}
{"addr":"10.2.40.2:34948","headers":{"Content-Type":["text/plain; charset=utf-8"],"X-Content-Type-Options":["nosniff"]},"level":"info","method":"GET","msg":"processed request","path":"/latest/meta-data/iam/security-credentials/my-app-iam-role","status":500,"time":"2018-03-15T07:02:33Z"}
```
We are running uswitch/kiam:v2.4 on both the agent and the server.
The namespace where the pod is scheduled has the annotation stated in the docs:

```yaml
annotations:
  iam.amazonaws.com/permitted: ".*"
```
along with
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
in the trust relationships of the IAM role attached to the node where the pods are being scheduled.
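For context, pods request a role from kiam via the `iam.amazonaws.com/role` annotation on the pod spec. A minimal sketch of what that looks like in a Deployment's pod template — the role name `my-app-iam-role` is taken from the agent logs above; all other names are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # kiam reads this annotation to decide which role to assume
        # for the pod; it must match the namespace's "permitted" regex
        iam.amazonaws.com/role: my-app-iam-role
    spec:
      containers:
        - name: my-app
          image: my-app:latest
```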
Not sure if it’s related, but we recently upgraded k8s from 1.8.4 to 1.8.9; I doubt that’s the problem, though.
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 16 (16 by maintainers)
Commits related to this issue
- #46: handle cache.DeletedFinalStateUnknown in pod cache during process() — committed to uswitch/kiam by pingles 6 years ago
- Use IndexerInformer rather than controller and queue (#51) This helps to simplify the implementation of the pod and namespace caches, as well as better handling errors from `cache.DeletedFinalStateUnknown`… — committed to uswitch/kiam by pingles 6 years ago
Strange: killing the kiam agent/server pods and letting the new ones come up fixed the issue. Any ideas why?