aws-efs-csi-driver: efs-csi-controller won't start if IMDS access is blocked
/kind bug
What happened?
With IMDS access blocked per the EKS security best practices (https://docs.aws.amazon.com/eks/latest/userguide/best-practices-security.html) on Bottlerocket hosts, pods from the efs-csi-controller deployment will not start. The controller either needs a workaround similar to the node DaemonSet's hostNetwork hack, or, better, should not require IMDS access at all.
F0127 18:13:01.145009 1 driver.go:54] could not get metadata from AWS: EC2 instance metadata is not available
is emitted to the log and the pod crashes.
What you expected to happen?
I expected efs-csi-controller to start. Passing the region, instance ID, and other IMDS-sourced information to the controller explicitly would be an acceptable alternative.
How to reproduce it (as minimally and precisely as possible)?
- Block IMDS access
- Deploy efs-csi-controller
Anything else we need to know?:
The DaemonSet uses hostNetwork: true to regain access to the IMDS (https://github.com/kubernetes-sigs/aws-efs-csi-driver/pull/188)
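For reference, that workaround amounts to putting the node pods on the host's network namespace. An illustrative pod-spec fragment (the hostNetwork field is standard Kubernetes; the DaemonSet name here is assumed, not taken from the chart):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: efs-csi-node   # assumed name, for illustration
spec:
  template:
    spec:
      # hostNetwork gives the pod the node's network namespace,
      # restoring access to IMDS at 169.254.169.254 -- the hack
      # this issue would like to see removed.
      hostNetwork: true
```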
Environment
- Kubernetes version (use kubectl version): EKS 1.18
- Driver version: master
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 3
- Comments: 19 (6 by maintainers)
@wongma7 Any updates on this issue? Really looking forward to removing hostNetwork … 😉
yes, that is totally reasonable, the EFS driver needs to be able to run without hostnetwork/imds for exactly the same reasons as EBS. The effort entails copying the code and test (an end-to-end test on a “real” EKS cluster with nodes whose IMDS is disabled) from EBS to here. I don’t have an ETA but that is my plan
Have raised a PR that I think should resolve this issue here: https://github.com/kubernetes-sigs/aws-efs-csi-driver/pull/681