amazon-vpc-cni-k8s: Inconsistent ENI count attached to instances running 1.1.0
I have noticed an inconsistency in the number of ENIs attached to EKS nodes. I have not been able to correlate it to anything in particular: we are not hitting the ENI limit yet, and it does not seem related to instance size, though that was my initial thought. It seems like something was removed or added in 1.1.0, but the changelog does not call out anything related to this.
In one VPC, all the m4.larges have 2 ENIs apiece, so I assumed this is how the CNI is supposed to work. Then I spun up workers in a different VPC and some had 1 ENI, some had 2, and some had 3 (m5.xlarge instances). So I tried m4.xlarges, and they all have 1 ENI; I would expect a larger instance to have more ENIs, given it will be supporting more pods.
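For anyone trying to reproduce this, here is a quick sketch of comparing ENI counts across workers with the AWS CLI (the `kubernetes.io/cluster/my-cluster` tag key is an assumption; substitute whatever tag or filter matches your nodes):

```sh
# List instance ID, type, and attached-ENI count for each worker node.
# The tag key below is an example; adjust it to match your cluster's tagging.
aws ec2 describe-instances \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/my-cluster" \
  --query 'Reservations[].Instances[].[InstanceId,InstanceType,length(NetworkInterfaces)]' \
  --output table
```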
Running amazon-k8s-cni:1.1.0 on all the m4.xlarges and m5.xlarges, but not on the m4.larges (running 1.0.0), which are the ones with an even ENI distribution. If this is somehow normal/changed behavior, or if I misconfigured something, it would be great to know. Additionally, if I can provide any information to help troubleshoot this issue, please let me know.
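To confirm which CNI image a cluster is actually running, here is a sketch assuming the stock `aws-node` DaemonSet in `kube-system`:

```sh
# Print the CNI image for the aws-node DaemonSet; the image tag is the version.
kubectl -n kube-system get daemonset aws-node \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```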
About this issue
- State: closed
- Created 6 years ago
- Comments: 38 (13 by maintainers)
@dvohra thanks for your help. Indeed I can confirm that `WARM_IP_TARGET` works as intended. I was confused and thought the CNI code ran outside kube, but I now realise that the `bootstrap.sh` script on the AMI just configures it as a `daemonset` via yaml.
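For reference, a minimal sketch of setting that env var on the DaemonSet after the fact (the value 5 is arbitrary; with `WARM_IP_TARGET` set, ipamd aims to keep roughly that many free IPs warm rather than pre-attaching whole spare ENIs):

```sh
# Set WARM_IP_TARGET on the aws-node DaemonSet (5 is just an example value).
kubectl -n kube-system set env daemonset/aws-node WARM_IP_TARGET=5

# Verify the env var landed in the pod template.
kubectl -n kube-system get daemonset aws-node \
  -o jsonpath='{.spec.template.spec.containers[0].env}'
```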