amazon-vpc-cni-k8s: ENI in Secondary VPC CIDR not getting created
Region: us-east-1
AMI: ami-0c24db5df6badc35a
CNI: 1.3
Instance IAM role has arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
Primary VPC CIDR: 192.168.0.0/16
Secondary VPC CIDR: 100.64.0.0/16
EC2 instances' subnet CIDR: 192.168.0.0/18
Expecting the CNI to use the secondary CIDR subnet range: 100.64.0.0/22
Both of the above subnets have a route to 0.0.0.0/0 via a NAT gateway.
Upgraded to the 1.3 plugin using https://docs.aws.amazon.com/eks/latest/userguide/cni-upgrades.html and added the following to the aws-node daemonset:
- name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
value: "true"
- name: AWS_VPC_K8S_CNI_EXTERNALSNAT
value: "true"
- name: ENI_CONFIG_LABEL_DEF
value: failure-domain.beta.kubernetes.io/zone
Passed --use-max-pods=true during instance bootstrap
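For reference, the same daemonset env changes can be applied without editing the manifest by hand; a sketch using kubectl (assuming the stock aws-node daemonset in kube-system):

```shell
# Set the custom-networking env vars listed above on the aws-node daemonset.
# This triggers a rolling restart of the aws-node pods.
kubectl -n kube-system set env daemonset/aws-node \
  AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true \
  AWS_VPC_K8S_CNI_EXTERNALSNAT=true \
  ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone
```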
Created ENIConfigs named after the AZs (e.g. us-east-1a) with subnet values corresponding to 100.64.0.0/22, then terminated the instances in the ASG to get new ones.
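The ENIConfig described above would look roughly like this sketch (the subnet and security-group IDs here are the ones that appear in the ipamd log below):

```shell
# Create an ENIConfig named after the AZ, pointing at the secondary-CIDR subnet.
cat <<'EOF' | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a
spec:
  subnet: subnet-05ed4d82a4ed8a2cc
  securityGroups:
    - sg-0e136ced130dcae47
EOF
```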
Noticed that the ENI in the secondary CIDR range does not get created. Logged in to the instance to inspect the ipamd logs and found the following:
2019-01-26T03:06:34Z [INFO] Setting myENI to: default
2019-01-26T03:06:36Z [INFO] Handle ENIConfig Add/Update: us-east-1a, [sg-0e136ced130dcae47], subnet-05ed4d82a4ed8a2cc
2019-01-26T03:06:36Z [INFO] Handle ENIConfig Add/Update: us-east-1b, [sg-0e136ced130dcae47], subnet-0bf32cbc60e246a59
2019-01-26T03:06:36Z [INFO] Handle corev1.Node: ip-192-168-104-171.ec2.internal, map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true]
2019-01-26T03:06:36Z [INFO] Setting myENI to: default
2019-01-26T03:06:36Z [INFO] Handle corev1.Node: ip-192-168-31-146.ec2.internal, map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true]
2019-01-26T03:06:37Z [DEBUG] Skip the primary ENI for need IP check
2019-01-26T03:06:37Z [DEBUG] IP pool stats: total = 0, used = 0, c.currentMaxAddrsPerENI = 14, c.maxAddrsPerENI = 14
2019-01-26T03:06:37Z [DEBUG] Start increasing IP Pool size
2019-01-26T03:06:37Z [ERROR] Failed to get pod ENI config
2019-01-26T03:06:37Z [DEBUG] Reconciling ENI/IP pool info...
2019-01-26T03:06:37Z [DEBUG] Total number of interfaces found: 1
2019-01-26T03:06:37Z [DEBUG] Found eni mac address : 02:ec:54:25:ed:82
2019-01-26T03:06:37Z [DEBUG] Using device number 0 for primary eni: eni-0ec34990890e8c0f4
2019-01-26T03:06:37Z [DEBUG] Found eni: eni-0ec34990890e8c0f4, mac 02:ec:54:25:ed:82, device 0
2019-01-26T03:06:37Z [DEBUG] Found cidr 192.168.64.0/18 for eni 02:ec:54:25:ed:82
2019-01-26T03:06:37Z [DEBUG] Found ip addresses [192.168.104.171] on eni 02:ec:54:25:ed:82
2019-01-26T03:06:37Z [DEBUG] Reconcile existing ENI eni-0ec34990890e8c0f4 IP pool
2019-01-26T03:06:37Z [DEBUG] Reconcile and skip primary IP 192.168.104.171 on eni eni-0ec34990890e8c0f4
2019-01-26T03:06:37Z [DEBUG] Successfully Reconciled ENI/IP pool
2019-01-26T03:06:41Z [INFO] Handle ENIConfig Add/Update: us-east-1a, [sg-0e136ced130dcae47], subnet-05ed4d82a4ed8a2cc
2019-01-26T03:06:41Z [INFO] Handle ENIConfig Add/Update: us-east-1b, [sg-0e136ced130dcae47], subnet-0bf32cbc60e246a59
2019-01-26T03:06:41Z [INFO] Handle corev1.Node: ip-192-168-104-171.ec2.internal, map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true]
2019-01-26T03:06:41Z [INFO] Setting myENI to: default
If I manually create an ENI in 100.64.0.0/22 and attach it to the instance, everything works fine. Wondering what's going on with ipamd that it is unable to create the ENI in the secondary VPC CIDR?
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 2
- Comments: 18 (5 by maintainers)
Everyone having this problem can just set
AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false
. You don't need to create ENIConfig resources, set the k8s.amazonaws.com/eniConfig node label, or set ENI_CONFIG_LABEL_DEF on aws-node.
The default CNI behaviour when AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false is to create pods in the same network as the worker ENI.
I don’t get why you need to duplicate the workers' network information in ENIConfig - it must be a different use case than mine. But if you followed the https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html guide, then you already created workers with the desired subnet and SG.
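The workaround suggested above amounts to turning custom networking back off; a one-line sketch:

```shell
# Disable custom networking on the aws-node daemonset; pods then get IPs
# from the worker ENI's own subnet, so no ENIConfig is needed.
kubectl -n kube-system set env daemonset/aws-node \
  AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false
```

Note that this only helps if you don't actually need the secondary-CIDR addressing this issue is about.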
I’ve got to imagine you mean something else, or I have something not set right, because this is the log of the aws-node pod on the node in question. It doesn’t have much behind it.
I got bitten by the same thing. While ENI_CONFIG_LABEL_DEF is in the documentation, it is actually not a valid env var for 1.3. In order to get it working, you need to compile from master. Hopefully they will release a new version soon.
@till-krauss Thanks! Updating the image fixed the issue I was having. Wondering when these changes will make it into an official release.
@jicowan Just found out that it’s simply a strange release of the CNI we’re using by default. I assume you’re using release 1.3.2 (the newest one). 1.3.2: https://github.com/aws/amazon-vpc-cni-k8s/blob/v1.3.2/pkg/eniconfig/eniconfig.go current master: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/pkg/eniconfig/eniconfig.go
There’s no mention of any ENIConfig selection via labels in the 1.3.2 release, although it has been in the master branch since January 12.
It’s as @dadux already said: it’s simply not implemented in 1.3.2. I built it myself two days ago: https://cloud.docker.com/repository/docker/tilmankrauss/amazon-k8s-cni. It’s just the corresponding Docker image for the master branch.
Once the image is replaced in the daemonset, it works like a charm :).
Just patch your aws-node:
Please note that the Docker image is just a build from an arbitrary state of the master branch, nothing like a release ;).
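The patch the commenter describes would look something like this sketch; the image tag is a placeholder, since the tag of the custom build isn't stated:

```shell
# Swap the aws-node daemonset's image for the custom master-branch build.
# <tag> is hypothetical - substitute whatever tag the image was pushed with.
kubectl -n kube-system set image daemonset/aws-node \
  aws-node=tilmankrauss/amazon-k8s-cni:<tag>
```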