amazon-vpc-cni-k8s: after a node joins an EKS cluster, cannot ping its second ENI's IP from the peer VPC

I have VPC peering (with the corresponding route tables and security groups set up correctly) between two regions. When I bring up a vanilla EC2 instance, I can ping both of its ENIs from the peer VPC. Once the node joins an EKS cluster, I can only ping the primary ENI; pings to the second ENI from the VPC peer fail.

I tried both values of the CNI setting:
AWS_VPC_K8S_CNI_EXTERNALSNAT: true
AWS_VPC_K8S_CNI_EXTERNALSNAT: false
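
For reference, a minimal sketch of how I toggled this, assuming the stock aws-node DaemonSet in kube-system (names are from a default EKS install):

# Set external SNAT on the aws-node DaemonSet
kubectl -n kube-system set env daemonset aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=true
# Check the value currently in effect
kubectl -n kube-system describe daemonset aws-node | grep EXTERNALSNAT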

I also tried disabling the 'Source/Dest check' on both ENIs attached to the VM.
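
For anyone reproducing this, a sketch of how source/dest check can be disabled per ENI with the AWS CLI (the ENI ID below is a placeholder):

# Disable source/dest check on a specific ENI (eni-0123456789abcdef0 is a placeholder)
aws ec2 modify-network-interface-attribute \
  --network-interface-id eni-0123456789abcdef0 \
  --no-source-dest-check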

What happened: I observed that ping packets arrive on eth1, but no reply is sent.

Only after I configured sysctl net.ipv4.conf.all.rp_filter=2 did I see reply packets being sent out of the VM.
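
For completeness, the loose reverse-path-filtering setting I applied; the per-interface line and the persistence step are assumptions added here, only the "all" key is from my original test:

# Loose reverse-path filtering so replies can leave via a different interface
sysctl -w net.ipv4.conf.all.rp_filter=2
sysctl -w net.ipv4.conf.eth1.rp_filter=2   # per-interface value, added as an assumption
# Persist across reboots (assumed path)
echo 'net.ipv4.conf.all.rp_filter = 2' > /etc/sysctl.d/99-rp-filter.conf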

Unfortunately, those packets do not reach the other side of the VPC peering connection. (For some reason, the replies go through fine when pinging from within the same region, even from a different subnet.)

Attached logs: eks_i-02dc52684302e928d_2020-12-12_1836-UTC_0.6.2.tar.gz

What you expected to happen: Pings to the host should keep working on both of its interfaces.

How to reproduce it (as minimally and precisely as possible): Bring up an EKS cluster in a VPC. Set up a peering connection to another VPC and add routes for the remote VPC through the peering connection, on both sides. From a VM in the remote peer VPC, try pinging both ENIs of a worker node.
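
A minimal sketch of the route step with the AWS CLI; the route table ID, peering connection ID, and CIDR below are placeholders:

# Add a route to the remote VPC via the peering connection (repeat on the remote side
# with the local VPC's CIDR as the destination)
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0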

Anything else we need to know?: The same behavior applies when pinging any pod whose IP is assigned to the second ENI. If the pod's IP is on the primary ENI, I can reach it from the VPC peer.

Also, any suggestions for enabling communication with pods directly from the VPC peer (irrespective of which ENI their IPs are assigned to) would be very helpful.

Environment:

  • Kubernetes version (use kubectl version): 1.18
  • CNI Version: 0.3.1 (from /etc/cni/net.d/10-aws.conflist)
  • OS (e.g.: cat /etc/os-release): PRETTY_NAME="Amazon Linux 2"
  • Kernel (e.g. uname -a): 4.14.203-156.332.amzn2.x86_64, AMI ID: ami-0af965363397f19f5

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 4
  • Comments: 16 (8 by maintainers)

Most upvoted comments

@nasuku @oniGino @YYSU I did some further testing this weekend and learned that the amazon-vpc-cni-k8s CNI does not add routes for the primary IPs of the secondary ENIs attached to the EC2 instance. This is a design choice we made: we do not assign the primary IP of secondary interfaces to pods.

To work around the issue, you will have to set up the routes manually:

# Example values: table 10001, subnet 192.168.128.0/19, gateway 192.168.128.1
eth1=<primary IP of the secondary interface>   # note: this variable holds an IP, not the device name
ip route add 192.168.128.0/19 dev eth1 table 10001 proto kernel scope link src $eth1
ip route add default via 192.168.128.1 dev eth1 table 10001
ip rule add from $eth1 lookup 10001 pref 32765   # replies sourced from this IP use table 10001
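
As a quick check (added here as a hint, not part of the original steps), the new rule and table can be inspected with:

ip rule show              # the rule should appear with pref 32765
ip route show table 10001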

Is there a use case where you need these IPs to be routable?

Hi Suresh,

We received the information and will get back to you this week.