amazon-vpc-cni-k8s: Pods cannot talk to cluster IPs on Ubuntu 2204
What happened:
After upgrading clusters to use Ubuntu 22.04 by default, the kOps e2e tests started failing for this CNI: https://testgrid.k8s.io/kops-network-plugins#kops-aws-cni-amazon-vpc
What seems to happen is that Pods do receive IPs, but they fail to talk across nodes. Calling a ClusterIP service from the host works, for example, but not from a Pod, so kube-proxy itself appears to be working fine.
I cannot see anything wrong in any logs. What I do see, however, is that there are AWS-related rules in the legacy iptables tables, while kube-proxy uses the nftables backend. My guess is that this mix is the cause of the behavior: the nft and legacy iptables backends must not be mixed in any case.
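A quick way to confirm the split (a diagnostic sketch, not part of the CNI; run it on an affected node) is to count the rules each backend sees:

```shell
# Diagnostic sketch: count rules in each iptables backend on a node.
# If both report non-zero counts, legacy and nft rules are being mixed,
# which is unsupported and can silently drop or bypass traffic.
count_rules() {
  for backend in iptables-legacy iptables-nft; do
    if command -v "${backend}-save" >/dev/null 2>&1; then
      # grep -c counts only actual rule lines ("-A ..."), not chain headers
      echo "${backend}: $("${backend}-save" 2>/dev/null | grep -c '^-A') rules"
    else
      echo "${backend}: not installed"
    fi
  done
}
count_rules
```

On a healthy node, all rules should show up under a single backend (the one kube-proxy uses).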
Attach logs:
Example logs here: https://gcsweb.k8s.io/gcs/kubernetes-jenkins/logs/e2e-kops-aws-cni-amazon-vpc/1577618499142946816/artifacts/i-0d90e121da8bff687/
How to reproduce it (as minimally and precisely as possible):
kops create cluster --name test.kops-dev.srsandbox.io --cloud aws --networking=amazonvpc --zones=eu-central-1a,eu-central-1b,eu-central-1c --channel=alpha --master-count=3 --yes --kubernetes-version 1.25.0 --discovery-store=$KOPS_STATE_STORE/discovery --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20220921.1
About this issue
- Original URL
- State: open
- Created 2 years ago
- Comments: 40 (21 by maintainers)
@btalbot Ubuntu 22.04 works on EKS; you just have to set
MACAddressPolicy=none
like the official EKS AMI does: https://github.com/awslabs/amazon-eks-ami/blob/master/scripts/install-worker.sh#L104

@pmankad96 @btalbot I suggest filing a support case for this so that it can be investigated further.
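For reference, a minimal sketch of pinning that setting via a systemd `.link` override, similar in effect to what the EKS AMI script does. The file contents assume the stock Ubuntu 22.04 defaults, and `DEST` is an illustrative staging variable so this can be tried without root; on a real node you would write under `/etc/systemd/network` directly and reboot:

```shell
# Hypothetical sketch: stage a systemd .link override that sets
# MACAddressPolicy=none so systemd-networkd/udev stops rewriting the
# MAC of newly created (veth) interfaces. DEST is illustrative only.
DEST="${DEST:-./staging}"
mkdir -p "${DEST}/etc/systemd/network"
cat > "${DEST}/etc/systemd/network/99-default.link" <<'EOF'
[Match]
OriginalName=*

[Link]
NamePolicy=keep kernel database onboard slot path
MACAddressPolicy=none
EOF
cat "${DEST}/etc/systemd/network/99-default.link"
```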
Looks like it’s udev.
https://www.freedesktop.org/software/systemd/man/systemd.link.html
u20:
u22:
with
udevadm control --stop-exec-queue
the MAC address remains constant.

We likely want to fix the implementation on the CNI side. I suppose changing the order, so that the veth pair is created in the root namespace first and the device is then moved into the netns, would be a reasonable workaround.
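The proposed order of operations can be sketched with plain iproute2 commands. This is a hedged illustration of the idea, not the plugin's actual code; `pod-ns`, `veth-host`, and `veth-pod` are made-up names:

```shell
# Sketch of the proposed workaround: create the veth pair in the root
# namespace first, then move one end into the pod's netns, so udev and
# systemd-networkd have processed the link before it leaves the root ns.
demo_veth() {
  if [ "$(id -u)" -ne 0 ] || ! command -v ip >/dev/null 2>&1; then
    echo "sketch only: needs root and iproute2"
    return 0
  fi
  if ! ip netns add pod-ns 2>/dev/null; then
    echo "sketch only: cannot create a netns here"
    return 0
  fi
  # Both ends are created in the root namespace first...
  ip link add veth-host type veth peer name veth-pod
  # ...and only afterwards is the pod end moved into the pod's namespace.
  ip link set veth-pod netns pod-ns
  ip netns exec pod-ns ip link show veth-pod
  # cleanup: deleting one end of the pair removes both
  ip link del veth-host
  ip netns del pod-ns
}
demo_veth
```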