kubernetes: kube-proxy in ipvs mode does not work correctly with load balancers

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened: I tried IPVS mode in kube-proxy instead of iptables. It works fine with all service types except LoadBalancer. I am using MetalLB as the load balancer and I cannot make it work. In iptables mode it works perfectly; I just do not want to go back to it from IPVS.

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
  • MetalLB

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 78 (44 by maintainers)

Most upvoted comments

BTW, setting ARP off on the kube dummy interface is actually not enough. The system then begins replying to ARP requests with the MAC of any other available interface. Solving this requires an additional step:

echo 1 > /proc/sys/net/ipv4/conf/${ANY_INTERFACE_WITH_A_CARRIER}/arp_ignore

This means the interface replies only if the target IP matches its own IP. With the default value 0, the system replies if the target IP matches any IP configured on the system, including the kube dummy interface.
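Put together, the fix can be sketched as below. The interface name eth0 is an assumption standing in for any interface with a carrier, and arp_announce=2 is a commonly paired setting in IPVS/LVS setups rather than something stated above:

```shell
# Sketch under assumptions: eth0 stands in for any interface with a carrier.

# arp_ignore=1 -> reply to ARP only when the target IP is configured on the
# interface that received the request, so VIPs bound to kube-ipvs0 stay silent.
echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore

# arp_announce=2 (a setting commonly paired with arp_ignore=1, not mentioned
# in this thread) -> when sending ARP requests, prefer a source IP that
# belongs to the outgoing interface.
echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
```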

It doesn’t, unfortunately. I just tested this by loading a v1.11 test build of kube-proxy into my bare-metal cluster and switching kube-proxy to IPVS mode, and the LoadBalancer service IPs are not programmed on kube-ipvs0, only ClusterIP addresses are.

ipvsadm shows that there are rules for loadBalancer and externalIPs, but the IPs are not present in ip addr show kube-ipvs0, so IPVS never handles the traffic. This is with image gs://kubernetes-release-dev/ci/v1.11.0-alpha.0.2614+3ed4355f431dd5-bazel/bin/linux/amd64/kube-proxy.tar , which AFAICT is the very latest dev build of kube-proxy.
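A quick way to check for this symptom on a node can be sketched as follows. The interface and VIP below are illustrative placeholders; on an affected node you would use kube-ipvs0 and your LoadBalancer ingress IP:

```shell
#!/bin/sh
# Sketch with placeholder values: report whether an address is actually
# programmed on an interface.
vip_present() {
  iface="$1"; vip="$2"
  # -o prints one line per address; -F matches the IP string literally.
  ip -o addr show dev "$iface" 2>/dev/null | grep -qwF "$vip"
}

# 203.0.113.10 is a documentation-range placeholder, not a real service IP.
if vip_present kube-ipvs0 203.0.113.10; then
  echo "VIP is bound; IPVS can accept traffic for it"
else
  echo "VIP is missing from the device, so IPVS never sees the packets"
fi
```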

@kubernetes/sig-network-bugs

I’ve just tested on a kubeadm 1.9.3 cluster, with IPVS mode enabled on kube-proxy. Both externalIPs and status.loadBalancer.ingress[].ip seem to be ignored by kube-proxy in IPVS mode, so external traffic is completely unrouteable.

In contrast, kube-proxy in iptables mode creates DNAT/SNAT rules for external and loadbalancer IPs.

This should be a blocker for IPVS mode GA, because it will break load-balancer implementations in all major clouds, as well as various bare-metal implementations like MetalLB and custom solutions relying on externalIPs.

@bamb00 I have switched to kube-router and removed kube-proxy completely. kube-router gives you an IPVS proxy and routing in a single piece of software.

@danderson you cannot just add the load balancer IP to kube-ipvs0. In that case every node in the cluster will respond to ARP requests for it, but only MetalLB should do that. If we add the IP to kube-ipvs0, we must somehow prevent Linux from responding to ARP requests on that IP…
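A manual workaround along those lines could be sketched as below. This is illustrative only, not a fix the thread endorses; the VIP 203.0.113.10 is a placeholder, and the commands require root on the node:

```shell
# Illustrative sketch only; 203.0.113.10 is a placeholder VIP.

# 1. Bind the LoadBalancer IP to the IPVS dummy device so the kernel
#    accepts and load-balances traffic addressed to it.
ip addr add 203.0.113.10/32 dev kube-ipvs0

# 2. Stop every interface from answering ARP for IPs it does not own,
#    so that only MetalLB's elected speaker attracts the traffic.
for conf in /proc/sys/net/ipv4/conf/*/arp_ignore; do
  echo 1 > "$conf"
done
```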