kubernetes: IPVS error log flooding with externalTrafficPolicy: Local option in Service

What happened:

IPVS floods the kernel error log when a Service uses externalTrafficPolicy: Local

$ dmesg
...
[23709.680898] IPVS: rr: TCP 192.168.0.52:80 - no destination available
[23710.709824] IPVS: rr: TCP 192.168.0.52:80 - no destination available
[23832.428700] IPVS: rr: TCP 127.0.0.1:30209 - no destination available
[23833.461818] IPVS: rr: TCP 127.0.0.1:30209 - no destination available
...

What you expected to happen:

These error messages should not be printed.

How to reproduce it (as minimally and precisely as possible):

$  kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
node01   Ready    control-plane,master   13d   v1.20.5   192.168.0.61   <none>        Ubuntu 18.04.4 LTS   5.4.0-64-generic   docker://19.3.12
node02   Ready    <none>                 13d   v1.20.5   192.168.0.62   <none>        Ubuntu 18.04.4 LTS   5.4.0-64-generic   docker://19.3.12
$ kubectl get service
NAME              TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
kubernetes        ClusterIP      10.96.0.1        <none>           443/TCP        13d
nginx-ipv4        LoadBalancer   10.96.82.45      192.168.0.52     80:30209/TCP   54m

$ kubectl get service nginx-ipv4 -o yaml
...
spec:
  clusterIP: 10.96.82.45
  clusterIPs:
  - 10.96.82.45
  externalTrafficPolicy: Local
  healthCheckNodePort: 31583
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 30209
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.168.0.52
$ kubectl get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP                NODE     NOMINATED NODE   READINESS GATES
nginx-658f4cf99f-654bc                1/1     Running   0          55m   192.167.140.91    node02   <none>           <none>
nginx-658f4cf99f-jp27p                1/1     Running   0          55m   192.167.140.92    node02   <none>           <none>
nginx-658f4cf99f-ld9th                1/1     Running   0          55m   192.167.140.82    node02   <none>           <none>
(node01) $ ipvsadm -Ln
...
TCP  192.168.0.52:80 rr
TCP  192.168.0.61:30209 rr
...

(node01) $ curl 192.168.0.52:80
(node01) $ curl localhost:30209

[23709.680898] IPVS: rr: TCP 192.168.0.52:80 - no destination available
[23710.709824] IPVS: rr: TCP 192.168.0.52:80 - no destination available
[23832.428700] IPVS: rr: TCP 127.0.0.1:30209 - no destination available
[23833.461818] IPVS: rr: TCP 127.0.0.1:30209 - no destination available

As you can see, the “nginx-ipv4” service endpoints are the nginx pods on “node02”. Because the “nginx-ipv4” service has “externalTrafficPolicy: Local” and node01 has no nginx pod, node01 has no IPVS destinations for 192.168.0.52:80 and 192.168.0.61:30209, and IPVS floods the kernel log.
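
For comparison, here is a sketch of what the same listing should show on node02, where the pods actually run (illustrative output; the real-server IPs are the pod IPs from the pod list above, and the counters are placeholders):

(node02) $ ipvsadm -Ln
...
TCP  192.168.0.52:80 rr
  -> 192.167.140.82:80            Masq    1      0          0
  -> 192.167.140.91:80            Masq    1      0          0
  -> 192.167.140.92:80            Masq    1      0          0
...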

I know this is intended. But I think kube-proxy should set filter rules in iptables for 192.168.0.52:80 and 192.168.0.61:30209 to prevent the IPVS error log flooding. Is there any plan to address this issue? A rough illustration of the idea is shown below.
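
To illustrate (a hedged sketch only, not rules kube-proxy actually installs): as far as I understand, the iptables filter INPUT chain is traversed before IPVS schedules the packet, so a REJECT rule there would answer the client immediately and no kernel message would be produced:

# Hypothetical rules for a node that has no local endpoints for this service
$ iptables -A INPUT -p tcp -d 192.168.0.52 --dport 80 -j REJECT
$ iptables -A INPUT -p tcp --dport 30209 -j REJECT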

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
    • v1.20.5
  • Cloud provider or hardware configuration:
    • my local cluster
  • OS (e.g: cat /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • Kernel (e.g. uname -a):
    • Linux node01 5.4.0-64-generic #72~18.04.1-Ubuntu SMP Fri Jan 15 14:06:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
    • kubeadm
  • Network plugin and version (if this is a network-related bug):
  • Others:

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 30 (18 by maintainers)

Most upvoted comments

Great! Thanks for testing 👍 (and for verifying that I got the build right 😄)

Adding or removing iptables rules should not be taken lightly. There is plenty of room for mistakes in cases that are hard to test, like interference with other actors that use iptables (e.g. CNI plugins), rapid updates, and restarts (upgrades).

I have not got https://github.com/kubernetes/kubernetes/pull/97081 accepted yet, but if that PR is accepted I will close this issue also. I feel pretty confident that #97081 will not introduce any bugs.

I think the extra packets can be ignored (unless you probe like crazy).

“no destination available” means the virtual server has no servers, or “real servers” in IPVS terms.
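
A quick way to see this with ipvsadm (a sketch: listing a single virtual service on node01 shows no “->” real-server lines beneath it):

(node01) $ ipvsadm -Ln -t 192.168.0.52:80
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.52:80 rr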

I will check how to build a kube-proxy image, but I have never done it so it may take some time.
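
For reference, a rough sketch of one way to do it from a kubernetes/kubernetes checkout (assuming Docker is available; the exact make targets can vary between releases):

$ make WHAT=cmd/kube-proxy                                    # build just the binary
$ KUBE_BUILD_PLATFORMS=linux/amd64 make quick-release-images  # build container images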

If I apply this PR, how are the iptables/IPVS rules changed?

The iptables rules are not changed, but you do get real servers defined on all nodes, not just on the ones where server pods are running. That will prevent the logging.

The problem is that these are kernel messages, not K8s messages. I will check if there is some configuration in IPVS to suppress them or to set the syslog level for them.
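
In the meantime, one knob that may help (my assumption: these IPVS scheduler errors go through the kernel's net_ratelimit(), so the net.core sysctls only throttle how often they are printed rather than suppressing them entirely):

$ sysctl -w net.core.message_cost=30     # default 5; a higher cost means fewer messages
$ sysctl -w net.core.message_burst=10    # default 10; messages allowed in a burst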