k3s: ubuntu 18.04 iptables errors

I have tried to build a two-node Ubuntu 18.04 setup. The server is running on virtnuc1 and the agent on virtnuc2:

kubectl get nodes                                                                                                                                                                                       
NAME       STATUS     ROLES    AGE   VERSION
virtnuc1   Ready      <none>   51m   v1.13.3-k3s.6
virtnuc2   NotReady   <none>   46m   v1.13.3-k3s.6

Describe the bug

There are a lot of iptables errors on the virtnuc2 node:

Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.680087    1682 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.712342    1682 flannel.go:75] Wrote subnet file to /run/flannel/subnet.env
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.712360    1682 flannel.go:79] Running backend.
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.712366    1682 vxlan_network.go:60] watching for new subnet leases
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.785155    1682 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.785200    1682 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.785560    1682 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.785682    1682 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.786118    1682 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.786382    1682 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.787100    1682 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.787297    1682 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.788943    1682 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.789247    1682 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.791257    1682 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.792596    1682 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.912990    1682 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN
Mar 02 16:17:23 virtnuc2 k3s[1682]: I0302 16:17:23.914002    1682 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Mar 02 16:17:23 virtnuc2 k3s[1682]: E0302 16:17:23.978980    1682 proxier.go:232] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-HOST'
Mar 02 16:17:23 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:23 virtnuc2 k3s[1682]: E0302 16:17:23.979889    1682 proxier.go:238] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-CONTAINER
Mar 02 16:17:23 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:23 virtnuc2 k3s[1682]: E0302 16:17:23.996596    1682 proxier.go:246] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-HOST'
Mar 02 16:17:23 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:23 virtnuc2 k3s[1682]: E0302 16:17:23.997532    1682 proxier.go:252] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-CONTAINE
Mar 02 16:17:23 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:23 virtnuc2 k3s[1682]: E0302 16:17:23.998147    1682 proxier.go:259] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-NON-LOCA
Mar 02 16:17:23 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:24 virtnuc2 k3s[1682]: E0302 16:17:24.000540    1682 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-S
Mar 02 16:17:24 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:24 virtnuc2 k3s[1682]: E0302 16:17:24.001206    1682 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-S
Mar 02 16:17:24 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:24 virtnuc2 k3s[1682]: E0302 16:17:24.001811    1682 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-P
Mar 02 16:17:24 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:24 virtnuc2 k3s[1682]: E0302 16:17:24.002428    1682 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-F
Mar 02 16:17:24 virtnuc2 k3s[1682]: Try `iptables -h' or 'iptables --help' for more information.
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.347379    1682 server.go:464] Version: v1.13.3-k3s.6
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.354142    1682 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.354252    1682 conntrack.go:52] Setting nf_conntrack_max to 131072
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.368281    1682 conntrack.go:83] Setting conntrack hashsize to 32768
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377299    1682 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377495    1682 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377782    1682 config.go:102] Starting endpoints config controller
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377795    1682 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377813    1682 config.go:202] Starting service config controller
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.377817    1682 controller_utils.go:1027] Waiting for caches to sync for service config controller
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.477919    1682 controller_utils.go:1034] Caches are synced for endpoints config controller
Mar 02 16:17:24 virtnuc2 k3s[1682]: I0302 16:17:24.477922    1682 controller_utils.go:1034] Caches are synced for service config controller
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.736050    1682 remote_runtime.go:173] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[stri
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.736130    1682 kuberuntime_sandbox.go:200] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.736154    1682 kubelet_pods.go:1005] Error listing containers: &status.statusError{Code:4, Message:"context deadline exceeded", Details:[]*any.Any(nil)}
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.736173    1682 kubelet.go:1903] Failed cleaning pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.788788    1682 remote_runtime.go:173] ListPodSandbox with filter nil from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline ex
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.789058    1682 kuberuntime_sandbox.go:200] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:19:24 virtnuc2 k3s[1682]: E0302 16:19:24.789074    1682 generic.go:203] GenericPLEG: Unable to retrieve pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:19:32 virtnuc2 k3s[1682]: E0302 16:19:32.902644    1682 remote_runtime.go:173] ListPodSandbox with filter &PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{},} from runtime service faile
Mar 02 16:19:32 virtnuc2 k3s[1682]: E0302 16:19:32.903204    1682 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to list pod stats: failed to list all pod sandboxes: rpc error: c
Mar 02 16:20:23 virtnuc2 k3s[1682]: E0302 16:20:23.151006    1682 remote_runtime.go:173] ListPodSandbox with filter nil from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline ex
Mar 02 16:20:23 virtnuc2 k3s[1682]: E0302 16:20:23.151068    1682 kuberuntime_sandbox.go:200] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:20:23 virtnuc2 k3s[1682]: E0302 16:20:23.151082    1682 kubelet.go:1201] Container garbage collection failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:20:33 virtnuc2 k3s[1682]: I0302 16:20:33.093834    1682 setters.go:421] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-02 16:20:33.09380591 +0000 UTC m=+192.030929943 LastTr
Mar 02 16:21:23 virtnuc2 k3s[1682]: E0302 16:21:23.789257    1682 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:21:23 virtnuc2 k3s[1682]: E0302 16:21:23.789330    1682 kuberuntime_sandbox.go:58] CreatePodSandbox for pod "tiller-deploy-6cf89f5895-6x2f2_kube-system(85c66c3a-3d02-11e9-b9c5-080027905085)" failed: r
Mar 02 16:21:23 virtnuc2 k3s[1682]: E0302 16:21:23.789345    1682 kuberuntime_manager.go:677] createPodSandbox for pod "tiller-deploy-6cf89f5895-6x2f2_kube-system(85c66c3a-3d02-11e9-b9c5-080027905085)" failed: 
Mar 02 16:21:23 virtnuc2 k3s[1682]: E0302 16:21:23.789435    1682 pod_workers.go:190] Error syncing pod 85c66c3a-3d02-11e9-b9c5-080027905085 ("tiller-deploy-6cf89f5895-6x2f2_kube-system(85c66c3a-3d02-11e9-b9c5-
Mar 02 16:22:33 virtnuc2 k3s[1682]: I0302 16:22:33.241102    1682 setters.go:421] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-02 16:22:33.241062781 +0000 UTC m=+312.178186815 LastT
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.735964    1682 remote_runtime.go:173] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[stri
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.736051    1682 kuberuntime_sandbox.go:200] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.736062    1682 kubelet_pods.go:1021] Error listing containers: &status.statusError{Code:4, Message:"context deadline exceeded", Details:[]*any.Any(nil)}
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.736077    1682 kubelet.go:1903] Failed cleaning pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:23:24 virtnuc2 k3s[1682]: I0302 16:23:24.736094    1682 kubelet.go:1752] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m58.946925001s ago; threshold is 3m0s]
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.792416    1682 remote_runtime.go:173] ListPodSandbox with filter nil from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline ex
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.792466    1682 kuberuntime_sandbox.go:200] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 02 16:23:24 virtnuc2 k3s[1682]: E0302 16:23:24.792477    1682 generic.go:203] GenericPLEG: Unable to retrieve pods: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Expected behavior

The second node should run like the first node, without iptables errors. I am not sure if the other errors are related…

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 2
  • Comments: 19 (6 by maintainers)

Most upvoted comments

I’m hitting this in Jan 2021 with a new install of k3s “stable” on a Raspberry Pi running “buster”. Then I tried “latest”: same issue.

 pi@raspberrypi:~ $ sudo k3s  check-config

Verifying binaries in /var/lib/rancher/k3s/data/c8ca2ef57aa8ef0951f3d6c5aafbe2354ef69054c8011f5859283a9d282e4b75/bin:
- sha256sum: good
- links: good

System:
- /usr/sbin iptables v1.8.2 (nf_tables): should be older than v1.8.0 or in legacy mode (fail)
- swap: should be disabled
- routes: ok

…

pi@raspberrypi:~ $ cat /etc/apt/sources.list | grep deb\ 
deb http://raspbian.raspberrypi.org/raspbian/ buster main contrib non-free rpi
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 5.4.79-v7l+ #1373 SMP Mon Nov 23 13:27:40 GMT 2020 armv7l GNU/Linux
pi@raspberrypi:~ $ 

I have a Debian Buster installation and I see no iptables rules at all. It has iptables version 1.8.2, and it seems broken for the 0.9.1 release of k3s?

Please check the output of iptables --version, @joakimr-axis. The nf_tables backend causes issues with newer versions of iptables such as v1.8; it should be in legacy mode, or an older version should be used. Also see https://github.com/rancher/k3s/issues/703
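For reference, the usual workaround on Debian/Raspbian Buster is to switch iptables back to the legacy backend (a sketch, not an official fix; the update-alternatives paths below are the Buster defaults, so verify them on your system). The backend can be read off the `iptables --version` banner, which v1.8+ tags with “(nf_tables)” or “(legacy)”:

```shell
# Classify an iptables --version banner; v1.8+ appends the backend name.
backend() {
  case "$1" in
    *nf_tables*) echo nft ;;       # breaks k3s of this vintage
    *legacy*)    echo legacy ;;    # expected to work
    *)           echo "pre-1.8" ;; # no backend tag before v1.8
  esac
}

# On a real node: backend "$(iptables --version)"
backend "iptables v1.8.2 (nf_tables)"   # the failing Raspbian case above

# Workaround on Debian/Raspbian Buster (run manually, then reboot):
#   sudo update-alternatives --set iptables  /usr/sbin/iptables-legacy
#   sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```

The `backend` call prints `nft` for the Raspbian banner quoted in the comment above, matching the `(fail)` line in the k3s check-config output.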