kubernetes: hyperkube kube-proxy: Chain 'KUBE-MARK-DROP' does not exist
What happened: kube-proxy v1.16.11 fails to execute iptables-restore:
I0618 15:35:56.843819       1 service.go:373] Adding new service port "kube-system/kube-dns:dns" at 10.223.0.10:53/UDP
I0618 15:35:56.843832       1 service.go:373] Adding new service port "kube-system/calico-typha:calico-typha" at 10.223.28.54:5473/TCP
I0618 15:35:56.843844       1 service.go:373] Adding new service port "default/kubernetes:https" at 10.223.0.1:443/TCP
I0618 15:35:56.843857       1 service.go:373] Adding new service port "kube-system/allow-udp-egress:dummy" at 10.223.112.137:1234/UDP
I0618 15:35:56.843876       1 service.go:373] Adding new service port "kube-system/metrics-server:" at 10.223.80.184:443/TCP
I0618 15:35:56.843890       1 service.go:373] Adding new service port "kube-system/vpn-shoot:openvpn" at 10.223.86.14:4314/TCP
I0618 15:35:57.102753       1 proxier.go:1519] Opened local port "nodePort for kube-system/vpn-shoot:openvpn" (:32175/tcp)
I0618 15:35:57.102963       1 proxier.go:1519] Opened local port "nodePort for kube-system/allow-udp-egress:dummy" (:31061/udp)
I0618 15:42:28.998132       1 proxier.go:700] Stale udp service kube-system/kube-dns:dns -> 10.223.0.10
E0618 15:42:29.040884       1 proxier.go:1418] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.8.2 (nf_tables): Chain 'KUBE-MARK-DROP' does not exist
Error occurred at line: 49
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
I0618 15:42:29.040966       1 proxier.go:1421] Closing local ports after iptables-restore failure
E0618 15:42:59.080751       1 proxier.go:1418] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.8.2 (nf_tables): Chain 'KUBE-MARK-DROP' does not exist
Error occurred at line: 49
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
I0618 15:42:59.080904       1 proxier.go:1421] Closing local ports after iptables-restore failure
E0618 15:43:29.122880       1 proxier.go:1418] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.8.2 (nf_tables): Chain 'KUBE-MARK-DROP' does not exist
Error occurred at line: 70
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
I0618 15:43:29.123121       1 proxier.go:1421] Closing local ports after iptables-restore failure
E0618 15:43:59.165741       1 proxier.go:1418] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.8.2 (nf_tables): Chain 'KUBE-MARK-DROP' does not exist
Error occurred at line: 44
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
I0618 15:43:59.165841       1 proxier.go:1421] Closing local ports after iptables-restore failure
E0618 15:44:29.215236       1 proxier.go:1418] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.8.2 (nf_tables): Chain 'KUBE-MARK-DROP' does not exist
Error occurred at line: 49
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
I0618 15:44:29.215337       1 proxier.go:1421] Closing local ports after iptables-restore failure
E0618 15:44:59.259447       1 proxier.go:1418] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.8.2 (nf_tables): Chain 'KUBE-MARK-DROP' does not exist
The same setup works perfectly fine with kube-proxy v1.16.10.
What you expected to happen: kube-proxy to be able to execute iptables-restore.
How to reproduce it (as minimally and precisely as possible):
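A plausible minimal reproduction outside of kube-proxy (an assumption based on the error above, not something confirmed in this issue): kube-proxy runs iptables-restore with `--noflush` and expects `KUBE-MARK-DROP` to already exist in the backend it talks to. Feeding the nf_tables build a partial restore payload that jumps to that chain, on a node where the chain was only ever created via the legacy backend, should fail the same way:

```shell
# Illustrative only: run as root on a node with iptables v1.8.x (nf_tables
# backend). The payload references KUBE-MARK-DROP without declaring it,
# mimicking kube-proxy's partial (--noflush) restore when kubelet created
# that chain through the other (legacy) backend.
cat <<'EOF' | iptables-nft-restore --noflush
*nat
-A PREROUTING -j KUBE-MARK-DROP
COMMIT
EOF
# Expected (not verified here) to fail with an error like:
#   iptables-restore v1.8.x (nf_tables): Chain 'KUBE-MARK-DROP' does not exist
```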
Anything else we need to know?:
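One thing worth checking on an affected node (a hedged diagnostic sketch, not part of the original report; it assumes the split `iptables-legacy-save`/`iptables-nft-save` binaries are available) is which backend actually holds the KUBE-* chains versus which backend kube-proxy's bundled iptables uses:

```shell
# On the host: count how many rules mention KUBE-MARK-DROP per backend.
# Chains created by kubelet should show up in exactly one of the two.
iptables-legacy-save 2>/dev/null | grep -c 'KUBE-MARK-DROP'
iptables-nft-save 2>/dev/null | grep -c 'KUBE-MARK-DROP'

# Inside the hyperkube/kube-proxy container: the version banner says which
# mode the bundled binary was built for ("legacy" vs "nf_tables").
iptables --version
```

If the host counts show rules only under legacy while the container's binary reports `(nf_tables)`, the two are writing to different rule sets, which matches the failure above.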
Environment:
- Kubernetes version (use `kubectl version`):
$ k version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-21T14:51:23Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.11", GitCommit:"436254b798f772bcb8e67dcfe122e46500eeb254", GitTreeState:"clean", BuildDate:"2020-06-17T11:41:28Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
- OS (e.g. `cat /etc/os-release`):
# cat /etc/os-release
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=2303.3.0
VERSION_ID=2303.3.0
BUILD_ID=2019-12-02-2049
PRETTY_NAME="Container Linux by CoreOS 2303.3.0 (Rhyolite)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
- Kernel (e.g. `uname -a`):
# uname -a
Linux foo 4.19.86-coreos #1 SMP Mon Dec 2 20:13:38 -00 2019 x86_64 Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz GenuineIntel GNU/Linux
- Install tools:
- Network plugin and version (if this is a network-related bug): Calico v3.13.4
- Others:
About this issue
- State: closed
- Created 4 years ago
- Reactions: 1
- Comments: 22 (19 by maintainers)
We’ve published v1.18.5-rc.1, v1.17.8-rc.1, and v1.16.12-rc.1, which include new hyperkube images.
Can you test these and let us know if this resolves your issue? We’re holding off on any patch releases until we get feedback here, so please report back when you can.
cc: @kubernetes/release-engineering
I checked now, `k8s.gcr.io/hyperkube:v1.16.12-rc.1` fixes this issue (caused by and reproducible with `k8s.gcr.io/hyperkube:v1.16.11`). Thank you!

I think the problem is that the hyperkube image, unlike the “real” kube-proxy image, doesn’t have the iptables auto-detection logic, so `hyperkube kube-proxy` just uses whatever the default version of iptables is in its container. So when the image got updated to the new Debian base, it flipped from always-use-iptables-legacy to always-use-iptables-nft.
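The auto-detection in question can be approximated as a heuristic: look at which backend already holds kubelet/kube-proxy chains on the node, and make the container's iptables commands use that one. A minimal sketch, where the function name and the rule-count heuristic are illustrative rather than the actual wrapper script shipped in the image:

```shell
#!/bin/sh
# Hypothetical sketch of an iptables backend chooser. Given the number of
# KUBE-* rules visible to each backend, prefer the backend the node is
# already using; fall back to nft on a tie (the modern default).
detect_backend() {
  legacy_count=$1  # e.g. $(iptables-legacy-save 2>/dev/null | grep -c KUBE-)
  nft_count=$2     # e.g. $(iptables-nft-save 2>/dev/null | grep -c KUBE-)
  if [ "$legacy_count" -gt "$nft_count" ]; then
    echo legacy
  else
    echo nft
  fi
}

detect_backend 12 0   # node with existing legacy rules -> prints "legacy"
```

On a Container Linux node like the one in this report, kubelet's chains live in the legacy tables, so such a check would select legacy instead of the image's hard-wired nft default.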