kubernetes: iptables-restore invalid option in kube-proxy
/kind bug
/sig network
When the kube-proxy service starts, /var/log/messages shows:

```
I0129 19:40:03.709536 21148 iptables.go:381] running iptables-restore [-w5 -T nat --noflush --counters]
E0129 19:40:03.712093 21148 proxier.go:792] Failed to execute iptables-restore for nat: exit status 1 (iptables-restore: invalid option -- '5'
iptables-restore: line 7 failed)
```
Running the command by hand reproduces the error:

```
iptables-restore -w5
iptables-restore: invalid option -- '5'
```

Running the command again with the flag and its value split by a space produces no error output:

```
iptables-restore -w 5
```

I'm worried that iptables-restore will not execute because of this error. Please fix it, thanks a lot.
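The behaviour above matches classic getopt handling when `-w` is declared as taking no argument: in `-w5` the trailing `5` is parsed as another (unknown) short option, while in `-w 5` the `5` is left as a positional argument. A minimal Go sketch of that parsing rule (an illustration, not the actual iptables source):

```go
package main

import "fmt"

// parseShortOpts mimics getopt for a flag set where 'w' takes no value,
// which is how the failing iptables-restore builds appear to treat -w.
func parseShortOpts(args []string) (opts []byte, positional []string, err error) {
	for _, a := range args {
		if len(a) < 2 || a[0] != '-' {
			positional = append(positional, a)
			continue
		}
		// Each character after '-' is an independent short option.
		for _, c := range a[1:] {
			if c == 'w' {
				opts = append(opts, byte(c))
			} else {
				return nil, nil, fmt.Errorf("invalid option -- '%c'", c)
			}
		}
	}
	return opts, positional, nil
}

func main() {
	if _, _, err := parseShortOpts([]string{"-w5"}); err != nil {
		fmt.Println(err) // invalid option -- '5'
	}
	opts, pos, _ := parseShortOpts([]string{"-w", "5"})
	fmt.Printf("opts=%q positional=%v\n", opts, pos) // opts="w" positional=[5]
}
```

This is why the glued form `-w5` reproduces the exact error message from the kube-proxy log, while the split form appears to "work" (the `5` simply becomes a non-option argument).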
Environment:
- Kubernetes version (use `kubectl version`): v1.9.2
- Cloud provider or hardware configuration: None
- OS (e.g. from /etc/os-release): CentOS Linux release 7.4.1708 (Core)
- Kernel (e.g. `uname -a`): 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: binary install
- Others:
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 1
- Comments: 32 (14 by maintainers)
I have exactly the same issue with Kubernetes 1.11.1 and ipvs.
The log is only visible during kube-proxy startup.
But there is one strange thing. I am pretty sure that I am using ipvs, yet based on the log message the logs somehow come from the iptables proxier: https://github.com/kubernetes/kubernetes/blob/v1.11.1/pkg/proxy/iptables/proxier.go#L423
The ipvs proxier has similar output, but at a different line: https://github.com/kubernetes/kubernetes/blob/v1.11.1/pkg/proxy/ipvs/proxier.go#L1139
How come the iptables proxier is executed when ipvs is configured?
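One likely explanation (hedged, based on how kube-proxy of that era behaved): at startup kube-proxy verifies that IPVS can actually be used (kernel modules such as `ip_vs` loaded, `ipset` available) and quietly falls back to the iptables proxier when that check fails, which would produce iptables-proxier log lines even under `--proxy-mode=ipvs`. A simplified sketch of that decision logic, with hypothetical names (not the real kube-proxy functions):

```go
package main

import "fmt"

// canUseIPVS is a stand-in for kube-proxy's real capability check
// (kernel modules loaded, ipset present); the name is hypothetical.
func canUseIPVS(modulesLoaded, ipsetPresent bool) bool {
	return modulesLoaded && ipsetPresent
}

// chooseProxyMode mirrors the fallback behaviour: ipvs is requested,
// but iptables is chosen when the prerequisites are missing.
func chooseProxyMode(requested string, modulesLoaded, ipsetPresent bool) string {
	if requested == "ipvs" && !canUseIPVS(modulesLoaded, ipsetPresent) {
		return "iptables" // silent fallback; only startup logs hint at it
	}
	return requested
}

func main() {
	fmt.Println(chooseProxyMode("ipvs", false, true)) // iptables
	fmt.Println(chooseProxyMode("ipvs", true, true))  // ipvs
}
```

If this is what happened, checking the kube-proxy startup logs for the capability warning (and `lsmod | grep ip_vs` on the node) should confirm it.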
For anyone finding this thread, the workaround below may work for now, at least on CentOS.
As a workaround I downgraded iptables:
and added iptables-* to the exclude list in /etc/yum.conf.
This is NOT a good fix, as it rolls iptables back four releases, but until kube-proxy is fixed it may be the only viable option for some.
I don't have experience in this area, but if you run something like:
you’ll be able to see the kubelet spawn the iptables-restore process, and see the input that it writes to it. If it’s writing directly to iptables-restore you might be able to limit the output by running:
This advice is more generally about debugging the invocation of processes, and is here only to provide some guidance for debugging this further. See `man strace` for details. (The logs will be in /tmp/strace.log.)
From journalctl:
From `strace -p ${PID} -f -s 8096 -e process`:
To me, this seems pretty definitive: that block is wrong.
PR to fix:
https://github.com/kubernetes/kubernetes/pull/59181
But it’s not complete. There seem to be issues more generally with that chunk.
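The general direction of the fix is to stop gluing the flag and its value into a single argv token, and to emit the wait value only when the local iptables-restore is known to support it. A hedged sketch of that idea (the helper name and the capability-probe parameter are hypothetical, not the merged PR's code):

```go
package main

import "fmt"

// restoreWaitArgs builds the wait flags for an iptables-restore invocation.
// Passing the seconds as its own argv element ("-w", "5") avoids the
// attached form ("-w5") that some builds reject with "invalid option -- '5'".
// supportsWaitSeconds stands for the result of a hypothetical probe of the
// installed binary; when the binary takes no wait value, emit nothing.
func restoreWaitArgs(supportsWaitSeconds bool, seconds int) []string {
	if !supportsWaitSeconds {
		return nil
	}
	return []string{"-w", fmt.Sprint(seconds)}
}

func main() {
	// Assemble the same invocation seen in the kube-proxy log, but with
	// the wait flag and its value as separate tokens.
	args := append(restoreWaitArgs(true, 5), "-T", "nat", "--noflush", "--counters")
	fmt.Println(args)
}
```

Separate tokens are safe with every getopt variant, whereas the attached form only works when the option is declared as taking a (possibly optional) argument, which is exactly what varies across iptables releases here.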