kubernetes: Upgrading docker 1.13 on nodes causes outbound container traffic to stop working
Kubernetes version (use kubectl version): v1.4.6, v1.5.1, likely many versions
Environment:
- Cloud provider or hardware configuration: Azure / Azure Container Service
- OS (e.g. from /etc/os-release): Ubuntu Xenial
- Kernel (e.g. uname -a): latest 16.04-LTS kernel
- Install tools: Cloud-Init + hyperkube
- Others:
Configuration Details:
- kubelet runs in a container
- master services run as static manifests
- kube-addon-manager runs as a static manifest
- kube-proxy runs in iptables mode via a daemonset
What happened: After upgrading to docker 1.13.0 on the nodes, outbound container traffic stops working
What you expected to happen: Outbound container traffic to work (i.e., I can reach the internet and service IPs from inside the container)
How to reproduce it (as minimally and precisely as possible): Deploy an ACS Kubernetes cluster. If the workaround has rolled out, then force-upgrade docker to 1.13 (you’ll have to remove a pin we’re setting in /etc/apt/preferences.d).
Unclear if this repros on other configurations right now.
Anything else do we need to know:
No, I just don’t know where/how to best troubleshoot this.
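For troubleshooting, a minimal check of the symptom might look like the following. The commands are illustrative only; the FORWARD-policy check reflects the cause identified later in the thread.

```sh
# Try outbound traffic from inside a throwaway pod (expected to succeed, fails after the upgrade)
kubectl run net-test --image=busybox --restart=Never -it --rm -- \
  wget -qO- -T 5 http://www.google.com

# On an affected node, check the default policy of the iptables FORWARD chain
sudo iptables -L FORWARD -n | head -1   # Docker 1.13 switches this to "policy DROP"
```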
About this issue
- Original URL: https://github.com/kubernetes/kubernetes/issues/40182
- State: closed
- Created 7 years ago
- Reactions: 26
- Comments: 47 (32 by maintainers)
Commits related to this issue
- Packages: pin docker version to 1.12.6 due to https://github.com/kubernetes/kubernetes/issues/40182 — committed to evilmartians/chef-kubernetes by Bregor 7 years ago
- ugly insecure hack to work around https://github.com/kubernetes/kubernetes/issues/40182 — committed to bryanlarsen/minikube by bryanlarsen 7 years ago
- Ensure iptables forwarding is enabled Docker 1.13 changed how it set up iptables in a way that broke forwarding. We previously got away with it because we set the ip_forward sysctl, which meant that... — committed to justinsb/kops by justinsb 7 years ago
- Added task to enable ipchains FORWARD policy to ACCEPT — committed to magic7s/ansible-kubeadm-contiv by magic7s 6 years ago
- tests: Remove iptables workaround for k8s. As this issue was already solved kubernetes/kubernetes#40182, we do not need to perform the sudo iptables -P FORWARD ACCEPT. Fixes #488 Signed-off-by: Gab... — committed to GabyCT/tests-1 by GabyCT 6 years ago
- tests: Remove iptables workaround for kubernetes tests As this issue was already solved kubernetes/kubernetes#40182, we do not need to perform the sudo iptables -P FORWARD ACCEPT. Fixes #488 Signed... — committed to GabyCT/tests-1 by GabyCT 6 years ago
Docker 1.13 changed the default iptables FORWARD policy to DROP, which produces exactly this kind of breakage.
You can change the policy back to ACCEPT (which it was in Docker 1.12 and before) by running the command sketched below on every node. You need to run it in the host network namespace, not inside a pod namespace.
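The command in question (also quoted in the commit messages above) is essentially the following; a sketch, run as root on each node:

```sh
# Reset the default policy of the FORWARD chain to ACCEPT (Docker 1.13 sets it to DROP)
sudo iptables -P FORWARD ACCEPT

# Confirm it took effect
sudo iptables -L FORWARD -n | head -1   # should now report "Chain FORWARD (policy ACCEPT)"
```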
@feiskyer generally the Linux default is to have IP forwarding off. Docker used to turn it on across the board, which was (a) unnecessary and (b) a security issue; 1.13 addressed that.
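For reference, the sysctl being discussed can be checked and set with standard Linux commands; a Kubernetes node still needs forwarding enabled, independent of what Docker does:

```sh
# Check whether IP forwarding is enabled on the node (1 = on, 0 = off)
sysctl net.ipv4.ip_forward

# Enable it explicitly rather than relying on Docker to do it
sudo sysctl -w net.ipv4.ip_forward=1
```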
Docker adds two specific rules which allow traffic off its bridge, and replies to come back:
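Roughly, those two rules look like this (the exact match options can vary between Docker versions):

```sh
# Allow replies back in to containers behind docker0 (established/related connections)
iptables -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Allow traffic originating from containers on docker0 out to anywhere else
iptables -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
```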
CNI providers which do not use the docker0 bridge need to make similar provision.

The way that services work with iptables means that we will be initiating connections in to the bridge. As such, we need to be more permissive than the docker0 lines above, as we won’t already have an established connection for incoming connections. Here is what I had to do (my bridge is called cni0):
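A sketch of those commands, reconstructed from the description that follows (substitute your own bridge name for cni0):

```sh
# Accept anything entering or leaving via the cni0 bridge, regardless of connection state
iptables -A FORWARD -i cni0 -j ACCEPT
iptables -A FORWARD -o cni0 -j ACCEPT
```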
This says: forward stuff both in and out of cni0, regardless.

It used to be recommended to start Docker with --iptables=false and --ip-masq=false in a Kubernetes deployment; would it be a viable approach to ensure these are effectively disabled using a kubelet flag? ref. https://kubernetes.io/docs/getting-started-guides/scratch/#docker
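For context, those flags are passed to the Docker daemon itself; a sketch of what that looks like (how dockerd is actually launched depends on the node’s init system):

```sh
# Start dockerd with its own iptables and masquerading rule management disabled,
# leaving NAT and forwarding rules to the network plugin / kube-proxy instead
dockerd --iptables=false --ip-masq=false
```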
@euank I’m not entirely convinced this is the responsibility of CNI plugins. It’s discussed above, and some plugins do include workarounds, but manipulating the host to allow forwarding of traffic feels squarely outside of a CNI plugin’s responsibilities as I understand them.
I think we still need a solution for this in Kubernetes.
CC @tmjd
Tested out and working
closed with https://github.com/kubernetes/kubernetes/pull/52569
We’re adding an “allow this interface” chained plugin to the CNI plugins repository. It’s a clean solution to this problem.
https://github.com/containernetworking/plugins/pull/75
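For illustration only, a chained plugin of that kind would be appended to the plugin list in a CNI network config. The plugin name "allow-forward" below is hypothetical; the real plugin and its options are in the linked PR.

```sh
# Hypothetical .conflist sketch showing where an "allow this interface" chained plugin would sit
cat <<'EOF' > /etc/cni/net.d/10-mynet.conflist
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "allow-forward" }
  ]
}
EOF
```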