kubernetes: hairpin not set by kubelet with CNI plugin
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): kube-proxy clusterIP target hang
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Kubernetes version (use kubectl version): 1.6.2
Environment:
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): coreos-1235.6.0
- Kernel (e.g. uname -a): 4.7.3-coreos-r2
- Install tools: cloud-init direct install
- Others:
What happened:
Try creating a service with a cluster IP, then access the service IP and port from within one of the service's target pods. It hangs. As far as I can tell, the SYN goes out, but that is it.
Note that the same is true if you use type: NodePort and access the node IP and port.
What you expected to happen:
Should be able to access the service from any pod, whether one of the targets or a different one.
How to reproduce it (as minimally and precisely as possible):
Use the following simple service and deployment:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 30100
  selector:
    instance: nginx
---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      instance: nginx
  template:
    metadata:
      labels:
        name: nginx
        instance: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
          name: http
Working pod:
exec into any pod other than the one created as part of the deployment:
curl <nginx_podIP>:80: works
curl <nginx_serviceIP>:80: works
Failing pod
exec into the pod created by the above deployment:
curl <nginx_podIP>:80: works
curl localhost:80: works
curl <nginx_serviceIP>:80: FAILS
Anything else we need to know:
It doesn’t matter if you use the same port for the service as for the pod or a different one, same name or different name, etc.
I suspect iptables weirdness, but…
Running 1.6.2 with weave networking. I was concerned it might be lack of --cluster-cidr on the kube-proxy, but that makes no difference.
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Comments: 48 (47 by maintainers)
Links to this issue
Commits related to this issue
- Create hairpin-check.yaml https://github.com/kubernetes/kubernetes/issues/45790 — committed to appscodelabs/tasty-kube by tamalsaha 7 years ago
/close
True. But do we want to make it too restrictive? Would we not want a more open approach that says, “you must support this or indicate clearly that you do not”?
In that case, you can release a network plugin that does not support it, and let users decide whether the trade-offs of a particular CNI plugin are worth it.
My searches say no, not yet.
In my view this change should have gone into the release notes (i.e. “stopped setting hairpin mode for non-kubenet veths”)
@deitch 1 - 4 sound accurate to me, but somebody should double-check 2 because that was only my vague recollection of what hairpin was meant to solve, and I forget if that is accurate or not. But other than that, yes.
@bboreham it might be more consistent, but it really does live in the network plugins IMHO and I don’t really think Kube should have anything to do with it. Ideally 😃
I tried disabling CRI on kubelet and that brought the hairpin-veth mode back.
Looking at the code, I see hairpin.SetUpContainerPid called in dockertools/docker_manager.go, but no similar call in dockershim/docker_service.go. So I conclude that the CRI code just doesn't have this functionality.