kubernetes: Connections to service ClusterIP with no endpoints are not rejected
What happened: When making a TCP connection to a service’s ClusterIP while the service has no endpoints, the connection remains in the SYN_SENT state until an external timeout occurs (possibly indefinitely).
What you expected to happen: This connection should be immediately rejected and consequently closed on the client side.
How to reproduce it (as minimally and precisely as possible): I am not certain which details of my environment cause this behavior, but it is described below in as much detail as I can provide. Beyond that, I simply create a Service definition as follows:
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8012
  selector:
    fake-selector: foo
  type: ClusterIP
I then curl this service’s ClusterIP from another pod:
# curl test-svc.default.svc.cluster.local
And observe the connection hang for an extended period.
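To bound the hang and inspect the client-side socket while it waits, commands like the following can be run from the client pod (illustrative; the 5-second timeout is my choice, not from the report):

# curl -v --connect-timeout 5 test-svc.default.svc.cluster.local
# ss -tn state syn-sent

While curl waits, ss shows the socket stuck in SYN-SENT; if kube-proxy’s REJECT rule were taking effect, curl would instead fail immediately with connection refused.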
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version): 1.14.3
- Cloud provider or hardware configuration:
- OS (e.g: cat /etc/os-release): Debian 9
- Kernel (e.g. uname -a): Linux dev 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1 (2019-04-12) x86_64 GNU/Linux
- Install tools: Kubeadm 1.14.3
- Network plugin and version (if this is a network-related bug): Calico (https://docs.projectcalico.org/v3.7/manifests/calico.yaml)
- Others:
@caseydavenport is working on a fix in Calico here: https://github.com/projectcalico/felix/pull/2417
@hakman I am slightly wary of changing the insert mode to “append” as a final solution to this: it opens up a window for other iptables users to bypass network policy that would otherwise be enforced by Calico, which isn’t great. In terms of rule ordering, putting the CALI rules after the KUBE-* rules looks fine to me, because the KUBE-* rules only act on connections that have already gone through iptables once. The issue, rather, is that other iptables users besides kube-proxy could then bypass policy by inserting their own rules.
I think an ideal solution would be to make Calico service aware (or at least kube-proxy aware) so that it could reject these connection attempts in its own rules, but if this is really causing a problem for you then using “append” mode is OK so long as you’re aware of the above risk.
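For background (my illustration, with chain placement simplified and a placeholder ClusterIP; not output from this cluster): kube-proxy rejects traffic to services with no endpoints via REJECT rules in its KUBE-SERVICES chain in the filter table, so in “insert” mode the ordering looks roughly like:

# Calico's jump, inserted at the top of the chain, is evaluated first:
-A FORWARD -j cali-FORWARD
# kube-proxy's jump is only reached afterwards:
-A FORWARD -j KUBE-SERVICES
# Inside KUBE-SERVICES, an endpoint-less service gets a reject rule like:
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m tcp --dport 80 -m comment --comment "default/test-svc:http has no endpoints" -j REJECT --reject-with icmp-port-unreachable

With “append” mode the cali-FORWARD jump lands after the KUBE-SERVICES jump, so the REJECT fires first, at the cost of the bypass risk described above.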
Try adding the following in the env for the calico-node container:
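The exact snippet is elided above; given the “append” discussion, presumably it is Felix’s chain insert mode setting. A sketch against the calico-node DaemonSet (env var name per Calico’s Felix configuration reference):

env:
  # Append Calico's chains instead of inserting them at the top, so
  # kube-proxy's reject rules for endpoint-less services run first.
  - name: FELIX_CHAININSERTMODE
    value: "Append"

As cautioned above, append mode lets other iptables users insert rules ahead of Calico’s policy chains, so treat it as a workaround rather than a fix.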