kubernetes: API Server not listening on cluster service IP when there's no default gateway

Is this a BUG REPORT or FEATURE REQUEST?: /kind bug

What happened: I can’t get a Kubernetes cluster (v1.8.2) running in an air-gapped environment that does not have a default gateway defined in the OS routing table. When the kubelet starts, I see the error:

Dec 18 19:22:51 server-001 kubelet[23357]: W1218 19:22:51.967098   23357 kubelet_node_status.go:973] Failed to set some node status fields: can't get ip address of node server-001. error: No default routes.

in /var/log/syslog, and the API server is not listening on the cluster service IP, which causes failures in other pods. If I create a dummy default route, all is good: k8s comes up clean and all pods are happy.

Route table:

root@server-001:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        10.69.255.79    255.0.0.0       UG    0      0        0 p0
10.68.0.0       0.0.0.0         255.252.0.0     U     0      0        0 p0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
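Note that the table above has no 0.0.0.0/0 entry, which is exactly what the kubelet’s node-IP fallback looks for. A simplified sketch of that default-route check, in Python rather than the actual Go code (the real logic lives in k8s.io/apimachinery’s ChooseHostInterface), using the entries from the `route -n` output above:

```python
# Simplified illustration only, not the actual kubelet implementation.
# Route entries are taken from the `route -n` output above.
ROUTES = [
    # (destination, gateway, genmask, iface)
    ("10.0.0.0",   "10.69.255.79", "255.0.0.0",   "p0"),
    ("10.68.0.0",  "0.0.0.0",      "255.252.0.0", "p0"),
    ("172.17.0.0", "0.0.0.0",      "255.255.0.0", "docker0"),
]

def find_default_route(routes):
    """Return the first 0.0.0.0/0 entry (gateway, iface), or None."""
    for dest, gw, mask, iface in routes:
        if dest == "0.0.0.0" and mask == "0.0.0.0":
            return (gw, iface)
    return None  # -> kubelet logs "error: No default routes."

print(find_default_route(ROUTES))  # prints None: no default route in this table
```

With this table the check returns nothing, which is why the kubelet gives up with “No default routes.” even though the host has perfectly good connectivity to its specific networks.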

What you expected to happen: Kubernetes to work properly regardless of whether a default gateway is set.

How to reproduce it (as minimally and precisely as possible): Try to run Kubernetes on a server that has no default gateway route defined. Then curl the API server at its cluster service IP (curl -k https://172.16.0.1/) and you will get a connection refused.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): v1.8.2
  • Cloud provider or hardware configuration: bare metal, Intel® Xeon® CPU E5-2683 v4 @ 2.10GHz, 64 cores, 512GB RAM
  • OS (e.g. from /etc/os-release): Ubuntu 16.04.3 LTS
  • Kernel (e.g. uname -a): Linux server-001 4.4.0-38-generic #57-Ubuntu SMP Tue Sep 6 15:42:33 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: Installed by custom Ansible script
  • Others: Calico v2.6.2

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 6
  • Comments: 40 (6 by maintainers)

Most upvoted comments

Just ran into this as well, in an environment where nodes are connected only to specific networks without any kind of default route. It took me quite a while to debug what was going on, even though I had run into this exact same issue a couple of months ago.

Anyway, the workaround is quite simple: create any valid route for the Service network on the host(s), so that requests to it get routed (and then intercepted by kube-proxy’s iptables rules) instead of failing with ENETUNREACH, e.g.

$ modprobe dummy                          # load the dummy-interface module
$ ip link set dummy0 up                   # bring up a dummy interface
$ ip route add 10.96.0.0/12 dev dummy0    # point the Service CIDR at it

I’m wondering: given that this network should never be ‘really’ routable on the host (at least when using kube-proxy), would it make sense for kube-proxy to always create such an interface and set up the appropriate route? (I tried using a blackhole route to avoid needing the dummy0 interface, but that doesn’t work.)
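To make the failure mode concrete: the kernel does a longest-prefix route lookup before kube-proxy’s iptables rules ever see the packet, so with no matching route, connect() fails immediately with ENETUNREACH. A simplified Python sketch of that lookup (the 10.96.0.0/12 Service CIDR is from the commands above; the host route list here is hypothetical, and real kernel routing also weighs metrics and scope):

```python
import ipaddress

def route_lookup(dst, routes):
    """Longest-prefix match: return the most specific route covering dst,
    or None, which corresponds to connect() failing with ENETUNREACH."""
    addr = ipaddress.ip_address(dst)
    matching = [r for r in routes if addr in ipaddress.ip_network(r)]
    return max(matching, key=lambda r: ipaddress.ip_network(r).prefixlen,
               default=None)

host_routes = ["172.17.0.0/16"]  # hypothetical host with no default route

# Without a route covering the Service CIDR, a Service VIP is unreachable:
print(route_lookup("10.96.0.1", host_routes))                     # prints None

# After `ip route add 10.96.0.0/12 dev dummy0`, the lookup succeeds, and
# kube-proxy's DNAT rules get a chance to intercept the traffic:
print(route_lookup("10.96.0.1", host_routes + ["10.96.0.0/12"]))  # prints 10.96.0.0/12
```

The dummy route never actually carries traffic; it only has to exist so the packet makes it far enough for iptables to rewrite it.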

@kennethredler It is just my opinion, based on the customer environments we work with and the problems we have met and resolved.

If the system cannot be configured with a default gateway, explicit routes should probably be configured for the service IP range. Otherwise, users will hit the issue I mentioned above.

If there is any solution that makes k8s work without configuring a default gateway or explicit routes, I am more than happy to adopt it.