coredns: HINFO: unreachable backend: read udp 10.200.0.9:46159->183.60.83.19:53: i/o timeout

I am using Kubernetes v1.12; my system is Ubuntu 16.

I used the following commands to create the DNS resources.

wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed

wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
bash deploy.sh -i 10.32.0.10 -r "10.32.0.0/24" -s -t coredns.yaml.sed | kubectl apply -f -
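
(A side note, not from the original post: rendering the manifest to a file first makes it easy to see which upstream resolver the generated Corefile will hand non-cluster names to before applying it:)

bash deploy.sh -i 10.32.0.10 -r "10.32.0.0/24" -s -t coredns.yaml.sed > coredns.yaml
grep -nE 'forward|proxy' coredns.yaml   # the upstream(s) CoreDNS sends non-cluster queries to
kubectl apply -f coredns.yaml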

After creating the CoreDNS resources, I checked their status:

  1. Check the CoreDNS service:
kubectl get svc -n kube-system
Screenshot at Mar 14 11-46-45
  2. Check the CoreDNS pod endpoints:
kubectl get ep -n kube-system
Screenshot at Mar 14 11-50-51
  3. Check my node's DNS config:
cat /etc/resolv.conf
Screenshot at Mar 14 13-35-28
  4. Check the CoreDNS pod logs (the command is in the note after this list):
Screenshot at Mar 14 13-37-41
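
(The command for step 4 is not shown in the original post; with the standard coredns/deployment manifest the pods carry the k8s-app=kube-dns label, so the logs can be pulled roughly like this:)

kubectl logs -n kube-system -l k8s-app=kube-dns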

I found that the CoreDNS pod IPs cannot connect to the node's DNS server IP address.
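
A quick way to reproduce the symptom outside of CoreDNS itself (a sketch, not from the original report; the pod names are made up, 10.32.0.10 is the cluster DNS service IP from the deploy command above, and cluster.local is assumed to be the default cluster domain):

# Query the cluster DNS service IP from inside a throwaway pod:
kubectl run -it --rm --restart=Never dnstest1 --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local 10.32.0.10
# Query the node's upstream resolver directly from inside a pod:
kubectl run -it --rm --restart=Never dnstest2 --image=busybox:1.28 -- nslookup baidu.com. 183.60.83.19
# If only the second query times out, pod egress to the upstream resolver is blocked;
# if both time out, pod-to-service traffic is broken as well.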

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 1
  • Comments: 26 (9 by maintainers)

Most upvoted comments

In my case, with Debian 10, it was an iptables problem; switching to the legacy binaries fixed it.

update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy
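
For context (my addition, not part of the quoted comment): Debian 10 defaults to the nftables backend of iptables, which kube-proxy and many CNI plugins of that era do not handle, so the rules they install never match any traffic. A rough check of which backend is active before switching:

iptables --version                      # "(nf_tables)" in the output means the nftables backend is active
update-alternatives --display iptables  # shows which alternative is currently selected

After switching to the legacy binaries, a reboot (or at least flushing the old rule set and restarting kube-proxy and the CNI pods) is usually needed so the rules are recreated by the right backend.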

Addressed to all Googlers

I created a convenience script that quickly and verbosely applies the solution suggested by @HydriaOne: https://github.com/theAkito/rancher-helpers/blob/master/scripts/debian-buster_fix.sh

Same problem in Debian 10 as well, using the Flannel CNI. The iptables-legacy fix quoted above solved the problem for me, too.

@vitorreis, given that you are using the non-buggy version of busybox (1.28) and you can't resolve names like kubernetes.default, that suggests pod-to-service traffic is being blocked within the cluster.

If there is no firewall, it could be a problem with your Calico network plugin.
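
(An aside, not part of the original comment: a rough way to check whether the CNI pods themselves are healthy, assuming a standard Calico manifest where the node pods carry the k8s-app=calico-node label:)

kubectl get pods -n kube-system -l k8s-app=calico-node -o wide   # all should be Running and Ready
kubectl logs -n kube-system -l k8s-app=calico-node --tail=20     # look for repeated BGP/IPAM errors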

@chrisohaver

The Google part isn't the important change. The important part is that you query your upstream server (213.186.33.99), not @spursy's upstream server.

Thanks for the clarification.

I suspect that will also time out or fail. If that's the case, it means your pods also cannot connect to the outside world.

That's true: no pod can resolve anything, e.g. ping google.com fails, yet if I run the same thing in the node's terminal the ping succeeds. It only fails from inside the pods.

Do you have a firewall blocking dns to/from pods?

I am running on a fresh Ubuntu 18.10 machine; no firewall is enabled, AFAIK.

root@ubuntu:/home/ubuntu# ufw status
Status: inactive

Is there any other command I can use to check whether something is blocking DNS to/from the pods? Unfortunately, I am not a Linux expert.
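
(Not from the thread, just an illustrative sketch of the kind of checks that can help here; tcpdump may need to be installed first:)

sudo iptables -S FORWARD | head                     # a default DROP policy with no ACCEPT rules for the pod CIDR is suspicious
sudo iptables -t nat -S POSTROUTING | grep -i masq  # pod egress normally relies on a MASQUERADE rule added by the CNI or kube-proxy
sudo tcpdump -ni any udp port 53                    # watch whether DNS queries from the pods ever leave the node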

@vitorreis, what version of busybox are you using?

Image: busybox:1.28

For me, I get:

kubectl run -it --rm --restart=Never busybox1 --image=busybox sh
If you don't see a command prompt, try pressing enter.
/ # nslookup baidu.com. 183.60.83.19
;; connection timed out; no servers could be reached

/ #
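
(For comparison, a follow-up query that could be run in the same busybox shell; 10.32.0.10 is the cluster DNS service IP deployed with -i above, and cluster.local is assumed as the default cluster domain:)

/ # nslookup kubernetes.default.svc.cluster.local 10.32.0.10

If this also times out, the problem is pod networking in general (pod-to-service traffic), not just reachability of the external resolver 183.60.83.19.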