kubernetes: coredns [ERROR] plugin/errors: 2 read udp 10.244.235.249:55567->10.96.0.10:53: i/o timeout

What happened: CoreDNS occasionally fails to resolve internal service names and external URLs.

My application logs:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 544, in connect
    raise ConnectionError(self._error_message(e))
ConnectionError: Error -3 connecting to o2o-redis-service.o2o-sales.svc.cluster.local:6379. Temporary failure in name resolution.

And the CoreDNS logs (kubectl logs -f coredns-b87f7894c-zcwvl -n kube-system):

[ERROR] plugin/errors: 2 o2o-redis-service. A: read udp 10.244.169.134:47806->132.120.200.52:53: i/o timeout
[ERROR] plugin/errors: 2 o2o-redis-service. AAAA: read udp 10.244.169.134:47954->132.120.200.49:53: i/o timeout
[ERROR] plugin/errors: 2 o2o-redis-service. A: read udp 10.244.169.134:44703->132.120.200.49:53: i/o timeout
[ERROR] plugin/errors: 2 elasticsearch. A: read udp 10.244.235.249:40671->132.120.200.49:53: i/o timeout
[ERROR] plugin/errors: 2 elasticsearch. AAAA: read udp 10.244.235.249:33287->10.96.0.10:53: i/o timeout
[ERROR] plugin/errors: 2 elasticsearch. A: read udp 10.244.235.249:44960->132.120.200.49:53: i/o timeout
[ERROR] plugin/errors: 2 istio-galley.istio-system. AAAA: read udp 10.244.235.249:35208->132.120.200.49:53: i/o timeout
[ERROR] plugin/errors: 2 12.145.97.132.in-addr.arpa. PTR: read udp 10.244.235.249:55567->10.96.0.10:53: i/o timeout
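A quick way to reproduce the failure from inside the cluster is to run a throwaway pod and query the cluster DNS directly (busybox:1.28 is just an example image whose nslookup works well for this; the names are taken from the logs above):

# resolve the failing service name through the pod's default resolver (the cluster DNS)
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup o2o-redis-service.o2o-sales.svc.cluster.local
# query the cluster DNS service IP explicitly
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10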

What you expected to happen: DNS resolution works normally.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):

[root@k8s-mix-176 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration:

  • OS (e.g: cat /etc/os-release):

[root@k8s-mix-176 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

  • Kernel (e.g. uname -a):

[root@k8s-mix-176 ~]# uname -r
4.4.206-1.el7.elrepo.x86_64

  • Install tools: kubeadm

  • Others: haproxy + keepalived

[root@k8s-mix-176 ~]# kubectl get pods -n kube-system
NAME  READY  STATUS  RESTARTS  AGE
calico-kube-controllers-df9cc4476-89lpx  1/1  Running  0  6d19h
calico-node-2bqpx  1/1  Running  0  6d19h
calico-node-44v28  1/1  Running  0  6d19h
calico-node-6rjxz  1/1  Running  1  6d17h
calico-node-8x5rf  1/1  Running  0  6d19h
calico-node-9gqhl  1/1  Running  0  6d19h
calico-node-dc89d  1/1  Running  1  6d19h
calico-node-dt76s  1/1  Running  0  6d19h
calico-node-mvcpj  1/1  Running  1  6d17h
calico-node-qzhlm  1/1  Running  0  6d19h
calico-node-tfmv9  1/1  Running  3  6d18h
calico-node-x9vfj  1/1  Running  1  6d17h
coredns-b87f7894c-4sdzh  1/1  Running  0  39h
coredns-b87f7894c-6pqqv  1/1  Running  1  6d15h
coredns-b87f7894c-7ntdb  1/1  Running  0  7d8h
coredns-b87f7894c-p567c  1/1  Running  0  7d8h
coredns-b87f7894c-xcm5f  1/1  Running  0  39h
coredns-b87f7894c-zcwvl  1/1  Running  1  6d15h
etcd-k8s-mix-174  1/1  Running  0  7d7h
etcd-k8s-mix-175  1/1  Running  0  7d7h
etcd-k8s-mix-176  1/1  Running  0  7d8h
kube-apiserver-k8s-mix-174  1/1  Running  0  7d7h
kube-apiserver-k8s-mix-175  1/1  Running  0  7d7h
kube-apiserver-k8s-mix-176  1/1  Running  0  7d8h
kube-controller-manager-k8s-mix-174  1/1  Running  0  7d7h
kube-controller-manager-k8s-mix-175  1/1  Running  0  7d7h
kube-controller-manager-k8s-mix-176  1/1  Running  1  7d8h
kube-proxy-428k6  1/1  Running  0  7d7h
kube-proxy-5s725  1/1  Running  0  7d8h
kube-proxy-6pdgx  1/1  Running  0  7d7h
kube-proxy-gdqf9  1/1  Running  2  6d19h
kube-proxy-gfppm  1/1  Running  1  6d17h
kube-proxy-gk558  1/1  Running  2  6d17h
kube-proxy-hjbn7  1/1  Running  0  7d7h
kube-proxy-rtc4h  1/1  Running  0  7d7h
kube-proxy-vn9jh  1/1  Running  3  6d18h
kube-proxy-znllw  1/1  Running  0  7d7h
kube-proxy-zpm6j  1/1  Running  1  6d17h
kube-scheduler-k8s-mix-174  1/1  Running  0  7d7h
kube-scheduler-k8s-mix-175  1/1  Running  0  7d7h
kube-scheduler-k8s-mix-176  1/1  Running  1  7d8h

How can I resolve this problem? Thanks.

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 26 (1 by maintainers)

Most upvoted comments

On our RHEL7 OS, running Kubernetes 1.20 installed via kubespray with Calico and containerd, I was able to solve it by executing:

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F

and then deleting the CoreDNS pods.
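For reference, the CoreDNS pods can be recreated like this (the coredns deployment name and the k8s-app=kube-dns label are the usual kubeadm/kubespray defaults; adjust if yours differ):

# restart the CoreDNS deployment so its pods are recreated
kubectl -n kube-system rollout restart deployment coredns
# or simply delete the pods and let the deployment bring them back
kubectl -n kube-system delete pods -l k8s-app=kube-dns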

I am facing the same issue. Will try the iptables approach.

But I must say, @DeanYang121: it is frustrating how often people ask for help, find a solution, and then never reply to others in the same thread who hit the same problem and ask them directly. The community is not a one-way street!

I have the same problem.

Kubernetes 1.22 cluster deployed with kubespray.

Action taken:

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F

Run these commands on the worker nodes to allow traffic from the CoreDNS pod to the host.

Finally, restart the CoreDNS deployment.

When you check the CoreDNS pod logs again, everything should be OK.
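To double-check afterwards, something along these lines should show Running pods and clean logs (the k8s-app=kube-dns label and the coredns deployment name are the usual defaults):

# confirm the CoreDNS pods came back up
kubectl -n kube-system get pods -l k8s-app=kube-dns
# tail the logs of one pod from the deployment and look for remaining i/o timeouts
kubectl -n kube-system logs deployment/coredns --tail=50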

Hi @DeanYang121

From your logs, it seems you have two problems:

  • CoreDNS not being able to query the Kubernetes apiserver to resolve internal names
  • CoreDNS not being able to forward your queries to the external DNS (132.120.200.49:53: i/o timeout)

Can you please check whether CoreDNS can reach both the apiserver and your external DNS? If not, can you verify whether there is a firewall between them (CentOS iptables, a network firewall, etc.)?
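For example, from one of the nodes where a CoreDNS pod runs, something like this can confirm both paths (132.120.200.49 is the upstream seen in your logs; 10.96.0.1 is usually the apiserver ClusterIP, check kubectl get svc kubernetes):

# query the upstream DNS directly from the node (needs dig or nslookup installed)
dig @132.120.200.49 kubernetes.io
# check that the node can reach the apiserver service IP; even a 403 response proves connectivity
curl -k https://10.96.0.1:443/version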

Tks