coredns: Error: unreachable backend (when used in Kubernetes)

I am getting this error in a Kubernetes pod:

[ERROR] 2 2776621349124840513.8630579569398302643. HINFO: unreachable backend: no upstream host
[ERROR] 2 2776621349124840513.8630579569398302643. HINFO: unreachable backend: no upstream host

How can I fix it?

Thanks

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 57 (22 by maintainers)

Most upvoted comments

I experienced the same problem, and after looking around I noticed that DNS requests from the endpoints were not being routed via the master interface to the Internet, so all my DNS requests would time out.

I found that IP forwarding was not working for me even though I had net.ipv4.ip_forward = 1 set properly across my nodes and master. After I rebooted my servers, I could see the DNS requests being properly routed via the interface and NATed to the interface IP.

This resolved it for me, at least.
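For anyone wanting to verify the same thing without a reboot, a rough way to compare the live value with the persisted one (the sysctl file locations are the usual defaults and may differ per distro):

sysctl net.ipv4.ip_forward                                        # live value; should print "= 1"
grep -r ip_forward /etc/sysctl.conf /etc/sysctl.d/ 2>/dev/null    # persisted value
sudo sysctl -w net.ipv4.ip_forward=1                              # apply immediately
sudo sysctl --system                                              # re-apply everything from the config files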

I am having the same problem…

I know I can edit the coredns ConfigMap to work around it, but I think there are two possible causes:

1. The machine CoreDNS runs on cannot reach that machine's configured DNS server (maybe a firewall).
2. It might be related to how CoreDNS inherits the local machine's DNS server configuration.

I don't know exactly what happened, so I changed my coredns ConfigMap to this:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . 8.8.8.8 <other upstream DNS servers>
    cache 30
    loop
    reload
    loadbalance
}
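For reference, a sketch of how a Corefile edit like this is usually applied on a kubeadm cluster; the k8s-app=kube-dns label is the one kubeadm puts on the CoreDNS pods, so verify it matches your deployment:

kubectl -n kube-system edit configmap coredns             # change the proxy line
kubectl -n kube-system delete pod -l k8s-app=kube-dns     # force a restart (the reload plugin would pick it up eventually anyway)
kubectl -n kube-system logs -l k8s-app=kube-dns           # confirm the errors stop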

Although that solved it, I think the default configuration below should also have worked without problems:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}

because the DNS server configuration on the machine where CoreDNS runs has no errors and can reach the upstream DNS server; it is set to

nameserver 8.8.8.8

So in the end I think CoreDNS is not inheriting the local machine's DNS configuration.
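One way to check which file CoreDNS actually inherits: the CoreDNS pods run with dnsPolicy: Default, so they get whatever file the kubelet is told to use via its resolvConf setting, which is not necessarily the node's /etc/resolv.conf. A sketch, assuming the kubeadm default paths:

grep resolvConf /var/lib/kubelet/config.yaml     # kubeadm records the kubelet setting here
cat /etc/resolv.conf                             # what the node itself resolves with
cat /run/systemd/resolve/resolv.conf             # the file kubelet is often pointed at on systemd-resolved hosts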

My Kubernetes version is 1.13.4.

 kubectl logs coredns-7655b945bc-cgqnp -n kube-system
.:53
2019-07-03T09:45:13.311Z [INFO] CoreDNS-1.2.6
2019-07-03T09:45:13.311Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [ERROR] plugin/errors: 2 www.bing.com. A: unreachable backend: read udp 10.224.1.7:33989->8.8.8.8:53: i/o timeout
 [ERROR] plugin/errors: 2 www.bing.com. A: unreachable backend: read udp 10.224.1.7:59676->8.8.8.8:53: i/o timeout
 [ERROR] plugin/errors: 2 www.bing.com. AAAA: unreachable backend: read udp 10.224.1.7:40157->8.8.8.8:53: i/o timeout

I hope this gets solved.

I am having the same problem… Did anyone find the solution?

This didn’t help: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/

kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:35:32Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Busybox unable to resolve:

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1

My /etc/resolv.conf:

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

My DNS pods:

NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE    IP              NODE         NOMINATED NODE   READINESS GATES
default       busybox                                 1/1     Running   3          3h7m   10.244.1.2      worker1      <none>           <none>
kube-system   coredns-6f77d8f5b8-cmtnt                1/1     Running   0          3h9m   10.244.0.3      k8s-master   <none>           <none>
kube-system   coredns-6f77d8f5b8-vp5pv                1/1     Running   0          3h9m   10.244.0.2      k8s-master   <none>           <none>

DNS pod 1 logs:

.:53
2019-04-18T23:21:52.287Z [INFO] CoreDNS-1.2.6
2019-04-18T23:21:52.287Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
 [ERROR] plugin/errors: 2 5370693451807057818.84892023779323983. HINFO: unreachable backend: read udp 10.244.0.3:45297->8.8.4.4:53: i/o timeout
 ...
 [ERROR] plugin/errors: 2 5370693451807057818.84892023779323983. HINFO: unreachable backend: read udp 10.244.0.3:45588->8.8.4.4:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. AAAA: unreachable backend: read udp 10.244.0.3:51636->8.8.4.4:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. AAAA: unreachable backend: read udp 10.244.0.3:55863->8.8.4.4:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. AAAA: unreachable backend: read udp 10.244.0.3:57987->8.8.8.8:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. A: unreachable backend: read udp 10.244.0.3:50425->8.8.4.4:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. A: unreachable backend: read udp 10.244.0.3:51464->8.8.8.8:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. A: unreachable backend: read udp 10.244.0.3:53587->8.8.4.4:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. AAAA: unreachable backend: read udp 10.244.0.3:55392->8.8.8.8:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. AAAA: unreachable backend: read udp 10.244.0.3:43061->8.8.4.4:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. A: unreachable backend: read udp 10.244.0.3:46036->8.8.8.8:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. A: unreachable backend: read udp 10.244.0.3:42025->8.8.8.8:53: i/o timeout

DNS pod 2 logs:

.:53
2019-04-18T23:21:52.275Z [INFO] CoreDNS-1.2.6
2019-04-18T23:21:52.275Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
 [ERROR] plugin/errors: 2 5198613086963255171.4583969612912105000. HINFO: unreachable backend: read udp 10.244.0.2:41052->8.8.4.4:53: i/o timeout
...
 [ERROR] plugin/errors: 2 5198613086963255171.4583969612912105000. HINFO: unreachable backend: read udp 10.244.0.2:53219->8.8.4.4:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. AAAA: unreachable backend: read udp 10.244.0.2:51052->8.8.8.8:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. A: unreachable backend: read udp 10.244.0.2:46528->8.8.4.4:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. AAAA: unreachable backend: read udp 10.244.0.2:56840->8.8.8.8:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. AAAA: unreachable backend: read udp 10.244.0.2:46829->8.8.4.4:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. A: unreachable backend: read udp 10.244.0.2:34355->8.8.4.4:53: i/o timeout
 [ERROR] plugin/errors: 2 kubernetes.default. A: unreachable backend: read udp 10.244.0.2:55600->8.8.8.8:53: i/o timeout

My busybox pod spec (image version 1.28):

# cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker-repository:8082/busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

Any suggestions? @isen-ng, my firewall is disabled.
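For context, timeouts from the pod IP (10.244.0.x) to 8.8.8.8/8.8.4.4 usually mean the queries never make it off the node, so checking reachability and forwarding from the node that hosts CoreDNS is a reasonable next step. A rough sketch (the upstream IP is whatever your resolv.conf or Corefile forwards to):

dig @8.8.8.8 kubernetes.io +time=2 +tries=1               # can the node itself reach the upstream?
sysctl net.ipv4.ip_forward                                # should be 1
sudo iptables -L FORWARD -n --line-numbers | head         # look for a DROP policy or rules blocking the pod CIDR
sudo iptables -t nat -L POSTROUTING -n | grep -i masq     # pod traffic should be masqueraded out of the node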

@chrisohaver I’m having a similar issue. I have a CentOS 7 master and a WS2019 worker, both running as Hyper-V VMs on a Windows 10 1809 host connected to a Hyper-V bridged switch, with flannel in host-gw mode built from this MS guide. When firewalld is enabled, DNS resolution from the worker pods is blocked; disable firewalld and suddenly both internal and external resolution start working. I need firewalld on, however, and adding dns, 53/udp to the active zone isn’t working. When blocked, coredns logs messages like [ERROR] plugin/errors: 0 x.x. HINFO IN: unreachable backend: read udp 10.244.0.19:35856-><my-upstream-DNS-IP>:53: i/o timeout

Everything else about my cluster seems to be working. Can you suggest some setting for getting firewalld to be compatible?
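For comparison, the firewalld settings that often matter on flannel nodes beyond opening 53/udp are masquerading and the overlay/kubelet ports; this is only a sketch against the default zone, so adapt the zone and ports to your setup:

sudo firewall-cmd --permanent --add-service=dns        # 53/tcp and 53/udp
sudo firewall-cmd --permanent --add-masquerade         # pod traffic NATed out through the node
sudo firewall-cmd --permanent --add-port=8472/udp      # flannel vxlan (not needed for host-gw)
sudo firewall-cmd --permanent --add-port=10250/tcp     # kubelet
sudo firewall-cmd --reload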

@kolesoffac got the same problem, did you manage to fix it?

@matinjugou Did you set the --pod-network-cidr to the same subnet that your DNS server was on? For example, a DNS server of 192.168.0.7 and --pod-network-cidr=192.168.0.0/24. It seems that my DNS works with a public resolver, but not a private one.
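In other words, the pod CIDR passed to kubeadm should not overlap with the subnet the nodes and the DNS server sit on. A minimal sketch, assuming the DNS server lives in 192.168.0.0/24 and flannel's default pod range is used:

# pod network must not overlap 192.168.0.0/24 (where the DNS server lives)
kubeadm init --pod-network-cidr=10.244.0.0/16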

I’m having the same error.

[root@ip-10-0-1-244 centos]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE     IP             NODE                         NOMINATED NODE
default       hello-65f99d55dd-8stz9                                1/1     Running   0          39m     10.32.0.4      ip-10-0-1-244.ec2.internal   <none>
kube-system   coredns-576cbf47c7-6v5lj                              1/1     Running   0          5m13s   10.244.0.148   ip-10-0-1-244.ec2.internal   <none>
kube-system   coredns-576cbf47c7-bgrqd                              1/1     Running   0          5m13s   10.244.0.149   ip-10-0-1-244.ec2.internal   <none>
kube-system   etcd-ip-10-0-1-244.ec2.internal                       1/1     Running   0          49m     10.0.1.244     ip-10-0-1-244.ec2.internal   <none>
kube-system   kube-apiserver-ip-10-0-1-244.ec2.internal             1/1     Running   0          49m     10.0.1.244     ip-10-0-1-244.ec2.internal   <none>
kube-system   kube-controller-manager-ip-10-0-1-244.ec2.internal    1/1     Running   0          49m     10.0.1.244     ip-10-0-1-244.ec2.internal   <none>
kube-system   kube-flannel-ds-amd64-bpv8r                           1/1     Running   0          21m     10.0.1.244     ip-10-0-1-244.ec2.internal   <none>
kube-system   kube-proxy-745wg                                      1/1     Running   0          50m     10.0.1.244     ip-10-0-1-244.ec2.internal   <none>
kube-system   kube-scheduler-ip-10-0-1-244.ec2.internal             1/1     Running   0          49m     10.0.1.244     ip-10-0-1-244.ec2.internal   <none>
kube-system   kubernetes-dashboard-67d4c89764-zfc7s                 1/1     Running   0          35m     10.32.0.6      ip-10-0-1-244.ec2.internal   <none>
kube-system   tiller-deploy-845cffcd48-zwdz7                        1/1     Running   0          36m     10.32.0.5      ip-10-0-1-244.ec2.internal   <none>
storageos     storageos-storageos-wlhhz                             1/1     Running   0          35m     10.0.1.244     ip-10-0-1-244.ec2.internal   <none>
tempus        cassandra-0                                           0/1     Pending   0          24m     <none>         <none>                       <none>
tempus        nifi-0                                                0/1     Pending   0          24m     <none>         <none>                       <none>
tempus        postgresql-0                                          0/1     Pending   0          24m     <none>         <none>                       <none>
tempus        redtail-api-discovery-5cd5df744b-m56kc                1/1     Running   0          24m     10.32.0.7      ip-10-0-1-244.ec2.internal   <none>
tempus        redtail-identity-service-54757d6799-qz2wh             1/1     Running   0          24m     10.32.0.8      ip-10-0-1-244.ec2.internal   <none>
tempus        redtail-metadata-api-578498db9d-txvc8                 1/1     Running   0          24m     10.32.0.9      ip-10-0-1-244.ec2.internal   <none>
tempus        tempus-0                                              1/1     Running   0          24m     10.32.0.10     ip-10-0-1-244.ec2.internal   <none>
tempus        zk-0                                                  0/1     Pending   0          24m     <none>         <none>                       <none>
tempus        zk-1                                                  0/1     Pending   0          24m     <none>         <none>                       <none>
tempus        zk-2                                                  0/1     Pending   0          24m     <none>         <none>                       <none>

[root@ip-10-0-1-244 centos]# kubectl describe pvc -n tempus cassandra-commitlog-cassandra-0
Name:          cassandra-commitlog-cassandra-0
Namespace:     tempus
StorageClass:  tempus
Status:        Pending
Volume:
Labels:        app=cassandra
Annotations:   volume.beta.kubernetes.io/storage-class: tempus
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/storageos
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type     Reason              Age                 From                         Message
  Warning  ProvisioningFailed  61s (x18 over 24m)  persistentvolume-controller  Failed to provision volume with StorageClass "tempus": invalid node format: lookup storageos on 10.0.0.2:53: no such host
Mounted By:    cassandra-0

[root@ip-10-0-1-244 centos]# kubectl logs -f -n kube-system coredns-576cbf47c7-6v5lj
.:53
2018/11/21 06:48:16 [INFO] CoreDNS-1.2.2
2018/11/21 06:48:16 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2018/11/21 06:48:16 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
2018/11/21 06:48:19 [ERROR] 2 2535257455848830884.6326319755660942605. HINFO: unreachable backend: no upstream host
2018/11/21 06:48:36 [ERROR] 2 2535257455848830884.6326319755660942605. HINFO: unreachable backend: read udp 10.244.0.148:56145->10.96.0.10:53: i/o timeout
2018/11/21 06:48:41 [ERROR] 2 2535257455848830884.6326319755660942605. HINFO: unreachable backend: read udp 10.244.0.148:43651->10.96.0.10:53: i/o timeout
2018/11/21 06:48:46 [ERROR] 2 2535257455848830884.6326319755660942605. HINFO: unreachable backend: read udp 10.244.0.148:38455->10.96.0.10:53: i/o timeout
2018/11/21 06:48:47 [ERROR] 2 cassandra-headless.ec2.internal. AAAA: unreachable backend: read udp 10.244.0.148:51180->10.0.0.2:53: i/o timeout
2018/11/21 06:48:47 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:58177->10.96.0.10:53: i/o timeout
2018/11/21 06:48:51 [ERROR] 2 2535257455848830884.6326319755660942605. HINFO: unreachable backend: no upstream host
2018/11/21 06:48:51 [ERROR] 2 2535257455848830884.6326319755660942605. HINFO: unreachable backend: read udp 10.244.0.148:48568->10.96.0.10:53: i/o timeout
2018/11/21 06:48:52 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:48920->10.96.0.10:53: i/o timeout
2018/11/21 06:48:52 [ERROR] 2 cassandra-headless.ec2.internal. AAAA: unreachable backend: read udp 10.244.0.148:47038->10.0.0.2:53: i/o timeout
2018/11/21 06:48:52 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:55692->8.8.8.8:53: i/o timeout
2018/11/21 06:48:56 [ERROR] 2 2535257455848830884.6326319755660942605. HINFO: unreachable backend: read udp 10.244.0.148:46868->10.0.0.2:53: i/o timeout
2018/11/21 06:48:56 [ERROR] 2 2535257455848830884.6326319755660942605. HINFO: unreachable backend: read udp 10.244.0.148:43003->8.8.8.8:53: i/o timeout
2018/11/21 06:48:57 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:51834->10.96.0.10:53: i/o timeout
2018/11/21 06:48:57 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:36531->10.96.0.10:53: i/o timeout
2018/11/21 06:48:57 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:44311->10.96.0.10:53: i/o timeout
2018/11/21 06:49:01 [ERROR] 2 2535257455848830884.6326319755660942605. HINFO: unreachable backend: read udp 10.244.0.148:52282->10.96.0.10:53: i/o timeout
2018/11/21 06:49:01 [ERROR] 2 2535257455848830884.6326319755660942605. HINFO: unreachable backend: read udp 10.244.0.148:53863->10.96.0.10:53: i/o timeout
2018/11/21 06:49:02 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:34180->8.8.8.8:53: i/o timeout
2018/11/21 06:49:02 [ERROR] 2 cassandra-headless.ec2.internal. AAAA: unreachable backend: read udp 10.244.0.148:39745->10.0.0.2:53: i/o timeout
2018/11/21 06:49:02 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:48394->10.0.0.2:53: i/o timeout
2018/11/21 06:49:02 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:34171->8.8.8.8:53: i/o timeout
2018/11/21 06:49:07 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:33419->10.96.0.10:53: i/o timeout
2018/11/21 06:49:12 [ERROR] 2 cassandra-headless.ec2.internal. A: unreachable backend: read udp 10.244.0.148:45159->10.96.0.10:53: i/o timeout
2018/11/21 06:49:12 [ERROR] 2 cassandra-headless.ec2.internal. A:

resolv.conf on the host:

[root@ip-10-0-1-244 centos]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search ec2.internal
nameserver 10.0.0.2
nameserver 8.8.8.8
nameserver 10.96.0.10

coredns ConfigMap:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: 2018-11-21T05:24:06Z
  name: coredns
  namespace: kube-system
  resourceVersion: "218"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: aa972781-ed4d-11e8-a082-0ebc752c6d2a
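One thing worth noting in the dump above: the host's /etc/resolv.conf lists nameserver 10.96.0.10, which is the cluster DNS service IP, so proxy . /etc/resolv.conf can end up forwarding queries back to CoreDNS itself; the timeouts to 10.96.0.10 in the log are consistent with that. Newer CoreDNS versions with the loop plugin detect exactly this and halt the pod with an explicit error. A sketch of a trimmed host resolv.conf, assuming 10.0.0.2 is the VPC resolver:

# /etc/resolv.conf on the node, without the cluster DNS service IP
search ec2.internal
nameserver 10.0.0.2
nameserver 8.8.8.8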