minikube: dnsmasq pod CrashLoopBackOff
BUG REPORT:
Environment:
- Minikube version: v0.22.2
- OS: Ubuntu 16.04
- VM Driver: none
- Install tools: ???
- Others:
- Kubernetes version: v1.7.5
What happened:
Resolving URLs doesn't work; for example, connecting to the GitHub API from a pod returns: Error: getaddrinfo EAI_AGAIN api.github.com.
What you expected to happen: Resolve a URL.
How to reproduce it (as minimally and precisely as possible):
sudo minikube start --vm-driver=none
kubectl create -f busybox.yaml (busybox from the k8s docs)
kubectl exec -ti busybox -- nslookup kubernetes.default
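For reference, the busybox pod used in these steps is the one from the Kubernetes DNS-debugging docs; it looks roughly like this (the exact image tag is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28   # any busybox tag whose nslookup behaves sanely
    command:
      - sleep
      - "3600"
  restartPolicy: Always
```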
Returns:
Server: 10.0.0.10
Address 1: 10.0.0.10
nslookup: can't resolve 'kubernetes.default'
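For context, inside a pod the kubelet-written /etc/resolv.conf should point at the kube-dns service IP (10.0.0.10 here); `kubectl exec -ti busybox -- cat /etc/resolv.conf` should show something along these lines (search domains assume the default cluster.local domain and the default namespace):

```
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```

If that file looks right, the failure is on the kube-dns side rather than in the pod's resolver config.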
Output of minikube logs (if applicable):
⚠️ It looks like dnsmasq is failing to start. Tail from minikube logs:
Oct 03 16:34:18 glooming-asteroid localkube[26499]: I1003 16:34:18.653793 26499 kuberuntime_manager.go:457] Container {Name:dnsmasq Image:gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 Command:[] Args:[-v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:150 scale:-3} d:{Dec:<nil>} s:150m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-dns-config ReadOnly:false MountPath:/etc/k8s/dns/dnsmasq-nanny SubPath:} {Name:default-token-dkjg2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/dnsmasq,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Oct 03 16:34:18 glooming-asteroid localkube[26499]: I1003 16:34:18.653979 26499 kuberuntime_manager.go:741] checking backoff for container "dnsmasq" in pod "kube-dns-910330662-r7x4d_kube-system(7b11e42b-a79a-11e7-b83c-0090f5ed1486)"
Oct 03 16:34:18 glooming-asteroid localkube[26499]: I1003 16:34:18.654088 26499 kuberuntime_manager.go:751] Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns-910330662-r7x4d_kube-system(7b11e42b-a79a-11e7-b83c-0090f5ed1486)
Oct 03 16:34:18 glooming-asteroid localkube[26499]: E1003 16:34:18.654121 26499 pod_workers.go:182] Error syncing pod 7b11e42b-a79a-11e7-b83c-0090f5ed1486 ("kube-dns-910330662-r7x4d_kube-system(7b11e42b-a79a-11e7-b83c-0090f5ed1486)"), skipping: failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns-910330662-r7x4d_kube-system(7b11e42b-a79a-11e7-b83c-0090f5ed1486)"
Oct 03 16:34:32 glooming-asteroid localkube[26499]: I1003 16:34:32.653745 26499 kuberuntime_manager.go:457] Container {Name:dnsmasq Image:gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 Command:[] Args:[-v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:150 scale:-3} d:{Dec:<nil>} s:150m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-dns-config ReadOnly:false MountPath:/etc/k8s/dns/dnsmasq-nanny SubPath:} {Name:default-token-dkjg2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/dnsmasq,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Oct 03 16:34:32 glooming-asteroid localkube[26499]: I1003 16:34:32.653993 26499 kuberuntime_manager.go:741] checking backoff for container "dnsmasq" in pod "kube-dns-910330662-r7x4d_kube-system(7b11e42b-a79a-11e7-b83c-0090f5ed1486)"
Oct 03 16:34:32 glooming-asteroid localkube[26499]: I1003 16:34:32.654136 26499 kuberuntime_manager.go:751] Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns-910330662-r7x4d_kube-system(7b11e42b-a79a-11e7-b83c-0090f5ed1486)
Oct 03 16:34:32 glooming-asteroid localkube[26499]: E1003 16:34:32.654174 26499 pod_workers.go:182] Error syncing pod 7b11e42b-a79a-11e7-b83c-0090f5ed1486 ("kube-dns-910330662-r7x4d_kube-system(7b11e42b-a79a-11e7-b83c-0090f5ed1486)"), skipping: failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns-910330662-r7x4d_kube-system(7b11e42b-a79a-11e7-b83c-0090f5ed1486)"
Oct 03 16:34:44 glooming-asteroid localkube[26499]: I1003 16:34:44.654035 26499 kuberuntime_manager.go:457] Container {Name:dnsmasq Image:gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 Command:[] Args:[-v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:150 scale:-3} d:{Dec:<nil>} s:150m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-dns-config ReadOnly:false MountPath:/etc/k8s/dns/dnsmasq-nanny SubPath:} {Name:default-token-dkjg2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/dnsmasq,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Oct 03 16:34:44 glooming-asteroid localkube[26499]: I1003 16:34:44.654621 26499 kuberuntime_manager.go:741] checking backoff for container "dnsmasq" in pod "kube-dns-910330662-r7x4d_kube-system(7b11e42b-a79a-11e7-b83c-0090f5ed1486)"
Oct 03 16:34:44 glooming-asteroid localkube[26499]: I1003 16:34:44.655048 26499 kuberuntime_manager.go:751] Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns-910330662-r7x4d_kube-system(7b11e42b-a79a-11e7-b83c-0090f5ed1486)
Anything else do we need to know:
Some troubleshooting commands with output:
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
kube-dns-910330662-r7x4d 3/3 Running 11 20h
kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 20h
kubernetes-dashboard 10.0.0.193 <nodes> 80:30000/TCP 20h
⚠️ Endpoint is empty: kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 20h
Tail from kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
I1003 14:43:04.930205 263 nanny.go:108] dnsmasq[280]: Maximum number of concurrent DNS queries reached (max: 150)
I1003 14:43:14.948913 263 nanny.go:108] dnsmasq[280]: Maximum number of concurrent DNS queries reached (max: 150)
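The 150-query ceiling is dnsmasq's default --dns-forward-max; hitting it continuously suggests queries are piling up rather than completing. With the none driver the cluster inherits the host's resolver configuration, so one thing worth checking is whether /etc/resolv.conf points at a loopback address (as NetworkManager's local dnsmasq on Ubuntu typically does), which makes kube-dns's dnsmasq forward queries back to itself. A minimal sketch of that check, run against a hypothetical sample file so as not to depend on any particular host:

```shell
# Hypothetical resolv.conf as commonly written by NetworkManager's dnsmasq
# on Ubuntu (assumption; inspect the real /etc/resolv.conf on your host):
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver 127.0.1.1
EOF

# A loopback nameserver here means kube-dns's dnsmasq forwards queries back
# to itself; they never complete and exhaust the concurrent-query budget.
if grep -q '^nameserver 127\.' /tmp/resolv.conf.sample; then
  echo "loopback resolver detected"
fi
```

To check the actual host, run the same grep against /etc/resolv.conf instead of the sample file.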
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 9
- Comments: 24 (8 by maintainers)
Ok, I think I've found the reason and the solution.
It's my /etc/resolv.conf; in Ubuntu 17.04 it contains:
I ran:
and edited /etc/resolv.conf to contain only:
After that, the cluster and DNS work!
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
kubectl exec -ti busybox -- nslookup kubernetes.default
I'm not sure if it's a workaround or a solution, though. The /etc/resolv.conf contents might be the default for Ubuntu. Should the none driver work with the original configuration, or should the configuration be changed?
There's also a solution which doesn't involve changing the host system. Instead, you can disable using the host's resolv.conf by applying the following config map (details: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configmap-options):
It affects me too. DNS works when using the virtualbox driver and doesn't work when using the none driver.
System:
Nothing fancy with networking or DNS as far as I know.
cat /etc/resolv.conf
I'm using the newest minikube from GitHub.
kubectl exec -ti busybox -- nslookup kubernetes.default hangs with:
The kubedns container seems to start correctly; the logs are the same as in the virtualbox version. dnsmasq seems to start correctly, and after a while I see an info message about the reached limit:
Logs from the sidecar container:
What's interesting is that switching to coredns helps; it works, but with errors!
kubectl exec -ti busybox -- nslookup kubernetes.default
Logs from kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=coredns -o name):
When those logs happen, nslookup takes longer, but it returns the correct result.
kubectl exec -ti busybox -- nslookup monitoring-grafana.kube-system
So kubedns doesn't work at all, and coredns works but isn't stable. I'll test with the kubeadm bootstrapper instead of localkube to see how things go.
Edit: kubeadm with the none driver doesn't seem to work; the cluster doesn't start. I guess that's too many experimental features activated together 😃 kubeadm generates certificates for 127.0.0.1 and 10.0.0.1, but the components try to use my eth0 interface's IP: 192.168.42.13.
sudo minikube logs
Hmm. This solution is not working for me. I'm also facing the same issue. Unfortunately, none of the solutions presented here works for me!
Environment:
- Minikube version: v0.25
- OS: Ubuntu 18.04
- VM Driver: none
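For anyone trying the ConfigMap route mentioned in an earlier comment: the linked upstream docs give an example along these lines. The upstream IPs below are placeholders (Google's public resolvers), not a recommendation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
```

Per those docs, when upstreamNameservers is set, kube-dns stops consulting the node's /etc/resolv.conf for upstream resolution, which is exactly what sidesteps the loopback-resolver problem with the none driver.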
As a workaround, you may try deploying coredns instead of kube-dns. If you do, take care to disable kube-dns in the add-on manager ("minikube addons disable kube-dns").
Deployment: https://github.com/coredns/deployment/tree/master/kubernetes