minikube: DNS not working inside minikube pods since v0.23.6

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Bug report

Please provide the following details:

Environment:

  • Minikube version (use minikube version): v0.24.1

  • OS (e.g. from /etc/os-release): Mac OS 10.12.6
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.23.6.iso
  • Install tools: Homebrew
  • Others: Kubernetes version 1.7.5, containers built using Alpine 3.6

What happened:

I first noticed this issue when I wasn’t able to reach Docker Hub from inside a pod, but it looks like DNS is failing globally. I can ping 8.8.8.8, but anything that requires DNS resolution fails.

/networking # nslookup registry-1.docker.io
nslookup: can't resolve '(null)': Name does not resolve
/networking # nslookup google.com
nslookup: can't resolve '(null)': Name does not resolve
/ # cat /etc/resolv.conf
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

Pinging 10.0.0.10 also fails.
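
Worth noting: the kube-dns ClusterIP (10.0.0.10) is a virtual service IP, and virtual IPs generally don’t answer ICMP, so a failed ping isn’t conclusive by itself. A direct DNS query against that server is the more meaningful test; a minimal check, assuming the BusyBox nslookup that ships in Alpine:

/networking # nslookup kubernetes.default.svc.cluster.local 10.0.0.10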

This used to work when I was on v0.21. It has been broken since I upgraded to v0.23.6, and it is still broken on v0.24.1.

What you expected to happen:

DNS to work.

How to reproduce it (as minimally and precisely as possible):

Start a pod using this YAML:

apiVersion: v1
kind: Pod
metadata:
  generateName: networking-
  labels:
    test: test
spec:
  containers:
  - image: ceridwen/networking:v1
    imagePullPolicy: Always
    name: networking
    readinessProbe:
      tcpSocket:
        port: 5000
      initialDelaySeconds: 5
      periodSeconds: 1
  restartPolicy: Always
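
Since the spec uses generateName, create it with kubectl create rather than kubectl apply (apply requires a fixed metadata.name), then exec into the pod to test DNS. A sketch, with a hypothetical file name and generated pod name:

$ kubectl create -f networking-pod.yaml
$ kubectl exec -it networking-xxxxx -- nslookup google.com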

I also saw this using the docker:stable-dind image.

Output of minikube logs (if applicable):

I didn’t see anything in the logs that looked relevant; I can post them if needed.

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 20
  • Comments: 29 (5 by maintainers)

Most upvoted comments

I’m still seeing the same problem with:

  • macOS: 10.15.7 (Darwin)
  • Minikube: v1.18.1
  • Kubernetes: v1.20.2
  • Docker: 20.10.3
  • VirtualBox: 6.1.18 r142142

Same issue:

  • Linux Mint 18.3
  • Minikube: v0.25.0
  • Kubernetes: v1.9.2

I’m having the same issue on Windows 10.

  • Minikube: v0.25.0
  • Kubernetes: v1.9.0

💯 it has to do with pulling the latest version of the minikube ISO (v0.23.6). After downgrading to v0.23.5, we no longer have this issue.

Two of us at our company are able to reproduce this on Mac OS X 10.12.6 using k8s v1.8.0.

~Doesn’t matter which bootstrapper we use (kubeadm or localkube) or which VM driver.~ We’ve tried every possible combo of both.

EDIT: It does work with kubeadm and the default Virtualbox driver.
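
For reference, a sketch of the combination that did work (flag names as of minikube ~0.24; adjust for your version):

$ minikube start --bootstrapper=kubeadm --vm-driver=virtualbox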

You can specify the actual Minikube ISO version like so:

--iso-url=https://storage.googleapis.com/minikube/iso/minikube-v0.23.5.iso
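
The flag is passed to minikube start, and a new ISO only takes effect on a freshly created VM, so delete the old one first (this wipes cluster state). A sketch:

$ minikube delete
$ minikube start --iso-url=https://storage.googleapis.com/minikube/iso/minikube-v0.23.5.iso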

As an aside, is there any reason the ISO semantic version is different from the minikube semantic version? That threw us for a loop as well… 😄

I was facing the same problem and solved it with this:

$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
  • minikube version: v0.25.0
  • kubernetes version: 1.8
  • kubectl version: 1.9.2
  • mac osx: 10.13.4

I was getting CrashLoopBackOff for the kube-dns-7bb84f958d-6sglb pod, with these logs:

$ kubectl logs kube-dns-7bb84f958d-6sglb -n kube-system -c kubedns
I0426 13:47:09.970362       1 dns.go:48] version: 1.14.4-2-g5584e04
I0426 13:47:09.971049       1 server.go:66] Using configuration read from ConfigMap: kube-system:kube-dns
I0426 13:47:09.971091       1 server.go:113] FLAG: --alsologtostderr="false"
I0426 13:47:09.971101       1 server.go:113] FLAG: --config-dir=""
I0426 13:47:09.971106       1 server.go:113] FLAG: --config-map="kube-dns"
I0426 13:47:09.971109       1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0426 13:47:09.971112       1 server.go:113] FLAG: --config-period="10s"
I0426 13:47:09.971117       1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0426 13:47:09.971121       1 server.go:113] FLAG: --dns-port="10053"
...
I0426 13:47:13.986287       1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
E0426 13:47:13.987510       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:default" cannot list endpoints at the cluster scope
E0426 13:47:13.987889       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope
E0426 13:47:13.989914       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
...
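
Those “forbidden” errors are RBAC denials: kube-dns here runs under the kube-system:default service account, which is exactly the account the clusterrolebinding above grants cluster-admin to. You can confirm the gap with kubectl auth can-i (assuming a kubectl recent enough to have it); it should print “no” before the binding and “yes” after:

$ kubectl auth can-i list endpoints --as=system:serviceaccount:kube-system:default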

I hope this helps.

DNS is broken for me as well after upgrading to minikube v0.24.1 (it probably broke in a preceding version) on macOS 10.12.6.

Solved by recreating the cluster with minikube delete (warning: this will remove all of your data) followed by minikube start.
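
That is, assuming nothing in the cluster needs to be preserved:

$ minikube delete    # removes the VM and all cluster state
$ minikube start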

I can reproduce the issue with k8s v1.7.5 on Linux… It doesn’t seem to be a Mac-only issue.

I have the same issue:

  • minikube version: v0.29.0
  • kubernetes version: 1.10.0
  • kubectl version: 1.12.0
  • OS: Ubuntu 18.04.1 LTS
  • VM driver: none
  • kube-dns addon: enabled

CoreDNS is in CrashLoopBackOff with this log:

.:53
2018/09/28 14:54:48 [INFO] CoreDNS-1.2.2
2018/09/28 14:54:48 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2018/09/28 14:54:48 [INFO] plugin/reload: Running configuration MD5 = 486384b491cef6cb69c1f57a02087363
2018/09/28 14:54:48 [FATAL] plugin/loop: Seen "HINFO IN 9173194449250285441.798654804648663467." more than twice, loop detected

I completely tear down and recreate the cluster for my use case, and I still have the issue.
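
A likely cause with --vm-driver=none on Ubuntu 18.04: systemd-resolved writes the stub address 127.0.0.53 into /etc/resolv.conf, CoreDNS inherits that file as its upstream, forwards queries back to itself, and the loop plugin then aborts exactly as in the log above. A common workaround (a sketch; whether the directive is proxy or forward depends on the CoreDNS version) is to point CoreDNS or kubelet at a real upstream instead:

$ kubectl -n kube-system edit configmap coredns
# in the Corefile, replace the resolv.conf upstream, e.g.
#   proxy . /etc/resolv.conf   ->   proxy . 8.8.8.8
# or start kubelet with the non-stub resolver file:
#   --resolv-conf=/run/systemd/resolve/resolv.conf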