minikube: Add cluster DNS to node resolv.conf (cannot pull image from cluster-internal host name)

Minikube version (use minikube version): v0.23.0

  • OS (e.g. from /etc/os-release): Linux xps 4.9.58 #1-NixOS SMP Sat Oct 21 15:21:39 UTC 2017 x86_64 GNU/Linux
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.23.6.iso

What happened:

Failed to pull image "docker-registry-luminous-parrot:4000/todo-list@sha256:c3fb64353659cad2e6e96af7b6d5e3e58340af74108a3e2b663f6df77debd872": rpc error: code = Unknown desc = Error response from daemon: Get https://docker-registry-luminous-parrot:4000/v2/: dial tcp: lookup docker-registry-luminous-parrot on 10.0.2.3:53: no such host

even though this service is available:

(screenshot: service listing showing the docker-registry-luminous-parrot service)

when I ssh into a pod in the same namespace:

# nslookup docker-registry-luminous-parrot
Server:		10.0.0.10
Address:	10.0.0.10#53

Name:	docker-registry-luminous-parrot.default.svc.cluster.local
Address: 10.0.0.178

when I read /etc/resolv.conf from minikube:

$ minikube ssh
$ cat /etc/resolv.conf
nameserver 10.0.2.3

It looks like minikube has the wrong DNS server: 10.0.0.10 (the cluster DNS) finds the service correctly, while 10.0.2.3 does not.
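A quick way to see the mismatch side by side (kube-dns is the standard cluster DNS service in kube-system; output matches what I see above):

$ kubectl get service kube-dns --namespace kube-system --template '{{.spec.clusterIP}}'
10.0.0.10
$ minikube ssh "cat /etc/resolv.conf"
nameserver 10.0.2.3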

What you expected to happen:

I expect Kubernetes to be able to pull the image using that cluster-internal registry host name.

How to reproduce it (as minimally and precisely as possible):

minikube start --insecure-registry 10.0.0.0/24 --disk-size 60g
helm init
helm install incubator/docker-registry
# push an image to the registry
# try to create a deployment with the image using the registry
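To make the last two commented steps concrete, here is a sketch (the image name and registry host are illustrative, matching the ones above; on kubectl of this era, kubectl run creates a Deployment):

docker tag todo-list docker-registry-luminous-parrot:4000/todo-list
docker push docker-registry-luminous-parrot:4000/todo-list
kubectl run todo-list --image=docker-registry-luminous-parrot:4000/todo-list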

Most upvoted comments

Here’s my solution that doesn’t use the (now removed) kube-dns addon and doesn’t use a hardcoded IP address.

Run this script from outside Minikube after every Minikube startup:

# Cluster IP of the kube-dns service
DNS=$(kubectl get service/kube-dns --namespace kube-system --template '{{.spec.clusterIP}}')
# Desired resolved.conf contents; printf (not echo) so the \n is a real newline.
# base64 lets the multi-line file survive the round trip through "minikube ssh".
CONFIGURED=$(printf '[Resolve]\nDNS=%s\n' "$DNS" | base64)
# Current contents inside the VM; tr strips the \r that minikube ssh emits
CURRENT=$(minikube ssh "cat /etc/systemd/resolved.conf | base64" | tr -d "\r")
if [ "$CURRENT" != "$CONFIGURED" ]; then
  minikube ssh "echo $CONFIGURED | base64 --decode | sudo tee /etc/systemd/resolved.conf"
  minikube ssh "sudo systemctl restart systemd-resolved --wait"
  echo "Configured and restarted"
else
  echo "Already configured"
fi

I wonder if this is something that’d make sense as default Minikube behaviour?

I too have an issue with this. A clean setup ends up with the DNS set to 10.0.2.3, which is not the IP of the kube-dns service (10.96.0.10). Any idea why this is happening?

As a test, on the minikube host I updated /etc/systemd/resolved.conf, adding

DNS=10.0.0.10

and then ran systemctl restart systemd-resolved.
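Roughly these commands, from inside minikube ssh (the DNS IP is the cluster DNS seen in the nslookup output above):

echo "DNS=10.0.0.10" | sudo tee -a /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved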

On the minikube host:

$ nslookup docker-registry-luminous-parrot
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'docker-registry-luminous-parrot'

In a pod:

# nslookup docker-registry-luminous-parrot
Server:		10.0.0.10
Address:	10.0.0.10#53

Name:	docker-registry-luminous-parrot.default.svc.cluster.local
Address: 10.0.0.178

I still have this issue: I can push to the internal registry, but I can't use that image in a Deployment:

  Warning  Failed     4h6m (x2 over 4h6m)  kubelet, minikube  Failed to pull image "registry.kube-system.svc.cluster.local/k8spatterns/random-generator": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.kube-system.svc.cluster.local/v2/: dial tcp: lookup registry.kube-system.svc.cluster.local on 192.168.64.1:53: no such host
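For reference, the failing attempt is nothing exotic: a deployment that names the image by the registry's cluster DNS name, along the lines of (a sketch; the image path is taken from the error above):

kubectl create deployment random-generator --image=registry.kube-system.svc.cluster.local/k8spatterns/random-generator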

Actually, my question is: how is the registry addon supposed to work? Are images stored in this registry supposed to be usable as Pod images?

/remove-lifecycle rotten

I also needed to set:

VBoxManage modifyvm "permanent" --natdnshostresolver1 on

on the VM. I think this may be related to dnsmasq running on the host.
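Note that modifyvm only takes effect while the VM is powered off, and in a stock setup the VirtualBox VM is named minikube, so the sequence is roughly:

minikube stop
VBoxManage modifyvm "minikube" --natdnshostresolver1 on
minikube start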

I still see this issue with minikube v0.28.0 and kube 1.10.0.

Combining @andrewrk's and @reymont's solutions worked for me as a workaround.

/remove-lifecycle stale