k3d: [BUG] DNS not resolving
What did you do?
- How was the cluster created?
k3d create -n mycluster
- What did you do afterwards?
Start a pod and try a DNS query:
$ export KUBECONFIG="$(k3d get-kubeconfig --name='mycluster')"
$ kubectl run --restart=Never --rm -i --tty tmp --image=alpine -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup www.gmail.com
Server: 10.43.0.10
Address: 10.43.0.10:53
;; connection timed out; no servers could be reached
/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
/ # exit
Exec into the k3d container and do the same DNS query:
docker exec -it k3d-endpoint-server sh
/ # nslookup www.gmail.com
Server: 127.0.0.11
Address: 127.0.0.11:53
Non-authoritative answer:
www.gmail.com canonical name = mail.google.com
mail.google.com canonical name = googlemail.l.google.com
Name: googlemail.l.google.com
Address: 172.217.164.101
Non-authoritative answer:
www.gmail.com canonical name = mail.google.com
mail.google.com canonical name = googlemail.l.google.com
Name: googlemail.l.google.com
Address: 2607:f8b0:4005:80b::2005
/ # cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
/ # exit
What did you expect to happen? I would expect the pods in the k3d cluster to be able to resolve DNS names
Which OS & Architecture? MacOS 10.15.3
Which version of k3d?
- output of
k3d --version
k3d version v1.7.0
Which version of docker?
- output of
docker version
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:22:34 2019
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:29:19 2019
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: v1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 6
- Comments: 91 (32 by maintainers)
Personally I fixed this by injecting a custom ConfigMap for CoreDNS, changing the default forwarder to point at my own DNS servers (replacing the x placeholders with the IPs of your DNS servers).
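A minimal sketch of that edit, assuming the stock k3s Corefile (which forwards to /etc/resolv.conf); x.x.x.x is a placeholder for your DNS server's IP:
# Rewrite the forward directive of the coredns configmap to use an explicit
# upstream server instead of the container's resolv.conf:
kubectl -n kube-system get configmap coredns -o yaml | \
  sed 's|forward . /etc/resolv.conf|forward . x.x.x.x|' | \
  kubectl apply -f -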
I had a quick look at the docker options and I confirm that I don't see an option to configure custom DNS servers on the docker network.
Maybe a feasible option would be to add a custom flag to the k3d command which adds the custom DNS servers to the CoreDNS ConfigMap directly.

For those who have the problem, a simple fix is to mount your /etc/resolv.conf onto the cluster:
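For example (a sketch using the v5 syntax that appears later in this thread; mycluster is a placeholder name):
# Mount the host's DNS configuration into the cluster's node containers:
k3d cluster create mycluster --volume /etc/resolv.conf:/etc/resolv.conf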
It will definitely move to default 👍 @felipecrs, KinD does it exactly the same way.
To work around this issue I'm patching the coredns configmap.
After more investigation I found that this could be related to the way k3d creates the docker network.
Indeed, k3d creates a custom docker network for each cluster, and when this happens, resolution is done through the docker daemon: the requests are forwarded to the DNS servers configured in your host's resolv.conf, but through a single DNS server (docker's embedded one).
This means that if your daemon.json is, like mine, not configured to provide extra DNS servers, it defaults to 8.8.8.8, which does not resolve any company address, for example. It would be useful to have a custom option to pass to k3d when it starts the cluster to specify the DNS servers there, as proposed in https://github.com/rancher/k3d/issues/165
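For reference, the daemon-wide DNS servers can be set in daemon.json; a sketch for Linux, where the file typically lives at /etc/docker/daemon.json (x.x.x.x is a placeholder for your DNS server):
# Write the daemon-wide DNS configuration (this overwrites the file, so merge
# by hand if you already have other settings), then restart the Docker daemon:
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "dns": ["x.x.x.x", "8.8.8.8"]
}
EOF
sudo systemctl restart docker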
The fix with the environment variable doesn't seem to be working for me either. Apple laptop with M1 silicon CPU running Docker Desktop:
The fix with volume creation doesn't seem to be working either:
Some quick-and-dirty bash for patching the coredns configmap with the nameservers in /etc/resolv.conf (code adapted from here):
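A rough sketch of that idea (not the original snippet), assuming the stock forward directive and a coredns deployment in kube-system:
# Collect the host's nameservers into a space-separated list...
servers=$(awk '/^nameserver/ {printf "%s ", $2}' /etc/resolv.conf)
# ...splice them into the Corefile in place of the resolv.conf forwarder...
kubectl -n kube-system get configmap coredns -o yaml | \
  sed "s|forward . /etc/resolv.conf|forward . ${servers}|" | \
  kubectl apply -f -
# ...and restart CoreDNS so it picks up the new config:
kubectl -n kube-system rollout restart deployment coredns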
For those like me for whom mounting with k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf doesn't fix the issue: you may be running on a host system that uses systemd-resolved. If you are, the /etc/resolv.conf on the host system will have its nameserver set to a 127.0.0.x address for the systemd-resolved cache server, which won't resolve properly inside the k3d container. Instead you'd need to mount the resolv.conf that systemd-resolved uses. For me, this was /run/systemd/resolve/resolv.conf. This makes the command I had to run to fix this issue:
k3d cluster create --volume /run/systemd/resolve/resolv.conf:/etc/resolv.conf
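A quick way to check whether this applies to your host (an illustrative check, not part of the original comment):
# If this prints a 127.0.0.x nameserver, or the symlink resolves into
# /run/systemd/resolve/, the host is using the systemd-resolved stub:
grep '^nameserver' /etc/resolv.conf
readlink -f /etc/resolv.conf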
I seem to be having a similar issue with Docker Desktop for macOS and the Palo Alto Networks GlobalProtect VPN. When the VPN is up, regular Docker containers don't have any problems resolving DNS via the resolver that is pushed to the client at VPN connect. However, under k3d, no containers in any pods can resolve those same addresses.
I haven’t tried to map the resolv.conf into the coredns container yet. Why doesn’t k3d/k3s just use the host resolver?
Hey, sorry to reopen this. At my company some scripts use export K3D_FIX_DNS=1 and that makes my cluster-internal DNS stop working. Once I turn it off, it all starts working perfectly. Really strange… Please don't make this the default!!!

Sorry (again): it seems my previous issue was due to the Docker daemon. I'm using a VPN. It seems that the Docker daemon picks up the DNS resolution config at start (outside the VPN) and does not update it when the VPN is activated. It continues to use the DNS of my local network, not the corporate one. A simple restart of the Docker daemon while the VPN is active solves the issue.
@iwilltry42, awesome, running the above has both external DNS and host.k3d.internal working without any patching required. I'll keep using it and see how it goes.

k3d: v5.0.0-rc.5, OS: macOS Big Sur 11.6, docker: Docker for Desktop 20.10.8
On a side note, I’m happy to help out in general so feel free to tag me.
@iJebus, thank you! I just use export K3D_FIX_DNS=1 and k3d cluster create testdns 👍 (or with --trace when I need some more details).

@iwilltry42, while testing the same issue I found another one in v5.0.0-rc.1: the CLI switch --k3s-arg "--disable=traefik@all" is not applied during the creation of a cluster, but when I use the same options.k3s.extraArgs in a config file, it works.
Another request for this,
and another solution for editing the coredns configmap (this one adding a known set of needed DNS servers, 10.0.99.1 and 10.0.99.2).
Ideally this could be done using something like:
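A hypothetical invocation in that spirit (note: --dns-server is not an existing k3d flag, it only illustrates the proposed UX):
# Hypothetical flag, sketching what a built-in option could look like:
k3d cluster create mycluster --dns-server 10.0.99.1 --dns-server 10.0.99.2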
This only happens to me if I deploy something in the default namespace; in other namespaces it worked just fine, idk why.

LE: just noticed it doesn't matter which namespace you deploy stuff to, it's about the network you're in. So if I'm in the office (company LAN) I get this issue, but when I'm trying it from home, it just simply works. And I cannot say what network restrictions they applied in the company 😄
Also, @Athosone's solution works for me now.
Maybe interacting with k8s itself is not needed. k3s deploys whatever is under /var/lib/rancher/k3s/server/manifests, so you could add a valid CoreDNS configuration by just customizing the DNS part according to the command line flag.

Currently, k3d doesn't interact with any Kubernetes resources inside the cluster (i.e. in k3s) and I tend to avoid this because of the huge dependencies on Kubernetes libraries it could draw in. Upon cluster creation this could work, however, by modifying the Chart that's being auto-deployed by k3s. Not sure if this could go into k3s itself instead 🤔
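A sketch of that auto-deploy route (my-coredns.yaml is a placeholder for a manifest carrying the customized CoreDNS config):
# Any manifest placed in the k3s auto-deploy directory is applied on startup,
# so it can be shipped into the server container at cluster creation time:
k3d cluster create mycluster \
  --volume "$PWD/my-coredns.yaml:/var/lib/rancher/k3s/server/manifests/my-coredns.yaml"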
@luisdavim what’s the patch that you apply?
Thanks a lot for the great work while searching for a solution to the issue! Using K3D_FIX_DNS=1 solves it definitively, even after restarting the Docker daemon (Docker in WSL on my side).

However, it seems that it's not yet the default behavior with k3d v5.6.0. Any plan on this front, @iwilltry42?
Hi @parg0MakSystem, can you provide some more information as per https://github.com/rancher/k3d/issues/209#issuecomment-947401078, please?
Additionally, you're hitting another issue here: with K3D_FIX_DNS, k3d writes an entrypoint script that modifies /etc/resolv.conf inside the container, which won't work if it's a mounted file (just implemented a quick check in 3cc4c5c5). Even with K3D_FIX_DNS=0 this may fail, but this time at the loadbalancer, which you'd have to exclude from the volume mount (as otherwise it doesn't use the docker resolver and cannot resolve the names of the other containers anymore):
k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf@server:0

@gickis, the fix for your mentioned bug was released in https://github.com/rancher/k3d/releases/tag/v5.0.0-rc.2
@iwilltry42 Yes! DNS is working like a charm with K3D_FIX_DNS=1. I've tested it in WSL2 with the Cisco AnyConnect VPN; resources in the private VPN network are accessible.
@iwilltry42 This works on Linux and that’s amazing, thanks! Although it is still not working on Mac: https://gist.github.com/sqd/1e17af4ab24ede63aedec73fd6396141
Hi, all. One interesting observation: I'm on Win10, using Docker Toolbox (i.e. a VirtualBox VM running Docker). After removing all the VPNs, I discovered that mounting /etc/resolv.conf was breaking DNS resolution from within pods 😃 As soon as I removed the /etc/resolv.conf mount, resolution worked fine.

From what I understand there is nothing special. It just mounts the volume to the container running the k3s server. Thus I guess you could mount anything and even maybe do some airgapped setup.
This seems to be broken because the coredns pod does not have an /etc/resolv.conf in it, while the ConfigMap is configured to forward to that. All the docs have told me that coredns will use the $HOST's resolv.conf, but when I used k3d, which uses k3s, the coredns "pod" doesn't run as a container or as a process on the $HOST. It runs as a process of containerd, and therefore it doesn't get any of the correct settings.
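For context, an illustrative way to confirm what the configmap actually forwards to (by default the k3s Corefile forwards to /etc/resolv.conf):
# Print the Corefile CoreDNS is running with and look for the "forward" line:
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'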
I seem to be running into this issue about every couple of weeks. This is the only workaround that seems to "just work" so I can get back to the job I'm paid to do: