kubernetes: ndots breaks DNS resolution
/kind bug
What happened:
cannot resolve cluster-external names:

nslookup google.com
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'google.com': Name does not resolve
What you expected to happen:
To be able to resolve external names.
How to reproduce it (as minimally and precisely as possible): create a pod with dnsPolicy: ClusterFirst (see the sketch below).
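A minimal reproduction sketch; the pod name and the alpine image are illustrative choices, not from the original report:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  dnsPolicy: ClusterFirst    # the policy named in the report
  containers:
  - name: test
    image: alpine:3.7        # musl-based image, as suspected in the comments below
    command: ["sleep", "3600"]
```

Then run `kubectl exec dns-test -- nslookup google.com` and observe the failure above.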
Anything else we need to know?:

cat /etc/resolv.conf
nameserver 10.96.0.10
search weave.svc.cluster.local svc.cluster.local cluster.local lan.davidkarlsen.com davidkarlsen.com
options ndots:5   <-- this should not be there
If ndots:5 is removed, the pod can resolve external addresses as expected according to the policy.
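For context, ndots:5 tells the resolver to treat any name with fewer than five dots as relative, so the whole search list is walked before the name is tried as-is. An illustrative query sequence for a glibc resolver (musl behaves differently, as discussed in the comments below):

```
google.com.weave.svc.cluster.local   -> NXDOMAIN
google.com.svc.cluster.local         -> NXDOMAIN
google.com.cluster.local             -> NXDOMAIN
google.com.lan.davidkarlsen.com      -> NXDOMAIN
google.com.davidkarlsen.com          -> NXDOMAIN
google.com                           -> answer (only on the sixth query)
```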
Environment:
- Kubernetes version (use kubectl version):
  Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:13:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: kubeadm / bare metal
- OS (e.g. from /etc/os-release):
  NAME="Ubuntu"
  VERSION="18.04 LTS (Bionic Beaver)"
  ID=ubuntu
  ID_LIKE=debian
  PRETTY_NAME="Ubuntu 18.04 LTS"
  VERSION_ID="18.04"
  HOME_URL="https://www.ubuntu.com/"
  SUPPORT_URL="https://help.ubuntu.com/"
  BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
  PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
  VERSION_CODENAME=bionic
  UBUNTU_CODENAME=bionic
- Kernel (e.g. uname -a):
  Linux main.davidkarlsen.com 4.15.0-22-generic #24-Ubuntu SMP Wed May 16 12:15:17 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: kubeadm
- Others: kubectl --namespace kube-system get cm/kube-dns -o yaml
  apiVersion: v1
  data:
    upstreamNameservers: |
      ["192.168.3.2"]
  kind: ConfigMap
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","data":{"stubDomains":"{\"lan.davidkarlsen.com\": [\"192.168.3.2\"]}\n","upstreamNameservers":"[\"192.168.3.2\", \"8.8.4.4\", \"8.8.8.8\"]\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"kube-dns","namespace":"kube-system"}}
    creationTimestamp: 2018-05-28T07:58:16Z
    name: kube-dns
    namespace: kube-system
    resourceVersion: "3162037"
    selfLink: /api/v1/namespaces/kube-system/configmaps/kube-dns
    uid: e0dd4d17-624c-11e8-8edc-00902755ddee
cat /etc/resolv.conf
nameserver 192.168.3.2
search lan.davidkarlsen.com davidkarlsen.com

(192.168.3.2 uses upstream DNS servers and is able to resolve.)
About this issue
- State: closed
- Created 6 years ago
- Reactions: 15
- Comments: 56 (28 by maintainers)
Commits related to this issue
- fix: Backend Service does not resolve DNS. refs #577 Bug documentation: https://github.com/kubernetes/kubernetes/issues/64924 — committed to eoscostarica/eos-rate by kuronosec 3 years ago
- chore: replace git-init base image Use a glibc-based image to avoid DNS resolver bugs. Ref: https://github.com/kubernetes/kubernetes/issues/64924 — committed to katanomi/pipeline by l-qing a year ago
I bet your pod image is based on alpine.
You can customize pod DNS similar to the example here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config. You can set ndots to 1 (see the sketch below).
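A sketch of that per-pod override, following the linked docs; the pod/container names and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  dnsPolicy: ClusterFirst
  dnsConfig:
    options:
    - name: ndots
      value: "1"   # names with at least one dot are queried as-is first
  containers:
  - name: app
    image: alpine:3.7
```

With ndots:1, any lookup containing at least one dot is tried as an absolute name first, so external lookups no longer depend on the search list.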
From earlier comments, this seems like a musl implementation issue rather than Kubernetes DNS? https://bugs.alpinelinux.org/issues/9017 says the DNS client stops trying other search paths if a previous one returns an unexpected error. It looks like this unexpected error is returned by whatever server is the upstream for the node (maybe Cloudflare, maybe something else). Setting ndots to a small value like 1 will ensure search-path expansion does not kick in for hostnames with at least two labels.
Based on this, I am closing the issue; please reopen if needed.
/close
I am also experiencing this same issue. Setting spec.dnsConfig.options to {name: ndots, value: "1"} does fix it. However, I don't want to do this for every container; is there any way I can change the global default, or do something else to resolve this? Any help is appreciated.
Sorry to comment on a closed topic, but was a global ndots option ever introduced?
For anyone who came here looking for a solution to a very similar issue, where e.g. wget or other apps reported a bad domain but nslookup worked fine (ndots:1 also resolved it, yet the same container worked on another cluster): the issue was an extra "." at the end of the search line in resolv.conf:

search default.svc.cluster.local svc.cluster.local cluster.local .

That dot came from a "search ." line on the host, and removing that line resolved the DNS issues.
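A quick way to check for that stray search domain on the node; a generic sketch, since kubelet copies the host's search domains into pod resolv.conf under ClusterFirst:

```sh
grep '^search' /etc/resolv.conf
# If the line ends in a bare ".", remove it (or fix whatever generates it)
# and recreate the affected pods.
```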
Running into the same issue; I've been using dnsConfig as an override for applications that experience this. Would love a long-term solution.

Running Kubes 1.11 with CoreDNS on bare-metal.
EDIT: Also, I realized my containers aren't running an Alpine image? E.g. golang:1.10.3
I've got the same issue with some containers today:

Linux test-77fd4f49f7-m5b75 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27) x86_64 Linux

This is my resolv.conf file:

If I remove this line (which is not really possible, because some containers are built from GitLab and others):

options ndots:5

my DNS works correctly again. The issue occurs only on some hosts (not all), and if I add a "." at the end of the hostname, everything resolves correctly. Can someone help?
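For reference, a trailing dot makes a name fully qualified, so the resolver skips search-list expansion entirely; a quick illustration:

```sh
nslookup google.com.   # FQDN: queried as-is, search list skipped
nslookup google.com    # fewer dots than ndots:5: search list tried first
```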
@davidkarlsen could you try this image davidzqwang/alpine-dns:3.7 on your cluster? I made the following changes to musl, built a new image based on alpine:3.7, and tested it on my cluster; it seems to work.

@tesharp, well, even with blank I get the same issue. The strange thing is that it works for some time and then suddenly breaks…
@davidkarlsen, you can override the ndots option under the ClusterFirst policy by setting spec.dnsConfig.options to {name: ndots, value: "1"}. This feature was first introduced in 1.9.

ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service#pod-s-dns-config
spec: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network/pod-resolv-conf.md
For me it doesn't work even with ndots:1.