dns: nslookup: can't resolve 'kubernetes.default'
Hello, I hope this is the right place to post my issue. Forgive me if it isn't, and please redirect me to the right place.
I am trying to install a cluster with one master (server-1) and one minion (server-2), both running Ubuntu, using flannel for networking and kubeadm to set up the master and the minion. I am also trying to run the dashboard from the minion server-2 as discussed here. I am very new to Kubernetes and not an expert in Linux networking, so any help would be appreciated. The dashboard is not working, and after some investigation it seems to be a DNS issue.
kubectl and kubeadm: 1.6.6
Docker: 17.03.1-ce
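For context, here is roughly the kubeadm + flannel flow for this setup (a sketch only; flannel expects the 10.244.0.0/16 pod CIDR, which matches the endpoint addresses shown below, and the exact flannel manifest URL and join token are placeholders):

```shell
# On server-1 (master): initialize with the pod CIDR flannel expects.
kubeadm init --pod-network-cidr=10.244.0.0/16

# Install flannel as the pod network add-on.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On server-2 (minion): join using the token printed by kubeadm init.
kubeadm join --token <token> <master-ip>:6443
```

If the cluster was initialized without `--pod-network-cidr=10.244.0.0/16`, flannel will not route pod traffic correctly, which can surface exactly as DNS timeouts.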
My DNS service is up and exposing endpoints
ubuntu@server-1:~$ kubectl get svc --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 10.96.0.1 <none> 443/TCP 20h
kube-system kube-dns 10.96.0.10 <none> 53/UDP,53/TCP 20h
kube-system kubernetes-dashboard 10.97.135.242 <none> 80/TCP 3h
ubuntu@server-1:~$ kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 10.244.0.4:53,10.244.0.4:53 17h
I created a busybox pod, and when I run nslookup from it I get the following errors. Note that the command hangs for some time before returning the error.
ubuntu@server-1:~$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.default'
ubuntu@server-1:~$ kubectl exec -ti busybox -- nslookup kubernetes.local
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.local'
ubuntu@server-1:~$ kubectl exec -ti busybox -- nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes'
ubuntu@server-1:~$ kubectl exec -ti busybox -- nslookup 10.96.0.1
Server: 10.96.0.10
Address 1: 10.96.0.10
Name: 10.96.0.1
Address 1: 10.96.0.1
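Note that the apparent success of the 10.96.0.1 lookup may not prove DNS traffic is flowing: busybox's nslookup applet can echo the IP back even without a real answer (no hostname is printed above), and a reverse lookup is built differently from a name lookup anyway — the resolver constructs a PTR query name from the reversed octets and never consults the search list. For illustration, the PTR name construction looks like this:

```shell
# Build the in-addr.arpa PTR query name for an IPv4 address,
# as a resolver does for a reverse lookup.
ip="10.96.0.1"
echo "$ip" | awk -F. '{ print $4"."$3"."$2"."$1".in-addr.arpa" }'
# prints: 1.0.96.10.in-addr.arpa
```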
resolv.conf seems properly configured:
ubuntu@server-1:~$ kubectl exec busybox cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local local
options ndots:5
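With `options ndots:5`, any name containing fewer than five dots is first tried with each search suffix appended before the literal name, so a lookup of kubernetes.default actually generates several queries. A quick sketch of the candidate names the resolver walks through:

```shell
# Enumerate the lookups the resolver attempts for a short name
# under ndots:5 with the search list from resolv.conf above.
name="kubernetes.default"
for suffix in default.svc.cluster.local svc.cluster.local cluster.local local; do
  echo "$name.$suffix"
done
echo "$name"   # the literal name is tried last
```

The second candidate, kubernetes.default.svc.cluster.local, is the one that should resolve to 10.96.0.1; if even that times out, the problem is likely connectivity to kube-dns rather than the name itself.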
The DNS pod is running:
ubuntu@server-1:~$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
kube-dns-692378583-5zj21 3/3 Running 0 17h
Here are the iptables rules from server-1:
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-SERVICES (2 references)
target prot opt source destination
REJECT tcp -- anywhere 10.103.141.154 /* kube-system/kubernetes-dashboard: has no endpoints */ tcp dpt:http reject-with icmp-port-unreachable
Here are the iptables rules from server-2:
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-SERVICES (2 references)
target prot opt source destination
REJECT tcp -- anywhere 10.103.141.154 /* kube-system/kubernetes-dashboard: has no endpoints */ tcp dpt:http reject-with icmp-port-unreachable
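One thing that stands out in both dumps is the FORWARD chain policy of DROP. Docker 1.13+ sets this policy on the host, and on kubeadm/flannel setups of this era it could silently drop pod-to-pod traffic between nodes — including UDP queries from a pod on server-2 to kube-dns at 10.244.0.4 on server-1 — which matches the hang-then-timeout behaviour above. A possible check and workaround to try on each node (host-level configuration, run as root; a sketch, not a definitive fix):

```shell
# Bridged pod traffic must be visible to iptables; this should print 1.
sysctl net.bridge.bridge-nf-call-iptables

# Temporary workaround: allow forwarded traffic so pod-to-pod packets
# are not dropped by the FORWARD chain's DROP policy.
iptables -P FORWARD ACCEPT
```

Note that `iptables -P FORWARD ACCEPT` does not persist across reboots or Docker restarts, so if it helps, a persistent rule is still needed.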
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Comments: 15 (4 by maintainers)
My guess is that the "(null)" response is for the AAAA request. Some resolvers will send A and AAAA requests simultaneously. Try
nslookup -type=A kubernetes.default
That said, the (null) is a bogus response either way, and I'm not sure why it is there.