minikube: CoreDNS fails on minions on multi-node clusters. Can't resolve external DNS from non-master pods.
So, I already fixed this and lost some of the logs. But it's pretty straightforward.
- Make a cluster
minikube start --vm-driver=kvm2 --cpus=2 --nodes 3 --network-plugin=cni \
--addons registry --enable-default-cni=false \
--insecure-registry "10.0.0.0/24" --insecure-registry "192.168.39.0/24" \
--extra-config=kubeadm.pod-network-cidr=10.244.0.0/16 \
--extra-config=kubelet.network-plugin=cni
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
n.b. I built from head a couple days ago
minikube version: v1.10.0-beta.2
commit: 80c3324b6f526911d46033721df844174fe7f597
- make a pod on the master and a pod on a worker node (one way to do this is sketched below)
- from node pod:
curl google.com
- from master pod:
curl google.com
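A minimal sketch of that test, assuming the default node names for a multi-node profile (minikube for the control plane, minikube-m02 for a worker) and hypothetical pod names test-master / test-node:
kubectl run test-master --image=curlimages/curl --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"minikube"}}' --command -- sleep 3600
kubectl run test-node --image=curlimages/curl --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"minikube-m02"}}' --command -- sleep 3600
kubectl exec test-node -- curl -sI google.com
kubectl exec test-master -- curl -sI google.com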
CoreDNS was crashing per https://github.com/kubernetes/kubernetes/issues/75414
Fixed with
kubectl patch deployment coredns -n kube-system --patch '{"spec":{"template":{"spec":{"volumes":[{"name":"emptydir-tmp","emptyDir":{}}],"containers":[{"name":"coredns","volumeMounts":[{"name":"emptydir-tmp","mountPath":"/tmp"}]}]}}}}'
Edit: had wrong flannel yaml listed.
I also still see problems with multi-node clusters and kvm2. This happens on first creation of the cluster as well as on restarting it. Here you can see the logs when I restart a 3-node cluster.
The CoreDNS pod is running, but the problem seems to be that it is started too early. Logs of the CoreDNS pod.
After restarting the CoreDNS pod, there are no more errors visible in the logs and DNS starts working.
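One way to do that restart, as a sketch (assumes the standard coredns Deployment in kube-system; rollout restart needs kubectl v1.15+):
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system get pods -l k8s-app=kube-dns -w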
@tstromberg can we reopen this issue or create a new one for it?
Okay, I think I've figured something out. I'm going to open a new ticket. This is all based on problems in the iptables. I'll add a link to the new ticket when I get it put together.
This issue seems to be closed by mistake.
If I understood correctly, @tstromberg wrote "Does not fix #…" in his PR and the issue got closed automatically without taking the "Does not" part into consideration.
btw I can confirm that the issue persists on latest MacOS and minikube v1.17.1 (latest), when I run it like this:
minikube start --nodes 2 --vm-driver=hyperkit
DNS resolves fine inside the minikube nodes, but containers fail to resolve.
After testing, I can confirm that resolution of Kubernetes hosts from non-master pods is broken. I was not able to replicate issues with DNS resolution, however.
In a nutshell, I believe that the issue of CoreDNS access from non-master nodes is a sign of a broken CNI configuration. I'll continue to investigate.
My tests were based on https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution.
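For reference, the core of that debugging flow is (the manifest URL and pod name come from the upstream docs page and may change between Kubernetes versions):
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
kubectl exec -i -t dnsutils -- cat /etc/resolv.conf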
My env: Ubuntu 19.10, minikube v1.11.0, multi-node, KVM2
Scenario 1
minikube start -p dns --cpus=2 --memory=2g --nodes=2 --driver=kvm2 --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
Scenario 2
minikube start -p dns --cpus=2 --memory=2g --nodes=2 --driver=kvm2 --enable-default-cni=false --network-plugin=cni
Conclusion:
I checked connectivity between the pods by launching a pod on each node and trying to connect from one to the other with nc.
The pods on the workers can connect to each other; connectivity to the pod on the master does not work.
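A sketch of that kind of check, assuming two test pods built from a netcat-capable image such as busybox (pod names and the target pod IP below are placeholders):
kubectl get pods -o wide   # note each test pod's IP and node
kubectl exec pod-on-worker -- nc -l -p 8080 &
kubectl exec pod-on-master -- sh -c 'echo hi | nc -w 3 <worker-pod-ip> 8080'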
I deleted the coredns pods, they were rescheduled onto the non-master nodes, and DNS started working.
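A sketch of that step (assumes the CoreDNS pods carry the usual k8s-app=kube-dns label):
kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide   # check which nodes the new pods land on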
So something is not working with kindnet on the master.