kubeadm: CoreDNS Crashloop Error - 1.12.1
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version): 1.12.1
Environment:
- Kubernetes version (use kubectl version): 1.12.1
- Cloud provider or hardware configuration: local laptop
- OS (e.g. from /etc/os-release): Ubuntu 18.04
- Kernel (e.g. uname -a): 4.15.0-36-generic
- Others:
What happened?
After kubeadm init and installing Calico, the coredns pods never recover from the crash loop and settle in an error state.
What you expected to happen?
The same installation method works in 1.11.0, where all pods reach the Running state.
How to reproduce it (as minimally and precisely as possible)?
Install the latest Kubernetes via kubeadm.
Anything else we need to know?
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 2
- Comments: 45 (9 by maintainers)
@bravinash please, confirm that the issue is the same on your setup.
If it’s the same, then you can try to point kubelet to the original resolv.conf this way:
and remove coredns pods:
Kubelet will start them again with new configuration.
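The exact commands were not preserved in this thread; a sketch of the two steps above, assuming a kubeadm-managed node where kubelet flags live in /var/lib/kubelet/kubeadm-flags.env and the CoreDNS pods carry the usual k8s-app=kube-dns label:

```shell
# Point kubelet back at the original resolv.conf by rewriting the
# --resolv-conf flag in the kubeadm-generated flags file (path assumed).
sudo sed -i 's|--resolv-conf=[^" ]*|--resolv-conf=/etc/resolv.conf|' \
  /var/lib/kubelet/kubeadm-flags.env
sudo systemctl restart kubelet

# Delete the coredns pods; the Deployment recreates them, and kubelet
# starts them with the new resolv.conf setting.
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```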
At least this fixed the issue in my setup:
I shared the solution that has worked for me here: https://stackoverflow.com/a/53414041/1005102
Try 1.4.0 to see if that behaves any differently. Could be related to the glog/klog issue we fixed in 1.4.0.
@utkuozdemir
alternatively:
cat /var/lib/kubelet/kubeadm-flags.env
kubeadm writes the --resolv-conf flag into that file each time it runs join and init.
I will close this issue for now as this is not reproducible anymore.
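For reference, on Ubuntu 18.04 with systemd-resolved that file typically looks something like the following (illustrative contents, not taken from this report; the exact flags vary by kubeadm version):

```shell
# /var/lib/kubelet/kubeadm-flags.env (example)
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf"
```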
I’d propose to close this issue for 2 reasons:
@bravinash you can reopen this issue if you see it again
On my system the original resolv.conf also triggered the same issue. It started to work after copying it to another file, removing one of the 3 nameserver lines from it, and changing the --resolv-conf option in /var/lib/kubelet/kubeadm-flags.env.
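A sketch of that workaround (the copy path is hypothetical, and the comment doesn’t say which of the three nameserver lines was removed, so deleting a local 127.x stub resolver is shown purely as an example):

```shell
# Copy the original resolv.conf and drop the nameserver suspected of
# causing the loop (illustrative: removes any local 127.x stub entry).
sudo cp /etc/resolv.conf /etc/resolv.conf.kubelet
sudo sed -i '/^nameserver 127\./d' /etc/resolv.conf.kubelet

# Point kubelet at the trimmed copy and restart it.
sudo sed -i 's|--resolv-conf=[^" ]*|--resolv-conf=/etc/resolv.conf.kubelet|' \
  /var/lib/kubelet/kubeadm-flags.env
sudo systemctl restart kubelet
```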
Anyway, we need confirmation from @bravinash that the issue is the same. It doesn’t look like a kubeadm issue to me.