kubernetes: kubernetes-dashboard pod in CrashLoopBackOff state
I am trying to install the Kubernetes dashboard with the command:
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
However, when I run kubectl get po -o wide --all-namespaces,
I see the status of the kubernetes-dashboard pod as “CrashLoopBackOff”. The output of kubectl logs kubernetes-dashboard-3717423461-gxrwv --namespace=kube-system
looks like this:
Starting HTTP server on port 9090
Creating API server client for https://192.168.3.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist.
Reason: the server has asked for the client to provide credentials
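For reference, the service-account credentials the dashboard mounts can be inspected like this (the secret name below is just a placeholder, not one from my cluster):

# List the token secrets in kube-system (names vary per cluster)
kubectl get secrets -n kube-system
# Inspect the token a pod mounts as its apiserver credentials
# (kubernetes-dashboard-token-abc12 is a placeholder name)
kubectl describe secret kubernetes-dashboard-token-abc12 -n kube-system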
Does anyone know how to fix this issue? Thanks in advance.
About this issue
- State: closed
- Created 8 years ago
- Reactions: 33
- Comments: 35 (4 by maintainers)
@anuribs Thanks for posting your fix/solution here. I am closing the issue for now. Please re-open or file a new issue if you think we can improve the system’s usability and documentation in this case. Thanks!
I remember resolving the issue by first deleting the secret corresponding to the kube-system namespace, i.e.
kubectl delete secret secretName -n kube-system
The api-server will then create a new secret. Now delete the dashboard pod; the new pod spun up by the rc/deployment will use the new secret, and the errors about “credentials” should be gone.
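A minimal sketch of that sequence, assuming the default dashboard deployment and a placeholder secret name:

# Find the token secret belonging to the dashboard's service account
kubectl get secrets -n kube-system | grep dashboard
# Delete it; the token controller creates a fresh replacement
# (kubernetes-dashboard-token-abc12 is a placeholder name)
kubectl delete secret kubernetes-dashboard-token-abc12 -n kube-system
# Delete the crashing pod; its replacement mounts the new secret
kubectl delete pod kubernetes-dashboard-3717423461-gxrwv -n kube-system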
At least this worked for me 😃
[update] I solved the issue by manually pointing to the apiserver using the ‘args’ attribute in ‘kubernetes-dashboard.yaml’:

args:
  # Uncomment the following line to manually specify Kubernetes API server Host
  # If not specified, Dashboard will attempt to auto discover the API server and connect
  # to it. Uncomment only if the default does not work.
  # - --apiserver-host=http://my-address:port
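If auto-discovery keeps failing, the address for --apiserver-host can be looked up first; the address and port below are only examples, use whatever your apiserver actually listens on:

# Print the API server endpoint for this cluster
kubectl cluster-info
# Uncomment and fill in the args line in kubernetes-dashboard.yaml, e.g.:
#   - --apiserver-host=http://192.168.3.1:8080   # example address/port only
# then re-apply the manifest
kubectl apply -f kubernetes-dashboard.yaml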
Make sure the kubernetes-dashboard is running on the master node. Draining the slave nodes and re-creating the dashboard on the master solved the issue for me (rough sequence after the commands below).
Installation: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Drain nodes: kubectl drain node_name
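Roughly, with worker-1 standing in for your actual node names:

# Drain each worker so the dashboard cannot be scheduled there
# (worker-1 is a placeholder; repeat for every slave node)
kubectl drain worker-1 --ignore-daemonsets
# Re-create the dashboard; it should now land on the master
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
# Verify which node it was scheduled on
kubectl get pods -n kube-system -o wide | grep dashboard
# Afterwards, make the workers schedulable again
kubectl uncordon worker-1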
Same issue here. Why did @dchen1107 even close this?
Looks like a persistent, long-term problem.
/reopen
In my case, I followed the troubleshooting doc here:
But even this didn’t get the dashboard up and running for me. I noticed (via kubectl get pods -a -o wide --all-namespaces) that the kubernetes-dashboard was actually being set up on a slave node, and not on the master (not sure if that’s how it should be done).
Similar to @31bbb, I am experiencing this issue while using weave. However, the k8s-dashboard has restarted 2569 times.
Having the same issue with Kubernetes 1.7 and kube-dashboard 1.6.1 on a Raspberry Pi 3 with Hypriot OS. Please, can someone post the solution here? Thanks!