dashboard: Unable to access dashboard
Issue details
Unable to access dashboard on http://master_ip/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Message: “no endpoints available for service "kubernetes-dashboard"”
I followed the steps given in http://kubernetes.io/docs/user-guide/ui-access/, but still no result.
kubectl create -f cluster/addons/dashboard/dashboard-controller.yaml --namespace=kube-system
kubectl create -f cluster/addons/dashboard/dashboard-service.yaml --namespace=kube-system
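For context, a quick way to confirm whether the service actually has endpoints (a sketch, assuming the addon manifests created the usual kubernetes-dashboard objects in kube-system):

```sh
# Check whether the dashboard pod is running and whether its service has endpoints.
kubectl get pods -n kube-system | grep dashboard
kubectl get endpoints kubernetes-dashboard -n kube-system
kubectl describe service kubernetes-dashboard -n kube-system
```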
#kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
When using v0.19.3, I was able to access the dashboard.
Dashboard version: v1.0.1
Kubernetes version: v1.2.4
Operating system: GNU/Linux (Ubuntu)
Node.js version: -
Go version: -
Observed result
Unable to access UI
About this issue
- State: closed
- Created 8 years ago
- Comments: 71 (20 by maintainers)
Oh, perfect! I’m closing the issue. Please reopen if needed.
After trying out every fix I found, what finally granted me access to the dashboard was this URL:
Notice this part: /https:kubernetes-dashboard:https/
Without adding the https part it didn’t work for me and I always got “no endpoints available for service "kubernetes-dashboard"”.
Found the working link in the readme here: https://github.com/helm/charts/tree/master/stable/kubernetes-dashboard
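For illustration, a full kubectl proxy URL of that shape looks roughly like the following (a sketch; the namespace and the https port name depend on how the chart installed the dashboard):

```sh
# Reach the dashboard through the API server proxy, addressing the service's https port by name.
kubectl proxy &
curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:https/proxy/
```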
@bryk Gist: https://gist.github.com/Rahul91/f443e58dd730e0571bcea6409adb5761 I am getting this error at http://master_ip/ui.
I am running my master on a server with a publicly accessible IP and the minion on my local machine inside a local network. Is that the reason I am getting this error?
Got it working. I rebuilt the kube cluster and now it shows the web UI:
http://127.0.0.1:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/pod?namespace=default and http://10.0.0.96/#/workload?namespace=default
Thx.
I’m willing to bet thousands more have not.
I see this issue too with kubernetes 1.5.4 and kubernetes-dashboard image version gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0. I installed kubeadm referring to https://kubernetes.io/docs/getting-started-guides/kubeadm/, and then installed kubernetes-dashboard. I see the kubernetes-dashboard pod in CrashLoopBackOff status and the k8s_kubernetes-dashboard.* container on the worker is in Exited state. Below are the errors. Has anyone successfully installed kubernetes-dashboard on kubeadm?
My problem is solved. The cause was that docker didn’t pull k8s.gcr.io/kubernetes-dashboard-amd64 successfully. Check with docker images to make sure k8s.gcr.io/kubernetes-dashboard-amd64 is there. Inspection records:
or
Warning Failed 18m (x4 over 21m) kubelet, docker-for-desktop Failed to pull image “k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0”: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
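A quick way to check for the image and, if it is missing, pull it by hand to surface the real error (a sketch; assumes the node can reach k8s.gcr.io, otherwise pull from a mirror and retag):

```sh
# Verify the dashboard image is present on the node where the pod is scheduled.
docker images | grep kubernetes-dashboard-amd64

# If it is missing, pull it manually and watch for network/registry errors.
docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
```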
I’m the thousand-and-first who hasn’t gotten it to work, even after following everything @floreks did.
Still getting the messages below; I’ve been trying for 48 hours.
I’m not sure what it is about kubeadm specifically, but I was able to get this working by forcing dashboard to run on master. I did this using nodeSelector in the kubernetes-dashboard.yaml file:
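A minimal sketch of such a change, assuming the master carries the standard node-role.kubernetes.io/master label and taint used by kubeadm (your label and taint keys may differ):

```yaml
# Excerpt from kubernetes-dashboard.yaml: pin the dashboard pod to the master node.
# The label and toleration below are assumptions based on a default kubeadm setup.
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
```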
Once I did that and re-added the service, it worked a charm!
Still having this issue. Dashboard works fine right after K8s installation, but fails to start upon reboot.
kubectl get pods -n kube-system
kubectl logs kubernetes-dashboard-3543765157-4ftml -n kube-system
ok, please try:
kubectl run test --image {container-with-curl, e.g. gcr.io/google_containers/hyperkube-amd64:v1.3.0-beta.1} sleep 100000
kubectl exec test… curl -k -u admin:admin https://10.0.0.1:443
kubectl exec test… curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -u admin:admin https://10.0.0.1:443
Can you post the result?
Most likely you have inconsistent certificates. Maybe you killed the cluster during boot-up, or something. It should work if you clean up everything:
docker kill $(docker ps -q)
docker rm $(docker ps -aq)
[reboot]
sudo rm -R /var/lib/kubelet
sudo rm -R /var/run/kubernetes
For me, the solution was to loosen up overzealous firewall rules preventing the dashboard from accessing the subnet associated with the flannel interface. Because this subnet changed with every docker service restart, it was a few rounds of whack-a-mole before I realized what was going on.
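As a rough illustration (not the commenter's exact rules), assuming flannel's default pod network of 10.244.0.0/16, loosening the rules amounts to something like:

```sh
# Sketch: allow traffic on the flannel interface and to/from the pod subnet.
# 10.244.0.0/16 is flannel's default; your cluster may use a different CIDR or interface name.
sudo iptables -A INPUT -i flannel.1 -j ACCEPT
sudo iptables -A FORWARD -s 10.244.0.0/16 -j ACCEPT
sudo iptables -A FORWARD -d 10.244.0.0/16 -j ACCEPT
```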
In case this helps someone (after being incredibly frustrated trying to get this working)… Thanks to all those who commented above!
(I was getting an error similar to the OP, with no endpoints available for the service when accessing the URL, and the logging showing: Error: ‘dial tcp 10.100.22.2:9090: i/o timeout’ Trying to reach: ‘http://10.100.22.2:9090/’)
Raspbian Buster, 3x Raspberry Pi 4 cluster. Wasn’t able to access the dashboard by following the instructions - dashboard pod not running on the master, using flannel, setup mostly following the guide at teamserverless/k8s-on-raspbian (with some badly formatted notes on my fork here).
This worked for me to get the dashboard working after running
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
per the instructions…
On each node, edit /etc/sysctl.d/99-sysctl.conf:
sudo nano /etc/sysctl.d/99-sysctl.conf
Uncomment the line
net.ipv4.ip_forward=1
and add the lines
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
then reboot.
Run kubectl proxy on the master. On the master (GUI desktop), use your browser to navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
That takes you to the token login page… 😃
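For reference, the sysctl settings described above can also be applied without a reboot, roughly like this (a sketch, assuming the same file paths; the net.bridge.* keys only exist once the br_netfilter module is loaded):

```sh
# Re-read all sysctl configuration, including /etc/sysctl.d/99-sysctl.conf,
# so the ip_forward and bridge-nf-call settings take effect immediately.
sudo modprobe br_netfilter
sudo sysctl --system

# Confirm the values.
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```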
Actually, what worked for me was to run this command on the nodes: sudo iptables -P FORWARD ACCEPT
The problem was that packets were not leaving the nodes, so none of the pods running on the nodes (rather than the master) had any connectivity.
Found the solution in this related post: https://github.com/kubernetes/kubernetes/issues/45022
To make this change persistent, add this line to /etc/sysctl.conf (I’m using Ubuntu 16.04): net.ipv4.ip_forward=1
Then, if you run sudo iptables-save, you should see the FORWARD chain policy set to ACCEPT:
*filter
:FORWARD ACCEPT [4:1088]
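Putting those pieces together, a sketch of what was run on each node (the exact persistence mechanism depends on the distribution; shown here for Ubuntu 16.04):

```sh
# Allow forwarding so pod traffic can leave the node (not persistent across reboots by itself).
sudo iptables -P FORWARD ACCEPT

# Make kernel IP forwarding persistent, then reload sysctl settings.
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Verify the FORWARD policy.
sudo iptables-save | grep ':FORWARD'
```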
@floreks Thank you so much for your reply; the dashboard is now working on the minion perfectly.
OS: CentOS 7.3. Stop the firewall,
make sure the /usr/lib/sysctl.d/00-system.conf settings are correct,
and make sure the iptables rules are correct (see the sketch below).
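A sketch of what those CentOS 7 settings commonly look like (assumed values, not the commenter's exact files):

```sh
# Stop and disable firewalld.
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# In /usr/lib/sysctl.d/00-system.conf the stock values ship as 0; set them to 1:
#   net.bridge.bridge-nf-call-iptables = 1
#   net.bridge.bridge-nf-call-ip6tables = 1
sudo sysctl --system

# Make sure forwarding is not blocked.
sudo iptables -P FORWARD ACCEPT
sudo iptables -L -n
```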
Kube is running locally.
~$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.0.0.1     <none>        443/TCP   1d
I think the Kube service is there and the kube-dash autodiscovery is also locating it properly.