dashboard: Unable to access dashboard

Issue details

Unable to access dashboard on http://master_ip/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

Message: “no endpoints available for service "kubernetes-dashboard"”

I followed the steps in http://kubernetes.io/docs/user-guide/ui-access/, but still no luck.

kubectl create -f cluster/addons/dashboard/dashboard-controller.yaml --namespace=kube-system
kubectl create -f cluster/addons/dashboard/dashboard-service.yaml --namespace=kube-system
# kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}

When using v0.19.3, I was able to access the dashboard.

Dashboard version: v1.0.1
Kubernetes version: v1.2.4
Operating system: GNU/Linux (Ubuntu)
Node.js version: -
Go version: -
Observed result

Unable to access UI

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Comments: 71 (20 by maintainers)

Most upvoted comments

Oh, perfect! I’m closing the issue. Please reopen if needed.

After trying every fix I could find, what finally gave me access to the dashboard was this URL:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:https/proxy/

Notice this part: /https:kubernetes-dashboard:https/

Without the https parts it didn’t work for me, and I always got “no endpoints available for service "kubernetes-dashboard"”.

Found the working link in the readme here: https://github.com/helm/charts/tree/master/stable/kubernetes-dashboard
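For anyone puzzled by that path: it follows the apiserver’s service-proxy scheme `<scheme>:<service-name>:<port-name>` (an empty port name means the service’s default port). A tiny sketch, using the namespace and service name from the comment above, that assembles the URL:

```shell
# Build the kubectl-proxy URL for a service served over HTTPS.
# Path scheme: /api/v1/namespaces/<ns>/services/<scheme>:<name>:<port>/proxy/
ns="kube-system"
svc="kubernetes-dashboard"
echo "http://localhost:8001/api/v1/namespaces/${ns}/services/https:${svc}:https/proxy/"
```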

@bryk Gist: https://gist.github.com/Rahul91/f443e58dd730e0571bcea6409adb5761 I am getting this error in http://master_ip/ui

Error: 'dial tcp 10.100.22.2:9090: i/o timeout'
Trying to reach: 'http://10.100.22.2:9090/'

I am running my master on a server with publicly accessible IP and minion on my local machine running in a local network. Is that the reason I am getting this error?

“Has anyone successfully installed kubernetes-dashboard on kubeadm?”

I believe that thousands of people did.

I’m willing to bet thousands more have not

I see this issue too with kubernetes 1.5.4 and kubernetes-dashboard image version gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0.

I installed kubeadm following https://kubernetes.io/docs/getting-started-guides/kubeadm/, and then installed kubernetes-dashboard with:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.0/src/deploy/kubernetes-dashboard.yaml

I see the kubernetes-dashboard in CrashLoopBackOff status and the k8s_kubernetes-dashboard.* container on the worker is in Exited state.

Below are the errors. Has anyone successfully installed kubernetes-dashboard on kubeadm?

# kubectl --namespace=kube-system get all
NAME                                                          READY     STATUS             RESTARTS   AGE
po/calico-policy-controller-mqsmh                             1/1       Running            0          4h
po/canal-etcd-tm2rv                                           1/1       Running            0          4h
po/canal-node-3nv2t                                           3/3       Running            0          4h
po/canal-node-5fckh                                           3/3       Running            1          4h
po/canal-node-6zgq8                                           3/3       Running            0          4h
po/canal-node-rtjl8                                           3/3       Running            0          4h
po/dummy-2088944543-09w8n                                     1/1       Running            0          4h
po/etcd-vhosakot-kolla-kube1.localdomain                      1/1       Running            0          4h
po/kube-apiserver-vhosakot-kolla-kube1.localdomain            1/1       Running            2          4h
po/kube-controller-manager-vhosakot-kolla-kube1.localdomain   1/1       Running            0          4h
po/kube-discovery-1769846148-pftx5                            1/1       Running            0          4h
po/kube-dns-2924299975-9m2cp                                  4/4       Running            0          4h
po/kube-proxy-0ndsb                                           1/1       Running            0          4h
po/kube-proxy-h7qrd                                           1/1       Running            1          4h
po/kube-proxy-k6168                                           1/1       Running            0          4h
po/kube-proxy-lhn0k                                           1/1       Running            0          4h
po/kube-scheduler-vhosakot-kolla-kube1.localdomain            1/1       Running            0          4h
po/kubernetes-dashboard-3203962772-mw26t                      0/1       CrashLoopBackOff   11         41m
NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
svc/canal-etcd             10.96.232.136    <none>        6666/TCP        4h
svc/kube-dns               10.96.0.10       <none>        53/UDP,53/TCP   4h
svc/kubernetes-dashboard   10.100.254.77    <nodes>       80:30085/TCP    41m
NAME                   DESIRED   SUCCESSFUL   AGE
jobs/configure-canal   1         1            4h
NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-discovery         1         1         1            1           4h
deploy/kube-dns               1         1         1            1           4h
deploy/kubernetes-dashboard   1         1         1            0           41m
NAME                                 DESIRED   CURRENT   READY     AGE
rs/calico-policy-controller          1         1         1         4h
rs/dummy-2088944543                  1         1         1         4h
rs/kube-discovery-1769846148         1         1         1         4h
rs/kube-dns-2924299975               1         1         1         4h
rs/kubernetes-dashboard-3203962772   1         1         0         41m

# kubectl --namespace=kube-system describe pod kubernetes-dashboard-3203962772-mw26t
  20m    5s    89    {kubelet vhosakot-kolla-kube2.localdomain}                        Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203962772-mw26t_kube-system(67b0d69b-0b47-11e7-8c97-7a2ed4192438)"

# kubectl --namespace=kube-system logs kubernetes-dashboard-3203962772-mw26t
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

# docker ps -a | grep -i dash
3c33cf43d5e4        gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0   "/dashboard --port=90"   54 seconds ago      Exited (1) 22 seconds ago                       k8s_kubernetes-dashboard.9eb4d80e_kubernetes-dashboard-3203962772-mw26t_kube-system_67b0d69b-0b47-11e7-8c97-7a2ed4192438_93520bd4

# docker logs k8s_kubernetes-dashboard.9eb4d80e_kubernetes-dashboard-3203962772-mw26t_kube-system_67b0d69b-0b47-11e7-8c97-7a2ed4192438_93520bd4
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

My problem was that Docker had not successfully pulled k8s.gcr.io/kubernetes-dashboard-amd64. Check with docker images to make sure k8s.gcr.io/kubernetes-dashboard-amd64 is present.

What I checked:

kubectl get pods --namespace=kube-system
NAME                                         READY     STATUS    RESTARTS   AGE
etcd-docker-for-desktop                      1/1       Running   0          30d
kube-apiserver-docker-for-desktop            1/1       Running   0          30d
kube-controller-manager-docker-for-desktop   1/1       Running   2          30d
kube-dns-86f4d74b45-p2xmk                    3/3       Running   0          30d
kube-proxy-mbfbb                             1/1       Running   0          30d
kube-scheduler-docker-for-desktop            1/1       Running   0          30d
kubernetes-dashboard-7b9c7bc8c9-pkhqk        0/1       ImagePullBackOff   0          1h

or

kubernetes-dashboard-7b9c7bc8c9-pkhqk        0/1       ErrImagePull   0          1h

kubectl describe pod kubernetes-dashboard-7b9c7bc8c9-pkhqk --namespace=kube-system

  Normal   Pulling                19m (x4 over 21m)   kubelet, docker-for-desktop  pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0"
  Warning  Failed                 18m (x4 over 21m)   kubelet, docker-for-desktop  Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

I’m the thousand-and-first who hasn’t gotten it to work, even after following everything @floreks did.

I’m still getting the message below; I’ve been trying for 48 hours.

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
  "reason": "ServiceUnavailable",
  "code": 503
}

I’m not sure what it is about kubeadm specifically, but I was able to get this working by forcing dashboard to run on master. I did this using nodeSelector in the kubernetes-dashboard.yaml file:

nodeSelector:
  node-role.kubernetes.io/master: ""

Once I did that and re-added the service, it worked a charm!
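For reference, kubeadm masters of that era also carried a `node-role.kubernetes.io/master:NoSchedule` taint, so alongside the nodeSelector a matching toleration may be needed before the pod will actually schedule there. A sketch of the pod-spec fragment (field names per the Kubernetes pod API):

```yaml
# Pod-spec fragment (sketch): pin the dashboard to the master
# and tolerate the taint kubeadm applies to master nodes.
nodeSelector:
  node-role.kubernetes.io/master: ""
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
```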

Still having this issue. Dashboard works fine right after K8s installation, but fails to start upon reboot.

kubectl get pods -n kube-system

kubernetes-dashboard-3543765157-4ftml 0/1 CrashLoopBackOff 1 12s

kubectl logs kubernetes-dashboard-3543765157-4ftml -n kube-system

Using HTTP port: 9090
Creating API server client for https://10.3.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.3.0.1:443/version: dial tcp 10.3.0.1:443: getsockopt: no route to host
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

ok, please try:

kubectl run test --image {container-with-curl, e.g. gcr.io/google_containers/hyperkube-amd64:v1.3.0-beta.1} sleep 100000

kubectl exec test… curl -k -u admin:admin https://10.0.0.1:443
kubectl exec test… curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -u admin:admin https://10.0.0.1:443

Can you post the result?

Most likely you have inconsistent certificates. Maybe you killed the cluster during boot-up, or something. It should work if you clean up everything:

docker kill $(docker ps -q)
docker rm $(docker ps -aq)
[reboot]
sudo rm -R /var/lib/kubelet
sudo rm -R /var/run/kubernetes

For me, the solution was to loosen up overzealous firewall rules preventing the dashboard from accessing the subnet associated with the flannel interface. Because this subnet changed with every docker service restart, it was a few rounds of whack-a-mole before I realized what was going on.

In case this helps someone (after being incredibly frustrated trying to get this working)… Thanks to all those who commented above!

(I was getting an error similar to the OP, with no endpoints available for the service when accessing the URL, and the logs showing: Error: ‘dial tcp 10.100.22.2:9090: i/o timeout’ Trying to reach: ‘http://10.100.22.2:9090/’)

Raspbian Buster, 3x Raspberry Pi 4 cluster. Wasn’t able to access the dashboard by following the instructions (dashboard pod not running on the master). Using flannel; setup mostly followed the teamserverless/k8s-on-raspbian guide (with some badly formatted notes on my fork).

This worked for me to get dashboard working after running kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml per the instructions…

On each node, edit /etc/sysctl.d/99-sysctl.conf:

sudo nano /etc/sysctl.d/99-sysctl.conf

uncomment the line:

net.ipv4.ip_forward=1

add the lines:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

reboot
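Collected together, the edits above amount to this fragment of /etc/sysctl.d/99-sysctl.conf (note the net.bridge keys only take effect with the br_netfilter kernel module loaded; running sudo sysctl --system applies the files under /etc/sysctl.d/ without a full reboot):

```conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```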

run kubectl proxy on the master

on the master (gui desktop), use your browser to navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

That takes you to the token login page… 😃

Actually, what worked for me was to run this command on the nodes:

sudo iptables -P FORWARD ACCEPT

The problem was that packets were not leaving the nodes, so none of the pods running on the nodes (as opposed to the master) had any connectivity.

Found the solution in this related post: https://github.com/kubernetes/kubernetes/issues/45022

To make this change persistent, add this line to /etc/sysctl.conf (I’m using Ubuntu 16.04):

net.ipv4.ip_forward=1

Then, if you run sudo iptables-save, you should see the FORWARD chain policy set to ACCEPT:

*filter
:FORWARD ACCEPT [4:1088]
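A quick, non-invasive way to confirm the kernel setting itself is to read it back from /proc (this assumes a Linux host and only reads the current value):

```shell
# 1 means IPv4 forwarding is enabled, 0 means disabled.
cat /proc/sys/net/ipv4/ip_forward
```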

@floreks Thank you so much for your reply; the dashboard is now working perfectly on the minion.

OS: CentOS 7.3. Stop the firewall:

$ systemctl stop firewalld
$ systemctl disable firewalld

and make sure the /usr/lib/sysctl.d/00-system.conf settings are:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

and the iptables rules are:

iptables -I INPUT -p tcp -m tcp --dport 8472 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 6443 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 9898 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 10250 -j ACCEPT
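The four rules above differ only in the port number; a small sketch that generates them from a list (ports exactly as given in the comment):

```shell
# Print an iptables ACCEPT rule for each port used in this setup.
for port in 8472 6443 9898 10250; do
  echo "iptables -I INPUT -p tcp -m tcp --dport ${port} -j ACCEPT"
done
```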

Kube is running locally.

~$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.0.0.1     <none>        443/TCP   1d

I think the Kube service is there and kube-dash autodiscovery is also locating it properly.