dashboard: Error trying to reach service: 'dial tcp 192.168.1.62:8443: i/o timeout'

We have a 3-node cluster: 2 Linux nodes and 1 Windows node. The cluster shows all nodes Ready.

After installing the Kubernetes Dashboard, when I try to access the Dashboard I get this error:

Error trying to reach service: 'dial tcp 192.168.1.62:8443: i/o timeout'

Environment
Installation method: applied the recommended manifest with kubectl:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Kubernetes version:  1.18.6
Dashboard version:  v2.0.0
Operating system:  CentOS 7

Steps to reproduce

After applying the manifest, I changed the Service type to LoadBalancer:

Ran kubectl -n kubernetes-dashboard edit service kubernetes-dashboard and changed type: ClusterIP to type: LoadBalancer.
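
For reference, a non-interactive way to make the same change, assuming the default service name and namespace from the recommended manifest, is:

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec": {"type": "LoadBalancer"}}'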

Ran the proxy with: kubectl proxy --address 0.0.0.0 --accept-hosts '.*' &

Observed result

Tried URL: http://aabrl-kuber01:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Result: Error trying to reach service: 'dial tcp 192.168.1.62:8443: i/o timeout'

Tried URL: http://aabrl-kuber01:8001/

Results:

{ “paths”: [ “/api”, “/api/v1”, “/apis”, “/apis/”, “/apis/admissionregistration.k8s.io”, “/apis/admissionregistration.k8s.io/v1”, “/apis/admissionregistration.k8s.io/v1beta1”, “/apis/apiextensions.k8s.io”, “/apis/apiextensions.k8s.io/v1”, “/apis/apiextensions.k8s.io/v1beta1”, “/apis/apiregistration.k8s.io”, “/apis/apiregistration.k8s.io/v1”, “/apis/apiregistration.k8s.io/v1beta1”, “/apis/apps”, “/apis/apps/v1”, “/apis/authentication.k8s.io”, “/apis/authentication.k8s.io/v1”, “/apis/authentication.k8s.io/v1beta1”, “/apis/authorization.k8s.io”, “/apis/authorization.k8s.io/v1”, “/apis/authorization.k8s.io/v1beta1”, “/apis/autoscaling”, “/apis/autoscaling/v1”, “/apis/autoscaling/v2beta1”, “/apis/autoscaling/v2beta2”, “/apis/batch”, “/apis/batch/v1”, “/apis/batch/v1beta1”, “/apis/certificates.k8s.io”, “/apis/certificates.k8s.io/v1beta1”, “/apis/coordination.k8s.io”, “/apis/coordination.k8s.io/v1”, “/apis/coordination.k8s.io/v1beta1”, “/apis/discovery.k8s.io”, “/apis/discovery.k8s.io/v1beta1”, “/apis/events.k8s.io”, “/apis/events.k8s.io/v1beta1”, “/apis/extensions”, “/apis/extensions/v1beta1”, “/apis/metrics.k8s.io”, “/apis/metrics.k8s.io/v1beta1”, “/apis/networking.k8s.io”, “/apis/networking.k8s.io/v1”, “/apis/networking.k8s.io/v1beta1”, “/apis/node.k8s.io”, “/apis/node.k8s.io/v1beta1”, “/apis/policy”, “/apis/policy/v1beta1”, “/apis/rbac.authorization.k8s.io”, “/apis/rbac.authorization.k8s.io/v1”, “/apis/rbac.authorization.k8s.io/v1beta1”, “/apis/scheduling.k8s.io”, “/apis/scheduling.k8s.io/v1”, “/apis/scheduling.k8s.io/v1beta1”, “/apis/storage.k8s.io”, “/apis/storage.k8s.io/v1”, “/apis/storage.k8s.io/v1beta1”, “/healthz”, “/healthz/autoregister-completion”, “/healthz/etcd”, “/healthz/log”, “/healthz/ping”, “/healthz/poststarthook/apiservice-openapi-controller”, “/healthz/poststarthook/apiservice-registration-controller”, “/healthz/poststarthook/apiservice-status-available-controller”, “/healthz/poststarthook/bootstrap-controller”, “/healthz/poststarthook/crd-informer-synced”, “/healthz/poststarthook/generic-apiserver-start-informers”, “/healthz/poststarthook/kube-apiserver-autoregistration”, “/healthz/poststarthook/rbac/bootstrap-roles”, “/healthz/poststarthook/scheduling/bootstrap-system-priority-classes”, “/healthz/poststarthook/start-apiextensions-controllers”, “/healthz/poststarthook/start-apiextensions-informers”, “/healthz/poststarthook/start-cluster-authentication-info-controller”, “/healthz/poststarthook/start-kube-aggregator-informers”, “/healthz/poststarthook/start-kube-apiserver-admission-initializer”, “/livez”, “/livez/autoregister-completion”, “/livez/etcd”, “/livez/log”, “/livez/ping”, “/livez/poststarthook/apiservice-openapi-controller”, “/livez/poststarthook/apiservice-registration-controller”, “/livez/poststarthook/apiservice-status-available-controller”, “/livez/poststarthook/bootstrap-controller”, “/livez/poststarthook/crd-informer-synced”, “/livez/poststarthook/generic-apiserver-start-informers”, “/livez/poststarthook/kube-apiserver-autoregistration”, “/livez/poststarthook/rbac/bootstrap-roles”, “/livez/poststarthook/scheduling/bootstrap-system-priority-classes”, “/livez/poststarthook/start-apiextensions-controllers”, “/livez/poststarthook/start-apiextensions-informers”, “/livez/poststarthook/start-cluster-authentication-info-controller”, “/livez/poststarthook/start-kube-aggregator-informers”, “/livez/poststarthook/start-kube-apiserver-admission-initializer”, “/logs”, “/metrics”, “/openapi/v2”, “/readyz”, “/readyz/autoregister-completion”, “/readyz/etcd”, 
“/readyz/informer-sync”, “/readyz/log”, “/readyz/ping”, “/readyz/poststarthook/apiservice-openapi-controller”, “/readyz/poststarthook/apiservice-registration-controller”, “/readyz/poststarthook/apiservice-status-available-controller”, “/readyz/poststarthook/bootstrap-controller”, “/readyz/poststarthook/crd-informer-synced”, “/readyz/poststarthook/generic-apiserver-start-informers”, “/readyz/poststarthook/kube-apiserver-autoregistration”, “/readyz/poststarthook/rbac/bootstrap-roles”, “/readyz/poststarthook/scheduling/bootstrap-system-priority-classes”, “/readyz/poststarthook/start-apiextensions-controllers”, “/readyz/poststarthook/start-apiextensions-informers”, “/readyz/poststarthook/start-cluster-authentication-info-controller”, “/readyz/poststarthook/start-kube-aggregator-informers”, “/readyz/poststarthook/start-kube-apiserver-admission-initializer”, “/readyz/shutdown”, “/version” ] }
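
A note on what these two results suggest: the request to :8001/ terminates at the API server itself, while the service-proxy URL makes the API server dial the Dashboard pod at 192.168.1.62:8443, so the i/o timeout points at node-to-pod networking rather than at the proxy. A quick check from the control-plane node, assuming curl is available there:

curl -k --connect-timeout 5 https://192.168.1.62:8443/

If this also times out, the control-plane host cannot reach the pod network on aabrl-kuber02.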

Expected result

The Dashboard login page loads at the proxy URL.

Comments

Command: kubectl describe pod kubernetes-dashboard-7b544877d5-qlkl6 --namespace kubernetes-dashboard

Results:

Name:         kubernetes-dashboard-7b544877d5-qlkl6
Namespace:    kubernetes-dashboard
Priority:     0
Node:         aabrl-kuber02/10.243.1.213
Start Time:   Tue, 15 Sep 2020 14:13:14 -0500
Labels:       k8s-app=kubernetes-dashboard
              pod-template-hash=7b544877d5
Annotations:  <none>
Status:       Running
IP:           192.168.1.62
IPs:
  IP:  192.168.1.62
Controlled By:  ReplicaSet/kubernetes-dashboard-7b544877d5
Containers:
  kubernetes-dashboard:
    Container ID:  docker://005a539da2b4aa2c7be799c9c7bf6f2509fa8a8e440f4021067374639a1fb669
    Image:         kubernetesui/dashboard:v2.0.0
    Image ID:      docker-pullable://docker.io/kubernetesui/dashboard@sha256:06868692fb9a7f2ede1a06de1b7b32afabc40ec739c1181d83b5ed3eb147ec6e
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    State:          Running
      Started:      Tue, 15 Sep 2020 16:07:41 -0500
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Tue, 15 Sep 2020 16:02:26 -0500
      Finished:     Tue, 15 Sep 2020 16:02:27 -0500
    Ready:          True
    Restart Count:  27
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-p2d8z (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-p2d8z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-p2d8z
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                   From                    Message
  Warning  BackOff         21m (x517 over 131m)  kubelet, aabrl-kuber02  Back-off restarting failed container
  Warning  FailedMount     17m                   kubelet, aabrl-kuber02  MountVolume.SetUp failed for volume "kubernetes-dashboard-token-p2d8z" : failed to sync secret cache: timed out waiting for the condition
  Normal   SandboxChanged  17m                   kubelet, aabrl-kuber02  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         17m                   kubelet, aabrl-kuber02  Pulling image "kubernetesui/dashboard:v2.0.0"
  Normal   Pulled          17m                   kubelet, aabrl-kuber02  Successfully pulled image "kubernetesui/dashboard:v2.0.0"
  Normal   Created         17m                   kubelet, aabrl-kuber02  Created container kubernetes-dashboard
  Normal   Started         17m                   kubelet, aabrl-kuber02  Started container kubernetes-dashboard
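
The 27 restarts and the Back-off events above also make it worth checking why the container previously exited with code 2. Assuming the pod name from the describe output, the logs of the crashed instance can be pulled with:

kubectl -n kubernetes-dashboard logs kubernetes-dashboard-7b544877d5-qlkl6 --previous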

Command: kubectl get pods --all-namespaces -o wide

Results:

NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE    IP             NODE            NOMINATED NODE   READINESS GATES
default                dnsutils                                     1/1     Running   129        5d6h   192.168.1.56   aabrl-kuber02   <none>           <none>
default                frontend-84d9759d97-42jvc                    1/1     Running   5          4d     192.168.1.60   aabrl-kuber02   <none>           <none>
default                frontend-84d9759d97-7lswp                    1/1     Running   4          4d     192.168.0.51   aabrl-kuber01   <none>           <none>
default                frontend-84d9759d97-ltplz                    1/1     Running   5          4d     192.168.1.58   aabrl-kuber02   <none>           <none>
default                nginx-demo-74f994c986-cbmht                  1/1     Running   5          10d    192.168.1.61   aabrl-kuber02   <none>           <none>
default                redis-master-6ddf5ff6dc-kl9rq                1/1     Running   5          4d4h   192.168.1.59   aabrl-kuber02   <none>           <none>
default                redis-slave-664949855f-fdqfd                 1/1     Running   5          4d     192.168.1.55   aabrl-kuber02   <none>           <none>
default                redis-slave-664949855f-hrsnw                 1/1     Running   4          4d     192.168.0.49   aabrl-kuber01   <none>           <none>
kube-system            coredns-66bff467f8-rxlmz                     1/1     Running   10         26d    192.168.0.46   aabrl-kuber01   <none>           <none>
kube-system            coredns-66bff467f8-scp4g                     1/1     Running   10         26d    192.168.0.47   aabrl-kuber01   <none>           <none>
kube-system            etcd-aabrl-kuber01                           1/1     Running   7          26d    10.243.1.212   aabrl-kuber01   <none>           <none>
kube-system            kube-apiserver-aabrl-kuber01                 1/1     Running   7          26d    10.243.1.212   aabrl-kuber01   <none>           <none>
kube-system            kube-controller-manager-aabrl-kuber01        1/1     Running   7          26d    10.243.1.212   aabrl-kuber01   <none>           <none>
kube-system            kube-flannel-ds-amd64-2crvd                  1/1     Running   7          26d    10.243.1.213   aabrl-kuber02   <none>           <none>
kube-system            kube-flannel-ds-amd64-sw99w                  1/1     Running   10         26d    10.243.1.212   aabrl-kuber01   <none>           <none>
kube-system            kube-flannel-ds-windows-amd64-vrsg9          1/1     Running   30         26d    10.243.1.202   aabrw-kuber03   <none>           <none>
kube-system            kube-proxy-57nsx                             1/1     Running   7          26d    10.243.1.212   aabrl-kuber01   <none>           <none>
kube-system            kube-proxy-fnskn                             1/1     Running   6          26d    10.243.1.213   aabrl-kuber02   <none>           <none>
kube-system            kube-proxy-windows-w76c5                     1/1     Running   24         26d    192.168.2.50   aabrw-kuber03   <none>           <none>
kube-system            kube-scheduler-aabrl-kuber01                 1/1     Running   7          26d    10.243.1.212   aabrl-kuber01   <none>           <none>
kube-system            metrics-server-5f956b6d5f-r77ml              1/1     Running   0          154m   192.168.0.54   aabrl-kuber01   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-8fbxl   1/1     Running   0          135m   192.168.0.57   aabrl-kuber01   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-qlkl6        1/1     Running   27         135m   192.168.1.62   aabrl-kuber02   <none>           <none>
metallb-system         controller-57f648cb96-jgcsx                  1/1     Running   4          3d23h  192.168.0.52   aabrl-kuber01   <none>           <none>
metallb-system         speaker-44r2c                                1/1     Running   6          3d23h  10.243.1.213   aabrl-kuber02   <none>           <none>
metallb-system         speaker-6jfzz                                1/1     Running   6          3d23h  10.243.1.212   aabrl-kuber01   <none>           <none>
nginx-ingress          nginx-ingress-2tst6                          1/1     Running   5          10d    192.168.0.50   aabrl-kuber01   <none>           <none>
nginx-ingress          nginx-ingress-94d765bfd-64p6q                1/1     Running   8          10d    192.168.0.48   aabrl-kuber01   <none>           <none>
nginx-ingress          nginx-ingress-99s2h                          1/1     Running   31         10d    192.168.1.57   aabrl-kuber02   <none>           <none>
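
Since the Service was switched to LoadBalancer and MetalLB is running, it is also worth confirming that the Service has an endpoint and was assigned an external IP (a quick check, assuming the default names from the recommended manifest):

kubectl -n kubernetes-dashboard get service kubernetes-dashboard
kubectl -n kubernetes-dashboard get endpoints kubernetes-dashboard

If an EXTERNAL-IP shows up, the Dashboard should also be reachable directly at https://<EXTERNAL-IP>, bypassing kubectl proxy entirely.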

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 18 (5 by maintainers)

Most upvoted comments

Not a bug. The exact same issue has already been discussed dozens of times. This is a configuration issue on your side.

/close

As a relative beginner, where can I get some help on this issue? If this has been discussed a dozen times, can you throw me a bone and provide some direction to those discussions?

@llyons I am facing the same issue and managed to work around it more or less easily. Hope it's not too late…

First, forward the pod port:

kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8080:443

and go to:

https://localhost:8080

This won't work in Chrome (surprisingly) because the certificate is self-signed. Use Firefox and accept the security risk.

If a login form shows, we are on track. Follow these steps to generate a token (copy and paste three commands):

https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

Hope this helps.
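
For Kubernetes 1.18 and Dashboard v2.0.0, the steps in that document were roughly the following; the admin-user name and the cluster-admin binding come from that guide, so check the link for the current version:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# print the bearer token for the login form (service account token secrets are created automatically on 1.18)
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')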

Where can I find links to the dozens of times this has been referenced?

Anyway, can someone point out the real reason why the proxy is not reaching the service? It is still not clear.

Agreed. I'm just getting into k8s (again after some time) and I'm running into this issue. It would be nice if there were references to the dozens of times that this was already discussed.

I had that issue on an on-premises install: I was missing routes on the controllers to reach the workers. Be sure that you can ping any pod from any worker and controller.
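
As a minimal reachability check, using the Dashboard pod IP from the earlier output (run on each controller and worker):

ping -c 3 192.168.1.62
curl -k --connect-timeout 5 https://192.168.1.62:8443/

If either hangs on the control-plane node, that node cannot reach the pod network on aabrl-kuber02, which matches the proxy timeout.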

@voiser if you want to make it work in Chrome, simply type thisisunsafe on the warning page when accessing Dashboard. It will redirect you to the app.

Did anyone figure out how to fix this issue? I'm also having the same problem.

Dashboard does not use metrics-server directly, so that wouldn't make much sense. We are using our custom metrics-scraper; metrics-scraper uses metrics-server to scrape and save the metrics. You would have to check the scraper, not Dashboard.

dashboard -> metrics-scraper -> metrics-server
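
If the metrics path is the concern, the scraper's logs can be inspected with something like this (assuming the deployment name from the recommended manifest):

kubectl -n kubernetes-dashboard logs deployment/dashboard-metrics-scraper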

I had the Kubernetes Dashboard working. I mistakenly deleted it when I also deleted metrics-server (metrics-server did not work for me). I re-installed it, and this time it does not work; it times out… I don't believe it is a configuration issue on my side, as I did the very same steps I did previously: