dashboard: Dashboard 404s on stylesheets & scripts

Environment

I’m running a K8s cluster on a CentOS 7 Vagrant VM. The VM has a host-network adapter, and the cluster is installed with kubeadm.

Dashboard version: (Container version) gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:2c4421ed80358a0ee97b44357b6cd6dc09be6ccc27dfe9d50c9bfc39a760e5fe
Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-14T06:36:08Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Operating system: CentOS 7
Node.js version: ?
Go version: ?
Steps to reproduce
  1. Install a cluster using kubeadm on CentOS 7
  2. Run kubectl create -f https://git.io/kube-dashboard
  3. Run kubectl proxy --address=<routable-IP> --port=<PORT> --accept-hosts="^*$"
Observed result

Going to the dashboard in a browser, I see a white screen.

Opening Chrome’s dev console, I see a bunch of 404s:

GET https://<IP>/api/v1/namespaces/kube-system/services/kubernetes-dashboard/static/vendor.9aa0b786.css 
proxy:1 GET https://<IP>/api/v1/namespaces/kube-system/services/kubernetes-dashboard/static/app.8ebf2901.css 
proxy:5 GET https://<IP>/api/v1/namespaces/kube-system/services/kubernetes-dashboard/api/appConfig.json 
proxy:5 GET https://<IP>/api/v1/namespaces/kube-system/services/kubernetes-dashboard/static/app.68d2caa2.js 
proxy:5 GET https://<IP>/api/v1/namespaces/kube-system/services/kubernetes-dashboard/static/vendor.840e639c.js 
proxy:5 GET https://<IP>/api/v1/namespaces/kube-system/services/kubernetes-dashboard/api/appConfig.json 
proxy:5 GET https://<IP>/api/v1/namespaces/kube-system/services/kubernetes-dashboard/static/app.68d2caa2.js 

Going to one of those links, I see:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {},
  "code": 404
}

kube-dashboard container logs

kubectl logs kubernetes-dashboard-3313488171-6v53b -n kube-system
Using HTTP port: 8443
Using in-cluster config to connect to apiserver
Using service account token for csrf signing
No request provided. Skipping authorization header
Successful initial request to the apiserver, version: v1.7.6
No request provided. Skipping authorization header
Creating in-cluster Heapster client
Could not enable metric client: Health check failed: the server could not find the requested resource (get services heapster). Continuing.
Expected result

Expected the normal kube dashboard. (I have seen this setup work on 9/13/17.)

Comments

This exact setup was running yesterday, 9/13; today, 9/14, I set up my VM cluster in the same way and see this error. I’m fairly certain it’s not an error on my end; maybe the dashboard container was updated and caused an issue?

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 5
  • Comments: 44 (19 by maintainers)

Most upvoted comments

You can also try to access http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/. It requires a slightly different set of privileges to access.

Thanks @floreks, that solves the problem for me.

TL;DR: Simply add a trailing slash to the http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy URL to work around issues caused by previously dodgy auth.
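Since the fix here is literally the trailing slash, a minimal sketch of a helper that normalizes the URL before opening it (the function name is my own, not from any tool in this thread). The slash matters because the dashboard page uses relative asset paths (static/..., api/...), and browsers resolve those against the last path segment: without the slash, `static/vendor.css` resolves to `.../services/kubernetes-dashboard/static/vendor.css`, which the apiserver 404s, exactly as in the report above.

```shell
# Hypothetical helper: append a trailing slash if the URL lacks one.
# Relative asset URLs on the dashboard page resolve against the final
# path segment, so .../proxy and .../proxy/ behave differently.
ensure_trailing_slash() {
  case "$1" in
    */) printf '%s' "$1" ;;
    *)  printf '%s/' "$1" ;;
  esac
}

ensure_trailing_slash "http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy"
```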

@floreks Just deployed a new 1.7.6 cluster via kubeadm and calico network provider and got exactly the same problem.

If I try to access the dashboard via kubectl proxy, then I get a redirect from

http://localhost:8001/ui 
to
http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

which just gives me a blank white page (could not load resources, etc.). If I use the link you provided,

http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/

it works as it should. So is there maybe a wrong redirect? The URLs are different (“proxy” near the beginning vs. at the end)…
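The two URL shapes being compared can be spelled out side by side; a sketch in shell, composing both from the same namespace and service names used throughout this thread (HOST and PORT are the kubectl proxy defaults):

```shell
# The two service-proxy URL shapes seen in this thread.
HOST=localhost
PORT=8001
NS=kube-system
SVC=kubernetes-dashboard

# Older form: the proxy segment sits right after /api/v1
OLD_URL="http://${HOST}:${PORT}/api/v1/proxy/namespaces/${NS}/services/${SVC}/"
# Newer form: the proxy segment follows the service name
NEW_URL="http://${HOST}:${PORT}/api/v1/namespaces/${NS}/services/${SVC}/proxy/"

echo "$OLD_URL"
echo "$NEW_URL"
```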

Interestingly enough, that just works.

I have set up a cluster with 2 nodes: one is the master and the other is a worker node, both on separate Azure Ubuntu VMs. For networking, I used the Canal tool.

$ kubectl get nodes
NAME             STATUS    ROLES     AGE       VERSION
ubuntu-aniket1   Ready     master    57m       v1.10.0
ubutu-aniket     Ready     <none>    56m       v1.10.0
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE
kube-system   canal-jztfd                              3/3       Running   0          57m
kube-system   canal-mdbbp                              3/3       Running   0          57m
kube-system   etcd-ubuntu-aniket1                      1/1       Running   0          58m
kube-system   kube-apiserver-ubuntu-aniket1            1/1       Running   0          58m
kube-system   kube-controller-manager-ubuntu-aniket1   1/1       Running   0          58m
kube-system   kube-dns-86f4d74b45-8zqqr                3/3       Running   0          58m
kube-system   kube-proxy-k5ggz                         1/1       Running   0          58m
kube-system   kube-proxy-vx9sq                         1/1       Running   0          57m
kube-system   kube-scheduler-ubuntu-aniket1            1/1       Running   0          58m
kube-system   kubernetes-dashboard-54865c6fb9-kg5zt    1/1       Running   0          26m

When I tried to create the Kubernetes dashboard with

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

and set up the proxy with

$ kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Starting to serve on [::]:8001

When I hit the URL http://<master IP>:8001 in a browser, it shows the following output:

{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps/v1",
    "/apis/apps/v1beta1",
    "/apis/apps/v1beta2",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2beta1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/crd.projectcalico.org",
    "/apis/crd.projectcalico.org/v1",
    "/apis/events.k8s.io",
    "/apis/events.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/etcd",
    "/healthz/ping",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/poststarthook/start-kube-apiserver-informers",
    "/logs",
    "/metrics",
    "/openapi/v2",
    "/swagger-2.0.0.json",
    "/swagger-2.0.0.pb-v1",
    "/swagger-2.0.0.pb-v1.gz",
    "/swagger.json",
    "/swaggerapi",
    "/version"
  ]
}

But when I try to hit http://<master IP>:8001/ui, I am not able to see the Kubernetes dashboard. Instead, I see the following output:

{
  "paths": [
    "/apis",
    "/apis/",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/healthz",
    "/healthz/etcd",
    "/healthz/ping",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/metrics",
    "/openapi/v2",
    "/swagger-2.0.0.json",
    "/swagger-2.0.0.pb-v1",
    "/swagger-2.0.0.pb-v1.gz",
    "/swagger.json",
    "/swaggerapi",
    "/version"
  ]
}

Could you please help me resolve the dashboard issue?

Thanks in advance
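One likely reason the old /ui path fails on this 1.10 cluster: the dashboard deployed from the recommended manifest (v1.8+) serves over HTTPS, so the service-proxy path has to name the https port explicitly. A sketch of the URL under that assumption:

```shell
# Assumption: dashboard from the recommended manifest, which exposes its
# service over HTTPS; the proxy path must then take the https:<name>:<port>
# form (empty port name here, so a bare trailing colon).
HOST=localhost
PORT=8001
URL="http://${HOST}:${PORT}/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
echo "$URL"
```

With kubectl proxy running on the master, opening that URL in the browser (instead of /ui) is what the newer docs describe.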

I’ll try to check it on a 1.7.6 cluster today and post the results.