dashboard: Unable to run dashboard.

Environment
Dashboard version: v1.8.0
Kubernetes version: v1.8.2
Operating system: CentOS 7
Node.js version:
Go version: go version go1.8.3 linux/amd64
Steps to reproduce

Ran the following:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
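After applying the manifest, the Dashboard pod's status can be checked (a sketch, assuming the manifest's standard k8s-app=kubernetes-dashboard label is in place):

 $ kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard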

Observed result

Pod logs

2017/12/14 18:20:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
2017/12/14 18:20:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2017/12/14 18:20:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
[... the three lines above repeat four more times ...]
2017/12/14 18:20:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
panic: secrets is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot create secrets in the namespace "kube-system"

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.(*rsaKeyHolder).init(0xc42025bfc0)
        /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:132 +0x2d3
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.NewRSAKeyHolder(0x1a78da0, 0xc4201d5260, 0xc4201d5260, 0x1278920)
        /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:171 +0x83
main.initAuthManager(0x1a77300, 0xc420067a40, 0x384, 0x1, 0x1)
        /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:160 +0x12f
main.main()
        /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:94 +0x27b
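The panic shows the API server denying the secret creation at runtime. A quick way to check what the Dashboard's service account is actually allowed to do (a sketch, assuming kubectl access to the same cluster) is kubectl auth can-i with impersonation:

 $ kubectl auth can-i create secrets -n kube-system \
     --as system:serviceaccount:kube-system:kubernetes-dashboard

If this prints no, the RBAC Role/RoleBinding from the manifest did not take effect for the service account.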

Describing the pod


Events:
  Type     Reason                 Age                From               Message
  ----     ------                 ----               ----               -------
  Normal   Scheduled              5m                 default-scheduler  Successfully assigned kubernetes-dashboard-7486b894c6-8phqr to master-3
  Normal   SuccessfulMountVolume  5m                 kubelet, master-3  MountVolume.SetUp succeeded for volume "tmp-volume"
  Normal   SuccessfulMountVolume  5m                 kubelet, master-3  MountVolume.SetUp succeeded for volume "kubernetes-dashboard-certs"
  Normal   SuccessfulMountVolume  5m                 kubelet, master-3  MountVolume.SetUp succeeded for volume "kubernetes-dashboard-token-c4w59"
  Warning  BackOff                5m (x5 over 5m)    kubelet, master-3  Back-off restarting failed container
  Normal   Pulled                 5m (x4 over 5m)    kubelet, master-3  Container image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.0" already present on machine
  Normal   Created                5m (x4 over 5m)    kubelet, master-3  Created container
  Normal   Started                5m (x4 over 5m)    kubelet, master-3  Started container
  Warning  FailedSync             45s (x28 over 5m)  kubelet, master-3  Error syncing pod
Expected result

Expected the Dashboard pod to start and stay running.

Comments

The service account has been created:

 $ kubectl get serviceaccounts -n kube-system
NAME                   SECRETS   AGE
default                1         13d
flannel                1         13d
kube-dns               1         13d
kubernetes-dashboard   1         7m
tiller                 1         12d

However, the secret appears to contain no data:


 $ kubectl describe secret kubernetes-dashboard-certs -n kube-system
Name:         kubernetes-dashboard-certs
Namespace:    kube-system
Labels:       k8s-app=kubernetes-dashboard
Annotations:
Type:         Opaque

Data
====

The role seems to include the create secrets permission:

 $ kubectl describe role kubernetes-dashboard-minimal -n kube-system
Name:         kubernetes-dashboard-minimal
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"kubernetes-dashboard-minimal","namespace":"kube-system"...
PolicyRule:
  Resources       Non-Resource URLs  Resource Names                     Verbs
  ---------       -----------------  --------------                     -----
  configmaps      []                 []                                 [create]
  configmaps      []                 [kubernetes-dashboard-settings]    [get update]
  secrets         []                 []                                 [create]
  secrets         []                 [kubernetes-dashboard-certs]       [get update delete]
  secrets         []                 [kubernetes-dashboard-key-holder]  [get update delete]
  services        []                 [heapster]                         [proxy]
  services/proxy  []                 [heapster]                         [get]
  services/proxy  []                 [http:heapster:]                   [get]
  services/proxy  []                 [https:heapster:]                  [get]
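Note that a Role on its own grants nothing; its rules only apply once a RoleBinding ties it to the service account. It's worth confirming that the binding from the recommended manifest exists and names the right subject (the binding name below is assumed from the v1.8 manifest; adjust if yours differs):

 $ kubectl describe rolebinding kubernetes-dashboard-minimal -n kube-system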

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Comments: 29 (8 by maintainers)

Most upvoted comments

For those still struggling with this issue, adding this to my cluster solved it for me:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Source: https://blog.tekspace.io/kubernetes-dashboard-remote-access/

Edit: I should mention (to anyone who finds this post) that this ClusterRoleBinding will permit the kubernetes-dashboard ServiceAccount full-control over your entire cluster. This should be chosen carefully, after reviewing the Access-Control wiki page posted below. @divyangjp has proposed a much more restricted ServiceAccount below with tighter control on what the account can and can’t do.

I'm hitting this issue too; Dashboard version is 1.8.1.

How can it be fixed?

Please reopen this issue @floreks

Closing as stale. Ping us or folks from kubernetes-users channel on slack if you need further assistance with cluster configuration.

The comment above covers dashboard viewing only. To actually set up the Dashboard, use this minimal Role and RoleBinding:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal-role
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal-role
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
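Once the Role and RoleBinding above are applied, the individual grants can be sanity-checked with kubectl auth can-i, impersonating the Dashboard's service account (a sketch; the verbs and resources mirror the rules above):

 $ kubectl auth can-i create secrets -n kube-system \
     --as system:serviceaccount:kube-system:kubernetes-dashboard
 $ kubectl auth can-i update configmaps/kubernetes-dashboard-settings -n kube-system \
     --as system:serviceaccount:kube-system:kubernetes-dashboard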

@senorequeso That’s a security risk. Anybody who chooses to use that solution should check https://github.com/kubernetes/dashboard/wiki/Access-control. The Admin Privileges section clearly states: IMPORTANT: Make sure that you know what you are doing before proceeding. Granting admin privileges to Dashboard's Service Account might be a security risk.

@alexvicegrab Here’s a much more restricted configuration than cluster-admin.

It creates a new ServiceAccount named dashboard-viewer and grants it the view permission only: it can see resources (except secrets) and can’t edit or update anything. Is this better?

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-viewer
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: dashboard-viewer
  namespace: kube-system

To get a token for logging in to the Dashboard, use this command:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-viewer | awk '{print $1}')
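As an aside for anyone landing here on a newer cluster: since Kubernetes v1.24, token Secrets are no longer created automatically for ServiceAccounts, so the grep above will find nothing. A short-lived token can be requested directly instead:

 $ kubectl -n kube-system create token dashboard-viewer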