dashboard: Dashboard not working after re-deployment in GCE

Environment
Dashboard version: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.0
Kubernetes version: 1.7.4 on the node pool, 1.7.6 on the master cluster
Running on GCE
Steps to reproduce

Have the default GCE cluster running with 1.7.5. Verify that the dashboard works at http://localhost:8001/ui. Then try to deploy the recommended version: https://github.com/kubernetes/dashboard/blob/master/src/deploy/recommended/kubernetes-dashboard.yaml

Observed result

The recommended version fails with error:

secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" configured
service "kubernetes-dashboard" configured
Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["proxy"]}] user=&{snorre.edwin@bekk.no  [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]
Expected result

To see the dashboard

Comments

A colleague of mine redeployed this kubernetes-dashboard after a mistake, and now I can't get it back. I've tried the alternative version and other things, but I can't seem to get it working again.

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Comments: 27 (8 by maintainers)

Most upvoted comments

If you enabled RBAC, just type

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)

and

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

secret "kubernetes-dashboard-certs" unchanged
serviceaccount "kubernetes-dashboard" unchanged
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" unchanged
deployment "kubernetes-dashboard" unchanged
service "kubernetes-dashboard" unchanged

@floreks, and everyone, I think I managed to fix the issue (at least for my setup). I got some help from @liggitt, who was super awesome, on the Kubernetes Slack.

THESE ARE ALL THE STEPS I USED:

First I determined that I did not have the correct roles installed, which should be set up by the api-server by default:

$ kubectl get roles --all-namespaces
No resources found.

I needed to run the api-server with the flag --authorization-mode=RBAC,AlwaysAllow, which I learned enables RBAC but falls back to AlwaysAllow when RBAC does not authorize a request.
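
For reference, a minimal sketch of where that flag goes, assuming the api-server runs as a systemd unit (as the journald logs below suggest); the unit name and binary path are assumptions and will differ on other setups:

# /etc/systemd/system/kube-apiserver.service (excerpt)
ExecStart=/usr/local/bin/kube-apiserver \
  --authorization-mode=RBAC,AlwaysAllow \
  <...remaining api-server flags...>

$ sudo systemctl daemon-reload
$ sudo systemctl restart kube-apiserver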

This is verified in the api-server logs which will show a bunch of lines like:

Nov 30 23:48:08 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:08.955830    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/cluster-admin
Nov 30 23:48:08 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:08.970721    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/system:discovery
Nov 30 23:48:08 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:08.985079    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/system:basic-user
Nov 30 23:48:09 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:09.005096    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/admin
Nov 30 23:48:09 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:09.032102    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/edit
Nov 30 23:48:09 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:09.048804    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/view

This is not a production-recommended setup, so I still needed to bind my user to a proper role afterwards. However, it worked:

$ kubectl get roles --all-namespaces
NAMESPACE     NAME                                             AGE
kube-public   system:controller:bootstrap-signer               23m
kube-system   extension-apiserver-authentication-reader        23m
kube-system   system::leader-locking-kube-controller-manager   23m
kube-system   system::leader-locking-kube-scheduler            23m
kube-system   system:controller:bootstrap-signer               23m
kube-system   system:controller:cloud-provider                 23m
kube-system   system:controller:token-cleaner                  23m

Next I discovered that superuser access is granted by default only to the system:masters group, not to any particular username. So my admin cert creation process needed to include O=system:masters as the Organization name:

$ openssl genrsa -out config/ssl/admin-key.pem 2048
$ openssl req -new -key config/ssl/admin-key.pem -out config/ssl/admin.csr -subj '/C=AU/ST=Some-State/O=system:masters/CN=cluster-admin'
$ openssl x509 -req -in config/ssl/admin.csr -CA config/ssl/ca.pem -CAkey config/ssl/ca-key.pem -CAcreateserial -out config/ssl/admin.pem -days 365
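
To double-check that the Organization actually made it into the signed cert (not from the original comment, just a standard openssl check; the exact output format varies by openssl version):

$ openssl x509 -in config/ssl/admin.pem -noout -subject
subject=C = AU, ST = Some-State, O = system:masters, CN = cluster-admin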

I changed my api-server flag to only --authorization-mode=RBAC and restarted services. Using my new cert in my kubeconfig:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://address.to-elb.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: cluster-admin
  name: cluster-admin@kubernetes
users:
  - name: cluster-admin
    user:
      client-certificate: ssl/admin.pem
      client-key: ssl/admin-key.pem
current-context: cluster-admin@kubernetes
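
To point kubectl at that config (not from the original comment; the file path is an assumption, the rest is standard kubectl usage):

$ export KUBECONFIG=$HOME/.kube/cluster-admin.yaml
$ kubectl config use-context cluster-admin@kubernetes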

I was able to successfully query:

$ kube-deploy get roles --all-namespaces
NAMESPACE     NAME                                             AGE
kube-public   system:controller:bootstrap-signer               42m
kube-system   extension-apiserver-authentication-reader        42m
kube-system   system::leader-locking-kube-controller-manager   42m
kube-system   system::leader-locking-kube-scheduler            42m
kube-system   system:controller:bootstrap-signer               42m
kube-system   system:controller:cloud-provider                 42m
kube-system   system:controller:token-cleaner                  42m

Lastly, with the correct roles bound, I could create the Dashboard with the correct permissions, using only RBAC:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

This is what worked for me; I hope anyone who runs into this finds it helpful. 👍

Make sure to grant yourself the Container Engine Admin / Cluster Admin roles in GCP IAM. Hope this helps, but further support for that is outside the scope of the kubernetes/dashboard project.
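
A hedged sketch of granting that from the command line (the project ID is a placeholder, and roles/container.admin is assumed to be the matching IAM role; granting it via the Cloud Console works just as well):

$ gcloud projects add-iam-policy-binding MY_PROJECT \
    --member user:$(gcloud config get-value account) \
    --role roles/container.admin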

This looks to me like privilege escalation protection. Are you sure that the account you use to apply the dashboard.yaml has the necessary rights to create secrets etc.? You can't grant more permissions than your own account has in Kubernetes.
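
A quick way to check that (not from the original comment; kubectl auth can-i is available in recent kubectl versions):

$ kubectl auth can-i create secrets --namespace kube-system
$ kubectl auth can-i create roles --namespace kube-system

If either answer is no, the role in the dashboard manifest will be rejected by the escalation check.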