dashboard: kubeconfig files cannot log in to the dashboard
Environment
Dashboard version: 1.7.1
Kubernetes version: 1.7.6
Operating system: CentOS 7
Node.js version:
Go version:
Steps to reproduce
The dashboard responds with `Not enough data to create auth info structure.`
```
$ cat kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3akNDQXFxZ0F3SUJBZ0lVZDVOb3JqbTRST05jVEk4eDBGMUZKQjgvdDlnd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1p6RUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0ZOb1pXNTZhR1Z1TVJFd0R3WURWUVFIRXdoVAphR1Z1ZW1obGJqRU1NQW9HQTFVRUNoTURhemh6TVE4d0RRWURWUVFMRXdaVGVYTjBaVzB4RXpBUkJnTlZCQU1UCkNtdDFZbVZ5Ym1WMFpYTXdIaGNOTVRjd056SXhNRGN6TURBd1doY05Nakl3TnpJd01EY3pNREF3V2pCbk1Rc3cKQ1FZRFZRUUdFd0pEVGpFUk1BOEdBMVVFQ0JNSVUyaGxibnBvWlc0eEVUQVBCZ05WQkFjVENGTm9aVzU2YUdWdQpNUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJsTjVjM1JsYlRFVE1CRUdBMVVFQXhNSwphM1ZpWlhKdVpYUmxjekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFMbHVlMlZPTnp5Y3Zxak4KempmYktBSFV4TUx5M3dGUnNhY0FOeEh2R1JNZXFNM3MxcDFJek1kUkc0c2ZSMG5DT0xxOFBHS2g1UzlCQlh2aApNb0RKc0tQQWZic3QyaHpkYThNYUNKMkVYLzdoTFhicUFLMXZZR1E0bEY0NUF5YWEwcVBsc0xlVEM0Wm1lYnZ4CklkajV3MDRGdnl0cVZoUGw3TmIzcEtVWjJ3a2FGREpIVEszZUlhWkc5QTZGMkNpNTYyOTN0MFpLZDJmZWdWMjEKUEtING5xRllXREk4MU5QWFk3UmNuT29ST0NFeDBQLzh4eHRnT1VIdVVUQ29Lc2tyWUhOWjhzc04vVjM3YVY5bQo5TjQ4UHE3RjBsVFN2a1gxaGIzM0RMK0thT0VTa05UYzRJWVJkbHlBaTNHbmJZSXgwU1gvY2swa0NHWWgxc2ZOCmhpUStPcDBDQXdFQUFhTm1NR1F3RGdZRFZSMFBBUUgvQkFRREFnRUdNQklHQTFVZEV3RUIvd1FJTUFZQkFmOEMKQVFJd0hRWURWUjBPQkJZRUZMcmJid2lTQ0xHNENWa2NRL1VaOXBXZVlsMGZNQjhHQTFVZEl3UVlNQmFBRkxyYgpid2lTQ0xHNENWa2NRL1VaOXBXZVlsMGZNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUJhWmxaMVRZTE1mTGdPClZCeFhiaWE2TE9FaWQwQ3dZTVJVN2tnMmRTYVVGQjY3THNna3ZNT1NxVTlzR1ZienBwOFlscHJYVk52VXV0ajkKOW9EVExCeDA5aG40SnZXSUIwSXNxNTlQc0NxSEtaZlR3UXNXWHFJUFNkL2w5R0tJRVJxM2ZYZFl1QVpMZ1NldwpIUzdLdkc4Mm9oMG5GRGV1UVowSEpRQ09tWG1BdklwTnZLd3p3TTBtdWl6MklDVkhrVEhIZFZidUJMcTJsYnRMCjdld29sS3VSRmZTQk1oRG4wdUxpTmJYZGY1dGFNVUxvOUdjS3Fnc0hFMEVnbU0xMjJPOHEyVmxLeGZBOTRTSXIKNjFvdndoTHhubVZOYkRXdTE4dDhPYTVUQkFxSDgyczJrQ0dZRXpZNjlKS0Vaam1LZzVsUjBCcEd1U1JVZXJhaApOT3VRSG5ETgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.0.5:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: tuxiaogang
    user: tuxiaogang
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: tuxiaogang
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQyekNDQXNPZ0F3SUJBZ0lVRForNW9kRWlkWVRvM1FnUFlIQWZoR0xWZGpBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1p6RUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0ZOb1pXNTZhR1Z1TVJFd0R3WURWUVFIRXdoVAphR1Z1ZW1obGJqRU1NQW9HQTFVRUNoTURhemh6TVE4d0RRWURWUVFMRXdaVGVYTjBaVzB4RXpBUkJnTlZCQU1UCkNtdDFZbVZ5Ym1WMFpYTXdIaGNOTVRjeE1ERXpNVE0xTVRBd1doY05NamN4TURFeE1UTTFNVEF3V2pCbk1Rc3cKQ1FZRFZRUUdFd0pEVGpFUk1BOEdBMVVFQ0JNSVUyaGxibnBvWlc0eEVUQVBCZ05WQkFjVENGTm9aVzU2YUdWdQpNUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJsTjVjM1JsYlRFVE1CRUdBMVVFQXhNS2RIVjRhV0Z2CloyRnVaekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFMeU9sWk5VL2RNb01ZT1MKWWZ6a1oxYUF2T2Rwd1BxRTgrempNdW9DSUVRQnJiMDVzbGF4ekYvYW9hQzcveE5rQW4ycVZVbEwyQlB2bEJ5MApBVEtwMFh4TlRpdW1sZGJZaFYxZmlMbysxY2VpajU2d3NITGNkNEZUeU56NG11SHFYUTA3NStXRDFRRWpZeFEwCktPRzJyQlg1YmtJMDJMUVIvc2U4SWZIdEdUQ3VFWTJwcndyRUl4UWk4b0FRazNRLzI5SDdpcjB5ZWxPWkxxdjIKcXhlRjc1N1hXZFNMWmN1WmNBV2RNWks0VlA5alJBeG9yVmpubFZkU2drUnBpeTA2Z2dZTUk2OHp0TkppNEw3TQpVQXp3WUZRUDhKZ1BQb3RmdEY1MzEwalNHRnhYejNoZHhLQjNWNGhJZlFxbHpkaGY1SEgydEVXTlVwQ0Y2YUwzCnBFU2V4ZGNDQXdFQUFhTi9NSDB3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUYKQndNQkJnZ3JCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTUIwR0ExVWREZ1FXQkJTcXdsOEo3VGlMVXVEawpjaVhwQ0d6YmIyaUJjREFmQmdOVkhTTUVHREFXZ0JTNjIyOElrZ2l4dUFsWkhFUDFHZmFWbm1KZEh6QU5CZ2txCmhraUc5dzBCQVFzRkFBT0NBUUVBU0NlUStnL0xhV1JEeGZSVzU1T0ZHK3M3SUY3RVZYSlI2RG5ObWVaN2NncjUKaWJWUzZzSElKL1ZOOXBnMlZFWWJoZ3B1ZmdKelM4Mm5ibVhUMUMwYlJ5d1ZRekhxaVNKWkhRdStoL2YvQVMyUApjTUtiY3YzS1dzL3dtekhCZmR3eFdBdTVQektEekJJUDhFNTg3U2ZJU2FZbGtKWm1iN1FYVWN5TEU5bUF3blVCCmVoL2Erd0tFa3ZBNXNhS3Y5NUNyMnRNbmM5MjJJcDB0SUh2RlBhclk4OFdWSXFhdnJWdnY5cjJ4Nm1nOHV0Y0wKZzczc0dzclZYRVFOTXIwcWsxMVd2SzV3dW0rRkdubW4zQUZuaG55ODI1SUN5VGJrV0dOdENlWEt2V0wyZUovdApSNXhuRW5wMjRZTm5iZzBybVB0YVUyeE1wa3pYVCtWcHJBOEE3bVpvV0E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdkk2VmsxVDkweWd4ZzVKaC9PUm5Wb0M4NTJuQStvVHo3T015NmdJZ1JBR3R2VG15ClZySE1YOXFob0x2L0UyUUNmYXBWU1V2WUUrK1VITFFCTXFuUmZFMU9LNmFWMXRpRlhWK0l1ajdWeDZLUG5yQ3cKY3R4M2dWUEkzUGlhNGVwZERUdm41WVBWQVNOakZEUW80YmFzRmZsdVFqVFl0QkgreDd3aDhlMFpNSzRSamFtdgpDc1FqRkNMeWdCQ1RkRC9iMGZ1S3ZUSjZVNWt1cS9hckY0WHZudGRaMUl0bHk1bHdCWjB4a3JoVS8yTkVER2l0CldPZVZWMUtDUkdtTExUcUNCZ3dqcnpPMDBtTGd2c3hRRFBCZ1ZBL3dtQTgraTErMFhuZlhTTklZWEZmUGVGM0UKb0hkWGlFaDlDcVhOMkYva2NmYTBSWTFTa0lYcG92ZWtSSjdGMXdJREFRQUJBb0lCQVFDeWx6WXl3c2hhekhJQgpWWTk3MFBYVHA4SEVTWlVmY3ZmNlFjTkNnMXIrTHJ6WlFpR1pIWFFld2R4ZWVsR0Jrek1NeFYxY08vcmYvd1pCCkhYa1kvR0ZQSTRWTHNNK3hHNGxOeENPamk4bzkrTW1oRzJjMGszNlpQcnM4R0RmU2pJRXYvTEtLMzQvTE1USXgKdTZtUkI4ejhUekRRZ205U05zMGpieHlUb09kQUE2cyt4QXBWS215LysySjB1bjFUbnM3YmZMVEFXS21HNjR1dwp1cW45a3pBRkRXL1MrNms0clJrcGUvOGhzYklQRW1lRWE4dkJZSG54ZkRqVXI2NjVBQ1ZDNHorQy9RTUF5Um5vCkdFcXJGempURW45Unk0YnNpMWFoSHRvbDRqNW5IZERHVnIvMWJteC85d0dpaVdTc3JYS3FPMnhxMm1sV25sV1MKL1Z5ays1QXhBb0dCQVBtM2dEd2VGR2xlSERDUWlwdXhvMkFEQkVUMUc0YUd6QlozKzlaVVpmdnArNUFoWXJNcAp3dklKUVc0N2VIclVvQUpaOUxUcnpmWXJ5L3lqOWREN3JxeWZXaEI4QThsbVluVVd3eUZvdDN5aG1MOFJ2K2ZQCnB5TzFnbXpWd29FSXZ1QU5VZTQrNmlWTXRac29VVjJwZzFDeHNicEJodWRYb3grNzZjcnRLRGIvQW9HQkFNRk4KSW8yS2lETWQ2d0FESkdvUXYxS2NZRjQwYXUzcGswSTZMbDBrdzVKMUdNNDdCeWk5Syt6VU1EV09tY2ZwQjVtYwo1UWVreDVZaFJyWWZrdFNHQ2V6U2ZMTEJUdXErQXExL2dLMVRqTGF6ZnMwS3dSMFYweXRmV1M3QzUwRm45akVKCmkxOUtaSWxIdEhyRDNEejhPR216eC9Rak84R0toV3kraS9TWHdRa3BBb0dCQUtSbGF2V284OVVlVUw2a0dheEEKT1FjM1ZUTTBqZ2QxYkp5S0p2QkdKZEcvaTQ2cWUvanBVRjdaT3dzZitjUWJnSytybXc4VWdrWkROUXJBd2s3dgpzbUlRa2xGeDQyaE9rQmozZ0VUWlZKcW5KQkQ5MVhIOTRkSC9aN3JReXpqNWtmZWNyVWlFZ005SGZmT0VpblIzCjZXeFJYMmo0UktDK3NEUnZHSTR3clIzdkFvR0FCWVkvMDQyL0FMNzlKVjN4bjNwbERXWmN0clNHemMvY0hvdHQKSWNwWU1JcGFNQ0t0dkxOVFd3eGhhRlp2L0sralFQZWo4QWo4ajBUYU1ZQkxnUGxudFRYNnpGMEw5VmVDMmhTSAp4K3hZWEN4YkZsOFZUOUI4M1lOM0dBZ0g5ZTJUc3FrVUs2QURxWXk4RXJvZ1JEbnRIdEE5aWJPc0ZJYng4ejZxCjMwMnEvYWtDZ1lBNWRkb1p4Y2ljb1RvSmJRMWg0aDhmSSt1UlhUdk1hM3VDNmcxNlQ2cUcvQXgxRk5iSll3SkUKS1UxcmgwUkRkMXYzdWJPRnJ4Uk1SVDNLK3F1YjZqbEdpMkcvR1Q0Wm02b3o1TUVMajFGb2g4cGplSjNqdkFkTQpnRm5kdTMxaFdlbkJOWFBEQ1NZTDFTVjl1NlFqVlJtSlRqeHEzSGdEVlVyYUhjNWpjbkFqVkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
```
```
[root@master1 ~]# cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --advertise-address=192.168.0.5 \
  --bind-address=192.168.0.5 \
  --insecure-bind-address=192.168.0.5 \
  --kubelet-https=true \
  --runtime-config=rbac.authorization.k8s.io/v1beta1 \
  --authorization-mode=RBAC \
  --experimental-bootstrap-token-auth \
  --token-auth-file=/etc/kubernetes/ssl/token.csv \
  --service-cluster-ip-range=172.17.0.0/16 \
  --service-node-port-range=300-9000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.0.8:2379,https://192.168.0.9:2379,https://192.168.0.10:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --event-ttl=1h \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Observed result
Expected result
Comments
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Comments: 45 (18 by maintainers)
Works as intended. Read our Access Control guide on the wiki to find out how it works.
A few thoughts for those who might end up here from a search. The reason why my `~/.kube/config` yaml file did not work in dashboard 1.8 was that it did not contain a token or a username with password. Searching for `Not enough data to create auth info structure` in the dashboard's source code clearly shows that this is what is expected in the file you upload. The same was true in @txg1550759's case.

The yaml file I was trying to authenticate with came from `/etc/kubernetes/admin.conf`, which was generated by kubeadm 1.7 back in July. I have seen other admin files since then that were generated by kops; these did contain a password if I remember correctly. So perhaps the lack of a token or a password in `kubeconfig` is some kind of legacy thing, or a kubeadm-specific thing, not sure.

I ran `kubectl get clusterRoles` and `kubectl get clusterRoleBindings` and saw an item called `cluster-admin` in both. However, unlike other role bindings (e.g. `tiller-cluster-rule`), the `cluster-admin` one referred to something called `apiGroup` instead of `ServiceAccount` (to which a token can belong). Check out the difference at the bottom of each output:
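The outputs are not reproduced here; as a sketch of the difference, the subjects sections of the two bindings typically look like this (the `tiller-cluster-rule` shape assumes a standard helm setup):

```yaml
# kubectl get clusterrolebinding cluster-admin -o yaml (excerpt)
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters     # a certificate group, not a ServiceAccount

# kubectl get clusterrolebinding tiller-cluster-rule -o yaml (excerpt)
subjects:
- kind: ServiceAccount     # a ServiceAccount, which owns a token
  name: tiller
  namespace: kube-system
```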
This suggests that my cluster probably does not have a dedicated 'root' service account per se. That's why `~/.kube/config` works for `kubectl` without having a token or a username and password in it, but does not work for the dashboard.

Nevertheless, I could get into the dashboard by authenticating myself as other ServiceAccounts, and this worked well. Depending on the privileges of the service account I picked, the dashboard gave me access to different resources, which is great! Here's an example of getting a token for the service account called `tiller` to authenticate (you'll have it if you use `helm`):
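A sketch of the usual pattern for pulling that token (assuming the token secret's name starts with `tiller`):

```sh
# Find the tiller service account's token secret and print its token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep tiller | awk '{print $1}')
```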
Copy the token ××× and paste it into the dashboard's login screen.

The useful thing about the `tiller` service account in my case is that it's bound to the `cluster-admin` cluster role (see the yaml above). This is because tiller needs to be able to launch pods, set up ingress rules, edit secrets, etc. Such a role binding is not the case for every cluster, but it may be a default thing in simple setups. If that's the case, using `tiller`'s token in the dashboard makes you the 'root' user, because it implies that you have the `cluster-admin` cluster role. Finally, my upgrade from dashboard 1.6 to 1.8 can be considered finished! 😄
All this RBAC stuff is way too advanced for me, to be honest, so it may be that I've done something wrong. I guess a proper solution would be to create a new service account and a new role binding from scratch and then use that token in the dashboard instead of tiller's. However, I'll probably stay with my tiller token for some time until I get the energy to switch to a proper solution. Could anyone please confirm or correct my thoughts?
Not really sure why no one is posting this in the docs… Use the `clusterrole-aggregation-controller` token to access your dashboard as 'root':
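Presumably via the same describe-secret pattern used elsewhere in this thread (a sketch, not the original command):

```sh
# Print the clusterrole-aggregation-controller service account's token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep clusterrole-aggregation-controller | awk '{print $1}')
```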
Just kind of silly to not include a root account as a part of the dashboard deployment.
Hi everyone, thanks for the discussion. I spent four days on this, and after that I found the solution. Issue:

Solution 👍

You can contact me in person: linux.kartik@gmail.com
I get that there are a lot of different ways to connect; it's just unintuitive. I think the quickstart docs should call out at least one example of how to connect with a 'root' account. Hours of searching to find one login method makes people want to quit.
@floreks Thanks… this is a much better way of authenticating, but I think the reason people have been ending up here has to do with the wiki docs for dashboard authentication. They go through the different methods of authentication, but should probably at least provide a basic "Usage" section: if you are new to Kubernetes and you just installed the dashboard, do the following steps in order to use it: 1. create a service account, 2. get its token, 3. put the token in your `~/.kube/config` in this section of the yaml.
@kachkaev I'm really glad that you actually took the time to try and find a solution on your own 😃 I can help you fill in the gaps.
Usually cluster provisioners like `kubeadm` or `kops` configure the `kubeconfig` file to use certificate-based authentication. This is fine for a binary such as `kubectl`, because it can establish a secure connection with the API server and be authenticated based on your certificates. This way, however, will not work for a web application: your private key should never leave your computer, and a web app cannot establish the same kind of connection `kubectl` does.

That is why we have to rely on the other methods of authenticating a user that Kubernetes offers, such as token-based authentication or basic auth (login and password). The second one only works when the ABAC authorization mode is enabled and some additional arguments are passed to the apiserver. Neither of these methods is deprecated; they are just different ways to authenticate a user, and all of them can be specified in the kubeconfig file.
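As a sketch of what the basic-auth route involved at the time (paths and credentials below are assumptions; note that the `--basic-auth-file` apiserver flag was later removed in Kubernetes 1.19):

```sh
# Extra apiserver argument enabling basic auth:
#   --basic-auth-file=/etc/kubernetes/basic-auth.csv
# Each CSV line is: password,username,uid
echo 'S3cr3tPass,admin,1000' > /etc/kubernetes/basic-auth.csv
```

The matching kubeconfig user entry then carries a login and password instead of certificates:

```yaml
users:
- name: admin
  user:
    username: admin        # hypothetical credentials
    password: S3cr3tPass
```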
There are even more authentication options. That is why it is highly recommended to read our documentation first, before using Dashboard. In the Introduction section of our Access control guide, we provide links that should help users get rid of any doubts about how Dashboard works, especially the link to Authenticating in Kubernetes: https://github.com/kubernetes/dashboard/wiki/Access-control#introduction
RBAC is generally quite a big topic in Kubernetes. I'd recommend reading the Using RBAC Authorization guide to find out how to create and configure a "user" with the required permissions. We are really trying to keep our documentation clear for users and to provide all the necessary links, so they can find out how everything works and how to work not only with Dashboard, but also with Kubernetes.
In case you have some more doubts or questions you can ask me. I’ll try to help if I can.
I haven't seen the correct answer on the internet or in the official documentation, so here is my research; it may help you.

First, you need to have an account (ideally one of the built-in ones; `kubectl get clusterroles` lists all the built-in roles, and I strongly recommend using `edit` rather than `admin`). Then type this command to get the token to use in the kubeconfig:

```sh
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep edit | awk '{print $1}')
```

Finally, the structure of the kubeconfig should be like this:
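The example kubeconfig itself did not survive above; a minimal sketch of a token-based user entry (the cluster details and names here are assumptions):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded CA certificate>
    server: https://192.168.0.5:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dashboard-user
  name: kubernetes
current-context: kubernetes
users:
- name: dashboard-user
  user:
    token: <the token printed by the describe command above>
```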
This works for me, and here is my dashboard version: `kubernetes-dashboard-amd64:v1.10.0`

I used https://github.com/kubernetes/dashboard/wiki/Creating-sample-user but edited the `ClusterRoleBinding` (a sketch follows below). This gave `kubernetes-dashboard` the admin permissions. Then I took its token (the command is also sketched below) and used it. It looked like it got stuck on the login; after 20 seconds I pressed "Skip", and then I had the permissions.
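The edited binding and the token command were not preserved; presumably something along these lines, pointing the binding at the `kubernetes-dashboard` service account (the exact names are assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```

```sh
# Print the kubernetes-dashboard service account's token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')
```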
To serve the dashboard I used:
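Presumably a `kubectl proxy` invocation opened up to the network, roughly like this (the exact flags are an assumption):

```sh
# Serve the API (and the dashboard) on all interfaces, accepting any host
kubectl proxy --address='0.0.0.0' --accept-hosts='.*' --port=8001
```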
Which is really not recommended, because everyone can access it. And then I used: http://<master_ip>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Can we print a clearer error message about certificate auth not being supported? `Not enough data to create auth info structure` implies to the user that the kubeconfig is deficient. Additionally, it might be useful to state upfront that certificate auth is not supported, before the user selects a kubeconfig.
It would be cool if, when using kubectl proxy, the API server would pass on the information about the logged-in user, and the Kubernetes dashboard could use this 😃
All the best, Sebastian
https://github.com/kubernetes/dashboard/wiki/Access-control#kubeconfig
@floreks Thanks for the pointers. We’ve used https://github.com/kubernetes/dashboard/wiki/Access-control#getting-token-with-kubectl.
I would like to kindly point out that the workflow is not quite obvious. Repurposing a token randomly picked from a list of services seems rather arbitrary. Would there be a chance to package this as a feature of `kubectl`? Something like `kubectl config get-bearer-token`.

@txg1550759 were you able to find what the problem was? I also have an `admin.conf` that was generated for me by kubeadm, but it does not work. I can do `kubectl get pods` etc. with it, though.

@cdennen creating a sample user and getting a token is also described in the docs:
https://github.com/kubernetes/dashboard/wiki/Creating-sample-user
The wiki describes a few ways of accessing Dashboard (`kubectl proxy`, NodePort, directly through the API server). We will not document an ingress method, as it is very custom and the user has to decide what tools to use and how to configure them to expose an application.

It was already explained many times why certificate-based authentication is not supported. It will not be added, for security reasons: the private key should never leave the user's computer.
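For the NodePort route mentioned above, a minimal sketch (assuming the dashboard service lives in `kube-system` under the name `kubernetes-dashboard`):

```sh
# Switch the dashboard service to NodePort, then find the assigned port
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl -n kube-system get svc kubernetes-dashboard
```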
@floreks - that’s great documentation!
A lot of individuals' workflows look like `kubectl create -f ...`. Could you please include that information and a link on this page? https://github.com/kubernetes/dashboard/wiki/Installation
From reading the abovementioned documentation:

I am the one who has the power over the cluster now ))

edit: no, not really, I needed a `clusterrolebinding` there.

@abrahamrhoffman's solution no longer worked for me in k8s 1.19.2, probably because `clusterrole-aggregation-controller` is no longer an admin (I'm not an expert in k8s, so this is just speculation).
clusterrole-aggregation-controlleris no longer an admin (I’m not an expert in k8s, so this is just a speculation).What helped was the creation of a special service account with admin privileges and then printing its token:
UPD: Turns out that the Dashboard docs also include a recipe for creating the service account:
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
Yes, I am able to access it, but the issue is that I need to provide the token value every time the token expires. This has resolved the issue temporarily; I'll work on a permanent solution.
Maybe it makes sense to also provide a sample dashboard proxy setup, since `/ui` no longer works?

That sounds like a good idea to me; the issued tokens could have a relatively small time limit.
Otherwise, I think the "right" way of doing this would be with OpenID. It would be nice if the dashboard itself would redirect to your OpenID provider, where you could authenticate, and then redirect back; but AFAIK you currently need something like kubelogin.
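For reference, kubectl's built-in OIDC support is configured through an `auth-provider` stanza in the kubeconfig; a sketch (the issuer URL, client ID, and tokens are assumptions):

```yaml
users:
- name: oidc-user
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://accounts.example.com   # hypothetical issuer
        client-id: kubernetes
        client-secret: <client secret>
        id-token: <initial id token>
        refresh-token: <refresh token>
```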
@VampireDaniel That command prints tons of tokens, but which is the correct one?
@VampireDaniel This is probably the answer everyone is looking for. The documentation doesn’t explain how to add the token to the kube config.
https://github.com/kubernetes/dashboard/wiki/Access-control#basic
https://github.com/kubernetes/dashboard/wiki/Access-control#kubeconfig
The structure of the kubeconfig file can be seen in the official documentation: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
I think it is not recommended to configure multiple auth options for a single user…
@abrahamrhoffman You mean being able to access Dashboard as root, as described here? https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges
A token does not necessarily have to be tied to a Service Account. It can be issued by OIDC or any external identity provider. As long as the API server accepts it, it is fine. There are more ways of configuring and getting a "correct" token to log in.
By default, Dashboard now has very few permissions, so it no longer imposes any security risks (https://github.com/kubernetes/dashboard/wiki/Access-control#v18). I do not see any security threat in exposing Dashboard publicly if everything is configured properly. There is no way to escalate privileges right now, so exposing Dashboard should be no different from exposing the API server (secured, with RBAC enabled).
Thank you for the replies @maciaszczykm and @floreks! RBAC is getting slightly clearer over time, thanks to the docs that are constantly improving. I really like the fact that, if installed correctly, the dashboard no longer has admin privileges, so it is possible to give different team members varying permissions. Totally agree that if everyone has the `kube-admin` role, things can go wrong pretty quickly!

When I ran dashboard 1.6 at https://dashboard.example.com/, I was adding basic auth to the ingress rule to protect the dashboard from strangers; anyone could become an admin of my cluster otherwise. After upgrading to 1.8 with your official yaml, it seems that running https://dashboard.example.com/ is now safe even without any basic auth in the ingress. If a hacker gets to that domain, they'll only be able to learn of the k8s cluster's existence, but not perform any read/write operations on it. Only authenticated token bearers will be able to see the details of the existing resources and change them (as long as a token represents a ServiceAccount with enough privileges). Am I right about this?

I understand that the best option is to keep https://dashboard.example.com/ available only behind a firewall, but I am still curious whether exposing this resource publicly is OK for simple clusters with non-critical personal projects. A friend of mine holds the opposite opinion; we need to settle our dispute once and for all 😄
Correct. It is the recommended way of handling it.

If it works for you, it is fine. You should be aware that everyone with the `cluster-admin` role is able to perform critical changes within the cluster. That's why it should be accessible only to a small group of people.