kops: kops cluster with RBAC enabled is failing
I created a Kubernetes cluster (v1.5.3) with a master and 3 nodes using kops v1.5.1 on AWS. I added the following additional config to support RBAC (before running `kops update cluster --yes`):
```yaml
kubeAPIServer:
  authorizationMode: RBAC,AlwaysAllow
  authorizationRbacSuperUser: admin
  runtimeConfig:
    rbac.authorization.k8s.io/v1alpha1: "true"
```
The cluster was working fine. I added the necessary serviceaccounts, clusterroles, clusterrolebindings, etc.
Then I removed the "AlwaysAllow" string from `authorizationMode` in the kops config file and tried to do an update / rolling-update. Nothing happened (I think kops doesn't yet detect these changes), so I forced a rolling update (using --force). All of the master / node instances were recreated, and the serviceaccounts seem to be working as expected, but the nodes are no longer joined to the cluster: `kubectl get nodes -o wide` shows only the master, even though all 3 nodes are visible in the nodes autoscaling group. Thinking the forced rolling update had broken the cluster networking, I created a fresh cluster, this time directly with RBAC (without AlwaysAllow), and hit the same issue: only the master shows up in `kubectl`.
I also tried editing the instance groups to increase / decrease the node count, but the issue remained: the autoscaling group adds / removes instances, yet none of them appear in `kubectl`.
I just want to know whether this behavior is expected (and whether it will be fixed in future versions). If I am doing something wrong, what is the best way to enable RBAC on a cluster installed with kops?
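For reference, the spec change described above amounts to roughly this (a minimal sketch; the rest of the cluster spec is omitted):
```yaml
# Sketch of the kubeAPIServer block after removing AlwaysAllow,
# leaving the API server to authorize with RBAC only.
kubeAPIServer:
  authorizationMode: RBAC
  authorizationRbacSuperUser: admin
  runtimeConfig:
    rbac.authorization.k8s.io/v1alpha1: "true"
```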
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 1
- Comments: 21 (11 by maintainers)
that’s a lot more targeted, yes. for the first subject, this should work as well:
@liggitt @sethpollack @chrislovecnm Thank you all
Adding this fixed all my issues
if you have pods that access the API (like the kube-dns pod does), you need to give some permissions (not cluster-admin, generally) to that pod’s service account.
the role needed by kube-dns is at https://github.com/kubernetes/kubernetes/pull/38816/files#diff-0dd2098231b4213ca11a4c4734757936 (and is included by default in 1.6)
more detailed RBAC doc is in progress for 1.6 at https://github.com/kubernetes/kubernetes.github.io/pull/2618 (preview at https://deploy-preview-2618--kubernetes-io-master-staging.netlify.com/docs/admin/authorization/rbac/). The concepts all apply to 1.5, though the version (v1beta1), the command-line helpers, and many of the default roles are new in 1.6
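As a concrete example of the scoped-permissions point above, a grant for the kube-dns service account could look roughly like the sketch below. This is a hedged illustration: the `kube-dns-read` names are invented, the rules are loosely modeled on the `system:kube-dns` role linked above, and the manifest uses the later stable `rbac.authorization.k8s.io/v1` schema (a 1.5 cluster like this one would use `v1alpha1`, whose fields differ slightly).
```yaml
# Hypothetical ClusterRole/ClusterRoleBinding giving the kube-dns
# service account read-only access to services and endpoints,
# instead of binding it to cluster-admin.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-dns-read        # illustrative name, not a built-in role
rules:
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-dns-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-dns-read
subjects:
- kind: ServiceAccount
  name: kube-dns
  namespace: kube-system
```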
If it's auth related, maybe @liggitt can help?
No, you don't need to add `name: system:authenticated`. It's probably a bad idea, I just used it as a shortcut for now. All you should need is the `kube-controller-manager` user.
In kubernetes 1.6 RBAC is moving to beta, will have default roles/bindings, and will start each controller with a distinct service account. For 1.5, the equivalent would be granting cluster-admin to the `kube-controller-manager` user. I am waiting for 1.6 to fix up my roles, and just added `name: system:authenticated` to cluster-admin for now.
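For anyone following along, the 1.5-era workaround described above (granting cluster-admin to the `kube-controller-manager` user) would look roughly like the sketch below. The binding name is made up, and the manifest is written against the later stable `rbac.authorization.k8s.io/v1` schema; with the `v1alpha1` API enabled on this cluster the group version and subject fields differ slightly.
```yaml
# Hypothetical ClusterRoleBinding granting cluster-admin to the
# kube-controller-manager user, the 1.5 equivalent of the default
# controller bindings that ship with 1.6.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-controller-manager-cluster-admin   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-controller-manager
```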