helm: Helm 2.2.3 not working properly with kubeadm 1.6.1 default RBAC rules

When installing a cluster for the first time using kubeadm v1.6.1, initialization defaults to setting up RBAC-controlled access, which interferes with the permissions Tiller needs to do installations, scan for installed components, and so on. helm init works without issue, but helm list, helm install, and so on all fail, each citing some missing permission or another.

A work-around for this is to create a service account, add the service account to the tiller deployment, and bind that service account to the ClusterRole cluster-admin. If that is how it should work out of the box, then those steps should be part of helm init. Ideally, a new ClusterRole should be created based on the privileges of the user instantiating the Tiller instance, but that could get complicated very quickly.

At the very least, there should be a note in the documentation so that users who install Helm by following the included instructions aren’t left wondering why they can’t install anything.

Specific steps for my workaround:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl edit deploy --namespace kube-system tiller-deploy # and add the line serviceAccount: tiller under spec.template.spec (placement shown below)
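
For reference, after the edit the relevant part of the Deployment should look roughly like this (only the added field is shown; everything else under spec.template.spec stays as it was):

spec:
  template:
    spec:
      serviceAccount: tiller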

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 55
  • Comments: 41 (14 by maintainers)

Most upvoted comments

To automate the workaround, here’s a non-interactive version of the temporary fix described in the first comment here, using patch instead of edit:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
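
If you want to keep the RBAC pieces in version control instead of creating them imperatively, the same service account and binding can be expressed as manifests and applied in one step. This is only a sketch, assuming the kube-system namespace and the cluster-admin binding used above; rbac.authorization.k8s.io/v1beta1 matches the cluster versions discussed in this thread (use v1 on newer clusters):

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF

The kubectl patch step above is still needed afterwards to point tiller-deploy at the new service account.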

After running helm init, helm list and helm install stable/nginx-ingress caused the following errors for me on Kubernetes 1.8.4:

# helm list
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"

# helm install stable/nginx-ingress
Error: no available release name found

Thanks to @kujenga! The following commands resolved the errors for me; helm list and helm install work fine after running them:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Adding a temporary alternate solution, for automation purposes and for @MaximF:

For the Katacoda scenario (https://www.katacoda.com/courses/kubernetes/helm-package-manager), we didn’t want users having to use kubectl edit to see the benefit of Helm.

Instead, we “disable” RBAC using the following command:

kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts

Thanks to the Weave Cortex team for the command (https://github.com/weaveworks/cortex/issues/392).
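
Worth noting that this binding effectively grants cluster-admin to every service account in the cluster, which is fine for a throwaway training environment but not something you’d want elsewhere; to undo it later, just delete the binding:

kubectl delete clusterrolebinding permissive-binding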

Elsewhere I proposed the idea—casually—of adding an option to helm init to allow specifying the service account name that Tiller should use.

In case you run the command kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller and get the error below:

Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User $username cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope: Required "container.clusterRoleBindings.create" permission.

do the following:

  1. Run gcloud container clusters describe <cluster_name> --zone <zone>, look for the admin username and password in the output, and copy them.
  2. Run the same command again, this time with the admin username and password: kubectl --username="copied username" --password="copied password" create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller (a scripted version of these two steps is sketched below).
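
A scripted version of those two steps might look like the following. This is only a sketch, and it assumes a GKE cluster that still has basic auth enabled (masterAuth.username and masterAuth.password are only populated in that case); CLUSTER_NAME and ZONE are placeholders:

# read the basic-auth admin credentials from the cluster description
ADMIN_USER=$(gcloud container clusters describe CLUSTER_NAME --zone ZONE --format='value(masterAuth.username)')
ADMIN_PASS=$(gcloud container clusters describe CLUSTER_NAME --zone ZONE --format='value(masterAuth.password)')
# retry the binding as the admin user
kubectl --username="$ADMIN_USER" --password="$ADMIN_PASS" create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller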

I’ve done a bunch of testing now, and I agree with @seh. The right path forward seems to be to create the necessary RBAC artifacts during helm init, but to provide flags for overriding this behavior.

I would suggest that…

  • By default, we create the service account and the binding, and add that service account to the deployment
  • We add only the flag --service-account, which, if specified, skips creating the SA and binding, and ONLY modifies the serviceAccount field on Tiller.

Thus, the “conscientious administrator” will be taking upon themselves the task of setting up their own role bindings and service accounts.

Any update on this for 2.5.0?

@bobbychef64 $ kubectl api-versions | grep rbac

@seh Any chance you could whip up a quick entry in the docs/install_faq.md to summarize the RBAC advice from above?

Helm 2.4.0 will ship (later today) with the helm init --service-account=ACCOUNT_NAME flag, but we punted on defining a default SA/Role. That probably is something people ought to do on their own. Or at least that is our current operating assumption.
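
With that flag, the patch step from earlier in this thread isn’t needed; for example, using the same account name and cluster-admin binding as above:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller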

Does tiller need cluster-admin permissions? Does it makes sense to maintain/document a least-privileged role that is specific to tiller, which only gives access to the endpoints it needs?

If we create the binding for the service account, presumably we’ll create a ClusterRoleBinding granting the “cluster-admin” ClusterRole to Tiller’s service account. We should document, though, that it’s possible to use Tiller with more restrictive permissions, depending on what’s contained in the charts you’ll install. In some cases, for a namespace-local Tiller deployment, even the “edit” ClusterRole bound via RoleBinding would be sufficient.
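
As a rough sketch of that more restrictive setup (the namespace apps and the account name tiller here are just example names, and whether edit is enough still depends on what your charts create):

kubectl create namespace apps
kubectl create serviceaccount --namespace apps tiller
kubectl create rolebinding tiller-edit --namespace apps --clusterrole=edit --serviceaccount=apps:tiller
helm init --service-account tiller --tiller-namespace apps

Subsequent helm commands would then need --tiller-namespace apps (or the TILLER_NAMESPACE environment variable) to talk to that Tiller instance.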

Not working on Kubernetes v1.9.0.

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be4 GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be4 GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}

helm list
Error: Unauthorized

helm install stable/nginx-ingress
Error: no available release name found
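
When you hit errors like these, it helps to check which service account the Tiller pod is actually running as, and whether that account can touch the configmaps Tiller stores releases in. For example (the second command uses impersonation, so your own user needs permission to impersonate; the identity shown is the default account from the error earlier in this thread):

kubectl get deploy --namespace kube-system tiller-deploy -o yaml | grep serviceAccount
kubectl auth can-i list configmaps --namespace kube-system --as system:serviceaccount:kube-system:default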