cert-manager: Helm chart fails to install with RBAC error on GKE

/kind bug

What happened:

  1. helm init
  2. git clone https://github.com/jetstack/cert-manager
  3. cd cert-manager
  4. helm install --name cert-manager --namespace kube-system contrib/charts/cert-manager
  5. See error:

Error: release cert-manager failed: clusterroles.rbac.authorization.k8s.io "cert-manager-cert-manager" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["certificates"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["issuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["clusterissuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["*"]}] user=&{system:serviceaccount:kube-system:default 6ee23ef4-fb0f-11e7-a397-42010a80014e [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

What you expected to happen: The installation to succeed.

How to reproduce it (as minimally and precisely as possible):

This is a GKE cluster (version: 1.7.11-gke.1):

gcloud container clusters create certmgrtest

Environment:

  • Kubernetes version (use kubectl version): v1.7.11-gke.1
  • Cloud provider or hardware configuration: GKE
  • Install tools: Helm v2.7.2

Most upvoted comments

It looks like you may have deployed tiller without RBAC support - the full docs on this are here: https://github.com/kubernetes/helm/blob/master/docs/rbac.md

The tl;dr - you need to grant the tiller service account the cluster-admin role. I usually do this with:

$ kubectl create serviceaccount -n kube-system tiller
$ kubectl create clusterrolebinding tiller-binding --clusterrole=cluster-admin --serviceaccount kube-system:tiller
$ helm init --service-account tiller
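
To double-check that the binding took effect before retrying the install, a quick sanity check along these lines should work (the names match the commands above; the can-i check assumes your own user is allowed to impersonate service accounts):

$ kubectl -n kube-system get serviceaccount tiller
$ kubectl get clusterrolebinding tiller-binding -o yaml
$ kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:tiller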

This started becoming an issue out of the box on GKE when the default service account in kube-system stopped being granted the cluster-admin role (which I guess was in 1.7).

EDIT: must be 1.7 as you are on 1.7 😄

For anybody running into this, I followed every other example and nothing worked until I read this

TLDR - I needed to create an extra role binding for the kube-system:default service account:

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
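
A quick way to verify that binding (same binding name as above; again assuming you can impersonate service accounts) is:

$ kubectl describe clusterrolebinding add-on-cluster-admin
$ kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default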

Ahhhh, I checked the source code of the templates - it seems like it's {{- if .Values.rbac.enabled -}}, i.e. rbac.enabled instead of rbac.create.

When using rbac.enabled=false, the deployment now works for me.
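
For reference, against the chart path from the original report that would look something like this (a sketch; the flag name is taken from the template check above):

$ helm install --name cert-manager --namespace kube-system --set rbac.enabled=false contrib/charts/cert-manager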

I got it working by adding --set rbac.create=false, but I think RBAC-enabled mode should work too.

Whoa this is too complicated. I expected something like helm init --with-proper-rbac. But I guess what’s above will do.

I think it happened around 1.7.

It might be worth documenting this.