helm: Helm says tiller is installed AND could not find tiller

helm init --service-account tiller
$HELM_HOME has been configured at /home/ubuntu/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

Output of helm version:

$ helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Error: could not find tiller

Output of kubectl version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): AWS / Kops

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 34 (7 by maintainers)

Most upvoted comments

I used kubectl -n kube-system delete deployment tiller-deploy and kubectl -n kube-system delete service/tiller-deploy. Then helm --init worked. I was missing removing the service previously.

@mabushey's solution works, but with helm init instead of helm --init
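For reference, a minimal sketch of that cleanup sequence (the --service-account flag assumes the tiller service account already exists, as in the original report):

# Remove the stale Tiller deployment and its service, then reinstall
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete service tiller-deploy
helm init --service-account tiller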

closing as a cluster issue, not a helm issue

The Kubernetes cluster works great; I have numerous services running on it. What doesn't work is helm/tiller.

$ kubectl -n kube-system get deployments
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
coredns          2         2         2            2           7d
dns-controller   1         1         1            1           7d
tiller-deploy    1         0         0            0           7d

This issue has been a pain for a long time. I'm thinking of moving away from Helm, sigh! My pipelines fail every time because of this.

What’s the output of kubectl -n kube-system get pods?

helm init only checks that the deployment manifest was submitted to Kubernetes. If you want to check whether tiller is live and ready, use helm init --wait. 😃
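To illustrate the difference, a small sketch (assuming the tiller service account used earlier in this thread):

# Returns as soon as the Deployment manifest is accepted by the API server;
# the tiller-deploy pod may still be pending or failing
helm init --service-account tiller

# Blocks until the tiller-deploy pod is actually running and ready
helm init --service-account tiller --wait

# The end-to-end check: both Client and Server versions should print
helm version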

Have you all tried:

kubectl apply -f tiller.yaml
helm init --service-account tiller --upgrade

tiller.yaml:

kind: ServiceAccount
apiVersion: v1
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

This is part of my up.sh script for starting my dev cluster from scratch. The --upgrade flag was necessary to allow it to be executed multiple times. I believe the original error about not being able to find tiller is related to it being installed but the tiller-deploy-* pod not being found in kube-system.
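As a rough sketch, the relevant fragment of such an up.sh script might look like this (up.sh and tiller.yaml are the file names from the comment above; the actual script is not shown in the thread):

#!/usr/bin/env bash
set -euo pipefail

# Both commands are idempotent, so the script can be rerun safely:
# kubectl apply updates the ServiceAccount/ClusterRoleBinding in place,
# and --upgrade lets helm init succeed when Tiller is already installed.
kubectl apply -f tiller.yaml
helm init --service-account tiller --upgrade --wait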

Interesting. What about kubectl -n kube-system get deployments? Maybe something is preventing new pods from being scheduled. Check the status of that deployment and see if something's up.

I came across @psychemedia's issue as well.

After running kubectl -n kube-system describe deployment tiller-deploy I had the same output. And if you read @psychemedia's output carefully, it says

...

Conditions:
  Type             Status  Reason
  ----             ------  ------
  Available        True    MinimumReplicasAvailable
  ReplicaFailure   True    FailedCreate
  Progressing      False   ProgressDeadlineExceeded
OldReplicaSets:    <none>
NewReplicaSet:     tiller-deploy-55bfddb486 (0/1 replicas created)
Events:            <none>

The important bit is ReplicaFailure True FailedCreate and the following NewReplicaSet: tiller-deploy-55bfddb486 (0/1 replicas created).

To find out what the problem was, he should have run

kubectl -n kube-system describe replicaset tiller-deploy-55bfddb486

(or just kubectl describe replicaset tiller-deploy-55bfddb486, depending on whether a namespace is set… you can find it by listing all replicasets with kubectl get replicaset --all-namespaces).

The reason why the replicaset wasn’t created should have been listed there under Events:.
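Putting that debugging path together, a sketch (the ReplicaSet hash is the one from the output quoted above and will differ per cluster):

# 1. Find the ReplicaSet the Deployment is trying to create
kubectl -n kube-system describe deployment tiller-deploy   # note the NewReplicaSet name

# 2. Describe that ReplicaSet; the creation failure shows up under Events:
kubectl -n kube-system describe replicaset tiller-deploy-55bfddb486

# 3. Typical culprit in this thread: the referenced serviceaccount "tiller" does not exist
kubectl -n kube-system get serviceaccount tiller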

I actually had the same issue running on a different namespace than kube-system. See https://github.com/helm/helm/issues/3304#issuecomment-468997006

Thanks. I got the same error; describing the replicaset showed that I had not created the service account. I deleted the tiller deployment, created the service account, then reran it and it worked.

@bacongobbler The problem occurs when Tiller is created without a proper serviceaccount. This happens for two reasons: (a) helm init does not create one, as it certainly should, or (b) the namespace in question does not match an existing service account definition.

To work around it, you must first run "helm delete" and then create an rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

If you need to use a different namespace, make sure it matches your Tiller installation later on and that the cluster-admin role exists (it usually does!); see the sketch after this comment for that case.

Then:

$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller --history-max 200

And you’re good to go.
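Following up on the namespace note above, a hypothetical sketch for installing Tiller somewhere other than kube-system, using Helm 2's global --tiller-namespace flag (the namespace name tiller-world is made up for illustration):

# Edit rbac-config.yaml so both the ServiceAccount and the ClusterRoleBinding subject
# use namespace: tiller-world, then:
kubectl create namespace tiller-world
kubectl create -f rbac-config.yaml
helm init --service-account tiller --tiller-namespace tiller-world

# Subsequent helm commands must point at the same Tiller namespace
helm version --tiller-namespace tiller-world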

@mabushey's solution works!

The point is, the error is misleading. THAT is the issue in my eyes.

It's a pain that I still ran into this issue at the end of 2021.

Worked for me by following https://helm.sh/docs/using_helm/#tiller-and-role-based-access-control. Just create the YAML and run the command.

helm init (without any additional options) was the ticket for me; it installed and set up tiller. All is well after that.