helm: Helm says tiller is installed AND could not find tiller
```
$ helm init --service-account tiller
$HELM_HOME has been configured at /home/ubuntu/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
```
Output of `helm version`:
```
$ helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Error: could not find tiller
```
Output of `kubectl version`:
```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
```
Cloud Provider/Platform (AKS, GKE, Minikube etc.): AWS / Kops
About this issue
- State: closed
- Created 6 years ago
- Comments: 34 (7 by maintainers)
I used `kubectl -n kube-system delete deployment tiller-deploy` and `kubectl -n kube-system delete service/tiller-deploy`. Then `helm --init` worked. I was missing removing the service previously.

@mabushey's solution works, but with `helm init` instead of `helm --init`.
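Condensed, that fix is (a sketch assuming Tiller's default `kube-system` namespace and the `tiller` service account from the original report):

```sh
# Delete both the stale Tiller deployment AND its service, then reinstall.
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete service tiller-deploy
helm init --service-account tiller
```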
Closing as a cluster issue, not a helm issue.
The Kubernetes cluster works great; I have numerous services running under it. What doesn't work is Helm/Tiller.
This issue has been a pain for a long time. I'm thinking of moving away from Helm. Sigh!! My pipelines fail every time because of this.
What's the output of `kubectl -n kube-system get pods`? `helm init` only checks that the deployment manifest was submitted to Kubernetes. If you want to check whether Tiller is live and ready, use `helm init --wait`. 😃
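In command form (a sketch; the `app=helm,name=tiller` label selector is an assumption based on Tiller's standard labels):

```sh
# --wait blocks until the tiller-deploy pod is Ready, so scheduling or RBAC
# failures surface here rather than at the next helm command.
helm init --service-account tiller --wait
kubectl -n kube-system get pods -l app=helm,name=tiller
```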
Have you all tried this `tiller.yaml`:
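As a purely hypothetical sketch (the commenter's actual `tiller.yaml` may differ), such a manifest looks roughly like the Deployment that `helm init` itself submits; the image tag, labels, and `tiller` service account here are assumptions:

```sh
# Hypothetical tiller.yaml applied inline -- NOT the commenter's actual file.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tiller-deploy
  namespace: kube-system
  labels:
    app: helm
    name: tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      labels:
        app: helm
        name: tiller
    spec:
      serviceAccountName: tiller   # assumes the RBAC setup shown later
      containers:
        - name: tiller
          image: gcr.io/kubernetes-helm/tiller:v2.10.0   # match your client version
          ports:
            - containerPort: 44134   # Tiller's gRPC port
EOF
```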
This is part of my `up.sh` script for starting my dev cluster from scratch. The `--upgrade` flag was necessary to allow it to be executed multiple times. I believe the original error about not being able to find Tiller is related to it being installed but the `tiller-deploy-*` pod not being found in `kube-system`.
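That step is presumably a one-liner along these lines (a sketch; the service-account flag is carried over from the original report, not from the commenter's `up.sh`):

```sh
# --upgrade makes helm init safe to run repeatedly: an existing Tiller is
# upgraded in place instead of aborting with "already installed".
helm init --service-account tiller --upgrade
```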
Interesting. What about `kubectl -n kube-system get deployments`? Maybe there's something wrong where new pods aren't getting scheduled. Check the status of that deployment and see if something's up.

I came across @psychemedia's issue as well.
After running `kubectl -n kube-system describe deployment tiller-deploy` I had the same output. And if you read @psychemedia's output carefully, the important bit is `ReplicaFailure True FailedCreate` and the following `NewReplicaSet: tiller-deploy-55bfddb486 (0/1 replicas created)`. To find the problem, he should have run `kubectl -n kube-system describe replicaset tiller-deploy-55bfddb486` (or just `kubectl describe replicaset tiller-deploy-55bfddb486`, depending on whether the namespace is set or not… you can find it by listing all replicasets: `kubectl get replicaset --all-namespaces`). The reason why the replicaset wasn't created should have been listed there under `Events:`.

I actually had the same issue running on a different namespace than `kube-system`. See https://github.com/helm/helm/issues/3304#issuecomment-468997006

Thanks! For me, I got the same error. Upon describing the replicaset, it gave the error that I had not created the service account. I deleted the tiller deployment, created the service account, and then reran and it worked.
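Pulled together, the diagnosis path from the last few comments (a sketch; the replicaset hash `55bfddb486` comes from the quoted output and will differ per cluster):

```sh
# 1. Inspect the deployment and look for a ReplicaFailure / FailedCreate condition.
kubectl -n kube-system describe deployment tiller-deploy

# 2. Find the failing replicaset and read its Events: section -- the root
#    cause (e.g. a missing "tiller" service account) is reported there.
kubectl get replicaset --all-namespaces
kubectl -n kube-system describe replicaset tiller-deploy-55bfddb486
```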
@bacongobbler The problem occurs when Tiller is created without a proper service account. This happens for two reasons: (a) the `helm init` script does not do this, as it certainly should, or (b) the namespace in question mismatches an existing service account definition.

To work around it, you must first run `helm delete` and then create an `rbac-config.yaml`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```
If you need to use a different namespace, make sure it later matches your Tiller installation, and that the cluster-admin role exists (it usually does!).
Then:
```
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller --history-max 200
```
And you’re good to go.
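A quick way to confirm that setup took effect (a sketch reusing the resource names from the comment above):

```sh
# The service account and cluster role binding should both exist...
kubectl -n kube-system get serviceaccount tiller
kubectl get clusterrolebinding tiller
# ...and helm version should now report both Client and Server (Tiller).
helm version
```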
@mabushey's solution works!
The point is, the error is misleading. THAT is the issue in my eyes.
It's a pain that at the end of 2021 I still ran into this issue.
Worked for me by following https://helm.sh/docs/using_helm/#tiller-and-role-based-access-control. Just create the YAML and run the command.
`helm init` (without any additional options) was the ticket for me; it installed/set up Tiller. All is well after that.