helm: Error: no available release name found

Hi folks, I just don't have any clue what is going wrong.

After trying to run this for the first time:

$ helm install stable/mongodb-replicaset
Error: no available release name found

I "disabled" RBAC:

kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts 

but nothing has changed:

$ helm install stable/mongodb-replicaset
Error: no available release name found

kubernetes

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

helm

$ helm version
Client: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}

helm repos

$ helm search | grep mongo
stable/mongodb               	0.4.17 	NoSQL document-oriented database that stores JS...
stable/mongodb-replicaset    	2.1.2  	NoSQL document-oriented database that stores JS...

tiller pod

$ kubectl get pods --all-namespaces | grep tiller
kube-system   tiller-deploy-5cd755f8f-c8nnl               1/1       Running   0          22m

tiller log

[tiller] 2017/10/23 19:12:50 preparing install for
[storage] 2017/10/23 19:12:50 getting release "busted-shark.v1"
[storage/driver] 2017/10/23 19:13:20 get: failed to get "busted-shark.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/busted-shark.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/23 19:13:20 info: generated name busted-shark is taken. Searching again.
[storage] 2017/10/23 19:13:20 getting release "lucky-rabbit.v1"
[storage/driver] 2017/10/23 19:13:50 get: failed to get "lucky-rabbit.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/lucky-rabbit.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/23 19:13:50 info: generated name lucky-rabbit is taken. Searching again.
[storage] 2017/10/23 19:13:50 getting release "exiled-lynx.v1"
[storage/driver] 2017/10/23 19:14:20 get: failed to get "exiled-lynx.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/exiled-lynx.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/23 19:14:20 info: generated name exiled-lynx is taken. Searching again.
[storage] 2017/10/23 19:14:20 getting release "eloping-echidna.v1"
[storage/driver] 2017/10/23 19:14:50 get: failed to get "eloping-echidna.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/eloping-echidna.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/23 19:14:50 info: generated name eloping-echidna is taken. Searching again.
[storage] 2017/10/23 19:14:50 getting release "soft-salamander.v1"
[storage/driver] 2017/10/23 19:15:20 get: failed to get "soft-salamander.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/soft-salamander.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/23 19:15:20 info: generated name soft-salamander is taken. Searching again.
[tiller] 2017/10/23 19:15:20 warning: No available release names found after 5 tries
[tiller] 2017/10/23 19:15:20 failed install prepare step: no available release name found
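
Note on the log above: the repeated dial tcp 10.96.0.1:443: i/o timeout lines mean Tiller cannot reach the Kubernetes API service at all; an RBAC denial would come back as a "forbidden" error rather than a timeout. Two quick checks, as a sketch (radial/busyboxplus:curl is just an example image with curl baked in, and the second command assumes Tiller runs as the default service account in kube-system):

# Can a pod reach the API service IP from inside the cluster?
kubectl run api-check --rm -it --restart=Never --image=radial/busyboxplus:curl -- curl -k -m 5 https://10.96.0.1:443/version

# Does the service account Tiller runs as have the ConfigMap access it needs?
kubectl auth can-i get configmaps --namespace kube-system --as=system:serviceaccount:kube-system:default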

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 12
  • Comments: 27 (2 by maintainers)

Most upvoted comments

Per https://github.com/kubernetes/helm/issues/2224#issuecomment-356344286, the following commands resolved the error for me:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
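
To confirm the patch landed, you can read back the pod template's service account (a quick verification sketch; the API server keeps the deprecated serviceAccount field in sync with serviceAccountName):

kubectl get deploy tiller-deploy --namespace kube-system -o jsonpath='{.spec.template.spec.serviceAccountName}'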

helm reset && helm init didn’t work for me, nor did the RBAC solutions above. Finally got it working again by deleting Tiller and then using the suggestion in https://github.com/kubernetes/helm/issues/3055#issuecomment-385296641:

kubectl delete deployment tiller-deploy --namespace kube-system
helm init --upgrade --service-account default
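
If you go this route, it can help to watch the replacement Tiller pod come up before retrying the install (the label selector assumes the standard labels helm init applies):

kubectl get pods --namespace kube-system -l app=helm,name=tiller
helm version   # both Client and Server lines should answer once Tiller is ready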

@viane Try the following steps. (You’ll probably need to kubectl delete the tiller service and deployment.)

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller

That fixed it for me.

This worked for me as I tried to helm install redis:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm update repo    # This was the last piece to the puzzle
helm install stable/redis --version 3.3.5

After many approaches, this finally worked for me, thanks!

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

The issue appears and the solutions mentioned are not working for:

Kube Client Version: 1.10.1
Kube Server Version: 1.10.1
Helm Client: "v2.9.0"
Helm Server: "v2.9.0"

Also, by executing helm list with minikube on, I got the error Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 127.0.0.1:8080: connect: connection refused
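
A connection refused on 127.0.0.1:8080 usually means the client found no kubeconfig and fell back to the default local address, which is worth ruling out before blaming Tiller (the path below is the conventional default; adjust for your setup):

kubectl config current-context          # errors out if no context is set
export KUBECONFIG=$HOME/.kube/config    # point the client at your cluster config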

The instructions below solved my problem as well, on Helm v2.11.0 and Kubernetes 1.12.1.

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller

None of the above solutions worked for me, but the instructions at the following link did.

https://scriptcrunch.com/helm-error-no-available-release/

None of the above-mentioned solutions works.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T07:10:00Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}

$ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

$ kubectl create serviceaccount --namespace kube-system tiller
Error from server (AlreadyExists): serviceaccounts "tiller" already exists

Ravis-MacBook-Pro-2:.kube ravi$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "tiller-cluster-rule" already exists

Ravis-MacBook-Pro-2:.kube ravi$ helm init --service-account tiller --upgrade
$HELM_HOME has been configured at /Users/ravi/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version. Happy Helming!

Ravis-MacBook-Pro-2:.kube ravi$ helm update repo
Command "update" is deprecated, use 'helm repo update'

Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Ravis-MacBook-Pro-2:.kube ravi$ helm install stable/redis
Error: no available release name found

Why does it take so long for Error: no available release name found to show up? It honestly takes 5 minutes for me to get the error message, so the 40,000 things I have to try to get it to work take 5m × 40,000.

I encountered the same issue, then I tried the following:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

with the kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' I got the message "Error from server (BadRequest): invalid character 's' looking for beginning of object key string" (that BadRequest usually means the JSON payload was pasted with curly quotes instead of straight quotes)

and then I tried the following commands:

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller

I got the message: failed: clusterroles.rbac.authorization.k8s.io .... [clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

Please help me! Below is my information:

helm version

Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}

kubectl version

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

minikube version

minikube version: v0.25.0

The strange thing is that I used Helm to install stable/nginx-ingress on May 9 successfully, then deleted Kubernetes (for practice), reinstalled Kubernetes today, and installed stable/nginx-ingress again... oops, got the above error.

Thank you so much in advance for your support.

I think it is really important to add this somewhere in the guide. AKS on Azure doesn't provide the default cluster-admin role, and a user has to create it (https://github.com/jenkins-x/jx/issues/485#issuecomment-376804810). This was also the case on ACS, as we can see here: https://github.com/Azure/acs-engine/issues/1892#issuecomment-353960778

The above 3 lines resolved this for me as well.

kubectl client: 1.9.6
kubectl server: 1.8.7
helm client: 2.8.2
helm server: 2.8.2

I tried all the above options in vain and the one suggested by rangapv worked for me. Thank you.

The same approach, but with Terraform:

  resource "kubernetes_service_account" "tiller" {
    metadata {
      name = "tiller"
      namespace = "kube-system"
    }
  }

  resource "kubernetes_cluster_role_binding" "tiller-cluster-rule" {

    metadata {
      name = "tiller-cluster-rule"
    }

    role_ref {
      kind = "ClusterRole"
      name = "cluster-admin"
      api_group = "rbac.authorization.k8s.io"
    }

    subject {
      kind = "ServiceAccount"
      namespace = "kube-system"
      name = "tiller"
      api_group = ""
    }

    provisioner "local-exec" {
      command = "helm init --service-account tiller"
    }
  }
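
For completeness, the usual apply flow for the snippet above, assuming the Terraform kubernetes provider is already configured against the target cluster and helm is on the PATH:

terraform init    # fetch the kubernetes provider plugin
terraform plan    # review the service account and binding to be created
terraform apply   # create them; the local-exec provisioner then runs helm init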

sudo iptables -P FORWARD ACCEPT

The above command is all I had to do to get rid of the error... none of the other solutions seemed to work for me.

Regards,
Ranga
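
For context, Docker 1.13+ changed the default policy of the iptables FORWARD chain to DROP, which on some setups silently drops pod-to-pod and pod-to-service traffic; you can inspect the policy before flipping it (a diagnostic sketch):

sudo iptables -S FORWARD | head -n 1   # "-P FORWARD DROP" means forwarded traffic is being dropped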

@nguyenhuuloc304 I ran into the same issue. I had to create the cluster-admin ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: null
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
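
With that manifest saved to a file, applying it is one command (the filename is just an example):

kubectl apply -f cluster-admin-clusterrole.yaml
kubectl get clusterrole cluster-admin   # should exist now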

@viane try helm init --service-account default; it’s another ticket but it results in the same generic error.