helm: helm list failed

helm list Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp [::1]:8080: getsockopt: connection refused
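For reference, the failing request is the configmap listing Tiller performs in kube-system to find its releases, and localhost:8080 is typically the address client-go falls back to when it finds no API server configuration at all. A minimal first check, assuming kubectl is pointed at the same cluster and Tiller was installed with the default names:

$ kubectl config current-context
$ kubectl cluster-info
$ kubectl -n kube-system get pods -l app=helm,name=tiller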

Output of helm version: 2.8.2, 2.5.0

Output of kubectl version: 1.6.2

Cloud Provider/Platform (AKS, GKE, Minikube etc.):

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 27 (8 by maintainers)

Most upvoted comments

@bacongobbler

The docs say: "Helm is tested and known to work with minikube. It requires no additional configuration."

In reality this does not work today per my earlier comment, and fails with the same error everyone else is getting:

$ minikube start
$ helm init
$ helm list

Either the tooling is wrong, or the documentation is wrong. Either way, I think this warrants an issue and a solution, as I am unable to use Helm with minikube at the moment.

  • Edit: Unless I am misunderstanding these other issues and this is in fact the same root problem
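A quick way to separate a Helm problem from a kubeconfig problem is to confirm that kubectl itself can reach the minikube API server before running helm list; a rough sketch, assuming the default minikube context:

$ kubectl config current-context   # should print "minikube"
$ kubectl get nodes                # should list the minikube node without touching localhost:8080
$ helm version --debug             # shows the tunnel helm opens to Tiller

If kubectl works and helm still reports localhost:8080, the fallback is most likely happening inside the Tiller pod rather than on the client.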

This is still an issue in 2.9.1 as well.

Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
[debug] Created tunnel using local port: '41812'

[debug] SERVER: "127.0.0.1:41812"

Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 127.0.0.1:8080: connect: connection refused

Same problem with:

  • minikube v0.25.2
  • kubernetes v1.9.4
  • helm v2.9.0-rc3

To reproduce:

$ minikube start
$ helm init
$ helm list

output:

Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
$HELM_HOME has been configured at /home/lrvick/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 127.0.0.1:8080: connect: connection refused
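If the client-side checks look fine, the next place to look is the Tiller pod itself, since the failing configmap request is made by Tiller against the API server. A sketch, assuming the default tiller-deploy created by helm init (replace <tiller-pod-name> with the pod name from the first command):

$ kubectl -n kube-system get pods -l app=helm,name=tiller
$ kubectl -n kube-system describe deploy tiller-deploy
$ kubectl -n kube-system logs <tiller-pod-name>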

@muratsplat They are suggesting that KUBERNETES_MASTER needs to be set on the Tiller pod. However, this doesn’t appear to be the case at all: we have a K8s cluster and Helm deployment in AWS that works just fine, and that env variable is nowhere to be found. The only difference between the working Helm/Tiller deployment and the broken one is the version.

I’m going to roll back to an earlier version of Helm in my GKE cluster later and see if that fixes the issue.
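Before rolling back, one way to test the env-variable theory directly is to dump the container env and service account of the Tiller deployment in both the working and the broken cluster and diff them; a sketch, assuming the default tiller-deploy name in kube-system:

$ kubectl -n kube-system get deploy tiller-deploy -o jsonpath='{.spec.template.spec.containers[0].env}'
$ kubectl -n kube-system get deploy tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'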

For those seeing this issue who are using v2.9.0, please try

helm init --service-account default

and see if that works. See #3990 for more context.
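If the default service account is missing or lacks permissions (common on RBAC-enabled clusters), a frequently used alternative is a dedicated service account for Tiller; a sketch, not an official recommendation, since cluster-admin is very broad:

$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller --upgrade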

@all I tried many methods and finally resolved this issue with the following workaround: build a custom Tiller image that sets the environment variables, using the Dockerfile below:

FROM gcr.io/kubernetes-helm/tiller:v2.3.1
ENV KUBERNETES_SERVICE_HOST https://YourKubeMasterHostIP:PORT
ENV KUBERNETES_MASTER YourKubeMasterHostIP:8080

(I think the KUBERNETES_SERVICE_HOST line is optional and the KUBERNETES_MASTER line is mandatory.)

You can modify this Dockerfile to fit your k8s cluster.
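If rebuilding the image is not an option, a lighter-weight variation on the same idea is to inject the variable into the existing Tiller deployment and let it roll out a new pod; an untested sketch, with YourKubeMasterHostIP remaining a placeholder as above:

$ kubectl -n kube-system set env deployment/tiller-deploy KUBERNETES_MASTER=YourKubeMasterHostIP:8080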