k3s: Error: Kubernetes cluster unreachable with helm 3.0

Version: k3s version v1.0.0 (18bd921c)

Describe the bug I want to use Helm version 3 with k3s, but when I type helm install stable/postgresql --generate-name, for example, I get: Error: Kubernetes cluster unreachable

To Reproduce

  1. Install Helm 3 with the install script (https://helm.sh/docs/intro/install/#from-script).
  2. Add the stable repo with helm repo add stable https://kubernetes-charts.storage.googleapis.com/
  3. Update the repo index with helm repo update
  4. Install the postgresql chart with helm install stable/postgresql --generate-name

Expected behavior Installation should work.

Actual behavior Error: Kubernetes cluster unreachable

Additional context

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 46
  • Comments: 25 (2 by maintainers)

Most upvoted comments

Try setting the KUBECONFIG environment variable. export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
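A minimal sketch of that suggestion, assuming a standard k3s install (which writes its kubeconfig to /etc/rancher/k3s/k3s.yaml); note the file is root-owned by default, so your user may need read access to it:

```shell
# Point helm (and kubectl) at the kubeconfig written by k3s.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Persist the setting for future shells:
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc
```

With the variable set, helm stops falling back to the default localhost:8080 endpoint and talks to the k3s API server instead.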

If you add “-v 20” to your helm command line it will show it’s connecting to port 8080. Running this seems to fix it: kubectl config view --raw >~/.kube/config

This lets helm use the same config kubectl is using I think.
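A slightly safer variant of the command above (a sketch; assumes kubectl is on PATH, as it is on k3s hosts). Writing to a temporary file first avoids clobbering an existing ~/.kube/config if the command fails partway:

```shell
# Copy the config kubectl resolves into helm's default location.
mkdir -p ~/.kube
if command -v kubectl >/dev/null 2>&1; then
  kubectl config view --raw > ~/.kube/config.tmp && mv ~/.kube/config.tmp ~/.kube/config
  chmod 600 ~/.kube/config   # the --raw view embeds credentials, so restrict permissions
fi
```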

This resolved the error message for me.

sudo helm install harbor/harbor --version 1.3.0 --generate-name --kubeconfig /etc/rancher/k3s/k3s.yaml

For microk8s, the k8s config can be generated by this command: microk8s.kubectl config view --raw > ~/.kube/config

> If you add “-v 20” to your helm command line it will show it’s connecting to port 8080. Running this seems to fix it: kubectl config view --raw >~/.kube/config
>
> This lets helm use the same config kubectl is using I think.

Can confirm this solution works for me as well.

If you are using sudo, be aware that this command doesn’t preserve environment variables (such as KUBECONFIG) by default when switching to a different context.

If you wish to preserve specific environment variables when using sudo then:

cat << EOF > /etc/sudoers.d/env
Defaults env_keep += "http_proxy https_proxy no_proxy"
Defaults env_keep += "HTTP_PROXY HTTPS_PROXY NO_PROXY"
Defaults env_keep += "KUBECONFIG"
EOF
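The variable-stripping behavior described above can be illustrated without sudo, using `env -i` (which gives a child process an empty environment, much like sudo's default env_reset):

```shell
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Clean environment, as sudo provides by default: the variable is gone.
env -i sh -c 'echo "clean env:    KUBECONFIG=[$KUBECONFIG]"'

# Normal child shell: the exported variable is inherited.
sh -c 'echo "normal child: KUBECONFIG=[$KUBECONFIG]"'
```

The sudoers `env_keep` drop-in above (or `sudo -E`, mentioned below) whitelists KUBECONFIG so it survives the reset.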

Same issue here on k3s version v1.0.0 (18bd921c).

> If you are using sudo, be aware that this command doesn’t preserve environment variables (such as KUBECONFIG) by default when switching to a different context.
>
> If you wish to preserve specific environment variables when using sudo then:
>
> cat << EOF > /etc/sudoers.d/env
> Defaults env_keep += "http_proxy https_proxy no_proxy"
> Defaults env_keep += "HTTP_PROXY HTTPS_PROXY NO_PROXY"
> Defaults env_keep += "KUBECONFIG"
> EOF

Just use sudo -E, which will preserve the environment variables.

The fix @grawin posted didn’t work for me either; I’m using an Ubuntu 18.04 system.

I tried this command:

kubectl config view --raw >~/.kube/config

but after running it, my config file became empty.

Can anyone suggest how to recover my config file with all values?

@poojabolla… It’s gone; you must use >> instead of > when appending to an existing file.
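The failure mode described here is worth spelling out: the shell truncates the target of `>` to zero bytes before the command runs, so redirecting a command's output into a file the command also reads destroys the data. A small demonstration with throwaway files:

```shell
# '>' truncates, '>>' appends.
printf 'original\n' > /tmp/demo.txt
printf 'appended\n' >> /tmp/demo.txt   # keeps the existing content
cat /tmp/demo.txt                      # prints both lines

# The pitfall: reading and writing the same file in one redirection.
# The shell empties /tmp/demo.txt *before* cat ever reads it.
cat /tmp/demo.txt > /tmp/demo.txt 2>/dev/null || true
wc -c < /tmp/demo.txt                  # 0 bytes; the contents are gone
```

This is exactly what happens with kubectl config view --raw > ~/.kube/config when kubectl reads its config from ~/.kube/config; writing to a temporary file and then moving it into place avoids the problem.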

I got this issue when using azure kubernetes

az aks get-credentials -n myCluster -g myResourceGroup. The config file is autogenerated and placed in ~/.kube/config, as appropriate for the OS.

@lpossamai are you using k3s? Looks like you’re set up to use EKS…


@rubiktubik it looks like helm can’t reach the k3s cluster. Can you try using --kubeconfig with the helm command, or using ~/.kube/config as @sixcorners suggested? Please reopen the issue if the problem still persists.