terraform-provider-helm: Terraform.io to EKS "Error: Kubernetes cluster unreachable"

Terraform Version

0.12.19

Affected Resource(s)

  • helm_release

Terraform Configuration Files


locals {
  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.my_cluster.endpoint}
    certificate-authority-data: ${aws_eks_cluster.my_cluster.certificate_authority.0.data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${aws_eks_cluster.my_cluster.name}"
KUBECONFIG
}

resource "local_file" "kubeconfig" {
  content  = local.kubeconfig
  filename = "/home/terraform/.kube/config"
}

resource "null_resource" "custom" {
  depends_on    = [local_file.kubeconfig]

  # change trigger to run every time
  triggers = {
    build_number = "${timestamp()}"
  }

  # download aws-iam-authenticator and kubectl, then verify cluster connectivity
  provisioner "local-exec" {
    command = <<EOF
      set -e

      curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
      chmod +x aws-iam-authenticator
      mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin

      echo $PATH

      aws-iam-authenticator

      curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
      chmod +x kubectl

      ./kubectl get po
    EOF
  }
}

resource "helm_release" "testchart" {
  depends_on    = [local_file.kubeconfig]
  name          = "testchart"
  chart         = "../../../resources/testchart"
  namespace     = "default"
}

Debug Output

Note that

  • kubectl get po reaches the cluster and reports “No resources found in default namespace.”
  • helm_release, however, reports “Error: Kubernetes cluster unreachable”.
  • In earlier testing it errored with “Error: stat /home/terraform/.kube/config”. Now that I write the kubeconfig to that location, that error is gone, which I assume means the provider successfully reads the kube config.

https://gist.github.com/eeeschwartz/021c7b0ca66a1b102970f36c42b23a59

Expected Behavior

The testchart is applied.

Actual Behavior

The helm provider is unable to reach the EKS cluster.

Steps to Reproduce

On terraform.io:

  1. terraform apply

Important Factoids

Note that kubectl is able to communicate with the cluster. But something about the terraform.io environment, the .helm/config, or the helm provider itself renders the cluster unreachable.
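For what it’s worth, the configuration above never declares a provider "helm" block at all, so the provider falls back to its default kubeconfig lookup. A minimal sketch of pointing it explicitly at the generated file (untested on Terraform Cloud, and assuming a 1.x provider where config_path and load_config_file are available):

# Sketch only: point the helm provider at the kubeconfig written by
# local_file.kubeconfig, instead of relying on the default ~/.kube/config lookup.
provider "helm" {
  kubernetes {
    load_config_file = true
    config_path      = "/home/terraform/.kube/config" # same path as local_file.kubeconfig
  }
}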

Note of Gratitude

Thanks for all the work getting helm 3 support out the door. Holler if I’m missing anything obvious or can help diagnose further.

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 6
  • Comments: 29 (4 by maintainers)

Most upvoted comments

The token auth configuration below ultimately worked for me. Perhaps this should be the canonical approach for Terraform Cloud -> EKS, rather than using ~/.kube/config.

provider "aws" {
  region = "us-east-1"
}

data "aws_eks_cluster_auth" "cluster-auth" {
  depends_on = [aws_eks_cluster.my_cluster]
  name       = aws_eks_cluster.my_cluster.name
}

provider "helm" {
  alias = "my_cluster"
  kubernetes {
    host                   = aws_eks_cluster.my_cluster.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.my_cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster-auth.token
    load_config_file       = false
  }
}

resource "helm_release" "testchart" {
  provider  = helm.my_cluster
  name       = "testchart"
  chart      = "../../../resources/testchart"
  namespace  = "default"
}

I don’t see how this could possibly work; with Helm 3 it seems to be completely broken. Below is my configuration, and I can’t connect to the cluster. My kubernetes provider works, but the kubernetes block within Helm, which has the same settings, does not.

data "aws_eks_cluster" "cluster" {
  name = "foobar"
}

data "aws_eks_cluster_auth" "cluster" {
  name = "foobar"
}

provider "kubernetes" {
  version                = "1.10.0"
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

provider "helm" {
  version                = "1.0.0"

  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
    load_config_file       = false
  }
}

My workaround is to refresh data.aws_eks_cluster_auth before apply

terraform refresh -target=data.aws_eks_cluster_auth.cluster
terraform apply -target=helm_release.helm-operator -refresh=false
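The exec-based setup shown further down in this thread sidesteps the stale-token problem by fetching a token at apply time instead of caching one from the data source. A minimal sketch, reusing the data sources above and assuming a provider version that supports the exec block (plus a recent aws CLI on PATH):

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    load_config_file       = false # 1.x only; this argument was removed in 2.x

    # Fetch a fresh token on every run instead of reusing one recorded at refresh time.
    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
    }
  }
}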

Same issue here with Helm 3:

provider "helm" {
  version = "~> 1.2.3"

  kubernetes {
    load_config_file       = false
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.kubernetes_token.token
  }
}

I deployed a helm chart via the helm provider ages ago. It works fine; I can change things here and there, etc. Today I wanted to “migrate” a standalone-deployed helm chart to be managed under terraform, so when I tried to run terraform import helm_release.chart namespace/chart, I got this error.

Seeing the same issue using Helm 3. My tf looks like @kinihun’s … It happens on the first run of “terraform apply”; when I run it again, everything goes well.

I have the same problem. I’ve tried all of the mentioned solutions, but it doesn’t seem to pick up the token properly.

Terraform v0.14.4, hashicorp/helm v2.0.3

this is my config:

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster_auth.cluster.id]
      command     = "aws"
    }
  }
}

any thoughts?

After unsetting the kubectl environment variables that were pointing to the old cluster, everything worked:

unset KUBECONFIG
unset KUBE_CONFIG_PATH

Not sure why the helm provider reads those vars when the following setup was used:

provider "helm" {  
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.cluster.token  
  }
}

I’m using the 2.0.2 provider version, which doesn’t have the “load_config_file” argument available anymore.

I followed the instructions in https://github.com/hashicorp/terraform-provider-helm/issues/400#issuecomment-583561090 provided by @eeeschwartz in this thread. It would fail on the first apply and work the second time. The only thing I had missed was adding “depends_on = [aws_eks_cluster.my_cluster]” to the data resource, as mentioned in the code snippet. Once I added it, it started working. I created and destroyed the deployment multiple times and it worked.

data "aws_eks_cluster_auth" "cluster-auth" {
  depends_on = [aws_eks_cluster.my_cluster] # add the depends_on
  name       = aws_eks_cluster.my_cluster.name
}