terraform-provider-helm: Terraform.io to EKS "Error: Kubernetes cluster unreachable"
Terraform Version
0.12.19
Affected Resource(s)
- helm_release
Terraform Configuration Files
locals {
  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.my_cluster.endpoint}
    certificate-authority-data: ${aws_eks_cluster.my_cluster.certificate_authority.0.data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${aws_eks_cluster.my_cluster.name}"
KUBECONFIG
}
resource "local_file" "kubeconfig" {
content = local.kubeconfig
filename = "/home/terraform/.kube/config"
}
resource "null_resource" "custom" {
depends_on = [local_file.kubeconfig]
# change trigger to run every time
triggers = {
build_number = "${timestamp()}"
}
# download kubectl
provisioner "local-exec" {
command = <<EOF
set -e
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
chmod +x aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin
echo $PATH
aws-iam-authenticator
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
./kubectl get po
EOF
}
}
resource "helm_release" "testchart" {
depends_on = [local_file.kubeconfig]
name = "testchart"
chart = "../../../resources/testchart"
namespace = "default"
}
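No provider "helm" block appears above; the setup relies on the provider finding the kubeconfig written by local_file.kubeconfig at its default lookup path. For comparison, a minimal sketch of pointing the provider at that file explicitly, assuming helm provider 1.x where the load_config_file argument still exists (it was removed in 2.x, per the comments below):

```hcl
# Sketch for helm provider 1.x: read the generated kubeconfig explicitly
# instead of relying on the default ~/.kube/config lookup.
provider "helm" {
  kubernetes {
    load_config_file = true
    config_path      = local_file.kubeconfig.filename
  }
}
```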
Debug Output
Note that:
- `kubectl get po` reaches the cluster and reports "No resources found in default namespace."
- while helm_release reports: "Error: Kubernetes cluster unreachable"
- In earlier testing it errored with “Error: stat /home/terraform/.kube/config”. Now that I write the local file to that location, it no longer errors. I assume that means it successfully reads the kube config.
https://gist.github.com/eeeschwartz/021c7b0ca66a1b102970f36c42b23a59
Expected Behavior
The testchart is applied
Actual Behavior
The helm provider is unable to reach the EKS cluster.
Steps to Reproduce
On terraform.io:
terraform apply
Important Factoids
Note that kubectl is able to communicate with the cluster. But something about the terraform.io environment, the .helm/config, or the helm provider itself renders the cluster unreachable.
Note of Gratitude
Thanks for all the work getting helm 3 support out the door. Holler if I’m missing anything obvious or can help diagnose further.
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 6
- Comments: 29 (4 by maintainers)
The token auth configuration below ultimately worked for me. Perhaps this should be the canonical approach for Terraform Cloud -> EKS, rather than using `~/.kube/config`.
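The original snippet is not preserved in this archive; below is a minimal sketch of that token-based shape, reusing the `aws_eks_cluster.my_cluster` resource from the configuration above (the data source name is illustrative):

```hcl
# Fetch a short-lived auth token for the cluster instead of reading a
# kubeconfig file from disk.
data "aws_eks_cluster_auth" "my_cluster" {
  name = aws_eks_cluster.my_cluster.name
}

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.my_cluster.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.my_cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.my_cluster.token
    # On helm provider 1.x, also set load_config_file = false so the
    # provider does not fall back to ~/.kube/config.
  }
}
```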
I don't see how this could possibly work; with Helm 3, it seems to be completely broken. Below is my configuration, and I can't connect to the cluster. My kubernetes provider works, but the kubernetes block within Helm, which has the same settings, does not.
My workaround is to refresh `data.aws_eks_cluster_auth` before apply.

Same issue here with Helm 3.
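A sketch of that workaround as it might be run from a shell; the data source address here is an assumption and would match however `aws_eks_cluster_auth` is named in the configuration:

```sh
# Re-read the EKS auth token before applying, so the helm provider does
# not reuse a stale token from the previous run.
terraform refresh -target=data.aws_eks_cluster_auth.my_cluster
terraform apply
```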
I deployed a helm chart via the helm provider ages ago. It works fine; I can change things here and there, etc. Today I wanted to "migrate" a standalone-deployed helm chart to be managed under terraform. So when I tried to run `terraform import helm_release.chart namespace/chart`, I got this error.

Seeing the same issue using Helm 3. My tf looks like @kinihun's … It happens on the first run of `terraform apply`; when I exec again, everything goes well.
I have the same problem; I've tried all of the mentioned solutions, but it doesn't seem to pick up the token properly.
Terraform v0.14.4, hashicorp/helm v2.0.3
this is my config:
any thoughts?
After unsetting the ENV vars for kubectl that were pointing to the old cluster, everything worked:

Not sure why the helm provider reads those vars if the following setup was used:

Using 2.0.2 provider versions, which don't have the "load_config_file" argument available anymore.
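The exact variables that were set aren't shown above; as an assumption, the usual candidates would be cleared along these lines (`KUBECONFIG` is read by kubectl, and `KUBE_CONFIG_PATH` can be picked up by the 2.x kubernetes/helm providers as the default for `config_path`):

```sh
# Clear kubeconfig-related variables so neither kubectl nor the
# provider falls back to a stale cluster context.
unset KUBECONFIG
unset KUBE_CONFIG_PATH
```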
I followed the instructions in https://github.com/hashicorp/terraform-provider-helm/issues/400#issuecomment-583561090 provided by @eeeschwartz in this thread. It would fail on the first apply and work the second time. The only thing that I had missed was adding `depends_on = [aws_eks_cluster.my_cluster]` to the data resource, as mentioned in the code snippet. Once I added it, it started working. I created and destroyed the deployment multiple times and it worked.
data “aws_eks_cluster_auth” “cluster-auth” { // Add the depends_on name = aws_eks_cluster.my_cluster.name }