terraform-provider-helm: Configuring provider using another resource's outputs is not possible anymore since v2.0.0

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 0.14.3
Provider version: 2.0.0
Kubernetes version: 1.8.x
Helm version: 3.4.x

Affected Resource(s)

  • helm_release
  • helm_repository
Error: provider not configured: you must configure a path to your kubeconfig
or explicitly supply credentials via the provider block or environment variables.

See our documentation at: https://registry.terraform.io/providers/hashicorp/helm/latest/docs#authentication

Expected Behavior

With provider versions < 2.x I used to create an AKS or EKS cluster and deploy some Helm charts in the same Terraform workspace, configuring the Helm provider with credentials coming from the Azure or AWS resources that create the Kubernetes cluster. It looks like this is no longer possible.
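
For illustration, the pattern that worked on the 1.x provider looked roughly like this on AKS (a minimal sketch; the azurerm_kubernetes_cluster resource name is an assumption, not taken from the original issue):

provider "helm" {
  kubernetes {
    # Credentials come straight from the cluster resource in the same workspace
    host                   = azurerm_kubernetes_cluster.main.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.cluster_ca_certificate)
  }
}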

Actual Behavior

Running terraform plan or terraform apply fails with the error shown above.

Important Factoids

It used to work with v1.x.

References

  • GH-1234

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 26
  • Comments: 39 (15 by maintainers)

Most upvoted comments

@FischlerA Absolutely – I can see that in your use case the problem of picking up the wrong cluster by mistake couldn’t happen because your apply runs in a completely isolated environment, which makes sense and is a good practice I like to see.

We did some user research on this, where we talked to a sample of users across the Helm and Kubernetes providers, and uncertainty and confusion between the default config path, KUBECONFIG, and load_config_file was one of the key findings. We had a bunch of debates internally about this and decided that the best way to provide the most consistent experience is to err on the side of explicitness. I definitely appreciate that on the happy path, where your pipeline is so neatly isolated, having to set the path to the kubeconfig feels like an extra detail.
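
For anyone following along, the explicit form described here looks roughly like this in 2.x (a sketch based on the provider documentation; config_path replaces the old implicit ~/.kube/config lookup and the load_config_file toggle):

provider "helm" {
  kubernetes {
    # In 2.x the kubeconfig location must be supplied explicitly,
    # either here or via the KUBECONFIG environment variable.
    config_path = "~/.kube/config"
  }
}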

If you feel strongly about this change please feel free to open a new issue to advocate for reversing it and we can have more discussion there.

Thanks for contributing @FischlerA! 😄

For all who still have issues with v2.0.1, did you also upgrade to Terraform v0.14.x in the meantime? I don’t have the issue anymore with v2.0.1 on Terraform v0.13.x, but I do have one on Terraform v0.14.x (issue #652).

I’ve put out a patch release that moves the validation logic so it happens after the provider is configured and should remedy this issue. Please let me know if this error is still surfacing for you.

Same error with the following:

terraform {
  required_version = ">= 0.14.3"
  required_providers {
    helm = {
      version = ">=1.3.2"
    }
    null = {
      version = ">=3.0.0"
    }
  }
}
provider "helm" {
  kubernetes {
    host                   = lookup(var.k8s_cluster, "host")
    client_certificate     = base64decode(lookup(var.k8s_cluster, "client_certificate"))
    client_key             = base64decode(lookup(var.k8s_cluster, "client_key"))
    cluster_ca_certificate = base64decode(lookup(var.k8s_cluster, "cluster_ca_certificate"))

  }
}

Works fine with the following, so it seems to be an issue with 2.0.0:

terraform {
  required_version = ">= 0.14.3"
  required_providers {
    helm = {
      version = "=1.3.2"
    }
    null = {
      version = ">=3.0.0"
    }
  }
}
provider "helm" {
  kubernetes {
    host                   = lookup(var.k8s_cluster, "host")
    client_certificate     = base64decode(lookup(var.k8s_cluster, "client_certificate"))
    client_key             = base64decode(lookup(var.k8s_cluster, "client_key"))
    cluster_ca_certificate = base64decode(lookup(var.k8s_cluster, "cluster_ca_certificate"))

  }
}
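
Note that the first block’s ">=1.3.2" constraint lets Terraform select 2.0.0, which is why the two configurations behave differently. If you want to stay on the 1.x series until this is resolved, a pessimistic version constraint keeps the provider from jumping to a new major version (a suggested workaround, not part of the original comment):

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      # "~> 1.3" allows any 1.x release from 1.3 onwards, but never 2.0.0
      version = "~> 1.3"
    }
  }
}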

Everyone reporting this, we have a small favor to ask 🥰

We’ve been making changes to credentials handling in both this provider and the Kubernetes provider. At the same time, Terraform itself has been moving quite fast recently, churning out new releases with significant changes. This really increases the potential for corner cases not being caught in our testing.

I would like to ask a favor of everyone who has reported seeing this issue here: could you please also test with a build of the Kubernetes provider in the same scenario / environment? So far we haven’t had much luck reproducing these issues on our side, which suggests they might be caused by particularities of real-life environments that we have not foreseen in our tests.

So please, if you have spare cycles and are seeing issues similar to what’s reported here, do us a favor and test with a build from master of the K8s provider as well. We’re close to a major release there and want to make sure this kind of issue is not carried over there.

Thanks a lot!
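
If you would like to help test, one way to point Terraform at a locally built provider is a dev_overrides block in the CLI configuration (a sketch, assuming Terraform 0.14+ and that you have built the provider binary from master yourself; the path is a placeholder):

# ~/.terraformrc
provider_installation {
  dev_overrides {
    # Directory containing the locally built terraform-provider-kubernetes binary
    "hashicorp/kubernetes" = "/home/you/go/bin"
  }
  # All other providers are still installed from the registry as usual
  direct {}
}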

I’m still having the same issue with the latest patch, 2.0.1:

Error: provider not configured: you must configure a path to your kubeconfig
or explicitly supply credentials via the provider block or environment variables.

The problem is that I can’t provide any kubeconfig file, as the cluster has not been created yet. Here is my config:

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.eks-cluster.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.eks-cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster-auth.token
  }
}
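
A workaround some users rely on for EKS (my own suggestion, not part of this comment) is to let the provider fetch a token at apply time through an exec block instead of a token data source, so no credentials have to exist when the provider is configured:

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.eks-cluster.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.eks-cluster.certificate_authority.0.data)

    # Fetch a fresh token only when the provider actually connects,
    # using the AWS CLI available on the machine running Terraform
    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.eks-cluster.name]
    }
  }
}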

We have a similar issue on v2.0.2. terraform plan fails with:

Error: query: failed to query with labels: secrets is forbidden: User "system:anonymous" cannot list resource "secrets" in API group "" in the namespace "test-monitoring"

Downgrading the helm provider version to 1.3.2 resolves the issue for us. Both the kubernetes and helm providers are initialized:

provider "kubernetes" {
  host                   = aws_eks_cluster.main.endpoint
  token                  = data.aws_eks_cluster_auth.main.token
  cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority.0.data)
  load_config_file       = false
}

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.main.endpoint
    token                  = data.aws_eks_cluster_auth.main.token
    cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority.0.data)
  }
}

It seems that the helm provider does not pick up the kubernetes authentication block properly.

Works fine with the following provider.tf for modules:

terraform {
  required_version = ">= 0.14.4"
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = ">=2.0.1"
    }
    null = {
      source  = "hashicorp/null"
      version = ">=3.0.0"
    }
    kubernetes-alpha = {
      source  = "hashicorp/kubernetes-alpha"
      version = "0.2.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "1.13.3"
    }
  }
}
provider "helm" {
  kubernetes {
    host                   = lookup(var.k8s_cluster, "host")
    client_certificate     = base64decode(lookup(var.k8s_cluster, "client_certificate"))
    client_key             = base64decode(lookup(var.k8s_cluster, "client_key"))
    cluster_ca_certificate = base64decode(lookup(var.k8s_cluster, "cluster_ca_certificate"))

  }
}
provider "kubernetes-alpha" {
  host                   = lookup(var.k8s_cluster, "host")
  client_certificate     = base64decode(lookup(var.k8s_cluster, "client_certificate"))
  client_key             = base64decode(lookup(var.k8s_cluster, "client_key"))
  cluster_ca_certificate = base64decode(lookup(var.k8s_cluster, "cluster_ca_certificate"))
}

provider "kubernetes" {
  host                   = lookup(var.k8s_cluster, "host")
  client_certificate     = base64decode(lookup(var.k8s_cluster, "client_certificate"))
  client_key             = base64decode(lookup(var.k8s_cluster, "client_key"))
  cluster_ca_certificate = base64decode(lookup(var.k8s_cluster, "cluster_ca_certificate"))
}

Here is how to reproduce this issue:

  1. Create a Terraform definition with an AKS cluster and a Helm chart deployed to that cluster (here is an example https://gist.github.com/derkoe/bbf4036033a322846edda33c123af092)
  2. Run terraform apply
  3. Change the params of the cluster so that it has to be re-created (in the example change the vm_size)
  4. Run terraform plan (or apply) and you’ll get the error:
    Error: provider not configured: you must configure a path to your kubeconfig
    or explicitly supply credentials via the provider block or environment variables.
    

I guess the same will also apply to other clusters like GKE or EKS.

@AndreaGiardini could you file a separate issue with all your info please? This seems unrelated to this particular issue.

Also, the load_config_file parameter was removed…

@jrhouston Hi there, I’m sorry but for me the issue isn’t fixed, as the provider no longer looks for the kubeconfig file in the default path “~/.kube/config” the way the old version did. So far I have relied on not having to specify the default path to the config file. Any chance this will be re-added?