terraform-provider-helm: 2.6.0 provider version causing `Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion`

⚠️ NOTE FROM MAINTAINERS ⚠️

v1alpha1 of the client authentication API was removed in version 1.24 of the Kubernetes client. The latest release of this provider was updated to use the 1.24 Kubernetes client Go modules and version 3.9 of the upstream Helm module. We know this seems like a breaking change, but it is expected: API versions marked alpha can be removed in minor releases of the Kubernetes project.

The upstream Helm Go module was also updated to use the 1.24 client in Helm 3.9, so you will see this issue if you use the helm command directly with a kubeconfig that tries to use the v1alpha1 client authentication API.

AWS users will need to update their config to use the v1beta1 API. v1beta1 became the default in the awscli in v1.24.0, so you may need to update your awscli package and run aws eks update-kubeconfig again.

Adding this note here because users pinning to the previous version of this provider will not see a fix to this issue the next time they update: you need to update your config to the new API version and update your exec plugins. If your exec plugin still only supports v1alpha1, you need to open an issue with its maintainers to get it updated.
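If you are unsure whether your local kubeconfig is affected, a quick grep is enough to tell. This is a sketch, not part of the provider's tooling; the kubeconfig path is an assumption and may differ in your setup:

```shell
# Sketch: flag kubeconfig entries still pinned to the removed v1alpha1
# exec API. Defaults to ~/.kube/config; override via KUBECONFIG.
KUBECONFIG_PATH="${KUBECONFIG:-$HOME/.kube/config}"
if grep -qs 'client.authentication.k8s.io/v1alpha1' "$KUBECONFIG_PATH"; then
  echo "stale: kubeconfig still uses v1alpha1 - regenerate it (e.g. aws eks update-kubeconfig)"
else
  echo "ok: no v1alpha1 exec entries found"
fi
```

If it reports stale entries, regenerating the kubeconfig with an updated awscli rewrites the exec stanza to v1beta1.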


Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 1.1.9
Provider version: 2.6.0
Kubernetes version: 1.21

Affected Resource(s)

  • helm_release

Terraform Configuration Files

Using module https://github.com/cloudposse/terraform-aws-helm-release

This is how we set the provider

provider "helm" {
  kubernetes {
    host                   = var.cluster_endpoint
    cluster_ca_certificate = base64decode(var.cluster_ca_cert)
    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
      command     = "aws"
    }
  }
}

I tried changing the api_version to client.authentication.k8s.io/v1beta1, but that gave me a mismatch with the expected value of client.authentication.k8s.io/v1alpha1.
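For reference, once the awscli is recent enough to emit v1beta1 tokens (per the maintainers' note above), the fix is just the api_version change. This is a sketch reusing the same variables as the block above; the mismatch error typically means the token returned by the CLI still reports v1alpha1:

```hcl
provider "helm" {
  kubernetes {
    host                   = var.cluster_endpoint
    cluster_ca_certificate = base64decode(var.cluster_ca_cert)
    exec {
      # v1alpha1 was removed in the 1.24 client libraries; this must
      # match the apiVersion of the token that `aws eks get-token` emits.
      api_version = "client.authentication.k8s.io/v1beta1"
      args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
      command     = "aws"
    }
  }
}
```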


Steps to Reproduce

  1. terraform apply

Expected Behavior

Terraform plans correctly

Actual Behavior

Terraform fails with this error

╷
│ Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
│
│   with module.datadog.helm_release.this[0],
│   on .terraform/modules/datadog/main.tf line 35, in resource "helm_release" "this":
│   35: resource "helm_release" "this" {
│
╵
Releasing state lock. This may take a few moments...
exit status 1

Important Factoids

Pinning the provider version to the previous release, 2.5.1, works:

terraform {
  required_version = ">= 1.0.0"

  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "= 2.5.1"
    }
  }
}

A fast way we pinned our root modules was using tfupdate:

brew install minamijoyo/tfupdate/tfupdate
tfupdate provider --version "= 2.5.1" "helm" -r .


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 62
  • Comments: 23 (4 by maintainers)

Most upvoted comments

We encountered this issue on eks.7 (platformVersion) with 1.21 (k8s version). I tried using the AWS CLI v2 but to no avail. Pinning the helm provider version as suggested above works for us. It looks like the helm provider removed support for v1alpha1, and my kubeconfig still uses it.

terraform {
  required_version = ">= 1.0.0"

  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "= 2.5.1"
    }
  }
}

@jrhouston as one who primarily works with AWS, I request that you track Kubernetes dependencies along the lines of the latest Kubernetes version EKS supports, currently 1.22. This would help to preserve compatibility between the provider and EKS clusters. (I understand if people not using EKS feel differently, but you can’t please everyone, so I’m staking my claim.)

@jrhouston how do you switch to the v1beta1 version of the API? Did it break anything with the different helm packages you had installed while doing so?

Edit 1: ah I think I found it (screenshot omitted)

Edit 2: It works (screenshot omitted)

It looks like the v1alpha1 authentication API was removed in Kubernetes 1.24 – we upgraded to the 0.24.0 line of k8s dependencies in the latest version of this provider. It feels like a breaking change, but removal of alpha APIs is expected in minor version bumps of Kubernetes.

I was able to fix this for EKS by updating the awscli package and changing the api_version in my exec block to v1beta1.

The latest version of the awscli uses this version:

$ aws --version                                                                                                                           
aws-cli/2.7.8 Python/3.9.11 Darwin/20.6.0 exe/x86_64 prompt/off

$ aws eks get-token --cluster-name $NAME | jq '.apiVersion'                                                                       
"client.authentication.k8s.io/v1beta1"

cc: @jrhouston for visibility

Also, I cannot get EKS exec auth to work. I’m using the aws_eks_cluster_auth data source instead to get the token.

Be careful with this approach. It caches the auth token in the state during the plan, and if you don’t use it quickly enough, it will expire partway through apply. We switched to the exec plugin to avoid this.
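For contrast, the cached-token pattern being cautioned against looks roughly like this (a sketch; the variable names are assumptions, not from the original thread):

```hcl
# The token is fetched at plan time and embedded in the plan, so a long
# apply can outlive it - the expiry problem described above. The exec
# block avoids this by fetching a fresh token whenever one is needed.
data "aws_eks_cluster_auth" "this" {
  name = var.cluster_name
}

provider "helm" {
  kubernetes {
    host                   = var.cluster_endpoint
    cluster_ca_certificate = base64decode(var.cluster_ca_cert)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}
```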