terraform-provider-helm: helm charts failing deployment via terraform, working when deployed directly via helm CLI

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave “+1” or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version and Provider Version

āÆ terraform version 
Terraform v0.12.24
+ provider.azurerm v2.5.0
+ provider.null v2.1.2
+ provider.random v2.2.1
+ provider.tls v2.1.1

Affected Resource(s)

  • helm_release

I am seeing this behavior across a few charts, but it appears somewhat at random.

Terraform Configuration Files

The resource in question:

resource "helm_release" "mongodb-sharded" {
  name       = "mongodb-sharded"
  chart      = "mongodb-sharded"
  repository = "https://charts.bitnami.com/bitnami"
  timeout    = 600
}

Debug Output

https://gist.github.com/lukekhamilton/8e52b1e403a89557062796a4d25af24d

Expected Behavior

When I run terraform apply, I expect it to install the Helm chart.

Actual Behavior

When I run terraform apply, this Helm chart and others aren’t installing; they get stuck. However, when I delete the release and install it manually with the Helm CLI, it works without issue.

āÆ helm ls -a   
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
mongodb-sharded default         1               2020-04-16 02:46:09.166067202 +0000 UTC pending-install mongodb-sharded-1.1.2   4.2.5   

Steps to Reproduce

  1. terraform apply

Important Factoids

I am running this on a very standard AKS cluster.

References

  • GH-1234

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 58
  • Comments: 24 (4 by maintainers)

Most upvoted comments

I’m experiencing a similar issue, but with Bitnami’s nginx, as follows:

# Using helm provider ~> 1.1.1

data "helm_repository" "bitnami" {
  name = "bitnami"
  url  = "https://charts.bitnami.com/bitnami"
}

resource "helm_release" "nginx" {
  name       = "my-nginx"
  repository = data.helm_repository.bitnami.metadata[0].name
  chart      = "nginx"
  version    = "5.2.3"

  namespace = "default"

  timeout = 50
}

It keeps printing lines such as: helm_release.nginx: Still creating... [50s elapsed] until hitting the timeout.

Only once it hits the timeout does the release entry appear in helm:

$ helm ls --namespace my-namespace
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS  CHART           APP VERSION
my-nginx        default   1               2020-04-22 16:16:25.7100555 +0100 BST   failed  nginx-5.2.3     1.17.10

marked as failed.

However, the pods are up, ready and running, and the services have been created and work.

I’m sharing it here because I believe it may be related (it seems tied to the perception of pod readiness), and it’s also a relatively quick scenario to spin up and down as necessary.

Also, a workaround for this use case is to add wait = false to the helm_release.
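The workaround above can be sketched as follows (a minimal example reusing the nginx release from this comment; the chart, version, and namespace are illustrative):

```hcl
resource "helm_release" "nginx" {
  name       = "my-nginx"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "nginx"
  version    = "5.2.3"
  namespace  = "default"

  # Don't block until every resource in the chart reports ready;
  # the release is marked deployed as soon as the manifests are submitted.
  wait = false
}
```

Note that with `wait = false` Terraform no longer detects a chart whose pods never become ready, so this trades correctness checks for an unblocked apply.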

Experiencing this with the prometheus operator (as does this issue potentially https://github.com/helm/charts/issues/21913).

Same symptoms as above. Everything in kubectl is running, but the Helm chart is stuck on ‘pending-install’. As soon as Terraform times out, the chart goes to ‘failed’. Installing with Helm manually works without a problem.

I have a similar problem on GKE.

My helm_release has a high timeout of 3000 seconds, and sometimes it fails with the message "Kubernetes Cluster Unreachable", but most of the time the release does end up deployed on the cluster anyway.

Helm provider v2.4.1

I experienced this now with the HashiCorp Vault chart. Vault gets installed and the pods are Running (though in an error state, as expected, because Vault is initially sealed). The Helm provider times out after 5 minutes even though the Helm CLI, run manually, completes without error in 10–20 seconds.

export TF_LOG=trace showed me hints.
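For anyone else digging in, the trace logging mentioned above can be enabled like this (TF_LOG and TF_LOG_PATH are standard Terraform environment variables; the log file name is just an example):

```shell
# Enable Terraform's most verbose log level and keep the output in a
# file so the provider's Helm/Kubernetes API calls can be inspected.
export TF_LOG=trace
export TF_LOG_PATH=./terraform-trace.log
# Then run `terraform apply` and search terraform-trace.log for the
# provider's wait/readiness messages.
```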

The Terraform helm provider appears to wait for Kubernetes to finish processing everything in the chart, while the Helm CLI does not. In my case, a load balancer was failing to start.

One thing that changed the behavior for me was increasing the VM size for the node pools; after that it worked without issue. However, for the life of me, I can’t find any output logs anywhere to help debug what is actually happening.