terraform-provider-helm: helm charts failing deployment via terraform, working when installed directly via helm CLI
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform Version and Provider Version
❯ terraform version
Terraform v0.12.24
+ provider.azurerm v2.5.0
+ provider.null v2.1.2
+ provider.random v2.2.1
+ provider.tls v2.1.1
Affected Resource(s)
- helm_release
I am seeing this behavior across a few charts, but it is somewhat random which ones fail.
Terraform Configuration Files
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
Resource in question:
resource "helm_release" "mongodb-sharded" {
name = "mongodb-sharded"
chart = "mongodb-sharded"
repository = "https://charts.bitnami.com/bitnami"
timeout = 600
}
Debug Output
https://gist.github.com/lukekhamilton/8e52b1e403a89557062796a4d25af24d
Expected Behavior
When I run `terraform apply`, I expect it to install the Helm chart.
Actual Behavior
When I run `terraform apply`, this Helm chart and others do not install; the release gets stuck. However, when I delete the release and install it manually with the Helm CLI, it works without issue.
❯ helm ls -a
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
mongodb-sharded default 1 2020-04-16 02:46:09.166067202 +0000 UTC pending-install mongodb-sharded-1.1.2 4.2.5
Steps to Reproduce
terraform apply
Important Factoids
I am running this on a fairly standard AKS cluster.
References
- GH-1234
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 58
- Comments: 24 (4 by maintainers)
Commits related to this issue
- Change gateway atomicity Change atomicity to false in light of https://github.com/hashicorp/terraform-provider-helm/issues/467 — committed to CelsoSantos/k8s-labs-terraform-modules by CelsoSantos 4 years ago
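The commit above turns off the release's atomicity in response to this issue. As a hedged sketch (the resource and chart names here are illustrative, not taken from that commit), the change corresponds to something like:

```hcl
resource "helm_release" "gateway" {
  name  = "gateway"
  chart = "gateway"

  # With atomic = true, a timed-out install is automatically rolled back
  # and purged. Setting it to false leaves the partially-installed release
  # in place so it can be inspected after a timeout instead of disappearing.
  atomic = false
}
```

This does not fix the underlying stall, but it makes the failure mode easier to debug because the release survives the timeout.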
I'm experiencing a similar issue, but with Bitnami's nginx chart, as follows:
It keeps printing lines such as:

helm_release.nginx: Still creating... [50s elapsed]

until it hits the timeout. Once it hits the timeout, the deployment entry appears in helm, marked as failed. However, the pods are up, ready, and running, and the services have been created and work.
I'm sharing it here because I believe it may be related (it seems tied to the provider's perception of pod readiness), and it's also a relatively quick scenario to spin up/down as necessary.
Also, a workaround for this use case is to add `wait = false` to the `helm_release`.

Experiencing this with the prometheus operator as well (as does this issue, potentially: https://github.com/helm/charts/issues/21913).
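A minimal sketch of the `wait = false` workaround, assuming the Bitnami nginx chart mentioned above (the release name and version pin are illustrative):

```hcl
resource "helm_release" "nginx" {
  name       = "nginx"
  chart      = "nginx"
  repository = "https://charts.bitnami.com/bitnami"

  # Do not block until all chart resources report ready; the provider marks
  # the release deployed as soon as the manifests are submitted.
  wait = false
}
```

With `wait = false`, the provider skips the readiness wait that seems to cause the pending-install/failed cycle, at the cost of `terraform apply` returning before the workload is actually ready.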
Same symptoms as above. Everything in kubectl is running, but the Helm chart is stuck in "pending-install". As soon as Terraform times out, the Helm chart goes to "failed". Installing with helm manually works with no problem.
I have a similar problem on GKE
My `helm_release` has a high timeout of 3000 seconds, and sometimes it fails with the message `Kubernetes cluster unreachable`, BUT most of the time that `helm_release` is actually deployed in the backend anyway. Helm provider v2.4.1.
I experienced this now with the HashiCorp Vault chart. Vault gets installed, and the pods are Running (though in an error state, as expected, because Vault is initially sealed). The Helm provider times out after 5 minutes even though the helm CLI, run manually, completes without error in 10-20 seconds.
`export TF_LOG=trace` showed me hints. The Terraform Helm provider appears to wait for Kubernetes to process everything in the chart, while the helm CLI does not. In my case a load balancer was failing to start.
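Given that the provider waits for every resource in the chart to become ready, a hedged workaround sketch for the Vault case above (the chart and repository values are my assumptions, not from the comment): disable the wait, or raise the timeout, so that `apply` can finish even though the sealed Vault pods never pass readiness:

```hcl
resource "helm_release" "vault" {
  name       = "vault"
  chart      = "vault"
  repository = "https://helm.releases.hashicorp.com"

  # Sealed Vault pods never report ready, so don't block on readiness;
  # the timeout then only bounds the manifest submission itself.
  wait    = false
  timeout = 600
}
```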
One thing that changed the behavior for me was increasing the VM size for the node pools; after that it worked without issue. However, for the life of me, I can't find any output logs anywhere to help debug what is actually happening…