terraform-provider-helm: Tiller does not install correctly
Hi there,
Terraform Version
Terraform v0.11.10
Affected Resource(s)
- tiller
Terraform Configuration Files
provider "helm" {
  install_tiller  = true
  namespace       = "kube-system"
  service_account = "tiller"
  tiller_image    = "gcr.io/kubernetes-helm/tiller:v2.11.0"
  home            = "./.helm"

  kubernetes {
    config_path = "./kubeconfig_monitoring"
  }
}
Expected Behavior
Helm install works after tiller installation
Actual Behavior
Helm returns Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 127.0.0.1:8080: connect: connection refused
Steps to Reproduce
terraform apply
Important Factoids
I created the cluster with the EKS module which created the kubernetes config. When I run helm init --service-account tiller then everything works correctly. When configured with the provider it is broken.
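For reference, the manual workaround described above can be sketched as follows. The helm init command comes from the comment itself; the kubectl commands for creating the service account are my assumption of a typical RBAC setup on EKS, not something stated in the report:

```shell
# Assumed setup: create the tiller service account and bind it to
# cluster-admin (a common, if permissive, choice on RBAC clusters).
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Point helm at the same kubeconfig the provider uses, then install
# tiller manually with that service account.
export KUBECONFIG=./kubeconfig_monitoring
helm init --service-account tiller
```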
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 3
- Comments: 34 (5 by maintainers)
Commits related to this issue
- feat(gcp): add redis helm release - workaround for https://github.com/terraform-providers/terraform-provider-helm/issues/148 — committed to iantanwx/tf-gke by deleted user 5 years ago
@Jeeppler, there are a couple of possibly unnecessary bits here, but I left them in anyway.
- automount_service_account_token = true should only be necessary for helm provider versions < 0.7.0
- tiller_image = "gcr.io/kubernetes-helm/tiller:v2.11.0" since this doesn't seem to have been updated. Whoops. We should do that for 2.12.0/0.8.0.

TL;DR: yes, the service account needs to be created first.
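A minimal sketch of "create the service account first" in Terraform, using the kubernetes provider alongside the helm provider. The resource names and the cluster-admin binding are illustrative assumptions, not taken from the issue:

```terraform
# Assumed: the kubernetes provider is configured against the same cluster.
resource "kubernetes_service_account" "tiller" {
  metadata {
    name      = "tiller"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "tiller" {
  metadata {
    name = "tiller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "tiller"
    namespace = "kube-system"
  }
}

provider "helm" {
  install_tiller  = true
  namespace       = "kube-system"
  service_account = "tiller"

  # Per the comment above, only needed on provider versions < 0.7.0.
  automount_service_account_token = true
}
```

Note that the provider block cannot depend_on these resources, so ordering is not guaranteed in a single apply; several commenters below work around that by applying the service account in a separate step first.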
@jhoblitt Do I understand correctly that the main point of https://github.com/lsst-sqre/terraform-tinfoil-tiller is that it only listens on localhost?
I think the problem some people are having here is that just including this block doesn't actually install tiller, because terraform doesn't know to do anything to the remote k8s cluster from a provider block alone.
However, if you include this block it looks like tiller does get installed (although terraform never really tells you that it does).
This is kind of a weird setup, as I've never seen a terraform provider block actually install anything remotely. I think a better setup would be to remove the "install_tiller" argument and create a resource that installs tiller instead. Then, if you try to use the helm provider and tiller isn't installed, just throw an error.
just my 2 cents.
Any dates when this will be fixed?
This will be fixed by #143
Just confirming that this seems to work when using Terraform 0.11.11, Helm Provider 0.7.0 and Helm 2.12.1 with the instructions provided by @Stelminator.
In case it’s useful to anyone else coming across this: I’d previously got this working by shelling out to helm init, and I had to manually clear some old state referring to helm_releases using helm state before I could successfully do a terraform plan using the method described above.

Has anyone got this to work? I’m running 0.7.0, but tiller is not installed if I use the following definition. Is there anything else I need to set?
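For anyone needing to do the same state cleanup, the terraform CLI can list and remove stale entries; the resource address below is a hypothetical example, not one from this issue:

```shell
# Find any helm_release entries left over in the state file.
terraform state list | grep helm_release

# Remove a stale entry so the next plan starts clean
# (address is an example only).
terraform state rm helm_release.monitoring
```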
It hasn’t yet been updated for helm 2.11.0, but I hacked up a simple module for installing tiller with rbac (and without tiller being accessible to other pods).
https://github.com/lsst-sqre/terraform-tinfoil-tiller
@Jeeppler or just manually set the flag…
@cwiggs It is indeed weird. The plan is to deprecate that as soon as we have a tiller resource that meets all the expectations. See https://github.com/terraform-providers/terraform-provider-helm/pull/203#issuecomment-474933490
I traced this to what seems to be the most recent related change to helm: https://github.com/helm/helm/pull/4589 so, I think @olib963 is correct that 2.11.0+ should do the trick.
Hey @shamsalmon I had a similar problem, I think the issue is that tiller isn’t automounting the token for the service account, there is a PR open that will hopefully fix this soon by upgrading to helm 2.11.
https://github.com/terraform-providers/terraform-provider-helm/issues/122 might be related. Sorry if it is a different issue.
[EDIT] Looks like the problem was due to a duplicate declaration of the helm provider in a nested module. I also separated the ServiceAccount creation from the actual helm install into two separate terraform “states” (to be applied sequentially), to avoid the possible issue of the install job being executed before the ServiceAccount creation.
Same issue with AKS; my configuration: azurerm v1.22.1, kubernetes v1.5.2, helm 0.8.0.
The tiller account is actually created and tiller itself does get installed, but with the wrong service account (default instead of tiller) and the wrong version (v2.11.0 instead of the specified v2.13.0).
@Stelminator Using your config, I still cannot install Tiller:
Using terraform v0.11.11
@Stelminator That’s correct. It avoids the need to configure x509 certs, network policy, etc.
I’ve updated https://github.com/lsst-sqre/terraform-tinfoil-tiller to work with 0.7.0. E.g.,

I tried with 0.7.0 but it still doesn’t work. I hope I am not missing any configuration.