terraform-provider-helm: `.version` field causes `Error: Provider produced inconsistent final plan`
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave “+1” or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform Version and Provider Version
Terraform v0.12.24
+ provider.azuread v0.8.0
+ provider.azurerm v2.3.0
+ provider.helm v1.1.1
+ provider.kubernetes v1.10.0
+ provider.local v1.4.0
+ provider.null v2.1.2
+ provider.ovh v0.5.0
+ provider.random v2.2.1
Affected Resource(s)
- helm_release
Terraform Configuration Files
###############
## Variables ##
###############
variable "cert_manager_version" {
description = "The Release Version Number of the Cert-Manager (default: v0.14.1)"
type = string
default = "0.14.1"
}
variable "cert_manager_dns_admin_email" {
description = "Email of the DNS Admin for Cert-Manager ACME ClusterIssuer (default: admin@dreamquark.com)"
type = string
default = "admin@dreamquark.com"
}
variable "cert_manager_self_check_nameserver" {
description = "The DNS Nameserver used to check FQDN for DNS validation (default: '8.8.8.8:53' - Google DNS, very reactive)"
type = string
default = "8.8.8.8:53"
}
variable "cert_manager_helm_chart_version" {
description = "Version of the Helm Chart to use for Cert-Manager (default: 0.14.1)"
type = string
default = "0.14.1"
}
############
## Locals ##
############
locals {
  tool_cert_manager = {
    namespace              = "tool-cert-manager"
    cluster_issuer_system  = "letsencrypt-prod"
    cluster_issuer_default = "letsencrypt-prod"
  }

  tool_cert_manager_priv = {
    crd_url  = "https://github.com/jetstack/cert-manager/releases/download/v${replace(var.cert_manager_version, "/^v([0-9]+\\.[0-9]+)\\.[0-9]+$/", "$1")}/cert-manager.crds.yaml"
    crd_file = "${local.tool_cert_manager_dir.generated}/cert-manager-${terraform.workspace}.crds.yaml"
    #crd_url = "https://raw.githubusercontent.com/jetstack/cert-manager/release-${replace(var.cert_manager_version, "/^v([0-9]+\\.[0-9]+)\\.[0-9]+$/", "$1")}/deploy/manifests/00-crds.yaml"

    settings = {
      "global.leaderElection.namespace"   = local.tool_cert_manager.namespace
      "nodeSelector.agentpool"            = "default"
      "ingressShim.defaultIssuerName"     = local.tool_cert_manager.cluster_issuer_system
      "ingressShim.defaultIssuerKind"     = "ClusterIssuer"
      #"extraArgs[0]"                     = "--dns01-recursive-nameservers-only=true"
      "extraArgs[0]"                      = "--dns01-recursive-nameservers-only"
      #"image.tag"                        = var.cert_manager_version
      #"webhook.image.tag"                = var.cert_manager_version
      "webhook.nodeSelector.agentpool"    = "default"
      #"cainjector.image.tag"             = var.cert_manager_version
      "cainjector.nodeSelector.agentpool" = "default"
    }

    provider_settings = {
      azure = {
      }
      gcp = {
      }
      aws = {
        # "extraArgs[1]" = "--dns01-self-check-nameservers=${var.cert_manager_self_check_nameserver}"
      }
    }

    issuers = [
      {
        name        = local.tool_cert_manager.cluster_issuer_system
        environment = "prod"
        filename    = "${local.tool_cert_manager_dir.generated}/tool_cert_manager_clissuer_prod_${terraform.workspace}.yml"
      }
    ]

    azure_secret = {
      name = "cert-manager-azure-credentials"
      key  = "CLIENT_SECRET"
    }
    gcp_secret = {
      name = "cert-manager-gcp-credentials"
      key  = "key.json"
    }
    aws_secret = {
      name = "cert-manager-aws-credentials"
      key  = "secret-access-key"
    }
  }

  tool_cert_manager_dir = {
    generated   = local.global_dir.generated
    tftemplates = local.global_dir.tftemplates
  }
}
##########
## Code ##
##########
# Create the K8S Namespace, because it must have specific labels before installing Cert-Manager
resource "kubernetes_namespace" "tool-cert-manager_ns" {
  depends_on = [
    null_resource.k8s-cluster_ready,
    null_resource.tool-helm_ready,
    null_resource.tool-monitoring_ready,
    null_resource.tool-cluster-autoscaler_ready,
    null_resource.tool-external-dns_ready,
  ]

  metadata {
    annotations = {
      name = local.tool_cert_manager.namespace
    }
    labels = {
      # No longer necessary with Cert-Manager v0.12 and above
      # "cert-manager.io/disable-validation" = true
    }
    name = local.tool_cert_manager.namespace
  }
}
# Create the Custom Resource Definition of Cert-Manager (must be done outside of the Helm Charts for security/RBAC reasons)
resource "null_resource" "tool-cert-manager_custom_resource_definitions" {
depends_on = [
null_resource.k8s-cluster_ready,
null_resource.tool-helm_ready,
null_resource.tool-monitoring_ready,
null_resource.tool-cluster-autoscaler_ready,
null_resource.tool-external-dns_ready,
kubernetes_namespace.tool-cert-manager_ns,
]
triggers = {
crd_url = local.tool_cert_manager_priv.crd_url
crd_file = local.tool_cert_manager_priv.crd_file
kubeconfig = local.k8s_cluster.kubeconfig_path
namespace = local.tool_cert_manager.namespace
}
provisioner "local-exec" {
command = <<EOF
bash << EOS
export NO_COLOR=\\\\e[39m
export OK_COLOR=\\\\e[32m
export ERROR_COLOR=\\\\e[31m
export WARN_COLOR=\\\\e[33m
export BOLD_TEXT=\\\\e[1m
export NOBOLD_TEXT=\\\\e[21m
wget --quiet ${local.tool_cert_manager_priv.crd_url} --output-document=${local.tool_cert_manager_priv.crd_file}
status=\$?
#echo "status: '\$status'"
if [[ "\$status" == "0" ]]
then
echo -e "[\$${OK_COLOR}INFO\$${NO_COLOR}] Download of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' succeeded"
else
echo -e "[\$${ERROR_COLOR}ERROR\$${NO_COLOR}] Download of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' failed"
exit -1
fi
sed --regexp-extended --in-place 's/namespace:.+/namespace: ${local.tool_cert_manager.namespace}/g' ${local.tool_cert_manager_priv.crd_file}
status=\$?
#echo "status: '\$status'"
if [[ "\$status" == "0" ]]
then
echo -e "[\$${OK_COLOR}INFO\$${NO_COLOR}] Rewrite N°1 Namespace of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' succeeded"
else
echo -e "[\$${ERROR_COLOR}ERROR\$${NO_COLOR}] Rewrite N°1 Namespace of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' failed"
exit -1
fi
sed --regexp-extended --in-place 's/cert-manager\//${local.tool_cert_manager.namespace}\//g' ${local.tool_cert_manager_priv.crd_file}
status=\$?
#echo "status: '\$status'"
if [[ "\$status" == "0" ]]
then
echo -e "[\$${OK_COLOR}INFO\$${NO_COLOR}] Rewrite N°2 Namespace of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' succeeded"
else
echo -e "[\$${ERROR_COLOR}ERROR\$${NO_COLOR}] Rewrite N°2 Namespace of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' failed"
exit -1
fi
kubectl --kubeconfig=${local.k8s_cluster.kubeconfig_path} apply -f ${local.tool_cert_manager_priv.crd_file}
status=\$?
#echo "status: '\$status'"
if [[ "\$status" == "0" ]]
then
echo -e "[\$${OK_COLOR}INFO\$${NO_COLOR}] Installation of Custom Resource Definition for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' succeeded"
else
echo -e "[\$${ERROR_COLOR}ERROR\$${NO_COLOR}] Installation of Custom Resource Definition for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' failed"
exit -1
fi
\rm -f ${local.tool_cert_manager_priv.crd_file}
EOS
EOF
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<EOF
bash << EOS
export NO_COLOR=\\\\e[39m
export OK_COLOR=\\\\e[32m
export ERROR_COLOR=\\\\e[31m
export WARN_COLOR=\\\\e[33m
export BOLD_TEXT=\\\\e[1m
export NOBOLD_TEXT=\\\\e[21m
wget --quiet ${self.triggers.crd_url} --output-document=${self.triggers.crd_file}
status=\$?
#echo "status: '\$status'"
if [[ "\$status" == "0" ]]
then
echo -e "[\$${OK_COLOR}INFO\$${NO_COLOR}] Download of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' succeeded"
else
echo -e "[\$${ERROR_COLOR}ERROR\$${NO_COLOR}] Download of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' failed"
exit -1
fi
sed --regexp-extended --in-place 's/namespace:.+/namespace: ${self.triggers.namespace}/g' ${self.triggers.crd_file}
status=\$?
#echo "status: '\$status'"
if [[ "\$status" == "0" ]]
then
echo -e "[\$${OK_COLOR}INFO\$${NO_COLOR}] Rewrite N°1 Namespace of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' succeeded"
else
echo -e "[\$${ERROR_COLOR}ERROR\$${NO_COLOR}] Rewrite N°1 Namespace of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' failed"
exit -1
fi
sed --regexp-extended --in-place 's/cert-manager\//${self.triggers.namespace}\//g' ${self.triggers.crd_file}
status=\$?
#echo "status: '\$status'"
if [[ "\$status" == "0" ]]
then
echo -e "[\$${OK_COLOR}INFO\$${NO_COLOR}] Rewrite N°2 Namespace of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' succeeded"
else
echo -e "[\$${ERROR_COLOR}ERROR\$${NO_COLOR}] Rewrite N°2 Namespace of Custom Resource Definition YAML file for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' failed"
exit -1
fi
kubectl --kubeconfig=${self.triggers.kubeconfig} delete -f ${self.triggers.crd_file}
status=\$?
#echo "status: '\$status'"
if [[ "\$status" == "0" ]]
then
echo -e "[\$${OK_COLOR}INFO\$${NO_COLOR}] Removing of Custom Resource Definition for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' succeeded"
else
echo -e "[\$${ERROR_COLOR}ERROR\$${NO_COLOR}] Removing of Custom Resource Definition for '\$${BOLD_TEXT}tool-cert-manager\$${NOBOLD_TEXT}' failed"
fi
\rm -f ${self.triggers.crd_file}
EOS
EOF
  }
}
# Install Cert Manager with Helm Chart
resource "helm_release" "tool-cert-manager_package" {
name = "cert-manager"
repository = local.tool_helm.repository.jetstack
chart = "jetstack/cert-manager"
namespace = local.tool_cert_manager.namespace
timeout = 900 # in sec, 15 minutes
version = var.cert_manager_helm_chart_version
depends_on = [
# ...
kubernetes_namespace.tool-cert-manager_ns,
null_resource.tool-cert-manager_custom_resource_definitions,
]
dynamic "set" {
for_each = local.tool_cert_manager_priv.settings
iterator = setting
content {
name = setting.key
value = setting.value
}
}
dynamic "set" {
for_each = local.tool_cert_manager_priv.provider_settings[local.k8s_cluster.provider]
iterator = setting
content {
name = setting.key
value = setting.value
}
}
}
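Distilled, the failing piece is just the version pin on the helm_release (a minimal sketch, assuming the jetstack repository at https://charts.jetstack.io publishes chart versions with a leading “v”):

resource "helm_release" "repro" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = "tool-cert-manager"

  # Planned as "0.14.1", but the provider resolves the published chart
  # version "v0.14.1" during apply, which Terraform then reports as an
  # inconsistent final plan.
  version = "0.14.1"
}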
Debug Output
I do not have Debug Output currently, because this bug only happens the first time I run the following sequence:
- Create the Kubernetes cluster via Terraform (on different cloud providers - AWS, GCP and Azure)
- Install some “tools” before Cert-Manager (Prometheus-Operator and External-DNS), each via Terraform and the Helm provider
- Install Cert-Manager via Terraform and the Helm provider.
If I run terraform apply again, the helm_release of Cert-Manager completes normally; the error occurs only the first time I play the whole “sequence”.
I’ll try to obtain the Debug Output.
Panic Output
kubernetes_namespace.tool-cert-manager_ns: Creating...
kubernetes_namespace.tool-cert-manager_ns: Creation complete after 0s [id=tool-cert-manager]
null_resource.tool-cert-manager_custom_resource_definitions: Creating...
null_resource.tool-cert-manager_custom_resource_definitions: Provisioning with 'local-exec'...
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): Executing: ["/bin/sh" "-c" "bash << EOS\nexport NO_COLOR=\\\\\\\\e[39m\nexport OK_COLOR=\\\\\\\\e[32m\nexport ERROR_COLOR=\\\\\\\\e[31m\nexport WARN_COLOR=\\\\\\\\e[33m\nexport BOLD_TEXT=\\\\\\\\e[1m\nexport NOBOLD_TEXT=\\\\\\\\e[21m\n\nwget --quiet https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager.crds.yaml --output-document=./generated/cert-manager-dev-tech.crds.yaml\nstatus=\\$?\n#echo \"status: '\\$status'\"\nif [[ \"\\$status\" == \"0\" ]]\nthen\n echo -e \"[\\${OK_COLOR}INFO\\${NO_COLOR}] Download of Custom Resource Definition YAML file for '\\${BOLD_TEXT}tool-cert-manager\\${NOBOLD_TEXT}' succeeded\"\nelse\n echo -e \"[\\${ERROR_COLOR}ERROR\\${NO_COLOR}] Download of Custom Resource Definition YAML file for '\\${BOLD_TEXT}tool-cert-manager\\${NOBOLD_TEXT}' failed\"\n exit -1\nfi\nsed --regexp-extended --in-place 's/namespace:.+/namespace: tool-cert-manager/g' ./generated/cert-manager-dev-tech.crds.yaml\nstatus=\\$?\n#echo \"status: '\\$status'\"\nif [[ \"\\$status\" == \"0\" ]]\nthen\n echo -e \"[\\${OK_COLOR}INFO\\${NO_COLOR}] Rewrite N°1 Namespace of Custom Resource Definition YAML file for '\\${BOLD_TEXT}tool-cert-manager\\${NOBOLD_TEXT}' succeeded\"\nelse\n echo -e \"[\\${ERROR_COLOR}ERROR\\${NO_COLOR}] Rewrite N°1 Namespace of Custom Resource Definition YAML file for '\\${BOLD_TEXT}tool-cert-manager\\${NOBOLD_TEXT}' failed\"\n exit -1\nfi\nsed --regexp-extended --in-place 's/cert-manager\\//tool-cert-manager\\//g' ./generated/cert-manager-dev-tech.crds.yaml\nstatus=\\$?\n#echo \"status: '\\$status'\"\nif [[ \"\\$status\" == \"0\" ]]\nthen\n echo -e \"[\\${OK_COLOR}INFO\\${NO_COLOR}] Rewrite N°2 Namespace of Custom Resource Definition YAML file for '\\${BOLD_TEXT}tool-cert-manager\\${NOBOLD_TEXT}' succeeded\"\nelse\n echo -e \"[\\${ERROR_COLOR}ERROR\\${NO_COLOR}] Rewrite N°2 Namespace of Custom Resource Definition YAML file for '\\${BOLD_TEXT}tool-cert-manager\\${NOBOLD_TEXT}' failed\"\n exit -1\nfi\nkubectl --kubeconfig=./generated/kubeconfig-dev-tech apply -f ./generated/cert-manager-dev-tech.crds.yaml\nstatus=\\$?\n#echo \"status: '\\$status'\"\nif [[ \"\\$status\" == \"0\" ]]\nthen\n echo -e \"[\\${OK_COLOR}INFO\\${NO_COLOR}] Installation of Custom Resource Definition for '\\${BOLD_TEXT}tool-cert-manager\\${NOBOLD_TEXT}' succeeded\"\nelse\n echo -e \"[\\${ERROR_COLOR}ERROR\\${NO_COLOR}] Installation of Custom Resource Definition for '\\${BOLD_TEXT}tool-cert-manager\\${NOBOLD_TEXT}' failed\"\n exit -1\nfi\n\\rm -f ./generated/cert-manager-dev-tech.crds.yaml\nEOS\n"]
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): [INFO] Download of Custom Resource Definition YAML file for 'tool-cert-manager' succeeded
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): [INFO] Rewrite N°1 Namespace of Custom Resource Definition YAML file for 'tool-cert-manager' succeeded
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): [INFO] Rewrite N°2 Namespace of Custom Resource Definition YAML file for 'tool-cert-manager' succeeded
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
null_resource.tool-cert-manager_custom_resource_definitions (local-exec): [INFO] Installation of Custom Resource Definition for 'tool-cert-manager' succeeded
null_resource.tool-cert-manager_custom_resource_definitions: Creation complete after 3s [id=7666740545687665524]
Error: Provider produced inconsistent final plan
When expanding the plan for helm_release.tool-cert-manager_package to include
new values learned so far during apply, provider
"registry.terraform.io/-/helm" produced an invalid new value for .version: was
cty.StringVal("0.14.1"), but now cty.StringVal("v0.14.1").
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
Expected Behavior
No error during terraform apply for the whole “sequence”.
Actual Behavior
The Terraform Helm Provider fails the first time the “sequence” is applied, but not if I run terraform apply a second time.
Steps to Reproduce
Run terraform apply for the whole sequence; the error occurs only the first time.
To reproduce the bug again, I have to terraform destroy the whole K8S cluster, including the Prometheus-Operator and External-DNS tools, and then run another terraform apply.
Important Factoids
It seems to happen only with the Cert-Manager tool, with every Helm release since 0.9.0 that I have tested.
I currently use the Terraform Helm Provider through Helm v3, but it also happened with the previous Terraform Helm Provider version that supported Helm v2.
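A workaround consistent with the error message (a sketch against the reporter's config, not a provider-endorsed fix; trimprefix is a Terraform built-in, the local name is made up) is to normalize the configured version to the exact “vX.Y.Z” string published in the jetstack chart repository, so the value at plan time already equals the one the provider resolves during apply:

locals {
  # Force the leading "v" whether or not the variable already carries one,
  # e.g. "0.14.1" and "v0.14.1" both become "v0.14.1".
  cert_manager_chart_version = "v${trimprefix(var.cert_manager_helm_chart_version, "v")}"
}

Using version = local.cert_manager_chart_version in the helm_release then plans and applies the same string.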
About this issue
- State: open
- Created 4 years ago
- Reactions: 87
- Comments: 31 (2 by maintainers)
I’ve been getting this with many different Helm charts: aws-node-termination-handler, aws-efs-csi-driver, drone-server, etc.
I am confirming the bug exists, but also that it has nothing to do with the dynamic block, because we don’t use one and have encountered the same problem. Our helm provider is set up with variables straight from the aws_eks_cluster resource, and the helm release, declared in a module, is not that complicated.
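A hypothetical sketch of that shape (the commenter's original snippets were not included; all resource and variable names here are assumptions):

data "aws_eks_cluster_auth" "this" {
  name = aws_eks_cluster.this.name
}

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}

# In the module: a plain release with a fixed set block, no dynamic blocks.
resource "helm_release" "this" {
  name       = var.name
  repository = var.repository
  chart      = var.chart
  version    = var.chart_version

  set {
    name  = "fullnameOverride"
    value = var.name
  }
}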
We use helm provider version 2.0.2.
We saw it with the manifest experiment enabled. After disabling the manifest experiment, it worked as expected.
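For reference, the “manifest experiment” mentioned here is the provider-level flag below (a minimal sketch; the rest of the provider configuration is omitted):

provider "helm" {
  experiments {
    # Renders the release manifest into state so plans can show a diff.
    # Several commenters report the inconsistent-plan error only with this
    # enabled, and that disabling it avoids the error.
    manifest = true
  }
}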
Any updates on this issue? I am still facing it with helm provider version v2.6.0.
I too got this issue when installing Grafana. This should be treated as a priority by the team. +1
I’ve added type = string as @arrrght suggests, but I still get errors like the one above about 40% of the time.
Fix by specifying type on each parameter, like:
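(An illustrative version of that suggestion; the parameter name here is made up, not from the thread.)

set {
  name  = "controller.replicaCount" # hypothetical chart parameter
  value = "2"
  # Send the value as a string instead of letting Helm guess the type,
  # so the planned and applied values stay consistent.
  type  = "string"
}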
I can confirm that when you run it a second time, it does run without issues.
Thanks for replying @nitrocode! It seems like this doesn’t relate to the original issue involving the .version attribute. You can create a new issue that focuses more on what you’re facing when using experiments = true.
I could be wrong, but I think this (mostly?) appears when enabling the manifest experiment… which is really a must-have to know for certain what you are deploying.
Still a problem on 2.7.0 (👀)
Still seeing this issue on 2.4.1, with type = "string" set on all properties. It happens intermittently - not 40% of the time like mentioned above, but about 20%, which is still annoying.
The probability of running into the issue seems to scale with the number of set blocks (more sets, higher failure rate), but that could just be a feeling on my part. Additionally, in all failed cases we are using dynamic "set" blocks, so it could be that the issue was indeed remedied for fixed set blocks, and a totally different issue affects dynamic blocks. However, I have to say that we have dynamic sets in all our more complex charts, so I cannot say whether it’s the dynamic sets or the overall argument count that triggers the error.
setblocks (= more sets, higher failure rate), but that could just be a feeling on my part. Additionally, in all failed cases we are usingdynamic "set"blocks, so it could be that the issue was indeed remedied for fixed set blocks, and a totally different issue affects dynamic blocks. However I have to say that we have dynamic sets in all our more complex charts - so I cannot say whether if it’s the dynamic sets or the overall argument count that triggers the error).