terraform-provider-kubectl: any good solutions for 'The "for_each" value depends on resource attributes that cannot be determined until apply'?
I guess this is a common issue that has been discussed a lot.
I have this:
data "template_file" "app" {
template = file("templates/k8s_app.yaml")
vars = {
db_host = module.db.this_rds_cluster_endpoint # whatever resources to be created
}
}
data "kubectl_file_documents" "app" {
content = data.template_file.app.rendered
}
resource "kubectl_manifest" "app" {
for_each = data.kubectl_file_documents.app.manifests
yaml_body = each.value
}
I got:
Error: Invalid for_each argument
│
│ on k8s_app.tf line 36, in resource "kubectl_manifest" "app":
│ 36: for_each = data.kubectl_file_documents.app.manifests
│ ├────────────────
│ │ data.kubectl_file_documents.app.manifests is a map of string, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target
│ argument to first apply only the resources that the for_each depends on.
Not sure if there are any best practices or solutions for this.
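
For reference, one commonly suggested way around this error is to make the count (or the for_each keys) depend only on values Terraform already knows at plan time, such as the unrendered template on disk, while the rendered manifest bodies are allowed to stay unknown until apply. Below is a minimal sketch of that idea, not the provider's official answer; it assumes the same templates/k8s_app.yaml as above with documents separated by ---, and it also swaps the deprecated template_file data source for the built-in templatefile() function.

locals {
  # Assumed path and document separator; adjust to the real template layout.
  template_path = "templates/k8s_app.yaml"

  # Splitting the *unrendered* file depends only on what is on disk, so the
  # number of documents is known at plan time.
  raw_docs = split("\n---\n", file(local.template_path))

  # The rendered content can stay unknown until apply, because db_host comes
  # from a resource that may not exist yet; that is fine for yaml_body.
  rendered_docs = split("\n---\n", templatefile(local.template_path, {
    db_host = module.db.this_rds_cluster_endpoint
  }))
}

resource "kubectl_manifest" "app" {
  # count is derived from the static file, so Terraform can predict how many
  # instances will be created even though each body is unknown until apply.
  count     = length(local.raw_docs)
  yaml_body = local.rendered_docs[count.index]
}

The trade-off versus for_each over kubectl_file_documents is that count ties each manifest to its position in the file, so reordering documents will recreate resources; the upside is that the plan no longer depends on apply-time values.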
About this issue
- Original URL
- State: open
- Created 3 years ago
- Reactions: 18
- Comments: 17
This is literally the recommended method for using kubectl_manifest. Is there a timeframe for fixing this bug?

Here’s a workaround I came up with:

It would really help to convert/clone these data objects into resources; that would be a clean workaround.

Have the same issue with kubectl_manifest, and I noticed that the error pops up when you have more than two kubectl_manifest instances in your code. I have three: the first two work perfectly fine, but when I add a third one, only that particular one fails while the first two work as normal. Same code, like for like, just the vars are different.

The workaround I found involves using the fileset function to get a count of the number of files (see the sketch below). Not perfect, but it seems to do the trick.
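
A minimal sketch of a fileset-based approach along those lines; the directory layout, file names, and variables are hypothetical, and it assumes one manifest template per file:

locals {
  # Enumerate template files on disk; fileset() only looks at the local
  # filesystem, so the resulting set is known at plan time.
  manifest_files = fileset("${path.module}/manifests", "*.yaml")
}

resource "kubectl_manifest" "app" {
  # The number of instances comes from the files on disk, not from anything
  # created during apply, so count is predictable at plan time.
  count = length(local.manifest_files)

  # Each template may still reference values that are unknown until apply
  # (for example the RDS endpoint); that is fine for yaml_body.
  yaml_body = templatefile(
    "${path.module}/manifests/${tolist(local.manifest_files)[count.index]}",
    {
      db_host = module.db.this_rds_cluster_endpoint
    }
  )
}

With for_each instead of count, the same idea works by keying on the file name (for_each = local.manifest_files), which keeps resource addresses stable when files are added or removed.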