terraform-provider-kubectl: any good solutions for "The 'for_each' value depends on resource attributes that cannot be determined until apply"?

I guess this is a common issue that has been discussed a lot.

I have this:

data "template_file" "app" {
  template = file("templates/k8s_app.yaml")

  vars = {
    db_host = module.db.this_rds_cluster_endpoint  # whatever resources to be created
  }
}

data "kubectl_file_documents" "app" {
  content = data.template_file.app.rendered
}

resource "kubectl_manifest" "app" {
  for_each = data.kubectl_file_documents.app.manifests

  yaml_body = each.value
}

I got:

Error: Invalid for_each argument
│
│   on k8s_app.tf line 36, in resource "kubectl_manifest" "app":
│   36:   for_each = data.kubectl_file_documents.app.manifests
│     ├────────────────
│     │ data.kubectl_file_documents.app.manifests is a map of string, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target
│ argument to first apply only the resources that the for_each depends on.

Not sure if there are any best practices or solutions for this.
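For context, for_each keys must be fully known at plan time, while data.template_file.app.rendered depends on the RDS endpoint and is only known after apply. Below is a minimal sketch of one way around this, assuming the documents' metadata.name values are literal text in the template (not interpolated vars) and that the static and rendered splits yield the same documents in the same order: take the map keys from the static file on disk, and let only the bodies carry unknown values, which for_each does allow.

locals {
  # The template on disk is static, so this list is known at plan time.
  app_static_docs = [
    for doc in split("---", file("templates/k8s_app.yaml")) :
    doc if trimspace(doc) != ""
  ]

  # The rendered output may be unknown until apply; that is acceptable for
  # map values, just not for map keys.
  app_rendered_docs = [
    for doc in split("---", data.template_file.app.rendered) :
    doc if trimspace(doc) != ""
  ]
}

resource "kubectl_manifest" "app" {
  # Keys come from the static split (known at plan time); bodies come from
  # the rendered split (possibly unknown, which for_each permits in values).
  for_each = {
    for i, doc in local.app_static_docs :
    yamldecode(doc).metadata.name => local.app_rendered_docs[i]
  }

  yaml_body = each.value
}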

About this issue

  • State: open
  • Created 3 years ago
  • Reactions: 18
  • Comments: 17

Most upvoted comments

This is literally the recommended method for using kubectl_manifest. Is there a timeframe for fixing this bug?

Here’s a workaround I came up with:

locals {
  # Split the multi-document YAML file on the "---" separator.
  crds_split_doc  = split("---", file("${path.module}/crds.yaml"))
  # Keep only documents that actually decode and carry a metadata.name.
  crds_valid_yaml = [for doc in local.crds_split_doc : doc if try(yamldecode(doc).metadata.name, "") != ""]
  # Key each document by its metadata.name, which is known at plan time.
  crds_dict       = { for doc in toset(local.crds_valid_yaml) : yamldecode(doc).metadata.name => doc }
}

resource "kubectl_manifest" "crds" {
  for_each  = local.crds_dict
  yaml_body = each.value
}
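One caveat worth noting (my addition, not from the original comment): if two documents share a metadata.name, say a Service and a Deployment both called "app", the for expression fails with a duplicate key error. A hypothetical variant that keys on kind plus name avoids that; crds_dict_by_kind is my own name:

locals {
  # Composite "<kind>/<name>" keys so documents of different kinds with the
  # same metadata.name don't collide.
  crds_dict_by_kind = {
    for doc in toset(local.crds_valid_yaml) :
    "${yamldecode(doc).kind}/${yamldecode(doc).metadata.name}" => doc
  }
}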

It would really help to be able to convert/clone these data objects into resources; that would be a clean workaround.

I have the same issue with kubectl_manifest, and I noticed that the error pops up when you have more than two kubectl_manifest instances in your code. I have three: the first two work perfectly fine, but when I add a third one, only that particular one fails while the first two keep working as normal. The code is like for like; just the vars are different.

The workaround I found involves using the fileset function to get a count of the number of files. As an example:

data "kubectl_path_documents" "proxy_docs" {
  pattern = "${path.module}/values/proxy/*.yaml"
  vars = {
    namespace = kubernetes_namespace.proxy.id
  }
}

resource "kubectl_manifest" "proxy_manifests" {
  # fileset() is evaluated at plan time, so count is always known even though
  # the rendered documents themselves are not.
  count     = length(fileset(path.module, "/values/proxy/*.yaml"))
  yaml_body = element(data.kubectl_path_documents.proxy_docs.documents, count.index)
}

Not perfect, but it seems to do the trick.
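One thing to watch with this approach (my note, not the commenter's): count-indexed resources shift their addresses when a file is added or removed, which forces destroys and recreates of the later manifests. If each file holds a single YAML document and the files use Terraform template syntax for the vars, a for_each variant keyed on the file names keeps addresses stable. A sketch, using the same hypothetical paths as above and a made-up resource name:

locals {
  proxy_files = fileset("${path.module}/values/proxy", "*.yaml")
}

resource "kubectl_manifest" "proxy_manifests_by_file" {
  # File names are known at plan time, so they are valid for_each keys; the
  # rendered bodies may still reference the not-yet-created namespace, which
  # for_each allows in map values.
  for_each = {
    for f in local.proxy_files :
    f => templatefile("${path.module}/values/proxy/${f}", {
      namespace = kubernetes_namespace.proxy.id
    })
  }

  yaml_body = each.value
}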