terraform-provider-kubernetes: kubernetes_manifest: 'status' attribute key is not allowed in manifest configuration

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 1.0.7
Kubernetes provider version: 2.5.0
Kubernetes version: 1.21.2

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

It’s a big HCL document, so it’s easier to download it from https://raw.githubusercontent.com/pixie-labs/pixie/main/k8s/operator/crd/base/px.dev_viziers.yaml and inspect it with echo 'yamldecode(file("px.dev_viziers.yaml"))' | terraform console

resource "kubernetes_manifest" "newrelic-crd-viziers" {
  manifest = yamldecode(file("${path.module}/px.dev_viziers.yaml"))
}

Debug Output

Panic Output

Steps to Reproduce

  1. terraform apply

Expected Behavior

It should apply cleanly, just as it does with kubectl apply.

Actual Behavior

Error:

│ Error: Forbidden attribute key in "manifest" value
│
│   with kubernetes_manifest.newrelic-crd-viziers,
│   on helm_newrelic.tf line 94, in resource "kubernetes_manifest" "newrelic-crd-viziers":
│   94: resource "kubernetes_manifest" "newrelic-crd-viziers" {
│
│ 'status' attribute key is not allowed in manifest configuration

Important Factoids

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

About this issue

  • Original URL
  • State: open
  • Created 3 years ago
  • Reactions: 38
  • Comments: 16 (2 by maintainers)

Most upvoted comments

What a silly bug. Who asked for this forbidden-fields “feature?” How about if we just behave exactly the way kubectl does by not forbidding certain fields? 😂

I want to bump this. I know this is by design, but I think it is an issue because it seriously limits the use of kubernetes_manifest for installing CRDs. I actually opened a Stack Overflow question about this and discovered this issue: https://stackoverflow.com/questions/69180684/how-do-i-apply-a-crd-from-github-to-a-cluster-with-terraform/69527736#69527736

@jrhouston The problem is that official CRDs are published by vendors, and it’s most common to install them with kubectl directly, like this: kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/application/master/deploy/kube-app-manager-aio.yaml. For this particular CRD, that is what the official documentation recommends: https://cloud.google.com/solutions/using-gke-applications-page-cloud-console#preparing_gke

It seems MANY official CRDs set the status field. You’re asking users to copy down and manually modify an official CRD instead of being able to install it from an official source.

Maybe this warrants a new resource or something. It feels like I should be able to easily install a CRD from an official source with Terraform, like I can with kubectl apply -f .... As a user I’m always going to just shell out and call kubectl, because it is SO much simpler and more maintainable than keeping a local copy.
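For what it’s worth, the copy-down step can at least be avoided by fetching the manifest at plan time. This is only a sketch, assuming the hashicorp/http provider (the response attribute is response_body on http >= 3.0 and body on older releases) and a single-document CRD YAML; the top-level status key still has to be stripped for the provider to accept it:

```hcl
# Fetch the CRD YAML from its official source instead of vendoring a copy
data "http" "vizier_crd" {
  url = "https://raw.githubusercontent.com/pixie-labs/pixie/main/k8s/operator/crd/base/px.dev_viziers.yaml"
}

resource "kubernetes_manifest" "vizier_crd" {
  # Drop the top-level "status" key that kubernetes_manifest rejects
  manifest = {
    for key, value in yamldecode(data.http.vizier_crd.response_body) :
    key => value if key != "status"
  }
}
```

This doesn’t help with CRs that genuinely need their status applied, but it removes the need to maintain a local, hand-edited copy of the upstream file.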

How in the heck is this open for nearly two years with one dismissive response “we don’t see any use cases where this is necessary,” people provide a plethora of use cases (which make this provider practically unusable in production if they are not supported), and it’s still open without a clear resolution?

I’m now faced with a choice: fork the kubectl provider, which works exactly how you’d expect and how this provider should work, because my organization will not allow a third-party provider that is not HashiCorp or an official HashiCorp partner; or do some crazy workaround like forking, editing, and then maintaining thousands of lines of a chart with the “forbidden” fields stripped.

Are you guys serious?

Thanks for opening this @trunet – This is actually by design. Terraform has no responsibility for setting the status of resources, and we haven’t seen any use-cases where a user would need to set a status by hand. You can simply remove the status field from this manifest, as it is unnecessary here.

Hi everyone, I’ve made a dirty workaround:

locals {
  # One map entry per CRD file, with the top-level "status" key stripped out
  split_yaml_map = { for file_path in fileset(path.module, "crds/${var.crd_version}/*.yaml") : file_path => yamlencode(
    { for root_key, root_values in yamldecode(file("${path.module}/${file_path}")) : root_key => root_values if root_key != "status" }
  ) }
}

resource "kubernetes_manifest" "crd" {
  for_each = local.split_yaml_map
  manifest = yamldecode(each.value)
}

It hasn’t been properly tested yet, but I hope it will help somebody.


I have the same issue: we can’t install CRDs that include a status field.

@jrhouston the design appears not to cover all the use cases. Could you please change it? It’s quite painful to remove the status field from dozens of CRDs.

@mvoitko First of all: Slava Ukraini! 💙 💛

To your observation, which design are you referring to? If you are converting your YAML manifests with our recommended tool (https://github.com/jrhouston/tfk8s), then it has a -s flag to enable stripping of server-only fields, including status. See the info here: https://github.com/jrhouston/tfk8s#usage

This will avoid the need for any hacks described above.

To re-iterate @red8888’s point: stripping the status field does not cover all use cases. There are CRs out there that should be applied including their status field in order to be valid.

Custom block devices under OpenEBS are an example. Omitting the two status fields (claimState & state) for those will result in unusable resources. You could say that’s a mistake on OpenEBS’ side, not making their operators forgiving enough. But since kubectl does allow us to set the status directly, it would be nice if kubernetes_manifest could do so as well.

For the moment I worked around this limitation with:

  1. a kubernetes_manifest to create the CR without the status;
  2. a null_resource (that depends on the kubernetes_manifest) with a local-exec provisioner that executes kubectl patch to set the status. Perhaps this is possible with the kubectl provider as well – I haven’t bothered to check.
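The two steps above can be sketched roughly as follows. The resource names, the local value holding the status-stripped manifest, and the patched field values are all illustrative (modeled on the OpenEBS block device example), not something from this provider’s documentation:

```hcl
# Step 1: create the CR without its status (the provider forbids it)
resource "kubernetes_manifest" "block_device" {
  # local.block_device_without_status is assumed to hold the manifest
  # with the top-level "status" key already removed
  manifest = local.block_device_without_status
}

# Step 2: set the status out-of-band, since kubectl allows it
resource "null_resource" "set_status" {
  depends_on = [kubernetes_manifest.block_device]

  provisioner "local-exec" {
    command = "kubectl patch blockdevice example-bd --type merge -p '{\"status\":{\"claimState\":\"Unclaimed\",\"state\":\"Active\"}}'"
  }
}
```

Note that the patch step re-runs only when the null_resource is recreated, so drift in the status field won’t be detected by Terraform.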