terraform-provider-kubernetes: kubernetes namespace already created fails

Terraform Version, Provider Version and Kubernetes Version

Terraform version:
Terraform v0.15.5
on windows_amd64
+ provider registry.terraform.io/cloudflare/cloudflare v2.21.0
+ provider registry.terraform.io/hashicorp/azuread v1.5.1
+ provider registry.terraform.io/hashicorp/azurerm v2.62.1
+ provider registry.terraform.io/hashicorp/helm v2.2.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.3.2
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/terraform-providers/azuredevops v0.1.5

Affected Resource(s)

kubernetes_namespace

Terraform Configuration Files

resource "kubernetes_namespace" "kubnss" {
  for_each = toset(var.namespaces)
  metadata {
    name = each.key
  }
}

Expected Behavior

If the namespace already exists, Terraform should skip creating it and move on to the next one

Actual Behavior

Error: namespaces "xxxx-web-xxxxx" already exists
│
│   with module.production.module.kubernetes_web.kubernetes_namespace.kubnss["myns"],
│   on m/kubernetes/main.tf line 69, in resource "kubernetes_namespace" "kubnss":
│   69: resource "kubernetes_namespace" "kubnss" {

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

About this issue

  • Original URL
  • State: open
  • Created 3 years ago
  • Reactions: 24
  • Comments: 36

Most upvoted comments

However, Terraform's whole idea is to be declarative: if the object exists and there is no diff, it doesn't make sense to recreate it (hence no error). Every other object in TF works like this, so why would a k8s namespace be any different? Even the k8s cluster works that way; it is only this specific object that does not.

@oferchen I don't agree with you. Terraform is a declarative language, and it is supposed to ignore a resource that already exists in the desired state. Raising an error just because a resource exists goes against how Terraform works.

An option should be available to bypass namespace creation if the namespace already exists. Failing on an already-existing namespace renders the entire Terraform template non-declarative.

Good day @katlimruiz,

I do not think this is a bug.

You asked Terraform to create a resource and it failed because the resource already exists. The common practice is to import that existing resource into the Terraform state so that Terraform can manage it.

Thanks!
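For reference, importing the already-existing namespace into the configuration from this issue would look roughly like the following (the resource address is taken from the error output above, the import ID is the real namespace name, and the quoting shown is for a Unix shell; Windows shells need different escaping):

terraform import 'module.production.module.kubernetes_web.kubernetes_namespace.kubnss["myns"]' myns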

I came across this issue as well and applied the following workaround. IMO this is not a bug, and I also don't think a feature is justified for something that can be worked around.

data "kubernetes_all_namespaces" "allns" {}

resource "kubernetes_namespace" "this" {
  for_each = toset([ for k in var.namespaces : k if !contains(keys(data.kubernetes_all_namespaces.allns), k) ])
  metadata {
    name = each.key
  }
  depends_on = [data.kubernetes_all_namespaces.allns] # potentially more if you want to refresh list of NS
}

If they are in the state and they already exist, then yes, all Terraform resources work like that; otherwise this would just never work.

This is my experience. When you already have a whole platform in place, and you want to use terraform, then yes, it is a pain in the b*tt because it tells you to import a lot of resources, and importing them is very slow and problematic.

When you create a platform from scratch in terraform, things go much more smoothly. There are some objects, like a Kubernetes cluster, that do not report their full creation back to the cloud provider, so (as even the documentation says) you have to separate the cluster creation from the apply of further Kubernetes resources, otherwise it would not work.

The only times I've seen the state broken are when 1) you use git to store the state, or 2) you made changes to the cloud manually and discrepancies occurred.

Regarding the broader discussion, I feel that tools should focus on allowing people to do what they need to do. In some of the points above, it looks like the need to perform this simple operation is treated as less important than the purity of the Terraform perspective, without providing a simple workaround for what is a simple but common operation.

On the points above, I would just like to point out that kubernetes_namespace already implements the "kubectl-as-command" logic (creating the same namespace twice fails). However, kubernetes_manifest also implements the same logic (while running kubectl apply on the same manifest twice succeeds).

In our case, we need to "create a namespace if one doesn't yet exist", as we are deploying Terraform resources and GitOps/Flux resources, and sometimes some are deployed before others. After spending a non-trivial amount of time, short of using kubectl to deploy the namespace YAML resource, we could not find a simple way to do this in pure Terraform.

So, using kubectl behaviour as the spec for this module, a possible solution for this would be to:

  • have kubernetes_namespace implement the imperative approach of kubectl create ns
  • have kubernetes_manifest implement the declarative approach of kubectl apply
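For anyone who needs the "create a namespace if one doesn't yet exist" behaviour today, a rough sketch of the kubectl escape hatch mentioned above (the namespace name is illustrative, and this assumes kubectl and the hashicorp/null provider are available where Terraform runs):

resource "null_resource" "ensure_namespace" {
  # Hypothetical workaround, not part of this provider: a client-side dry-run piped
  # into kubectl apply is idempotent, so it creates the namespace when it is missing
  # and is a no-op when it already exists.
  provisioner "local-exec" {
    command = "kubectl create namespace my-namespace --dry-run=client -o yaml | kubectl apply -f -"
  }
}

Note that Terraform will not track the namespace in state with this approach; it only guarantees the namespace exists.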

We provision the GKE cluster in the root module and deploy our app using a child module. We’d like to use the exact same module for all pre-production environments. Changing any variable in the child module (e.g. changing the namespace from develop to staging) triggers a destroy of the other environment (by design).

If you want to keep develop around and add staging alongside it, you wouldn’t change the namespace from develop to staging on your existing module block (which means develop no longer exists in your config and will be removed upon apply).

Instead, you would add a second module block (referencing the same source) with the different variable. Both module instances can use the same provider(s).
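For illustration only (the module names, source path, and variable are assumptions, not taken from this issue), that looks something like:

module "develop" {
  source     = "./m/kubernetes"
  namespaces = ["develop"]
}

module "staging" {
  source     = "./m/kubernetes"
  namespaces = ["staging"]
}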

Or, as you’ve found, you can use workspaces to have multiple separate states in the same backend. This is more appropriate when you want your whole configuration to be duplicated N times.
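If you go the workspace route, a minimal sketch of the commands (workspace names are illustrative):

terraform workspace new staging      # creates the workspace and switches to it
terraform apply                      # applies against the staging state
terraform workspace select develop   # switch back when working on develop

Each workspace keeps its own state in the same backend, so develop and staging no longer compete for the same resources.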

That's an unfortunate side-effect of the kubectl provider calling kubectl apply internally, which doesn't differentiate between create and modify. It's not consistent with TF's provider contract (which the vast majority of providers follow; try creating a resource that already exists with any of the major TF providers and you'll see), and it causes problems if people didn't intend to overwrite existing objects with the same kind/namespace/name. The author of the kubectl provider considered fixing this behaviour in https://github.com/gavinbunney/terraform-provider-kubectl/issues/73 (by changing to use kubectl create when TF asks the provider to create a resource); it seems they didn't get around to it, but hopefully it will be fixed in future.

TF’s view of the world is the state. If the object doesn’t exist in the state, the provider is asked to create it, and that operation is supposed to fail if the object already exists in the “real world”. This is how almost every provider works, and it’s the reason why the import operation exists. I’m curious what you think terraform import is for, if you think that the incorrect behaviour of the kubectl provider is “properly working”.

Are you positive that all terraform resources are ignored if they already exist?

I ask this because there have been dozens of times where we have broken the terraform state and I was forced to import buckets, databases, kubernetes services, etc before an apply worked correctly again…