tf-controller: The runner is not picking the correct Terraform state

I have a Terraform controller for Flux running with a GitHub provider; however, it seems to be picking up the wrong Terraform state, so it keeps trying to recreate the resources again and again (and fails because they already exist).

This is how it is configured:

apiVersion: infra.contrib.fluxcd.io/v1alpha1
kind: Terraform
metadata:
  name: saas-github
  namespace: flux-system
spec:
  interval: 2h
  approvePlan: "auto"
  workspace: "prod"
  backendConfig:
    customConfiguration: |
      backend "s3" {
        bucket                      = "my-bucket"
        key                         = "my-key"
        region                      = "eu-west-1"
        dynamodb_table              = "state-lock"
        role_arn                    = "arn:aws:iam::11111:role/my-role"
        encrypt                     = true
      }
  path: ./terraform/saas/github
  runnerPodTemplate:
    metadata:
      annotations:
        iam.amazonaws.com/role: pod-role
  sourceRef:
    kind: GitRepository
    name: infrastructure
    namespace: flux-system

Locally, running terraform init with a state.config file that has the same (or near-identical) configuration works fine and detects the current state properly:

bucket         = "my-bucket"
key            = "infrastructure-github"
region         = "eu-west-1"
dynamodb_table = "state-lock"
role_arn       = "arn:aws:iam::111111:role/my-role"
encrypt        = true
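
For reference, a partial backend configuration file like that is normally passed to terraform init with the -backend-config flag; the exact local invocation is an assumption, but it would look like:

terraform init -backend-config=state.config

Terraform then merges those key/value pairs into the backend "s3" block during initialisation.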

Reading the documentation I also saw a configPath that could be used, so I tried pointing it at the state file, but then I got the error: Failed to initialize kubernetes configuration: error loading config file couldn't get version/kind; json parse error

Which is weird: it looks like it is trying to load a Kubernetes configuration rather than a Terraform one, or at least it expects a JSON file, which my state configuration is not.

I’m running Terraform 1.3.1 both locally and on the tf-runner pod.

On the runner pod I can see the generated_backend_config.tf, which contains the same configuration, and .terraform/terraform.tfstate also points to the bucket.
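
For comparison, with a remote backend .terraform/terraform.tfstate is just a small JSON record of the backend settings resolved at init time, roughly like this (abridged, values illustrative rather than copied from this setup):

{
  "version": 3,
  "backend": {
    "type": "s3",
    "config": {
      "bucket": "my-bucket",
      "key": "my-key",
      "region": "eu-west-1",
      "workspace_key_prefix": null
    },
    "hash": 1234567890
  }
}

Keep in mind that with a non-default workspace the S3 backend reads and writes state under <workspace_key_prefix>/<workspace>/<key> (the prefix defaults to env:), so the same key can still resolve to a different object than a local run in the default workspace.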

The only suspicious thing I could find in the logs is this:

- Finding latest version of hashicorp/github...
- Finding integrations/github versions matching "~> 4.0"...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/github v5.9.1...
- Installed hashicorp/github v5.9.1 (signed by HashiCorp)
- Installing integrations/github v4.31.0...
- Installed integrations/github v4.31.0 (signed by a HashiCorp partner, key ID 38027F80D7FD5FB2)
- Installing hashicorp/aws v4.41.0...
- Installed hashicorp/aws v4.41.0 (signed by HashiCorp)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.


Warning: Additional provider information from registry

The remote registry returned warnings for
registry.terraform.io/hashicorp/github:
- For users on Terraform 0.13 or greater, this provider has moved to
integrations/github. Please update your source in required_providers.

It seems that it installs two GitHub providers, one from hashicorp and one from integrations. I have changed Terraform and provider versions during development, and I have removed any reference to the hashicorp one, but this warning still happens.

However, it also happens locally, where it reads the correct state, so I don’t think it is related.
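
For what it’s worth, installing both namespaces usually means at least one module (or a provider reference without an explicit source) still resolves github from the implicit hashicorp/ namespace, which is what the warning is hinting at. A minimal required_providers block, reusing the "~> 4.0" constraint from the log above:

terraform {
  required_providers {
    github = {
      # pin the provider source so Terraform stops falling back to hashicorp/github
      source  = "integrations/github"
      version = "~> 4.0"
    }
  }
}

Every module that declares GitHub resources needs its own mapping; a single one at the root is not inherited by child modules.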

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 21 (12 by maintainers)

Most upvoted comments

I’m running workspaces with the s3 backend without issues, though I specified an empty backend config and then supplied it with a secret. I can paste a snippet later if you want.
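
The snippet itself didn’t make it into this excerpt, but one way to read that comment is: declare an empty/partial backend and inject the concrete values from a Secret. A rough sketch of the idea, assuming the backendConfigsFrom field that newer tf-controller API versions document (the field, the Secret name and its contents are guesses, not the commenter’s actual configuration):

spec:
  workspace: "prod"
  backendConfig:
    # empty/partial backend block; the concrete values are supplied at init time
    customConfiguration: |
      backend "s3" {}
  # assumption: backendConfigsFrom (documented for newer tf-controller API
  # versions) feeds the keys of a Secret to terraform init as -backend-config
  # values; "backend-s3" is a hypothetical Secret name, not from the thread
  backendConfigsFrom:
    - kind: Secret
      name: backend-s3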