terraform-provider-docker: docker_registry_image: Provider produced inconsistent final plan

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave “+1” or “me too” comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and docker Provider) Version

Terraform v1.0.5
kreuzwerker/docker v2.15.0

Affected Resource(s)

  • docker_registry_image

Terraform Configuration Files

resource "docker_registry_image" "this" {
  name = "image:${var.REV}"
  build {
    context = "./"
    build_args = {
      REV     = var.REV
      PORT    = var.PORT
    }
  }
}

Debug Output

https://gist.github.com/stevelacy/b807abca095f59486e4587097ab24025

Actual Behaviour

│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.backend.docker_registry_image.this to
│ include new values learned so far during apply, provider
│ "registry.terraform.io/kreuzwerker/docker" produced an invalid new value
│ for .build[0].context: was
│ cty.StringVal("./:d0bf0244333d694ac286b5de97ead1d28344bae08ff63ddb7e9e50fa87d4cff0"),
│ but now
│ cty.StringVal("./:0765d42dc5f3cc826c5702ec60ec5e09b47b6c5395edb0f65b55ad6f38d76ec8").
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.

Steps to Reproduce

  1. terraform apply

The error goes away on a subsequent apply, assuming the context hash is unchanged.

References

Seems related to #192, except this time the context folder name is ./

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 18
  • Comments: 16 (6 by maintainers)

Most upvoted comments

Looking at the code, the provider appends a hash of the context directory’s contents to the context value. This bit me because I was referencing the directory containing terraform’s state file, which changed every time I ran terraform apply. Even worse, I was dynamically generating the Dockerfile, so even after working around this, the very first apply fails because there’s no Dockerfile yet to hash.

Workarounds:

  • Put your Dockerfile in a directory that contains no other files that change when you run terraform (see the sketch after this list).
  • If you need to change or template your Dockerfile at build time, use Docker build args instead.
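A minimal sketch of the first workaround, based on the configuration above. The ./docker subdirectory is a hypothetical layout that holds only the Dockerfile, so terraform’s state and other changing files never enter the provider’s context hash:

resource "docker_registry_image" "this" {
  name = "image:${var.REV}"
  build {
    # Hypothetical dedicated context: contains only the Dockerfile, so the
    # hash of the context contents stays stable across runs.
    context = "./docker"
    build_args = {
      REV  = var.REV
      PORT = var.PORT
    }
  }
}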

Thanks for all the comments! In short: the calculation of the internal state value for build.context is, to put it nicely, not optimal. Fixing it is definitely one of my highest priorities. As I am new to the whole “terraform provider” world, I still need some time to dig into the provider code and to learn about the plan/state/resource mapping inside a provider.

My gut feeling tells me that properly fixing the “Provider produced inconsistent final plan” issue will require a major version bump because of breaking changes. But let’s see.

Possible workarounds:

With the version to be released (v2.19.0) it will also finally be possible to have a Dockerfile outside the build context (https://github.com/kreuzwerker/terraform-provider-docker/pull/402); maybe that will help some of you out there.
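A hedged sketch of how that might look once v2.19.0 lands, assuming the build block’s dockerfile argument accepts a path outside the context; the ./app and ../Dockerfile paths are purely illustrative:

resource "docker_registry_image" "this" {
  name = "image:${var.REV}"
  build {
    context    = "./app"          # only this directory is hashed
    dockerfile = "../Dockerfile"  # assumed: may live outside the hashed context as of v2.19.0
  }
}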

Yes, still happens.

Still happening

So, I found a quick fix for 2.15.0. It looks like the apply fails when no tfstate exists yet. Just run terraform apply -auto-approve || terraform apply -auto-approve, which should fix it.

I have not tested setting the build context path so far, so I cannot speak to that.

This looks related to the issue I filed (#290), but I think I’ve found the cause of the problem, which I’ve described in the ‘Actual Behaviour’ section.