terraform-provider-aws: S3 bucket issue: Can't configure a value for "versioning": its value will be decided automatically based on the result of applying this configuration.

Terraform CLI and Terraform AWS Provider Version

My Terraform script fails with the error below. It ran without issue on provider version 3.74.1, and I have made no changes to the codebase in the last year. The issue appeared with the new release, version 4.0.0.

Affected Resource(s)

  • aws_s3_bucket

Terraform Configuration Files

Code:

resource "aws_s3_bucket" "secondarybucket" {
  bucket        = "${var.tId}.replicated.${var.Name}"
  provider      = aws.secondary
  force_destroy = true

  versioning {
    enabled = true
  }
}

Debug Output

Panic Output

 Error: Value for unconfigurable attribute
│
│   with module.buckets.aws_s3_bucket.secondarybucket,
│   on ../modules/terraform-aws-s3buckets/main.tf line 100, in resource "aws_s3_bucket" "secondarybucket":
│  100: resource "aws_s3_bucket" "secondarybucket" {
│
│ Can't configure a value for "versioning": its value will be decided automatically based on the result of applying this configuration.

Expected Behavior

Terraform applies the configuration successfully and creates the S3 buckets, as it did on provider version 3.74.1.

Actual Behavior

terraform apply fails with the "Value for unconfigurable attribute" error shown above.

Steps to Reproduce

  1. terraform apply

Important Factoids

References

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 28
  • Comments: 15 (3 by maintainers)

Most upvoted comments

for anyone impacted by this, it might be worth upvoting this issue: https://github.com/hashicorp/terraform-provider-aws/issues/23106 👍

Holy carp, they can’t seriously think that this is an acceptable path forward.

We were able to temporarily fix this by pinning the module to the exact last working version of the provider:

required_providers {
  aws = "~> 3.74"
}
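If it helps anyone, the same pin written out in the full Terraform 0.13+ form with an explicit provider source looks like this (a sketch; adjust the version constraint to whatever you last applied successfully):

```
terraform {
  required_providers {
    aws = {
      # Pin to the 3.x line to avoid the 4.0.0 breaking changes
      source  = "hashicorp/aws"
      version = "~> 3.74"
    }
  }
}
```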

Is there any kind of pre-release mailing list so we don't get surprises like this? I understand the other option would be to pin the provider version, but for most users that comes with a lot of overhead.

It should be noted that I counted roughly 12 to 13 arguments that may require running terraform import in order to refactor for 4.x support.

Some fun math for you: if you have 100 buckets to refactor, and you use several of the various configuration blocks (versioning, logging, ACLs, etc.), you're looking at potentially 700+ terraform import commands. That's all while also coordinating the next release/apply of your TF infrastructure module version, including the proper new resources, to avoid running into errors. 😃
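For a sense of what that refactor looks like per bucket: each inline block becomes a standalone resource whose existing state has to be imported. A sketch, with a made-up bucket name (check each resource's docs for the exact import ID format, which varies by resource):

```
# Hypothetical example: importing one bucket's existing settings into
# the standalone resources introduced in provider v4.0.
terraform import aws_s3_bucket_versioning.secondarybucket my-bucket-name
terraform import aws_s3_bucket_logging.secondarybucket my-bucket-name
# ...repeated for each of the ~12 new aws_s3_bucket_* resources you use,
# for every bucket.
```

Multiply that by 100 buckets and you arrive at the 700+ commands above.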

The versioning argument is read-only as of version 4.0 of the Terraform AWS Provider. See the aws_s3_bucket_versioning resource for configuration details.
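Concretely, the inline versioning block from the original config maps to a standalone resource along these lines under 4.x (a sketch using the names from the reported configuration; see the aws_s3_bucket_versioning docs for full details):

```
resource "aws_s3_bucket" "secondarybucket" {
  bucket        = "${var.tId}.replicated.${var.Name}"
  provider      = aws.secondary
  force_destroy = true
}

resource "aws_s3_bucket_versioning" "secondarybucket" {
  provider = aws.secondary
  bucket   = aws_s3_bucket.secondarybucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
```

Existing buckets also need their versioning state imported into the new resource, or Terraform will try to create it fresh.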

We were able to temporarily fix this by pinning the module to the exact last working version of the provider:

required_providers {
  aws = "~> 3.74"
}

Props to @tarunptala for this one. Verified: the workaround works great until this issue is fixed.

I am getting multiple errors:

Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.aws_s3_bucket,
│   on main.tf line 1, in resource "aws_s3_bucket" "aws_s3_bucket":
│    1: resource "aws_s3_bucket" "aws_s3_bucket" {
│
│ Can't configure a value for "versioning": its value will be decided automatically based on the result of applying this configuration.
╵
╷
│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.aws_s3_bucket,
│   on main.tf line 1, in resource "aws_s3_bucket" "aws_s3_bucket":
│    1: resource "aws_s3_bucket" "aws_s3_bucket" {
│
│ Can't configure a value for "server_side_encryption_configuration": its value will be decided automatically based on the result of applying this configuration.
╵
╷
│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.aws_s3_bucket,
│   on main.tf line 4, in resource "aws_s3_bucket" "aws_s3_bucket":
│    4:   acl           = var.s3bucket_acl
│
│ Can't configure a value for "acl": its value will be decided automatically based on the result of applying this configuration.