terraform-provider-aws: s3: BucketRegionError: incorrect region, the bucket is not in

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

$ terraform -v
Terraform v1.1.5
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.1.0

Affected Resource(s)

  • aws_s3_bucket_acl

Terraform Configuration Files

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id
  acl    = "private"
}

Debug Output

Error: error getting S3 bucket ACL (backup,private): BucketRegionError: incorrect region, the bucket is not in 'ap-southeast-2' region at endpoint '', bucket is in 'ap-northeast-2' region
│ 	status code: 301

Expected Behavior

I migrated the code to use the new aws_s3_bucket_acl resource (I previously had the acl = "private" argument set, which fails with the 4.x provider release), following the documentation from here. However, when I import the resource into Terraform, the import fails and reports that the bucket is in a different region. It is not: I confirmed the bucket was created in the 'ap-southeast-2' region, not in Korea. That said, everything was working well until someone upgraded to the latest release.

Importing the resource should work.

Actual Behavior

It does mention an endpoint that is not used and doesn’t add the resource to the state file.

Steps to Reproduce

  1. terraform init --upgrade
  2. terraform import aws_s3_bucket_acl.example example,private

References

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 14
  • Comments: 23 (9 by maintainers)

Most upvoted comments

Cause

  1. Part of the Terraform AWS Provider v4 is moving toward using AWS SDK for Go v2. That includes starting the process of switching HTTP client and transport. v4 includes using the v2 transport with the v1 client. The v2 transport may work differently when it gets a 301 without a Location header. (Currently, the API returns no Location header and v2 fills in Location: https://amazonaws.com/badhttpredirection.)
  2. S3 is the only service known to return a 301 without a Location header. S3 does return an X-Amz-Bucket-Region: ap-northeast-2 header, but this seems to be the incorrect location.

Although this error seems to be new to you, it has been around for a while; see #14544. Sometimes the problem was fixed by rm -rf .terraform and re-initializing (terraform init); other times it was not.

Solutions

Unfortunately, we don’t have a smoking gun or a clear path forward. However, we have some ideas that may or may not help.

Idea 1

The AWS provider v4.2 will include HTTP client and transport changes. However, the change will not be dramatic, so this has maybe a 50/50 or lower chance of helping.

Idea 2

As mentioned above, others have found that deleting the .terraform directory fixed the problem. The OP mentions that this did not fix the problem for them.

Idea 3

It’s possible that you can use a config workaround to avoid the 301 response. This is basically saying, "Okay, AWS. I’ll play your silly game. The bucket is in X region." You would only add the provider argument to the problematic resource.

provider "aws" {
  alias = "s3-region"
  region = "ap-northeast-2" # the "incorrect" region mentioned in the error
}

resource "aws_s3_bucket_acl" "example" {
  provider = aws.s3-region
  # etc.
}

Unfortunately, since we have not been able to reproduce the problem, we cannot test this idea.

Idea 4

This may be the only true solution, even if it is not very satisfying. We recommend that you reach out to AWS Support and raise the problem for the specific bucket. Although this worked before, we have found many times that things that accidentally worked before "break" as we upgrade different components; they really shouldn’t have worked in the first place.

  1. AWS S3 should not be responding without including a Location header. We cannot change that response.
  2. AWS SDK for Go v2 should not be filling in: Location: https://amazonaws.com/badhttpredirectlocation.
  3. AWS S3 is giving inconsistent information for your buckets. The CLI (aws s3api get-bucket-location --bucket yourbucket) gives one result but the API itself is saying that the bucket is in a different region with the X-Amz-Bucket-Region HTTP response header.

Raise these specific issues with AWS Support and see if they can adjust something with the bucket to fix the problem. S3 used to work very differently, and it is possible that through the various upgrades and migrations, some buckets were missed.

After some testing, it appears that adding provider = aws.us-west-1 to each new resource solved my issue.

resource "aws_s3_bucket_acl" "custom_bucket_name" {
  provider = aws.us-west-1

  bucket = aws_s3_bucket.custom_bucket_name.id
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "custom_bucket_name" {
  provider = aws.us-west-1

  bucket = aws_s3_bucket.custom_bucket_name.id

  versioning_configuration {
    mfa_delete = "Disabled"
    status     = "Suspended"
  }
}
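For the snippet above to work, the configuration also needs a matching aliased provider block, as in Idea 3. A minimal sketch, assuming the bucket lives in us-west-1 and the alias name matches the aws.us-west-1 references above:

```hcl
# Aliased provider pinned to the bucket's region (alias and region
# values are assumptions based on the example above)
provider "aws" {
  alias  = "us-west-1"
  region = "us-west-1"
}
```

Resources without a provider argument continue to use the default (unaliased) provider configuration.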


Idea 3 worked for me!

I figured out my error, and it’s an ID10T error.

I’ve made a gist that encapsulates a full test suite on this that can be used by the use of terraform.tfvars (gitignored in the gist). https://gist.github.com/halostatue/cf1ec2a93a455815813ac51775b13da4

The main point, shown in the Makefile, is that I was doing the import incorrectly (target import-wrong):

import-wrong:
	terraform import \
		module.debug_tfstate_bucket.aws_s3_bucket_acl.terraform \
		terraform,private || true

When I imported correctly, everything started working (target import-right):

import-right:
	terraform import \
		module.debug_tfstate_bucket.aws_s3_bucket_acl.terraform \
		debug-terraform-bucket-halostatue,private

The error that I am seeing is definitely user error; it may not be the same issue as @korporationcl’s. I think that there’s still a bug here, in that we should be getting a "no such bucket" sort of error instead of a "bad region" error.

I also think that there’s a documentation improvement that could be made: the example in the upgrade documentation for imports uses bucket = "bucket", which makes terraform import aws_s3_bucket_acl.bucket bucket,private look sensible, but if the documentation used bucket = "example-bucket", it would be clearer that the import ID is the bucket name, not the Terraform resource name.
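The clearer documentation example suggested above might look like this (resource and bucket names are illustrative):

```hcl
resource "aws_s3_bucket_acl" "bucket" {
  # "example-bucket" is the real S3 bucket name; "bucket" above is only
  # the Terraform resource label, and the two need not match
  bucket = "example-bucket"
  acl    = "private"
}
```

The corresponding import would then be terraform import aws_s3_bucket_acl.bucket example-bucket,private, making it obvious which name goes where in the import ID.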

I know this from the few times that I have done resource imports…but dealing with imports properly was the last thing on my mind while handling an unplanned major upgrade when moving to AWS provider 4.x.

Sorry for the wild goose chase on my end, but I do think there are bugs here…just not what I was seeing.

Hey guys - this is also happening to me, and my bucket is "only" 10 months old. The bucket region is eu-west-2 and I am getting exactly the same error as the OP. Let me know if I can help test this any further.

Error: error getting S3 bucket ACL (deeplink,public-read): BucketRegionError: incorrect region, the bucket is not in 'eu-west-2' region at endpoint '', bucket is in 'ap-northeast-1' region
│ 	status code: 301, request id: xxxx, host id: xxxx