terraform-local: bug: when configuring tflocal for remote s3 state, live AWS is used
Is there an existing issue for this?
- I have searched the existing issues
Current Behavior
When setting up remote state using DynamoDB and S3, LocalStack currently behaves as follows:
- Create a bucket and a DynamoDB table, either using `awslocal` or `tflocal` (see the `awslocal` sketch after this list).
- Write a `terraform.backend` block in a simple Terraform configuration to reference the backend bucket and table:
```hcl
terraform {
  backend "s3" {
    bucket         = "backend-bucket-iac-terraform"
    key            = "terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-state-locking"

    # s3_use_path_style           = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    # skip_requesting_account_id  = true
  }
}
```
- Now run `tflocal init`. This will FAIL, saying that the bucket was not found.
- Now create the corresponding backend bucket and DynamoDB table on *live AWS*, with the same names and regions as you used for LocalStack.
- Without changing your Terraform configuration, run `tflocal init` again. IT WILL SUCCEED, and moreover, the live S3 bucket on AWS will contain your remote state (including the fake ARN refs to the objects created on LocalStack).
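For reference, the LocalStack-side bucket and lock table from the first step can be created with `awslocal` along these lines (a sketch: names and region mirror the backend block above, and the table schema is just the string `LockID` hash key that Terraform's S3 backend locking expects):

```shell
# Create the state bucket on LocalStack (region matches the backend block)
awslocal s3 mb s3://backend-bucket-iac-terraform --region us-west-2

# Create the lock table; Terraform's S3 backend locks on a string key named LockID
awslocal dynamodb create-table \
  --table-name terraform-state-locking \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-west-2
```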
This behavior is somewhat unexpected.
I have a repo containing the demonstration files for anyone who'd like to try this out for themselves.
Expected Behavior
I'd expect `tflocal` to look for the bucket and DynamoDB table locally, on the LocalStack mock, and not on live AWS.
How are you starting LocalStack?
With the `localstack` script
Steps To Reproduce
How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`):

```shell
$ localstack start
```

Client commands (e.g., AWS SDK code snippet, or sequence of `awslocal` commands):
1. Download my demo repo
2. Follow the instructions in the repo for creating the bucket and DynamoDB table set up in the `remote-state` directory.
3. `cd ../dyn1`
4. `tflocal init`
5. Observe the error returned; tflocal did not find the remote state bucket:
```
Initializing the backend...
╷
│ Error: Failed to get existing workspaces: S3 bucket does not exist.
│
│ The referenced S3 bucket must have been previously created. If the S3 bucket
│ was created within the last minute, please wait for a minute or two and try
│ again.
│
│ Error: NoSuchBucket: The specified bucket does not exist
│ 	status code: 404, request id: 7WRR8Q21Y5DTWVAV, host id: Zb0SMz3HOprcP8kDQo9QRMJ2YtrbjFqUW0rtWJIWqqnlykGRS1yaZTmZAfS/5aiGVyK9BLbIaV8=
│
│
╵
```
6. Now create a bucket with the same name and region on *live AWS*.
7. Run `tflocal init` again, and observe that init works, and you can apply the configuration as well. The resources will be created on LocalStack, but the bucket and DynamoDB table on *live AWS* will be updated with the remote state.
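A quick way to confirm where the state actually landed is to list the bucket with both CLIs (assuming default profiles: `awslocal` talks to LocalStack, plain `aws` to live AWS):

```shell
# LocalStack bucket — no terraform.tfstate here, even though the resources exist locally
awslocal s3 ls s3://backend-bucket-iac-terraform/

# Live AWS bucket — this is where the state object shows up
aws s3 ls s3://backend-bucket-iac-terraform/
```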
Environment
- OS: macOS Big Sur, using Colima for Docker.
- LocalStack: 2.1.0
- AWS CLI v2 is configured for us-west-2 with a valid key and secret.
Anything else?
No response
About this issue
- State: closed
- Created a year ago
- Comments: 16 (2 by maintainers)
Sorry, I forgot to mention the documentation earlier. Yes, this is a workaround, and I will move this issue to https://github.com/localstack/terraform-local.
As `tflocal` is a wrapper, you can create your configuration file using the override technique described at https://developer.hashicorp.com/terraform/language/files/override.

@whummer The new version works for my file. Looks good.
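For illustration, an override file along the lines suggested above might look like the following. This is only a sketch: the filename is arbitrary (Terraform picks up any `*_override.tf`), the `endpoints` map requires Terraform 1.6+ (older releases used the top-level `endpoint`/`dynamodb_endpoint` arguments instead), and since a `backend` block in an override file takes precedence over the original, the full backend settings are repeated here:

```hcl
# localstack_override.tf — hypothetical filename; any *_override.tf is merged in
terraform {
  backend "s3" {
    bucket         = "backend-bucket-iac-terraform"
    key            = "terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-state-locking"

    # Point the backend at LocalStack's edge port instead of live AWS
    endpoints = {
      s3       = "http://localhost:4566"
      dynamodb = "http://localhost:4566"
    }

    # Dummy credentials and path-style addressing for the localhost endpoint
    access_key                  = "test"
    secret_key                  = "test"
    use_path_style              = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_requesting_account_id  = true
  }
}
```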
@whummer: definitely a problem with the new code. On `tflocal init` with my files (see above for the link to the repo), I get Python errors. You should be able to reproduce this easily using my steps above.