terraform-provider-aws: terraform refresh fails for aws_launch_configuration with InvalidAMIID.NotFound
This issue was originally opened by @matt-deboer as hashicorp/terraform#13433. It was migrated here as part of the provider split. The original body of the issue is below.
Terraform Version
0.9.2
Affected Resource(s)
- aws_launch_configuration
Terraform Configuration Files [example]
...

## get the ami id from the latest packer build
data "aws_ami" "widget" {
  filter {
    name   = "name"
    values = ["widget-template"]
  }
  owners = ["self"]
}

resource "aws_launch_configuration" "widget" {
  ...
  image_id = "${data.aws_ami.widget.id}"
  ...
}
Error output:
Error refreshing state: 1 error(s) occurred:
* module.aws_agent.aws_launch_configuration.widget: aws_launch_configuration.widget: InvalidAMIID.NotFound: The image id '[ami-xxxxxxxx]' does not exist
status code: 400, request id: a892c167-7942-4781-8af8-8f62dc57437a
Expected Behavior
terraform refresh should be able to succeed, even when the AMI associated with the current aws_launch_configuration has been deleted.
terraform plan should show the resources (and affected dependencies) as needing change/rebuild.
Actual Behavior
terraform refresh encounters the error mentioned above
Steps to Reproduce
- Construct a Terraform configuration using an aws_launch_configuration that pulls its image_id from a data source, as in the example.
- Run terraform apply to create the resources.
- Delete the AMI (replace it with a new/updated version).
- Run terraform refresh on the stack (a command sketch follows below).
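A rough command-line sketch of those steps (the AMI ID is a placeholder, and deregistering via the AWS CLI is just one way to delete the AMI, not necessarily what the reporter did):

terraform apply                                    # create the launch configuration from the data-source AMI
aws ec2 deregister-image --image-id ami-xxxxxxxx   # delete the AMI out of band (placeholder ID)
terraform refresh                                  # now fails with InvalidAMIID.NotFound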
Important Factoids
Our typical process involves the following steps:
- Build the AMI with Packer and upload it to AWS.
- Provision ASGs using an aws_ami data source to reference the latest version of the AMI built in step 1.
- More recently, we've started cleaning up old AMIs that are no longer needed, including some of the AMIs produced in step 1, after we have uploaded a newer version of that AMI. We did confirm that exactly one version of the particular AMI is present, but none of the previous versions.
- We're able to work around this by using terraform state rm on the affected aws_launch_configuration instances (since they'll be re-created anyway); see the sketch after this list.
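A sketch of that workaround, using the resource address from the error output above (the module path is specific to the reporter's configuration):

# remove the stale launch configuration from state; the next apply re-creates it
terraform state rm module.aws_agent.aws_launch_configuration.widget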
About this issue
- State: closed
- Created 7 years ago
- Reactions: 28
- Comments: 34 (4 by maintainers)
I ran into this issue; my workaround was to delete the state for the impacted launch configuration. When I ran terraform apply, it created a new launch configuration, and I had to manually delete the old one.

I also found the same issue with Terraform v0.12.25, but I solved it with a trick: I created a local that contains the AMI ID of the image built by Packer.
Define the locals block in the root module, after the aws_ami data block and just before the AWS launch template that uses it. Then, in the launch template, set image_id to that local. This works just great.
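A minimal sketch of the kind of configuration the comment describes (the original snippets were not included here; the local name and the launch template values are illustrative, and the aws_ami data source is the one from the example above):

locals {
  # illustrative name; holds the ID of the Packer-built AMI
  packer_ami_id = data.aws_ami.widget.id
}

resource "aws_launch_template" "widget" {
  name_prefix   = "widget-"   # illustrative
  instance_type = "t3.micro"  # illustrative
  image_id      = local.packer_ami_id
}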
looks like this is still an issue; fingers crossed it’s addressed before the issue’s two year anniversary 👍
tried deleting out of state to no avail; looks like an api call is the culprit. going to try a few other ideas.
It would be nice to fix this. It is a pain to fix once encountered, especially with a large number of launch configurations. Happens as of 0.10.6.
Anyone try just deleting the affected launch template from AWS, then re-applying? This "works", in that Terraform will recreate the entire launch template if it's missing entirely … at least, on an otherwise "healthy" config, i.e. I haven't tested this on the specific target env that has this actual problem. (I wanted to have a proven solution before I did this on the problem target.)

Seeing the same issue still with:
Does anyone have updates on this issue?
Still occurring with:
I set up a clean VPC and tried to create/modify/delete each aws_launch_configuration and AMI, but I still failed to reproduce the issue…