terraform-provider-aws: terraform refresh fails for aws_launch_configuration with InvalidAMIID.NotFound

This issue was originally opened by @matt-deboer as hashicorp/terraform#13433. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

0.9.2

Affected Resource(s)

  • aws_launch_configuration

Terraform Configuration Files [example]

...
# Get the AMI ID from the latest Packer build
data "aws_ami" "widget" {
  filter {
    name   = "name"
    values = ["widget-template"]
  }
  owners = ["self"]
}

resource "aws_launch_configuration" "widget" {
  ...
  image_id = "${data.aws_ami.widget.id}"
  ...
}

Error output:

Error refreshing state: 1 error(s) occurred:

* module.aws_agent.aws_launch_configuration.widget: aws_launch_configuration.widget: InvalidAMIID.NotFound: The image id '[ami-xxxxxxxx]' does not exist
	status code: 400, request id: a892c167-7942-4781-8af8-8f62dc57437a

Expected Behavior

terraform refresh should be able to succeed, even when the AMI associated with the current aws_launch_configuration has been deleted. terraform plan should show the resources (and affected dependencies) as needing change/rebuild.

Actual Behavior

terraform refresh fails with the error shown above.

Steps to Reproduce

  1. Construct a Terraform configuration in which an aws_launch_configuration pulls its image_id from a data source, as in the example above (a minimal sketch follows this list).
  2. Run terraform apply to create the resources.
  3. Delete the AMI (replacing it with a new/updated version).
  4. Run terraform refresh on the stack.
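
For step 1, a minimal launch configuration sufficient to reproduce might look like the following sketch; name_prefix and instance_type are placeholder values not taken from the original report, which elided the resource body:

resource "aws_launch_configuration" "widget" {
  name_prefix   = "widget-"
  image_id      = "${data.aws_ami.widget.id}"
  instance_type = "t2.micro"
}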

Important Factoids

Our typical process involves the following steps:

  1. Build an AMI with Packer and upload it to AWS.
  2. Provision ASGs, using an aws_ami data source to reference the latest version of the AMI built in step 1.
  3. More recently, we’ve started cleaning up old AMIs that are no longer needed, including some of the AMIs produced in step 1, after we have uploaded a newer version of that AMI. We did confirm that exactly one version of the particular AMI is present, and none of the previous versions.
  4. We’re able to work around this by running terraform state rm on the affected aws_launch_configuration instances, since they’ll be re-created anyway (see the command after this list).
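
For reference, that workaround amounts to a command like the following, using the resource address from the error output above:

terraform state rm module.aws_agent.aws_launch_configuration.widget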

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 28
  • Comments: 34 (4 by maintainers)

Most upvoted comments

I ran into this issue; my workaround was to delete the state for the impacted launch configuration:

terraform state rm module.asg.aws_launch_configuration.widget

When I ran terraform apply, it created a new launch configuration, and I had to manually delete the old one.

I also found the same issue with Terraform v0.12.25

But I worked around the issue with a trick: I created a local value that holds the AMI ID of the image built by Packer, like this:

locals {
  ami_ID = data.aws_ami.<AMI_resource_name>.id
}

Define the locals block in the root module (or the same file), after the aws_ami data block and before the launch template that uses it (a combined sketch follows below).

Then, in the launch template, set image_id to the local value:


resource "aws_launch_template" "ALT" {
  name = "NEW-ALT"
  image_id      = local.ami_ID
............
}


And this works just great.
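
Putting the pieces together, the ordering the commenter describes looks roughly like this; a sketch reusing the widget data source from the original report, not the commenter’s exact code:

data "aws_ami" "widget" {
  filter {
    name   = "name"
    values = ["widget-template"]
  }
  owners = ["self"]
}

locals {
  # Resolve the Packer-built AMI once; reference the local everywhere else.
  ami_ID = data.aws_ami.widget.id
}

resource "aws_launch_template" "ALT" {
  name     = "NEW-ALT"
  image_id = local.ami_ID
}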

looks like this is still an issue; fingers crossed it’s addressed before the issue’s two year anniversary 👍

tried deleting it out of state to no avail; looks like an API call is the culprit. going to try a few other ideas.

it would be nice to fix this. it is a pain to resolve once encountered, especially with a large number of launch configurations. still happens as of 0.10.6

Anyone try just deleting the affected launch template from AWS, then re-applying? This “works”, in that Terraform will recreate the entire launch template if it’s missing entirely … at least, on an otherwise “healthy” config, i.e. I haven’t tested this on the specific target env that has this actual problem. (I wanted to have a proven solution before I did this on the problem target.)

Seeing same issue still with:

Terraform v0.12.24
+ provider.aws v2.61.0

Does anyone have updates on this issue?

Still occurring with:

Terraform v0.12.13
+ provider.aws v2.29.0

I set up a clean VPC and tried creating, modifying, and deleting each aws_launch_configuration and AMI, but I still failed to reproduce the issue…