terraform-provider-aws: Cannot delete launch configuration because it is attached to AutoScalingGroup

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

terraform -v
Terraform v0.11.13
+ provider.aws v2.7.0
+ provider.null v2.1.1
+ provider.random v2.1.1

Affected Resource(s)

  • aws_autoscaling_group
  • aws_launch_configuration

Terraform Configuration Files

directory layout

.
├── main.tf
├── modules
│   ├── app
│   └── terraform-aws-autoscaling

modules/app/main.tf

variable "private_subnets" {
  type    = "list"
}

variable "name" {}
variable "userdata" {}

module "app_asg" {
  source = "../terraform-aws-autoscaling"

  name          = "${var.name}"
  lc_name       = "${var.name}"
  asg_name      = "${var.name}"
  image_id      = "ami-0e219142c0bee4a6e"
  instance_type = "t2.micro"

  root_block_device = [
    {
      volume_size           = "10"
      volume_type           = "gp2"
      delete_on_termination = true
    },
  ]

  user_data                    = "${var.userdata}"
  vpc_zone_identifier          = ["${var.private_subnets}"]
  health_check_type            = "EC2"
  min_size                     = 1
  max_size                     = 1
  desired_capacity             = 1
  wait_for_capacity_timeout    = 0
  recreate_asg_when_lc_changes = true
}

main.tf

First apply with this configuration:

provider "aws" {
  region = "eu-west-1"
}

module "app-001" {
  source   = "modules/app"
  name     = "app-001"
  userdata = "echo hello there version 1"
}

Second apply with this configuration:

provider "aws" {
  region = "eu-west-1"
}

module "app-001" {
  source   = "modules/app"
  name     = "app-001"
  userdata = "echo hello there version 2" ## <- just changed this
}

Third apply with this configuration:

provider "aws" {
  region = "eu-west-1"
}

# module "app-001" {
#   source   = "modules/app"
#   name     = "app-001"
#   userdata = "echo hello there version 2" ## <- just changed this
# }

Debug Output

Panic Output

No

Expected Behavior

Terraform should apply the configuration and delete the commented-out resources successfully.

Actual Behavior

Got an error:

module.app-001.app_asg.aws_autoscaling_group.this: Still destroying... (ID: app-001-tight-bengal-20190430102834949100000002, 1m0s elapsed)
module.app-001.app_asg.aws_autoscaling_group.this: Still destroying... (ID: app-001-tight-bengal-20190430102834949100000002, 1m10s elapsed)
module.app-001.module.app_asg.aws_autoscaling_group.this: Destruction complete after 1m15s

Error: Error applying plan:

1 error(s) occurred:

* module.app-001.module.app_asg.aws_launch_configuration.this (destroy): 1 error(s) occurred:

* aws_launch_configuration.this: error deleting Autoscaling Launch Configuration (app-001-20190430102834111500000001): ResourceInUse: Cannot delete launch configuration app-001-20190430102834111500000001 because it is attached to AutoScalingGroup app-001-tight-bengal-20190430102834949100000002
	status code: 400, request id: 2f89b9c4-6b33-11e9-8d9d-711c4d69f590

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Steps to Reproduce

  1. terraform apply with the first configuration
  2. change the user data as shown in the second configuration
  3. terraform apply
  4. comment out the module as shown in the third configuration, so it is removed
  5. terraform apply

Important Factoids

References

Opening a new issue, as the original one was closed.

Most upvoted comments

I ran into this error as well and updated the LC/ASG resources as suggested by these Terraform docs.

Summary:

  1. Add lifecycle block with create_before_destroy = true in both LC and ASG resources
  2. For LC resource, use name_prefix instead of name

Terraform and AWS provider versions:

  • Terraform v0.12.3
  • AWS 2.16

This is a snippet from my working config in case anyone is interested (the create_before_destroy variables are set to true):

######
# Main Cluster
######

resource "aws_ecs_cluster" "main" {
  name = var.main_cluster_name
}

## Launch Configuration / Auto Scaling Group

resource "aws_launch_configuration" "main" {
  associate_public_ip_address = var.lc_main_associate_public_ip_address
  enable_monitoring           = var.lc_main_enable_monitoring
  iam_instance_profile        = var.lc_main_iam_instance_profile
  image_id                    = data.aws_ami.amazon_ecs_v2.id
  instance_type               = var.lc_main_instance_type
  key_name                    = var.lc_main_keypair_name
  name_prefix                 = var.lc_main_name
  security_groups             = var.lc_main_security_groups

  user_data = templatefile("${path.module}/templates/ecs_container_instance_userdata.tmpl", {
    cluster_name = var.lc_main_cluster_name,
    efs_id       = var.lc_efs_id,
    region       = data.aws_region.current
  })

  lifecycle {
    create_before_destroy = var.lc_main_create_before_destroy
  }
}

resource "aws_autoscaling_group" "main" {
  name                 = var.asg_main_name
  launch_configuration = aws_launch_configuration.main.name
  vpc_zone_identifier  = var.asg_private_subnet_ids

  desired_capacity = var.asg_main_desired_capacity
  max_size         = var.asg_main_maximum_size
  min_size         = var.asg_main_minimum_size

  lifecycle {
    create_before_destroy = var.asg_main_create_before_destroy
  }
}

The punch line of this bug: when interpolated, aws_launch_configuration's name attribute breaks the aws_autoscaling_group dependency chain.

Any time aws_launch_configuration changes, it must be recreated (new resource required). Since aws_launch_configuration is immutable and its names must be unique per region, interpolating name into aws_autoscaling_group should always force a new resource whenever the launch configuration is destroyed and recreated.

The workaround is to use name_prefix in aws_launch_configuration instead: aws_autoscaling_group recognizes the interpolated change and can update in place without the aws_autoscaling_group being destroyed (preserving running instances in the process). EDIT: and also add create_before_destroy = true.
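For clarity, here is a minimal sketch of that workaround applied to a bare LC/ASG pair, in the 0.11-style syntax of the original report (resource names and values are illustrative, borrowed from the configuration above rather than from the upstream module):

resource "aws_launch_configuration" "this" {
  # name_prefix instead of name, so the replacement LC can coexist with the old one
  name_prefix   = "app-001-"
  image_id      = "ami-0e219142c0bee4a6e"
  instance_type = "t2.micro"

  lifecycle {
    # create the new LC before the old one is destroyed
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "this" {
  # interpolating the LC name lets the new LC roll into the ASG as an in-place update
  name_prefix          = "app-001-"
  launch_configuration = "${aws_launch_configuration.this.name}"
  vpc_zone_identifier  = ["${var.private_subnets}"]
  min_size             = 1
  max_size             = 1

  lifecycle {
    create_before_destroy = true
  }
}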

Still an issue two years later…

We've started migrating from Launch Configurations to Launch Templates. We have to run TF twice to get the Launch Configurations to delete. I suspect these issues are related, but I can open a new issue if it's requested.

The TL;DR would be:

  1. make vpc
  2. make subnet
  3. make Launch Config
  4. make ASG
  5. attach lc to asg
  6. run TF APPLY
  7. make Launch Template
  8. delete lc
  9. attach LT to ASG
  10. run TF APPLY, receive error:
* aws_launch_configuration.12t_launch_configuration (deposed #0): 1 error(s) occurred:
* aws_launch_configuration.12t_launch_configuration (deposed #0): ResourceInUse: Cannot delete launch configuration terraform-20190626061640500900000008 because it is attached to AutoScalingGroup 12t_staging_asg
status code: 400, request id: b5315459-996b-11e9-8016-99e5310fc359

Is there a workaround for this?
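For reference, the launch-template equivalent of the name_prefix / create_before_destroy pattern described above might look like the following sketch (resource names, AMI, and variables are hypothetical, reused from the examples earlier in this issue):

resource "aws_launch_template" "app" {
  # hypothetical names/values; same create_before_destroy idea as for launch configurations
  name_prefix   = "app-"
  image_id      = "ami-0e219142c0bee4a6e"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app" {
  name_prefix         = "app-"
  vpc_zone_identifier = ["${var.private_subnets}"]
  min_size            = 1
  max_size            = 1

  # point the ASG at the launch template instead of a launch configuration
  launch_template {
    id      = "${aws_launch_template.app.id}"
    version = "$Latest"
  }

  lifecycle {
    create_before_destroy = true
  }
}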