terraform-provider-aws: ValidationError: Trying to update too many Load Balancers/Target Groups at once. The limit is 10

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave “+1” or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

terraform -v
Terraform v0.12.18

  • provider.aws v2.42.0

Affected Resource(s)

  • aws_autoscaling_attachment

Terraform Configuration Files

provider "aws" {
  region  = "us-east-1"
  version = "~>2.42.0"
}

variable "ami_id" {
    type = string
    default = "ami-055c10ae78f3a58a2"
    #default = "ami-028be67c2aa2f1ce1"
}

variable "vpc_zone_identifier" {
  default = ["subnet-04683ec0b1b1992fc"] #my test subnet
}

variable "vpc_id" {
  default = "vpc-031156f8fcca6f558" #my test vpc
}

variable "ports" {
  type = list(string)
  default = [
    "80",
    "81",
    "82",
    "83",
    "84",
    "85",
    "86",
    "87",
    "88",
    "89",
    "90",
    "91",
    "91",
    "93",
    "94",
    "95",
    "96",
    "97",
    "98",
    "99",
  ]
}


resource "aws_launch_configuration" "launch_config" {
  name_prefix                 = "lc-teste"
  image_id                    = var.ami_id
  instance_type               = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "as_group" {
  name                      = "${aws_launch_configuration.launch_config.name}-asg"
  launch_configuration      = aws_launch_configuration.launch_config.name
  max_size                  = "1"
  min_size                  = "1"
  desired_capacity          = "1"
  vpc_zone_identifier       = var.vpc_zone_identifier
}


resource "aws_lb" "lb" {
  name                             = "load-balance"
  subnets                          = var.vpc_zone_identifier
  load_balancer_type               = "network"
}


resource "aws_lb_target_group" "lb_target_group" {
  count                = length(var.ports)
  port                 = var.ports[count.index]
  vpc_id               = var.vpc_id
  protocol             = "TCP"
  target_type          = "instance"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_lb_listener" "lb_listener" {
  count             = length(var.ports)
  load_balancer_arn = aws_lb.lb.arn
  port              = var.ports[count.index]
  protocol          = "TCP"

  default_action {
    target_group_arn = aws_lb_target_group.lb_target_group[count.index].arn
    type             = "forward"
  }

  lifecycle {
    create_before_destroy = false
  }
}

resource "aws_autoscaling_attachment" "asg_attachment" {
  count                  = length(var.ports)
  autoscaling_group_name = aws_autoscaling_group.as_group.name
  alb_target_group_arn   = aws_lb_target_group.lb_target_group[count.index].arn
}

Expected Behavior

Terraform should create the NLB, the 20 listeners, the 20 target groups, the Auto Scaling group, the launch configuration, and the 20 autoscaling attachments.

Actual Behavior

Terraform does not complete the apply:
Error: Failure attaching AutoScaling Group lc-teste20191214225150423700000009-asg with ALB Target Group: arn:aws:elasticloadbalancing:us-east-1:106431551699:targetgroup/tf-2019121422515268710000000a/080dbbc407f8c918: ValidationError: Trying to update too many Load Balancers/Target Groups at once. The limit is 10
	status code: 400, request id: 753608db-1ec4-11ea-b6ba-67bd3222d77c

  on main.tf line 102, in resource "aws_autoscaling_attachment" "asg_attachment":
 102: resource "aws_autoscaling_attachment" "asg_attachment" {

Error: Failure attaching AutoScaling Group lc-teste20191214225150423700000009-asg with ALB Target Group: arn:aws:elasticloadbalancing:us-east-1:106431551699:targetgroup/tf-20191214225149124400000005/bcb4ef6045a6a129: ValidationError: Trying to update too many Load Balancers/Target Groups at once. The limit is 10
	status code: 400, request id: 7652ea9a-1ec4-11ea-a375-49e242c6fa68

  on main.tf line 102, in resource "aws_autoscaling_attachment" "asg_attachment":
 102: resource "aws_autoscaling_attachment" "asg_attachment" {

Steps to Reproduce

  1. terraform apply
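
One commonly suggested mitigation (not part of the original report; -parallelism is a standard Terraform CLI flag) is to lower apply concurrency so fewer attach calls are in flight against the Auto Scaling group at once:

terraform apply -parallelism=5

This does not change the configuration, only how many resource operations Terraform runs concurrently (the default is 10, the same number as the AWS-side limit in the error).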


About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 9
  • Comments: 21 (4 by maintainers)

Most upvoted comments

For me, what worked was adding a count-dependent wait on creation/destruction (I know the solution looks ugly, but at least I didn't have to split the list into chunks, nor did I have to add a time_sleep resource).

resource "aws_autoscaling_attachment" "my_asg_attachment" {
  count = length(local.my_local_list)

  autoscaling_group_name = var.workers_asg_name
  lb_target_group_arn    = aws_lb_target_group.my_nlb_tg[count.index].arn

  provisioner "local-exec" {
    interpreter = ["bash", "-c"]
    command     = "echo \"waiting for $(( 30 + 2 * ${count.index} )) seconds .. \" && sleep $(( 30 + 2 * ${count.index} ))"
  }
  provisioner "local-exec" {
    when = destroy
    interpreter = ["bash", "-c"]
    command     = "echo \"waiting for $(( 30 + 2 * ${count.index} )) seconds .. \" && sleep $(( 30 + 2 * ${count.index} ))"
  }
}

Note that my list creates (and may destroy) ~21 attachments. For bigger lists, the wait/sleep values might need to be tweaked.
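
For reference, the time_sleep alternative mentioned above (which this commenter avoided) could be sketched roughly as follows, using the hashicorp/time provider's time_sleep resource; the resource names and durations here are illustrative, not from the original thread:

resource "time_sleep" "stagger" {
  count = length(var.ports)

  # Each instance waits 3 * index seconds before it is considered created,
  # staggering when the attachment that consumes it can start.
  create_duration = "${3 * count.index}s"

  triggers = {
    target_group_arn = aws_lb_target_group.lb_target_group[count.index].arn
  }
}

resource "aws_autoscaling_attachment" "asg_attachment" {
  count = length(var.ports)

  autoscaling_group_name = aws_autoscaling_group.as_group.name
  # Reading the ARN back through time_sleep's triggers gives each attachment
  # an implicit per-index dependency on its own sleep instance.
  lb_target_group_arn    = time_sleep.stagger[count.index].triggers["target_group_arn"]
}

Routing the ARN through triggers is the trick here: it creates the per-index dependency that a plain depends_on cannot express.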

@chernetskyi thanks for sharing +1… what if I have count set and my attachments are iterated over dynamically? Is there a variant of your solution that would work with count? As far as I know, depends_on can't take something like aws_autoscaling_attachment.my_thing[count.index].

Then you should split the attachments across multiple resources, each limited to 10 via count, and have each resource depend_on the one holding the previous 10 attachments.
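
A minimal sketch of that batching approach, assuming the 20-port configuration from the issue (so chunklist yields exactly two batches of 10) and the newer lb_target_group_arn argument used in the comment above; the batch_0/batch_1 names are illustrative:

locals {
  # Split the 20 target group ARNs into batches of at most 10.
  tg_batches = chunklist(aws_lb_target_group.lb_target_group[*].arn, 10)
}

resource "aws_autoscaling_attachment" "batch_0" {
  count = length(local.tg_batches[0])

  autoscaling_group_name = aws_autoscaling_group.as_group.name
  lb_target_group_arn    = local.tg_batches[0][count.index]
}

resource "aws_autoscaling_attachment" "batch_1" {
  count = length(local.tg_batches[1])

  autoscaling_group_name = aws_autoscaling_group.as_group.name
  lb_target_group_arn    = local.tg_batches[1][count.index]

  # Serializing the batches keeps at most 10 attach calls in flight
  # against the Auto Scaling group at any time.
  depends_on = [aws_autoscaling_attachment.batch_0]
}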

Still happening in v4.18.0.

Hey y’all 👋 Thank you for taking the time to file this issue and for the additional discussion around it. Given that there’s been a number of AWS provider releases since the last update, can anyone confirm whether you’re still experiencing this issue?