terraform-provider-aws: aws_lambda_function ResourceConflictException due to a concurrent update operation

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave “+1” or “me too” comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

Terraform v0.11.7

  • provider.aws v1.26.0
  • provider.template v1.0.0

Affected Resource(s)

aws_lambda_function

Terraform Configuration Files

This is an extract of my main Terraform file; it shows all the resources relevant to the issue:

resource "aws_lambda_function" "voucher-validation-lambda" {
  filename = "${var.artifact}"
  function_name = "${var.prefix}voucher-validation"
  publish = true
  role = "${lookup(var.s3_lambda_role, var.target)}"
  handler = "vouchers.validation.VoucherValidationHandler"
  source_code_hash = "${var.artifact}"
  runtime = "java8"
  memory_size = "256"
  timeout = "60"

  vpc_config {
    subnet_ids = ["${data.terraform_remote_state.global.subnet-business-services-1a.id}", "${data.terraform_remote_state.global.subnet-business-services-1b.id}"]
    security_group_ids = ["${data.aws_security_group.default.id}"]
  }
}

resource "aws_lambda_permission" "voucher-validation-lambda-permission" {
  depends_on = [
    "aws_lambda_function.voucher-validation-lambda",
    "aws_api_gateway_rest_api.voucher-validation-api",
    "aws_api_gateway_method.voucher-validation-api-post"
  ]
  statement_id = "AllowExecutionFromAPIGateway"
  action = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.voucher-validation-lambda.function_name}"
  principal = "apigateway.amazonaws.com"
}

resource "aws_lambda_permission" "voucher-validation-lambda-permission-method" {
  depends_on = [
    "aws_lambda_function.voucher-validation-lambda",
    "aws_lambda_permission.voucher-validation-lambda-permission",
    "aws_api_gateway_rest_api.voucher-validation-api",
    "aws_api_gateway_method.voucher-validation-api-post"
  ]
  statement_id = "AllowExecutionFromAPIGatewayMethod"
  action = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.voucher-validation-lambda.function_name}"
  principal = "apigateway.amazonaws.com"
  source_arn = "arn:aws:execute-api:${var.region}:${lookup(var.aws_account_id, var.target)}:${aws_api_gateway_rest_api.voucher-validation-api.id}/*/*"
}

resource "aws_lambda_permission" "voucher-validation-lambda-cloudwatch-permission" {
  depends_on = [
    "aws_lambda_function.voucher-validation-lambda",
    "aws_lambda_permission.voucher-validation-lambda-permission-method",
    "aws_cloudwatch_event_target.keep_lambda_hot",
    "aws_cloudwatch_event_rule.every_15_minutes"
  ]
  action = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.voucher-validation-lambda.function_name}"
  principal = "events.amazonaws.com"
  statement_id = "AllowExecutionFromCloudWatch"
  source_arn = "${aws_cloudwatch_event_rule.every_15_minutes.arn}"
}

resource "aws_cloudwatch_event_rule" "every_15_minutes" {
  name = "keep-voucher-validation-hot-${uuid()}"
  is_enabled = "true"
  schedule_expression = "rate(15 minutes)"
}

resource "aws_cloudwatch_event_target" "keep_lambda_hot" {
  depends_on = ["aws_lambda_function.voucher-validation-lambda"]
  rule = "${aws_cloudwatch_event_rule.every_15_minutes.name}"
  target_id = "voucher-validation-lambda"
  arn = "${aws_lambda_function.voucher-validation-lambda.arn}"
  input = "{\"keepLambdaHot\": true}"
}

Many of the depends_on statements were added later in an attempt to work around the issue, but they made no real difference.
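As an aside (unrelated to the 409 itself): in the config above, source_code_hash is set to the artifact filename, but the provider expects a base64-encoded SHA-256 of the deployment package. A sketch of the expected Terraform 0.11 pattern, using the same variable as above:

```hcl
resource "aws_lambda_function" "voucher-validation-lambda" {
  filename = "${var.artifact}"

  # source_code_hash should be a base64-encoded SHA-256 of the zip,
  # so Terraform detects code changes by content rather than by filename.
  source_code_hash = "${base64sha256(file(var.artifact))}"

  # ... remaining arguments as in the config above ...
}
```

This doesn't cause the concurrent-update error, but it explains the odd source_code_hash diff visible in the debug output below.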

Debug Output

data.terraform_remote_state.global: Refreshing state...
data.terraform_remote_state.logstash: Refreshing state...
aws_cloudwatch_event_rule.every_15_minutes: Refreshing state... (ID: keep-voucher-validation-hot-34c34825-b9b6-6c64-df43-3db1a3f61432)
aws_api_gateway_rest_api.voucher-validation-api: Refreshing state... (ID: ka71rldd07)
aws_api_gateway_resource.voucher-validation-api-resource: Refreshing state... (ID: zn8sfr)
data.aws_security_group.default: Refreshing state...
aws_api_gateway_method.voucher-validation-api-post: Refreshing state... (ID: agm-ka71rldd07-zn8sfr-POST)
aws_lambda_function.voucher-validation-lambda: Refreshing state... (ID: voucher-validation)
aws_api_gateway_integration.voucher-validation-integration: Refreshing state... (ID: agi-ka71rldd07-zn8sfr-POST)
aws_lambda_permission.voucher-validation-lambda-permission: Refreshing state... (ID: AllowExecutionFromAPIGateway)
aws_cloudwatch_event_target.keep_lambda_hot: Refreshing state... (ID: keep-voucher-validation-hot-34c34825-b9...3db1a3f61432-voucher-validation-lambda)
aws_lambda_permission.voucher-validation-lambda-permission-method: Refreshing state... (ID: AllowExecutionFromAPIGatewayMethod)
aws_api_gateway_deployment.voucher-validation-api-deployment: Refreshing state... (ID: k1311i)
aws_lambda_permission.voucher-validation-lambda-cloudwatch-permission: Refreshing state... (ID: AllowExecutionFromCloudWatch)
aws_api_gateway_base_path_mapping.voucher-validation-api-mapping: Refreshing state... (ID: XXX/voucher-validation)
aws_lambda_permission.voucher-validation-lambda-cloudwatch-permission: Destroying... (ID: AllowExecutionFromCloudWatch)
aws_lambda_function.voucher-validation-lambda: Modifying... (ID: voucher-validation)
  filename:         "../voucher-validation-build-20180711-145700.zip" => "../voucher-validation-build-20180711-152240.zip"
  last_modified:    "2018-07-11T13:01:17.858+0000" => "<computed>"
  qualified_arn:    "" => "<computed>"
  source_code_hash: "hmDwZyNt9s62X0Bgihg4sItVG620pssjYblnJykT97c=" => "../voucher-validation-build-20180711-152240.zip"
  version:          "39" => "<computed>"
aws_lambda_permission.voucher-validation-lambda-cloudwatch-permission: Destruction complete after 0s
aws_cloudwatch_event_target.keep_lambda_hot: Destroying... (ID: keep-voucher-validation-hot-34c34825-b9...3db1a3f61432-voucher-validation-lambda)
aws_cloudwatch_event_target.keep_lambda_hot: Destruction complete after 0s
aws_cloudwatch_event_rule.every_15_minutes: Destroying... (ID: keep-voucher-validation-hot-34c34825-b9b6-6c64-df43-3db1a3f61432)
aws_cloudwatch_event_rule.every_15_minutes: Destruction complete after 0s
aws_cloudwatch_event_rule.every_15_minutes: Creating...
  arn:                 "" => "<computed>"
  is_enabled:          "" => "true"
  name:                "" => "keep-voucher-validation-hot-4521cd11-fe17-bcae-f741-9a9b2e167cec"
  schedule_expression: "" => "rate(15 minutes)"
aws_cloudwatch_event_rule.every_15_minutes: Creation complete after 1s (ID: keep-voucher-validation-hot-4521cd11-fe17-bcae-f741-9a9b2e167cec)

Error: Error applying plan:

1 error(s) occurred:

* aws_lambda_function.voucher-validation-lambda: 1 error(s) occurred:

* aws_lambda_function.voucher-validation-lambda: Error modifying Lambda Function Configuration voucher-validation: ResourceConflictException: The function could not be updated due to a concurrent update operation.
	status code: 409, request id: 9280e5d8-850d-11e8-a03f-5d871c0e8cce

Expected Behavior

Should modify the Lambda function with the updated source code without crashing.

Actual Behavior

Terraform throws an error; the apply only succeeds on the second try.

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 47
  • Comments: 17 (6 by maintainers)

Most upvoted comments

I have found that when Terraform tries to create two aws_lambda_permission resources for the same function concurrently, it fails with the error: Error modifying Lambda Function Configuration XXXXX: ResourceConflictException: The function could not be updated due to a concurrent update operation.

I found that I had the source_arn of my aws_lambda_permission set to my aws_api_gateway_deployment. This caused both aws_lambda_permission resources to be destroyed and recreated on every API Gateway deployment. The solution was to point source_arn at the aws_api_gateway_rest_api instead, so these permissions aren't churned each time.
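A minimal sketch of that fix (resource and variable names here are hypothetical): build source_arn from the rest API's ID, which is stable across deployments, rather than referencing the deployment resource:

```hcl
resource "aws_lambda_permission" "api" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.example.function_name}"
  principal     = "apigateway.amazonaws.com"

  # Reference the rest API, not the aws_api_gateway_deployment, so this
  # permission is not destroyed and recreated on every redeploy.
  source_arn = "arn:aws:execute-api:${var.region}:${var.account_id}:${aws_api_gateway_rest_api.example.id}/*/*"
}
```

Because the permission no longer depends on the deployment, redeploying the API leaves it untouched, avoiding the concurrent Lambda modifications entirely.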

The Terraform AWS provider should probably account for the fact that no more than one Lambda permission can be modified at a time, however.

OK, I’ve figured out what’s happening here based on a comment here: AWS has some sort of limit on how many concurrent modifications you can make to a Lambda function. In our code, we had a bunch of resources like this:

resource "aws_lambda_permission" "foo" {
  statement_id  = "foo"
  action        = "lambda:InvokeFunction"
  function_name = "${var.lambda_function_name}"
  principal     = "apigateway.amazonaws.com"
}

resource "aws_lambda_permission" "bar" {
  statement_id  = "bar"
  action        = "lambda:InvokeFunction"
  function_name = "${var.lambda_function_name}"
  principal     = "apigateway.amazonaws.com"
}

resource "aws_lambda_permission" "baz" {
  statement_id  = "baz"
  action        = "lambda:InvokeFunction"
  function_name = "${var.lambda_function_name}"
  principal     = "apigateway.amazonaws.com"
}

It turns out that each of these modifies the Lambda function named in the function_name parameter, and by default Terraform applies these modifications concurrently, which triggers the error. The workaround for now is ugly: force Terraform to make the changes sequentially by daisy-chaining depends_on:

resource "aws_lambda_permission" "foo" {
  statement_id  = "foo"
  action        = "lambda:InvokeFunction"
  function_name = "${var.lambda_function_name}"
  principal     = "apigateway.amazonaws.com"
}

resource "aws_lambda_permission" "bar" {
  statement_id  = "bar"
  action        = "lambda:InvokeFunction"
  function_name = "${var.lambda_function_name}"
  principal     = "apigateway.amazonaws.com"

  depends_on = ["aws_lambda_permission.foo"]
}

resource "aws_lambda_permission" "baz" {
  statement_id  = "baz"
  action        = "lambda:InvokeFunction"
  function_name = "${var.lambda_function_name}"
  principal     = "apigateway.amazonaws.com"

  depends_on = ["aws_lambda_permission.bar"]
}

And today the bug is back. The depends_on fix I mentioned earlier worked reliably for a few days and now, despite no significant code changes, I get the concurrent update operation error every single time.

I’ve now resorted to a crappy workaround: run apply with the -parallelism=1 flag 😢

Adding a create_before_destroy lifecycle rule to my aws_lambda_permission resources seems to have solved this bug for me. The deployment gets updated, then the Lambda function code is updated, then the new Lambda permission is created and the old one destroyed.
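A minimal sketch of that lifecycle rule (resource names hypothetical), as described in the comment above:

```hcl
resource "aws_lambda_permission" "api" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.example.function_name}"
  principal     = "apigateway.amazonaws.com"

  lifecycle {
    # Create the replacement permission before destroying the old one,
    # which reorders the operations so they no longer race with the
    # function's own update.
    create_before_destroy = true
  }
}
```

Per the comment, the resulting order is: the deployment is updated, then the Lambda function code, then the new permission is created and the old one destroyed.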