terraform-provider-aws: force_new_deployment argument for aws_ecs_service resources doesn't "force new deployment on each apply"

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

Terraform v0.12.24

  • provider.aws v2.66.0

Affected Resource(s)

  • aws_ecs_service

Terraform Configuration Files

resource "aws_ecs_service" "main" {
  name                               = var.service_name
  cluster                            = var.ecs_cluster_module_ecs_arn
  task_definition                    = aws_ecs_task_definition.main.arn
  desired_count                      = 1
  deployment_minimum_healthy_percent = 100
  deployment_maximum_percent         = 200
  force_new_deployment               = true
  scheduling_strategy                = "REPLICA"
  deployment_controller {
    type = "ECS"
  }
  capacity_provider_strategy {
    capacity_provider = "FARGATE"
    base              = 0
    weight            = 1
  }
  network_configuration {
    security_groups  = [aws_security_group.main.id]
    subnets          = var.service_subnet_ids
  }
  enable_ecs_managed_tags = true
  propagate_tags          = "TASK_DEFINITION"
  tags                    = var.tags
}

Expected Behavior

Enabling the force_new_deployment option on aws_ecs_service should force a service re-deployment on each terraform apply, even if there is no change in the resource configuration. That way, even if you are in dev mode and always use the same Docker tag for your container image (e.g. "latest"), it will force ECS to pull the Docker image again. From the AWS API documentation for forceNewDeployment: "Whether to force a new deployment of the service. Deployments are not forced by default. You can use this option to trigger a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination (my_image:latest) or to roll Fargate tasks onto a newer platform version."

Actual Behavior

There is no difference in behavior with or without the force_new_deployment option enabled.

Steps to Reproduce

  1. terraform apply
  2. terraform apply

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 58
  • Comments: 15 (2 by maintainers)


Most upvoted comments

As an alternative workaround, you can use the -replace option of apply to force replacement of the task definition, which appears to force the tasks to redeploy.

e.g. terraform apply -replace="aws_ecs_task_definition.myapp"
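
If you would rather not run -replace by hand every time, roughly the same effect can be scripted in configuration. This is only a minimal sketch, not the reporter's setup: it assumes Terraform 1.5 or newer (for terraform_data and plantimestamp(), well beyond the 0.12.x in this report), and the resource and variable names simply mirror the configuration at the top of this issue.

resource "terraform_data" "redeploy" {
  # plantimestamp() changes on every plan, so this helper resource is
  # replaced on every apply.
  triggers_replace = [plantimestamp()]
}

resource "aws_ecs_task_definition" "main" {
  family                   = var.service_name                   # assumption: reusing the issue's variable
  container_definitions    = file("container-definitions.json") # placeholder path
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  # Replacing terraform_data.redeploy forces this task definition to be
  # replaced too, producing a new revision that the service rolls onto.
  lifecycle {
    replace_triggered_by = [terraform_data.redeploy]
  }
}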

Have a look over here: https://registry.terraform.io/modules/infrablocks/ecs-service/aws/3.0.0?tab=inputs

There, force_new_deployment is yes/no instead of true/false.

If you take a look at the module code, you will see that, in the end, the value provided to the Terraform aws_ecs_service resource is true/false.

Anyway, the problem remains the same: the parameter does not force a deployment on each apply.

@sljinlet’s read of the AWS documentation has it exactly backwards. The entire point of Force New Deployment is to update TASKS without updating the TASK DEFINITION.

The quoted AWS documentation supports this idea when it says "For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination". You have to "update a service's tasks" to cause the new image to be deployed.

From the documentation for the AWS CLI's aws ecs update-service subcommand (a wrapper around the API):

Note If your updated Docker image uses the same tag as what is in the existing task definition for your service (for example, my_image:latest ), you don’t need to create a new revision of your task definition. You can update the service using the forceNewDeployment option. The new tasks launched by the deployment pull the current image/tag combination from your repository when they start.
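
For anyone stuck on an older provider, that same CLI call can be wired into Terraform itself. A hedged sketch only: it assumes the AWS CLI is installed and credentialed wherever Terraform runs, and it reuses the resource and variable names from the configuration in this issue.

resource "null_resource" "force_new_deployment" {
  # timestamp() is only known at apply time, so this resource is replaced
  # on every run and the provisioner fires each time.
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = "aws ecs update-service --cluster ${var.ecs_cluster_module_ecs_arn} --service ${aws_ecs_service.main.name} --force-new-deployment"
  }
}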

Having to find ways to hack around this is incredibly frustrating.

I get what is wanted here and why, but I don’t know that it makes sense for adding or changing the value of the 'force_new_deployment' option on an 'aws_ecs_service' Terraform object to do anything but what it’s doing now: changing the value of that option on the matching AWS resource. IMO, what behavior you get when you change the value of that option on the AWS resource should be up to AWS, not to Terraform. I believe that Terraform is behaving correctly.

To get the desired behavior irrespective of Terraform, what you want to do is set this flag and then also create new revision(s) of the task(s) in question. What the 'force_new_deployment' option says is to redeploy tasks automatically when a new revision of a running task is created. If the flag is off, the new revision of the task will not be immediately deployed but will be used when tasks are later created for whatever reason. In either case, nothing will happen if you don’t update the task(s) with new revision(s).

The quoted AWS documentation supports this idea when it says "For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination". You have to "update a service's tasks" to cause the new image to be deployed.

Interestingly, if you create aws_ecs_service without the force_new_deployment attribute, you can subsequently add it in a later terraform apply as either true or false and the resource will be re-created.

This is causing my team problems as well; it looks like the force_new_deployment option isn’t triggering state drift, so Terraform assumes nothing needs to be changed, but when this option is enabled a change should always be detected.
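
For readers landing here later: newer releases of the AWS provider expose a triggers argument on aws_ecs_service for exactly this scenario. A minimal sketch, assuming a provider version that supports triggers and a Terraform version that provides plantimestamp(); the remaining arguments are the ones shown in the configuration at the top of this issue.

resource "aws_ecs_service" "main" {
  # ... the existing arguments from the configuration above ...

  force_new_deployment = true

  # Any change to this map triggers an in-place redeployment of the service;
  # plantimestamp() changes on every plan, so every apply redeploys.
  triggers = {
    redeployment = plantimestamp()
  }
}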