terraform-provider-aws: force_new_deployment argument for aws_ecs_service resources doesn't "force new deployment on each apply"
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform Version
Terraform v0.12.24
- provider.aws v2.66.0
Affected Resource(s)
- aws_ecs_service
Terraform Configuration Files
resource "aws_ecs_service" "main" {
name = var.service_name
cluster = var.ecs_cluster_module_ecs_arn
task_definition = aws_ecs_task_definition.main.arn
desired_count = 1
deployment_minimum_healthy_percent = 100
deployment_maximum_percent = 200
force_new_deployment = true
scheduling_strategy = "REPLICA"
deployment_controller {
type = "ECS"
}
capacity_provider_strategy {
capacity_provider = "FARGATE"
base = 0
weight = 1
}
network_configuration {
security_groups = [aws_security_group.main.id]
subnets = var.service_subnet_ids
}
enable_ecs_managed_tags = true
propagate_tags = "TASK_DEFINITION"
tags = var.tags
}
Expected Behavior
Enabling the force_new_deployment option in aws_ecs_service should force a service re-deployment on each terraform apply, even if there is no change in the resource configuration. That way, even if you are in dev mode and always use the same Docker tag for your container image (e.g. "latest"), it forces ECS to pull the Docker image again.
From AWS API documentation:
forceNewDeployment: Whether to force a new deployment of the service. Deployments are not forced by default. You can use this option to trigger a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination (my_image:latest) or to roll Fargate tasks onto a newer platform version.
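For comparison (editor's illustration, not part of the original report; cluster and service names are placeholders), the imperative equivalent of this flag in the AWS CLI is:

aws ecs update-service --cluster <my-cluster> --service <my-service> --force-new-deployment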
Actual Behavior
There is no difference with or without the force_new_deployment option enabled.
Steps to Reproduce
- terraform apply
- terraform apply (a second time, with no configuration changes)
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 58
- Comments: 15 (2 by maintainers)
Commits related to this issue
- r/aws_ecs_service: add triggers attribute to force update in-place closes #13931 #13528 — committed to obataku/terraform-provider-aws by obataku 2 years ago
- r/aws_ecs_service: add triggers attribute to force update in-place closes #13931 #13528 — committed to hashicorp/terraform-provider-aws by obataku 2 years ago
As an alternative workaround, you can use the -replace option of apply to force replacement of the task definition, which appears to force the tasks to redeploy, e.g.:

terraform apply -replace="aws_ecs_task_definition.myapp"

If you take a look at the module code, you will see that, in the end, the value provided to the Terraform aws_ecs_service resource is true/false.
Anyway, the problem remains the same: the parameter does not force a new deployment on each apply.
@sljinlet's read of the AWS documentation has it exactly backwards. The entire point of Force New Deployment is to update TASKS without updating the TASK DEFINITION.
From the update-service subcommand documentation for the AWS CLI (aws ecs, a wrapper around the API):

Having to find ways to hack around this is incredibly frustrating.
I get what is wanted here and why, but I don't know that it makes sense for adding or changing the value of the "force_new_deployment" option on an "aws_ecs_service" Terraform object to do anything but what it's doing now: changing the value of that option on the matching AWS resource. IMO, what behavior you get when you change the value of that option on the AWS resource should be up to AWS, not to Terraform. I believe that Terraform is behaving correctly.
To get the desired behavior irrespective of Terraform, what you want to do is set this flag and then also create new revision(s) of the task(s) in question. What the "force_new_deployment" option says is to redeploy tasks automatically when a new revision of a running task is created. If the flag is off, the new revision of the task will not be immediately deployed but will be used when tasks are later created for whatever reason. In either case, nothing will happen if you don't update the task(s) with new revision(s).
The quoted AWS documentation supports this idea when it says "For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination". You have to "update a service's tasks" to cause the new image to be deployed.
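To make that reading concrete, here is a hypothetical sketch (editor's illustration, not from the issue; the variable name and container details are invented): instead of relying on a mutable "latest" tag, feed an immutable image reference into the task definition so that every new image produces a new revision, which the service then rolls out:

# hypothetical sketch: an immutable image reference (digest or unique tag)
# means the task definition changes whenever the image does
variable "image" {
  description = "full image reference, e.g. resolved by CI to a digest or unique tag"
  type        = string
}

resource "aws_ecs_task_definition" "main" {
  family                   = var.service_name
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = jsonencode([
    {
      name      = var.service_name
      image     = var.image # changing this registers a new revision
      essential = true
    }
  ])
}

Since the service in the reported configuration already points at aws_ecs_task_definition.main.arn, each new revision would then be rolled out by an ordinary apply.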
Interestingly, if you create aws_ecs_service without the force_new_deployment attribute, you can subsequently add it in a later terraform apply as either true or false and the resource will be re-created.
Seems like this is a duplicate of https://github.com/hashicorp/terraform-provider-aws/issues/13528. You can use the workaround posted here until it's resolved: https://github.com/hashicorp/terraform-provider-aws/issues/13528#issuecomment-797631866
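For readers arriving later: the commits linked above added a triggers argument to aws_ecs_service for exactly this use case. A minimal sketch of that approach, assuming a provider version that includes the argument (plantimestamp() also requires a newer Terraform than the v0.12 in this report; earlier examples used timestamp()):

resource "aws_ecs_service" "main" {
  # ... existing arguments from the configuration above ...
  force_new_deployment = true

  # any change to these values forces an in-place update of the service;
  # combined with force_new_deployment this redeploys the tasks on every apply
  triggers = {
    redeployment = plantimestamp()
  }
}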
This is causing my team problems as well. It looks like the force-new-deployment option isn't triggering any state drift, so Terraform assumes nothing needs to be changed, but when this option is enabled a change should always be detected.