pulumi-eks: Changing the instanceType in an existing NodeGroup results in a 400 failure to delete the LaunchConfig since it is attached to an active ASG
When a NodeGroup is stood up with a given instance type, e.g. `t2.medium`, and the instance type is then changed on a later update to, say, `t3.large`, the update fails with the following error:
```
Diagnostics:
  aws:ec2:LaunchConfiguration (update-existing-nodegroup-ng-2-ondemand-large-nodeLaunchConfiguration):
    error: Plan apply failed: deleting urn:pulumi:dev1::update-existing-nodegroup::eks:index:NodeGroup$aws:ec2/launchConfiguration:LaunchConfiguration::update-existing-nodegroup-ng-2-ondemand-large-nodeLaunchConfiguration:
    error deleting Autoscaling Launch Configuration (update-existing-nodegroup-ng-2-ondemand-large-nodeLaunchConfiguration-d0932eb):
    ResourceInUse: Cannot delete launch configuration update-existing-nodegroup-ng-2-ondemand-large-nodeLaunchConfiguration-d0932eb because it is attached to AutoScalingGroup update-existing-nodegroup-ng-2-ondemand-large-6410fe15-NodeGroup-1DIVWWS4FCMIU
    status code: 400, request id: f7bfd557-9505-11e9-b696-8ff9971bc5b3
```
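For reference, a minimal sketch of the kind of program that hits this (IAM and VPC wiring elided; names and capacity values here are illustrative, not from the original repro):

```typescript
import * as eks from "@pulumi/eks";

// Cluster without the default node group, so the node group below is the one
// being updated.
const cluster = new eks.Cluster("update-existing-nodegroup", {
    skipDefaultNodeGroup: true,
});

// Deployed initially with instanceType: "t2.medium". Editing this field to
// "t3.large" and running `pulumi up` again produces the ResourceInUse error.
const nodeGroup = cluster.createNodeGroup("ng-2-ondemand-large", {
    instanceType: "t3.large",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
});
```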
See:
- Issue in https://github.com/terraform-providers/terraform-provider-aws/issues/8485
- TF work-around, but it does not work in pulumi/eks, as we do not expose `namePrefix` as an option on the `aws.ec2.LaunchConfiguration` we create (see the sketch after this list)
- Changing the `name` of the LaunchConfig resulted in the same error
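For context, the Terraform work-around relies on letting AWS generate a unique name per revision of the launch configuration. A rough equivalent using the raw `aws.ec2.LaunchConfiguration` resource directly (not currently reachable through the `eks.NodeGroup` component; the prefix and AMI id below are placeholders) would look like:

```typescript
import * as aws from "@pulumi/aws";

// namePrefix tells AWS to generate a fresh, unique name for each revision of
// the launch configuration, so the replacement can be created while the old
// one is still attached to the ASG.
const launchConfig = new aws.ec2.LaunchConfiguration("node-launch-config", {
    namePrefix: "eks-worker-",            // placeholder prefix
    imageId: "ami-0123456789abcdef0",     // placeholder worker AMI
    instanceType: "t3.large",
});
```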
Manual cleanup of the LaunchConfig in the state snapshot and in AWS seems to be the only mitigation I’ve found.
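For anyone needing to do the same, the cleanup amounts to something like the following (a sketch, using the URN from the error above; run the AWS side only once the ASG no longer references the old config):

```sh
# Remove the stale LaunchConfiguration from the Pulumi state snapshot.
pulumi state delete 'urn:pulumi:dev1::update-existing-nodegroup::eks:index:NodeGroup$aws:ec2/launchConfiguration:LaunchConfiguration::update-existing-nodegroup-ng-2-ondemand-large-nodeLaunchConfiguration'

# Delete it on the AWS side once it is detached from the AutoScalingGroup.
aws autoscaling delete-launch-configuration \
    --launch-configuration-name update-existing-nodegroup-ng-2-ondemand-large-nodeLaunchConfiguration-d0932eb
```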
cc @jen20 @lukehoban
We have the same issue: Pulumi creates the new LaunchConfiguration and then tries to delete the old one before the ASG has been switched over to the new one, so the delete fails.
Are there any recommended work-arounds?
When this happens, I usually go to the parent autoscaling group in the AWS console and change its launch configuration link there, and then re-run the Pulumi job with a refresh…
Not so good, but it’s the best I have seen so far.
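The same console action can also be scripted; roughly (the ASG and launch configuration names below are placeholders):

```sh
# Point the ASG at the new launch configuration...
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-node-group-asg \
    --launch-configuration-name my-new-launch-config

# ...then reconcile Pulumi's view of the world before the next update.
pulumi refresh
```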
We have had this issue as well. Any updates please?
I’m considering rewriting the pulumi_eks stuff in plain pulumi_aws to work around this. Then I’d have finer control over when the launch configuration needs to be recreated, which is just about never, as we use SpotInst for all that.
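A rough TypeScript sketch of that approach, for anyone curious (all identifiers, AMI, and subnet ids are placeholders): the launch configuration is auto-named by Pulumi with a random suffix, and replacement follows Pulumi’s default create-before-delete ordering.

```typescript
import * as aws from "@pulumi/aws";

// Worker launch configuration managed directly; Pulumi auto-names it with a
// random suffix, so a replacement never collides with the old name.
const lc = new aws.ec2.LaunchConfiguration("worker-lc", {
    imageId: "ami-0123456789abcdef0",   // placeholder worker AMI
    instanceType: "t3.large",
});

// The ASG references the launch configuration by name; when the config is
// replaced, the group is updated to the new one before the old is deleted.
const asg = new aws.autoscaling.Group("worker-asg", {
    launchConfiguration: lc.name,
    minSize: 1,
    maxSize: 3,
    desiredCapacity: 2,
    vpcZoneIdentifiers: ["subnet-0123456789abcdef0"], // placeholder subnet
});
```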
Hi, any update on this? I’m having the same issue.
Seeing this every time I try to make a change to my EKS cluster.