claudia: ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:
In our project, Lambda was last deployed successfully by CI with claudia on 2021-09-14 ~16:17 CET. There had been no issues before that.
The next attempt by CI, at 2021-09-15 ~16:49 CET, failed. Retry attempts failed. Manual attempts via the CLI failed. Manual upload, publish, and creation of an alias worked via the console (but no working version was produced, because there was no investment in getting the package right).
Nothing of relevance was changed (always a strong statement, I know). There was no update of claudia or related packages between the 2 deploys. A retry of the successfully deployed version failed too.
Retries at 2021-09-16 ~10:10 CET failed again.
The reported error is always the same:
updating configuration lambda.setupRequestListeners
ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:eu-west-1:NNNNN:function:XXXXXXXX
at Object.extractError (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/protocol/json.js:52:27)
at Request.extractError (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/protocol/rest_json.js:55:8)
at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:688:14)
at Request.transition (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:690:12)
at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:688:14)
at Request.transition (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:690:12)
at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
at callNextListener (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:96:12)
at IncomingMessage.onEnd (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/event_listeners.js:313:13)
at IncomingMessage.emit (events.js:412:35)
at IncomingMessage.emit (domain.js:470:12)
at endReadableNT (internal/streams/readable.js:1317:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21) {
code: 'ResourceConflictException',
time: 2021-09-16T08:19:05.924Z,
requestId: 'cf98db8a-0457-4f92-9a68-19b37f326508',
statusCode: 409,
retryable: false,
retryDelay: 45.98667333028396
}
This happens quickly, either before the package is built, or after.
We’ve felt for a while that claudia does some things twice, first checking and then doing. When the error appears late, we see several mentions of lambda.setupRequestListeners:
loading Lambda config
loading Lambda config sts.getCallerIdentity
loading Lambda config sts.setupRequestListeners
loading Lambda config sts.optInRegionalEndpoint
loading Lambda config lambda.getFunctionConfiguration FunctionName=XXXXXXXX
loading Lambda config lambda.setupRequestListeners
packaging files
packaging files npm pack -q /opt/atlassian/pipelines/agent/build
packaging files npm install -q --no-audit --production
[…]
validating package
validating package removing optional dependencies
validating package npm install -q --no-package-lock --no-audit --production --no-optional
[…]
validating package npm dedupe -q --no-package-lock
updating configuration
updating configuration lambda.updateFunctionConfiguration FunctionName=XXXXXXXX
updating configuration lambda.setupRequestListeners
updating configuration lambda.updateFunctionConfiguration FunctionName=XXXXXXXX
updating configuration lambda.setupRequestListeners
ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:eu-west-1:NNNNNNNN:function:XXXXXXXX
at Object.extractError (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/protocol/json.js:52:27)
at Request.extractError (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/protocol/rest_json.js:55:8)
at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:688:14)
at Request.transition (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:690:12)
at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:688:14)
at Request.transition (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:690:12)
at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
at callNextListener (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:96:12)
at IncomingMessage.onEnd (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/event_listeners.js:313:13)
at IncomingMessage.emit (events.js:412:35)
at IncomingMessage.emit (domain.js:470:12)
at endReadableNT (internal/streams/readable.js:1317:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21) {
code: 'ResourceConflictException',
time: 2021-09-16T08:19:05.924Z,
requestId: 'cf98db8a-0457-4f92-9a68-19b37f326508',
statusCode: 409,
retryable: false,
retryDelay: 45.98667333028396
}
Resources on the internet are barely any help.
AWS Lambda’s “Troubleshoot invocation issues in Lambda” page mentions ResourceConflictException, but with a different message, and refers to VPCs, which we are not using.
UpdateFunctionConfiguration, PublishVersion, UpdateFunctionCode and others mention, more generally:
ResourceConflictException
The resource already exists, or another operation is in progress.
HTTP Status Code: 409
Other resources are no help:
- https://discuss.hashicorp.com/t/problem-updating-aws-lambda-function/20597
- https://stackoverflow.com/questions/58971446/resourceconflictexception-the-function-could-not-be-updated
Terraform Error publishing version when lambda using container updates code #17153 (Jan. 2021) mentions a “lock” / “last update status”, which we can watch during execution using
> watch aws --profile YYYYYYY --region eu-west-1 lambda get-function-configuration --function-name XXXXXXXX
The output looks like
{
"FunctionName": "XXXXXXXX",
"FunctionArn": "arn:aws:lambda:eu-west-1:NNNNNNNNNNN:function:XXXXXXXX",
"Runtime": "nodejs14.x",
"Role": "arn:aws:iam::NNNNNNNNNNN:role/execution/lambda-execution-XXXXXXXX",
"Handler": "lib/service.handler",
"CodeSize": 76984324,
"Description": "[…]",
"Timeout": 30,
"MemorySize": 2048,
"LastModified": "2021-09-16T08:19:05.000+0000",
"CodeSha256": "zQb6Vss0Zlug46HRjA8+bNe0i1TP6NWfrm70hC6zC90=",
"Version": "$LATEST",
"Environment": {
"Variables": {
"NODE_ENV": "production"
}
},
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "9d5f5431-6f2f-4d39-9794-d86778b34446",
"Layers": [
{
"Arn": "arn:aws:lambda:eu-west-1:NNNNNNNNNNN:layer:chrome-aws-lambda:25",
"CodeSize": 51779390
}
],
"State": "Active",
"LastUpdateStatus": "Successful",
"PackageType": "Zip"
}
most of the time, but we see LastUpdateStatus change for a moment before the error occurs.
Terraform aws_lambda_function ResourceConflictException due to a concurrent update operation #5154 says, in 2018,
OK, I’ve figured out what’s happening here based on a comment here: AWS has some sort of limit on how many concurrent modifications you can make to a Lambda function.
serverless ‘Concurrent update operation’ error for multi-function service results in both deployment and rollback failure. #4964 reports the same issue in 2018, and remarks:
I just heard back from AWS Premium Support, and they offered up a solution and the cause of the issue. It’s not so much an issue with too many functions, as it is trying to do too many updates with a single function.
So, this appears to be a timing issue. Claudia should take it slower?
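If the reports above are right, “taking it slower” amounts to polling LastUpdateStatus between operations. A minimal, hypothetical sketch of that idea (the `getStatus` callback stands in for a `lambda.getFunctionConfiguration` call and is injected here so the helper runs without AWS credentials; none of these names come from claudia itself):

```javascript
// Minimal sketch: poll the function's LastUpdateStatus and only proceed
// once it is no longer "InProgress". `getStatus` is a placeholder for a
// call like lambda.getFunctionConfiguration(...).promise() and is injected
// so the helper stays testable without AWS access.
async function waitForUpdateToFinish(getStatus, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getStatus();
    if (status !== 'InProgress') {
      return status; // e.g. "Successful" or "Failed"
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('timed out waiting for Lambda update to finish');
}
```

Calling this between updateFunctionCode and updateFunctionConfiguration would avoid issuing the second call while the first is still in progress.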
About this issue
- State: open
- Created 3 years ago
- Reactions: 7
- Comments: 49 (1 by maintainers)
Started for us as well. If someone wants a quick fix until it’s fixed in ClaudiaJS, use the following around Claudia commands:
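The snippet this comment referred to was not captured in this copy of the thread; a retry wrapper of the kind described might look roughly like this (the `deploy` callback and the retry parameters are illustrative, not from the original comment):

```javascript
// Illustrative retry wrapper: re-run a deploy step when it fails with
// ResourceConflictException, backing off between attempts. The `deploy`
// callback stands in for a claudia invocation such as `claudia update`.
async function retryOnConflict(deploy, { retries = 5, baseDelayMs = 5000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await deploy();
    } catch (err) {
      if (err.code !== 'ResourceConflictException' || attempt >= retries) {
        throw err;
      }
      // simple linear back-off between attempts; exponential would also work
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * (attempt + 1)));
    }
  }
}
```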
v5.14.0 should fix this
@gojko - Still getting this on v5.14.0
@maltahs Your suggestion worked for us too, thanks! To make things easier for others, I also submitted our hotfix branch as a PR to this project (as you can see above).
We just ran into the same issue and found the reason here:
https://aws.amazon.com/de/blogs/compute/coming-soon-expansion-of-aws-lambda-states-to-all-functions/
I believe this was caused by AWS sending back State with a capital “S”, but Claudia checks for a lower-case state, so it doesn’t wait for the state to change correctly. I encountered this running tests for #239, and included a fix in that PR.

+1 for accepting #230. Resolved the issue for me as well. Thanks @madve2 !!
Started happening to me today as well. Updating to the latest ClaudiaJS and AWS-SDK didn’t help. I was, however, able to mitigate the error by adding aws:states:opt-out to the description of the lambda function, or by not passing the environment variables during deploy.

Unrelated: I don’t use this library, but it looks like the code now has to waitFor() the previous function update before running further updates. https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
For example, first the function is created and then its configuration is updated; it has to wait in between. The same goes for updating the function’s *.zip file and then reconfiguring it.
@cathalelliott1 It’s actually not too difficult to manage your Lambda functions manually with the aws-sdk package. If you need a reference, feel free to take a look at the scripts I made for my own project: https://github.com/FIRSTTeam102/ScoringApp-Serverless/tree/master/scripts - the scripts are called with NPM scripts as defined in https://github.com/FIRSTTeam102/ScoringApp-Serverless/blob/master/primary/package.json. Code is licensed with GPLv3 so feel free to use it if it helps. It’s highly specific to the one project, but you can use the same concepts to fit your own use.
The scripts don’t have many comments, but if you are looking into it and want an explainer on how they work, open an issue on our repo and I can answer any questions.
Are there any updates on a new release that would fix this issue (perhaps accepting PR #230?)?
It looks like this is still an issue only if updateConfiguration is called with any options. For me, removing the --runtime argument allowed the function to deploy correctly.

@gojko Something similar to the following should be enough to fix this (just copied the existing wait logic up into the updateConfiguration function):
Maybe irrelevant and not applicable, but I solved this by adding the env var to my CloudFormation template instead of during deploy.
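A minimal sketch of that workaround as a CloudFormation template fragment (the logical ID and values are placeholders, not from the original comment):

```yaml
# Hypothetical fragment: declare the environment variable in the template
# rather than passing it at deploy time, so the deploy-time update call
# does not have to touch the Environment block.
Resources:
  MyFunction:            # placeholder logical ID
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: XXXXXXXX
      Environment:
        Variables:
          NODE_ENV: production
```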
+1 for accepting #230
I was able to fix this locally, by changing:

- the src/commands/update.js file;
- the src/tasks/wait-until-not-pending.js file, to include a new line: await new Promise(resolve => setTimeout(resolve, timeout));

and making waitUntilNotPending async. So now it looks like this:

I am not sure if this is the correct approach, but it works without the need to update the description for now.
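The patched file itself was not captured in this copy of the thread. Based on the description (an added sleep line and an async function), the helper would look roughly like this; the signature and the stub client are a reconstruction, not claudia’s actual source:

```javascript
// Reconstruction of the described local fix: make the helper async and
// sleep for `timeout` ms between polls of the function state. `lambda` is
// assumed to expose getFunctionConfiguration like the aws-sdk v2 client;
// a stub with the same shape works for demonstration.
async function waitUntilNotPending(lambda, functionName, timeout, retries) {
  for (let attempt = 0; attempt < retries; attempt++) {
    const config = await lambda.getFunctionConfiguration({ FunctionName: functionName }).promise();
    if (config.State !== 'Pending' && config.LastUpdateStatus !== 'InProgress') {
      return config;
    }
    // the line the commenter added:
    await new Promise(resolve => setTimeout(resolve, timeout));
  }
  throw new Error('Lambda function is still pending');
}
```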
FYI: We upgraded the version of the aws provider in one of our terraform sets to the latest version and that seems to have cleared the problem.