aws-cdk: (lambda): "Function code combined with layers exceeds the maximum allowed size" error when it does not

When updating, the Lambda layer change appears to be applied before the code change, so the function's maximum size limit is hit during deployment even though the new code plus layer is within it. This is either a CDK-specific issue or an AWS CloudFormation one; I could not find any evidence of this on the internet and consider it to be a fringe case.

Reproduction Steps

I have a deployed Lambda function whose code package is ~230MB. After making changes to the function, the package shrinks to ~50MB, and a ~130MB Lambda layer is added to it. The combined size of the new code plus the layer is then ~180MB, which is under the 250MB limit. When I try to deploy this, I get the following CloudFormation error:

...
 20/101 | 1:27:01 PM | UPDATE_FAILED        | AWS::Lambda::Function                | <LambdaFunctionName> (XXX) Function code combined with layers exceeds the maximum allowed size of 262144000 bytes. The actual size is 361955628 bytes. (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: XXX)
...

It looks like the layer is applied before the code update (since the new code + layer is under 250MB), and the update then fails the size check. The actual size it reports is the old code + layer: 230MB + 130MB = 360MB, which led me to the fringe-case conclusion.

It deploys if you completely destroy the stack and then deploy the layer + new code together. Destroying just the Lambda function, or manually updating the function with the new code first and then running the update through CloudFormation/CDK, works the same way.

Environment

  • CLI Version : 1.26.0
  • Framework Version: 1.32.0
  • Node.js Version: v10.16.3
  • OS : Windows & Linux
  • Language (Version): TypeScript

This is šŸ› Bug Report

About this issue

  • Original URL
  • State: open
  • Created 4 years ago
  • Reactions: 10
  • Comments: 20 (7 by maintainers)

Most upvoted comments

We are also still experiencing this issue, and it’s causing quite a bit of pain with our deployments.

Just confirming that I’m still experiencing this issue in 2021. Thought I was going slightly crazy until I found this.

In my experience, I had a few Lambda functions that were each very fat, each bundling a copy of some large shared libraries. Naturally I wanted to refactor and extract the shared libraries into a layer. After doing so, even though my functions were now tiny and the layer was fat, my cdk deploy told me:

Function code combined with layers exceeds the maximum allowed size of 262144000 bytes. The actual size is 296335820 bytes.

Even though what I was seeing in the build was more like:

āÆ du -sh asset.*
4.0K	asset.39c7c0b56d2b94f5320257b13eb8c25532e20918e7f37483d070959f752b3886
4.0K	asset.81c2c95c803b458187259bf4081da3e1fc7cb08551d22f75a12349273555fa49
 93M	asset.ab3b51a3705756fa3e9283340417420048ab6b4d06677dd05c319aa6d1567e95

And this was my whole deployment, so I couldn’t understand how I was breaching the 250 MB limit.

To resolve it, I had to cdk destroy and cdk deploy again which is disappointing.

Steps to reproduce from my experience are therefore something like:

  1. Produce a fat lambda function with a bunch of large libraries (almost breaching the limit)
  2. cdk deploy
  3. Move large libraries out into a layer (almost breaching the limit)
  4. Verify that combined unzipped size does not exceed the limit
  5. cdk deploy
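Under the versions in this report (CDK v1, TypeScript), the refactor in steps 3–4 looks roughly like the sketch below. The construct IDs, asset paths, and runtime are illustrative, not taken from the original reproduction:

```typescript
import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';

export class ThinLambdaStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Step 3: the large shared libraries moved out into a layer.
    const sharedLibs = new lambda.LayerVersion(this, 'SharedLibs', {
      code: lambda.Code.fromAsset('layer'), // hypothetical path
      compatibleRuntimes: [lambda.Runtime.NODEJS_10_X],
    });

    // The now-thin function. On `cdk deploy` (step 5), CloudFormation appears
    // to validate the *previous* code package against the new layer, so the
    // update can fail even though the new combined size is under 250 MB.
    new lambda.Function(this, 'ThinFunction', {
      runtime: lambda.Runtime.NODEJS_10_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'), // hypothetical path
      layers: [sharedLibs],
    });
  }
}
```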

I faced the same issue today and wanted to share my workaround.

I renamed the function that is causing the issue, e.g. to yourfunctionV2. This creates a new function and removes the old one, so you don’t have to manually deploy again to add layers.
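In CDK terms, that rename can be as simple as changing the construct ID, which gives the resource a new CloudFormation logical ID; the names and paths below are illustrative:

```typescript
import * as lambda from '@aws-cdk/aws-lambda';

// Inside your Stack constructor. Changing the construct ID from
// 'YourFunction' to 'YourFunctionV2' makes CloudFormation create a
// replacement function (thin code + layer together) and delete the old
// one, sidestepping the bogus size check on update.
new lambda.Function(this, 'YourFunctionV2', {
  runtime: lambda.Runtime.NODEJS_10_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('lambda'), // hypothetical asset path
  layers: [sharedLibs], // assumed LayerVersion defined elsewhere
});
```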

Bumping this! I encountered the same issue. It’s very confusing. I had to do two deployments to get around it:

  1. Deploy the ā€œthinā€ lambda WITHOUT the layers (this could lead to runtime errors because of missing layers!)
  2. Deploy the lambda WITH the layers.
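One way to script that two-step deploy without editing the stack between runs is to gate the layer behind a CDK context flag. `attachLayers` is an invented flag name for this sketch, not a CDK feature:

```typescript
import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';

export class TwoStepStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const sharedLibs = new lambda.LayerVersion(this, 'SharedLibs', {
      code: lambda.Code.fromAsset('layer'),
      compatibleRuntimes: [lambda.Runtime.NODEJS_10_X],
    });

    // Step 1: `cdk deploy -c attachLayers=false` ships the thin code alone
    // (beware: the function may error at runtime until step 2 completes).
    // Step 2: `cdk deploy -c attachLayers=true` attaches the layer.
    const attach = this.node.tryGetContext('attachLayers') !== 'false';
    new lambda.Function(this, 'ThinFunction', {
      runtime: lambda.Runtime.NODEJS_10_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
      layers: attach ? [sharedLibs] : [],
    });
  }
}
```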