aws-cdk: lambda: corrupt zip archive asset produced when using node v15.6
Update from the CDK team
When using local assets with the Lambda module via the `Code.fromAsset()` API, deployment of new assets (during `cdk deploy`) will fail with the error “Uploaded file must be a non-empty zip”.
We have confirmed this issue manifests when using Node engine v15.6. The root cause is being tracked by Node: https://github.com/nodejs/node/issues/37027. For the time being, use a Node engine < 15.6.
If you have encountered this issue, you need to do three things to resolve it:

- Downgrade to a Node version < 15.6. You can see what your current Node version is by running `node --version`.
- Delete the cached asset in the `cdk.out/` directory. You can safely delete the entire directory; the correct assets and templates will be re-generated during the next run of `cdk synth` or `cdk deploy`.
- Delete the previously uploaded asset from the staging bucket. The S3 key of the asset will be prefixed with `assets/` and contain the hash of the asset. The hash of the asset can be found in your application's CloudFormation template. The staging bucket can be obtained by running the following command:

```sh
aws cloudformation describe-stack-resources --stack-name CDKToolkit --logical-resource-id StagingBucket --query 'StackResources[].PhysicalResourceId'
```
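Putting the three steps together, the cleanup might look like the following sketch (the bucket name and asset hash are placeholders; substitute the values from the command above and from your synthesized template):

```sh
# 1. Confirm you are on a Node engine below 15.6 (downgrade first if not).
node --version

# 2. Drop the locally cached assets; they are re-generated on the next synth/deploy.
rm -rf cdk.out

# 3. Remove the corrupted asset from the staging bucket.
STAGING_BUCKET=cdk-hnb659fds-assets-123456789012-us-east-1   # placeholder
ASSET_HASH=0123456789abcdef                                  # placeholder, from your template
aws s3 rm "s3://${STAGING_BUCKET}/assets/${ASSET_HASH}.zip"
```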
Original issue filed below
When doing
```ts
import * as lambda from "@aws-cdk/aws-lambda"; // CDK v1 module

const fillFn = new lambda.Function(this, "Zip2GzipSQSFill", {
  code: lambda.Code.fromAsset("lib/python"),
  runtime: lambda.Runtime.PYTHON_3_7,
  handler: "mylambda.handler",
});
```
my file is here (relative to the root of the CDK project): `lib/python/mylambda.py`
its contents:
```python
def handler(event, context):
    print("HELLO AWS")
```
I get the error:
```
Uploaded file must be a non-empty zip (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: fcfaf553-70d3-40b8-85d2-a15f6c3bcef0; Proxy: null)
```
Reproduction Steps
Run `cdk deploy`.
What did you expect to happen?
the Lambda is created in AWS
What actually happened?
A zip file is uploaded to the CDK staging bucket; it contains my Python file, but that file has no contents.
Environment
- CDK CLI Version : 1.85.0 or 1.83.0
- Framework Version: 1.83.0
- Node.js Version: 15.6.0
- OS : MacOS
- Language (Version): TypeScript (4.1.3)
Other
I suspect it's a Node.js/library problem, in that some library is producing this invalid zip file, but I have no evidence of this.
This is 🐛 Bug Report
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 61
- Comments: 130 (58 by maintainers)
Commits related to this issue
- fix: corrupted zip archive when using node engine 15.6.0 Explicitly denylist the node engine 15.6.0 that causes corrupted zip archive. see #12536 for details — committed to aws/aws-cdk by deleted user 3 years ago
- follow fix from cdk https://github.com/aws/aws-cdk/issues/12536 — committed to awslabs/aws-bootstrap-kit by deleted user 3 years ago
- chore: npm-check-updates && yarn upgrade (#12911) Ran npm-check-updates and yarn upgrade to keep the `yarn.lock` file up-to-date. Closes https://github.com/aws/aws-cdk/issues/12536 — committed to aws/aws-cdk by aws-cdk-automation 3 years ago
- chore: npm-check-updates && yarn upgrade (#12911) Ran npm-check-updates and yarn upgrade to keep the `yarn.lock` file up-to-date. Closes https://github.com/aws/aws-cdk/issues/12536 — committed to NovakGu/aws-cdk by aws-cdk-automation 3 years ago
- Reduce version of node ref aws/aws-cdk/issues/12536. — committed to NASA-IMPACT/hls-orchestration by sharkinsspatial 3 years ago
- Merge pull request #127 from NASA-IMPACT/ci_deployment Reduce version of node ref aws/aws-cdk/issues/12536. — committed to NASA-IMPACT/hls-orchestration by sharkinsspatial 3 years ago
- typo, use node v12 due to https://github.com/aws/aws-cdk/issues/12536 — committed to rajyan/AC-alert by rajyan 3 years ago
- chore: npm-check-updates && yarn upgrade (#12911) Ran npm-check-updates and yarn upgrade to keep the `yarn.lock` file up-to-date. Closes https://github.com/aws/aws-cdk/issues/12536 — committed to cdklabs/decdk by aws-cdk-automation 3 years ago
Hi everyone, sharing my conclusions here as well:
To summarize the working/failing combinations:

- Works: `crc32-stream >= 4.0.2` OR Node `<= 15.5.0`
- Fails: `crc32-stream < 4.0.2` AND Node `> 15.5.0`

I know some folks have been reporting conflicting results; before we dive deeper into that, I wanted to clarify something about the subtleties of asset caches. It might explain some of this behavior.
Note that it's not sufficient to remove only the asset your code is creating; you must also remove the assets created by the framework.
For example, the `BucketDeployment` construct bundles its own asset (which contains the Lambda code) inside the `cdk.out` directory. If you used a faulty combination and deployed the stack, the Lambda code asset itself will be corrupted, and any subsequent deployment will suffer from the same problem, regardless of whether you removed your own asset. To address this, you need to delete all assets that are included in your stack. It's also a good idea to just nuke the entire `cdk.out` directory instead of specific assets inside it.

I've created this reproduction repo that does all that for you and also allows specifying different versions of the relevant components. You can use it for sanity checks as well as for plugging in your own stack.
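A quick way to check which side of the version matrix above you are on (assuming npm; with Yarn, `yarn why crc32-stream` gives the same information):

```sh
node --version        # versions above 15.5.0 are in the broken range
npm ls crc32-stream   # resolved crc32-stream version(s); >= 4.0.2 is safe
```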
You can do it:

```sh
brew install node@14 && brew link node@14 --overwrite
```

Downgrading to Node 14.15.4 and cleaning `cdk.out` and the S3 zip files did the trick for me.
I had the same issue after upgrading to 15.6.0. Downgrading (and deleting `cdk.out/.cache` and deleting the zip files in the S3 bucket) worked!
@iliapolo thanks a lot for taking the time to make it shareable. I can confirm it did the trick for me, i.e. deleting `cdk.out` and re-running `cdk deploy`.

Running into the same issue now with `aws_s3_deployment` on 1.84.0 and 1.85.0.

EDIT: I can see the asset was bundled locally (e.g., `asset.90b...`); however, when browsing to the `s3://cdk-hnb659fds-assets-*-*/assets/` prefix, I can see that the zip corresponding to `asset.90b` has the correct files in it, but they are entirely empty; one way to check this from the CLI is sketched below.

OS: Catalina
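The inspection in shell terms (bucket, account/region, and hash are placeholders):

```sh
# Download the staged asset and list its contents: the file names are
# present, but the entries are empty, matching the observation above.
aws s3 cp "s3://cdk-hnb659fds-assets-ACCOUNT-REGION/assets/HASH.zip" asset.zip
unzip -l asset.zip
```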
This ugly script fixes assets after deploy so you can deploy again. Run it in the root folder of your project so it can find `cdk.out`. It iterates over the assets listed in `manifest.json`, downloads them from S3, extracts them to a temporary folder, re-zips them, and finally uploads them back to S3. A simplified sketch of the idea follows below.

The fix will be available in the next release.
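The script itself isn't reproduced in this thread; a simplified, single-asset sketch of the idea it describes might look like this (bucket and key are placeholders you would read out of `cdk.out/manifest.json`):

```sh
#!/usr/bin/env bash
# Re-zip one corrupted staged asset with a healthy zip implementation
# and upload it back. A sketch, not the commenter's original script.
set -euo pipefail
BUCKET="$1"   # e.g. cdk-hnb659fds-assets-123456789012-us-east-1
KEY="$2"      # e.g. assets/<hash>.zip

TMP="$(mktemp -d)"
aws s3 cp "s3://${BUCKET}/${KEY}" "${TMP}/asset.zip"
unzip -o "${TMP}/asset.zip" -d "${TMP}/contents" || true   # tolerate CRC warnings
(cd "${TMP}/contents" && zip -qr "${TMP}/fixed.zip" .)     # re-zip with the system tool
aws s3 cp "${TMP}/fixed.zip" "s3://${BUCKET}/${KEY}"
rm -rf "${TMP}"
```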
crc32-stream 4.0.2 fixed it for me with node 15.7.0. Thanks @cmckni3 and @katrinabrock!
Some heroes wear capes. Some heroes wear whatever this is:
Working now! ❤️
I'm using the Python flavor of CDK; I had to remove this line from @iliapolo's run.sh: https://github.com/iliapolo/aws-cdk-issue12536/blob/main/run.sh#L72

package.json is this:
Of course, I also had to make sure the version of the Python `aws-cdk.*` packages matches the CDK version (`1.88.0` if you use the run.sh script as-is). I'm also still clearing out leftover Docker images that CDK generates; not sure if that is needed or not.

I was also experiencing this issue but finally got things working again with Node.js `12.20.1` and CDK `1.85.0`. I suspect it should also work for the latest 14.x LTS.

During initial troubleshooting, I tried several combinations of Node and CDK versions, but to no avail; I would always get the non-empty zip file error during deployment. I had been clearing out the `cdk.out` folder, but I was NOT clearing the zip files in the CDK bootstrap bucket on S3. Once I did that, it worked.

As many other commenters have already pointed out, you NEED to clear BOTH your local `cdk.out` AND the offending zip files in the CDK bootstrap S3 bucket on AWS (or just delete everything in the bootstrap bucket, which is what I did).

It looks like the root issue is that the zip file names are simple hashes, and CDK is optimized to not bother re-uploading the "same file", but "same" is detected just by filename, not by size or modification date. As such, once you have this issue, your bootstrap buckets are "poisoned", for lack of a better term, and won't work again unless you either A) clear the offending files in the S3 bucket (or just delete it all), or B) modify your actual code enough to result in a new hash being generated (I suspect this could be a side effect of some troubleshooting).
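In shell terms, option A above amounts to something like this (the bucket name is a placeholder; find yours via the `CDKToolkit` stack as shown in the update at the top):

```sh
BUCKET=cdk-hnb659fds-assets-123456789012-us-east-1   # placeholder
aws s3 ls "s3://${BUCKET}/assets/"        # the staged zips, keyed by hash
aws s3 rm "s3://${BUCKET}" --recursive    # the "just delete it all" variant
```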
For future reference, I’m adding my experience with this issue.
Configuration
I had a Gradle project for each Lambda handler and a Gradle project for CDK code, i.e.,
Each Lambda handler project used the Java application Gradle plugin, so when bundling locally, I had to use a Gradle build that produces only the zip distribution, since `BundlingOutput.ARCHIVED` requires exactly one archive file in the directory (i.e., no `.tar` archive). Really, only the `distZip` is necessary, but cleaning and running checks is a safe option.

So what I would try to run is `cdk bootstrap`, then `cdk synth`, and then `cdk deploy`. Bootstrapping and synthesizing would complete successfully, and I could verify that all handler ZIP archives were properly being picked up by CDK and hashed (using `AssetHashType.OUTPUT`) to `cdk.out`. However, when I would run `cdk deploy`, I noticed that an empty ZIP file would appear in `cdk.out`.

The key observation that ultimately helped me was the following. Upon running `cdk synth`, I confirmed that each handler's `aws:asset:path` matched what was in `cdk.out`. However, upon running `cdk deploy`, the value of `aws:asset:path` (located under "Metadata") for all handlers changed to the hash of the empty ZIP archive. From what I could tell, it's as if CDK didn't recognize that the handlers were already bundled, and so `cdk deploy` "bundled the bundled assets."

To fix the issue, I simply deleted `cdk.out` after running `cdk synth` and then ran `cdk deploy`; the command sequence is sketched below. I haven't confirmed, but I think running `cdk synth --no-staging` would save you the trouble of deleting `cdk.out`.

This is only the second project I have worked on with CDK, so I may be missing some critical detail about how to properly use CDK or the CDK CLI that would've avoided this issue, but it didn't seem to be caused by any particular version of the software I was using.
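As a command sequence, the workaround described above (with the unconfirmed `--no-staging` alternative) would be:

```sh
cdk synth        # bundles the handler archives into cdk.out
rm -rf cdk.out   # drop the staged copies so deploy re-bundles from scratch
cdk deploy

# Possibly equivalent, per the comment above (unconfirmed):
cdk synth --no-staging && cdk deploy
```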
My currently working configuration is Node v14.16.1 (LTS) and CDK version 1.100.0.
As others have stated, clearing the staging bucket of affected assets is required to fix this problem. After becoming aware of this issue, running `cdk deploy ... -vvv` indicated that CDK was detecting a pre-existing asset within the staging bucket. With this asset already uploaded, CDK skipped uploading the current copy (leaving a corrupted zip file in place) and proceeded to attempt to use the corrupted zip file in the following Lambda function deployment.

My issue was resolved after updating to the previously mentioned Node and CDK versions, then removing the assets indicated in the verbose output produced by the `-vvv` flag.

@scubbo, are you using a Docker container? I was running `cdk` in the `node:latest` container and had to, in addition to setting `archiver` to `~5.1.0` and deleting all objects in the S3 bucket, switch to using `node:lts`.

Still no dice on my M1 Mac. Thanks though! I'll keep at it. ⛏️
EDIT: Success! It was an ID10T error on my part… can confirm that CDK `1.86.0` works on an M1 Mac when limiting `node-archiver` as suggested. 👍

This seems to be caused by https://github.com/archiverjs/node-archiver/issues/491, which identified the root cause as https://github.com/nodejs/node/issues/37027.
Hi everyone!
Sorry about this 😕.
The issue was not automatically assigned to one of the core team members, hence the delay in response. If this ever happens, I ask that you kindly tag one of the core team members on the issue; here is the list of GitHub logins: https://github.com/aws/aws-cdk/blob/4871f4a1503bc2d82440e204e1c5b05f2ef26b7b/.mergify.yml#L9
I'm going to look into this now.
I am also facing this issue. I managed to make it work by clearing the asset in S3 and deleting `cdk.out` (not sure what `.cache` is), but it's failing again.
This issue seems to be fixed in the latest version of Node. I have tested this with Node 15.12.0 (`node -v` → `v15.12.0`). It is working so far for me.
Similar issue with BucketDeployment.
I'm using Node 15.8.0:

```
% node -v
v15.8.0
```

CDK 1.89.0:

```
% cdk --version
1.89.0 (build df7253c)
```

@cmckni3 In general I'd expect developers to prefer the advertised "Current" version (new features, hopefully fewer bugs, etc.). Also, on macOS, when installing with Homebrew (`brew install node`) you get the latest Node.js version. The LTS option seems more appealing for system administrators, where stability comes before feature richness.

The same here with
Issue occurred on CDK 1.74.0 again with node-archiver 5.1.0.
Trying with archiver 5.0.2.
I don’t know if I can be useful, but I’m sharing my experience with this issue.
I was using the latest `node` version and I encountered this issue. I rolled back to the Node LTS using `nvm`, removed `cdk.out`, removed the CDK bootstrap folder content on S3 (roughly the sequence sketched below), and then it started to work again, even with the latest CDK version.

@nija-at Just in case you missed it, the root cause is https://github.com/nodejs/node/issues/37027.
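With `nvm`, that rollback is roughly the following (the bootstrap bucket name is a placeholder):

```sh
nvm install --lts   # install and switch to the current LTS release
rm -rf cdk.out
aws s3 rm "s3://cdk-hnb659fds-assets-ACCOUNT-REGION" --recursive   # bootstrap bucket contents
```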
And a small nit regarding the changed title: The generated zip files are not actually empty, just corrupt (bad CRC). It’s simply the error message displayed by CDK that reads “Uploaded Zips of Functions are Empty”.
I have downgraded all `aws-cdk` packages to `1.74.0`, set `archiver` to `~5.1.0` as @cmckni3 suggested, deleted the uploaded S3 files, run `rm -rf cdk.out/*`, and run `npm install`, and I'm still getting this same issue (confirmed that the uploaded files are non-empty, and I get a "bad CRC" error when unzipping).

I've also tried all of the above except downgrading from `1.86.0` to `1.74.0`: same error.

Mac, 16-inch 2019, OS 10.15.7
For now, I added a version resolution to my `package.json`. Yarn had locked the offending version of node-archiver (`archiver`) even after downgrading AWS CDK.

package.json:
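The file itself isn't reproduced above; a resolution pinning `archiver` might look like this sketch (the exact pin the commenter used is not shown):

```sh
# A sketch, not the commenter's actual file: pin archiver via a Yarn
# "resolutions" entry so transitive dependencies cannot pull in the
# broken release, then regenerate the lockfile.
#
# package.json fragment:
#   "resolutions": { "archiver": "~5.1.0" }
yarn install
```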
I mean archiver 5.2
I can confirm what @duntonr wrote. After deleting the zip files from the bucket and deleting the `cdk.out` directory, deployment works using Node.js 14.15.4.

Having the same issue on a new M1 MacBook Air. Older versions (<15.x) of Node don't work on the new Macs, and v15.6.0 is giving me this error when doing `cdk deploy`.

EDIT: I was able to get it working. Steps below…

- Install `nvm` for version management
- Duplicate iTerm as "Rosetta iTerm" and update it to "Open using Rosetta"
- Open "Rosetta iTerm" and run `nvm install v12.12.0`
- Use "Rosetta iTerm" to do `cdk deploy` commands

Thanks @machielg. I downgraded to Node v14.15.4 and to node-archiver 5.1.0. So far it is looking OK now.