serverless: AWS - Incorrect path separator used when defining package path
This is a Bug Report
Description
Incorrect path separator used when determining where the deployment artefact is located during AWS deployment.
- What went wrong? When setting the package path, the wrong path separator was used on macOS, resulting in an incorrect path being used to upload artefacts to S3 (a sketch of the symptom follows the environment information below).
- What did you expect should have happened? The correct path separator should be used based on the OS.
- What was the config you used? Standard.
- What stacktrace or error message from your provider did you see?
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Error --------------------------------------------------
ENOENT: no such file or directory, stat './server/.serverless/.serverless/fishcare-cms-server.zip'
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: darwin
Node Version: 9.3.0
Serverless Version: 1.39.0
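For illustration, here is a minimal sketch of how a mismatched separator could produce the doubled .serverless/.serverless segment shown in the error; the serverlessDir and artifact values are guesses inferred from the ENOENT path above, not taken from the framework source or my config.

```js
// Hypothetical sketch of the symptom only, not the actual framework code.
const path = require('path');

const serverlessDir = './server/.serverless';            // assumed service package directory
const artifact = '.serverless/fishcare-cms-server.zip';  // assumed package.artifact value

// If the directory portion of the artifact path is stripped with a separator
// that does not match the one actually used in the path (e.g. '\\' while the
// path uses '/'), nothing is removed:
const name = artifact.split('\\').pop(); // still '.serverless/fishcare-cms-server.zip'

// Re-joining the unstripped value with the .serverless directory then
// duplicates the '.serverless' segment, matching the ENOENT path:
console.log(path.join(serverlessDir, name));
// -> 'server/.serverless/.serverless/fishcare-cms-server.zip'
```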
Solution
Change:
- File: serverless/lib/plugins/aws/package/lib/saveServiceState.js
- Line: 35
- To:
packageRef.artifact.substr(packageRef.artifact.lastIndexOf(path.sep) + 1)
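As a quick sanity check of that expression (the packageRef object here is just an illustrative shape, not the real one from the plugin):

```js
const path = require('path');

// Illustrative stand-in for the real packageRef used in saveServiceState.js.
const packageRef = { artifact: './server/.serverless/fishcare-cms-server.zip' };

// Strips everything up to and including the last OS-specific separator,
// leaving only the artifact file name.
const artifactName = packageRef.artifact.substr(
  packageRef.artifact.lastIndexOf(path.sep) + 1
);
console.log(artifactName); // 'fishcare-cms-server.zip' on macOS/Linux (path.sep === '/')
```

path.basename(packageRef.artifact) would return the same file name and is arguably more idiomatic; whichever fits the surrounding code better is a call for whoever opens the PR.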
I would make the change myself but haven’t contributed before and don’t have time right now to meet contribution requirements.
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 13
- Comments: 17 (1 by maintainers)
Somehow this one has resurfaced in the latest versions, 1.49 and 1.50; I am still testing which one works. I rolled back to 1.38.0, but a more thorough investigation would be better, I think.
I will fix, PR coming up.
in my case the problem is
Chiming in to say this is happening with 1.69.0
I’m still having this issue with version 1.66. I’m happy to roll back for now, but I worry that is not sustainable. Does anyone know a solution or workaround?
Got the same issue, had to roll back to 1.38.0.