prisma: Prisma generate randomly fails on Ubuntu due to missing internal .so `libquery_engine-debian-openssl-1.1.x.so.node`
Bug description
Here is my package.json in the root of my repo:
```json
{
  "devDependencies": {
    "@prisma/client": "4.13.0",
    "pnpm": "8.4.0",
    "prettier": "2.8.8",
    "prisma": "4.13.0",
    "prisma-json-types-generator": "2.3.1",
    "ts-node": "10.9.1",
    "typescript": "5.0.4",
    "turbo": "1.9.3",
    "vercel": "29.0.0"
  },
  "scripts": {
    "postinstall": "pnpm run -r --parallel prisma-generate"
  }
}
```
I have two database packages with `prisma-generate` scripts that look like so:

```json
"prisma-generate": "prisma generate --schema=./prisma/schema.prisma"
```
In my GitHub Actions workflow, when I run `pnpm install`, I randomly (and only sometimes) get the following error:
```
. postinstall$ pnpm run -r --parallel prisma-generate
. postinstall: Scope: 26 of 27 workspace projects
. postinstall: packages/ec-database prisma-generate$ prisma generate --schema=./prisma/schema.prisma
. postinstall: packages/pl-database prisma-generate$ prisma generate --schema=./prisma/schema.prisma
. postinstall: packages/ec-database prisma-generate: Prisma schema loaded from prisma/schema.prisma
. postinstall: packages/pl-database prisma-generate: Prisma schema loaded from prisma/schema.prisma
. postinstall: packages/pl-database prisma-generate: Error: ENOENT: no such file or directory, stat '/home/runner/work/Ecominate/Ecominate/node_modules/.pnpm/prisma@4.13.0/node_modules/prisma/libquery_engine-debian-openssl-1.1.x.so.node'
. postinstall: packages/pl-database prisma-generate: Failed
. postinstall: undefined
. postinstall: /home/runner/work/Ecominate/Ecominate/packages/pl-database:
. postinstall: ERR_PNPM_RECURSIVE_RUN_FIRST_FAIL @ecominate/pl-database@1.0.0 prisma-generate: `prisma generate --schema=./prisma/schema.prisma`
. postinstall: Exit status 1
. postinstall: Failed
ELIFECYCLE Command failed with exit code 1.
```
How to reproduce
See above. The failure can’t be reproduced consistently.
Expected behavior
No errors
Prisma information
See above
Environment & setup
- OS: Ubuntu (`runs-on: ubuntu-latest` in my GitHub Actions workflow)
- Database: PostgreSQL
- Node.js version: 18.12.1
Prisma Version
4.13.0
About this issue
- Original URL
- State: open
- Created a year ago
- Reactions: 12
- Comments: 41 (25 by maintainers)
Commits related to this issue
- Retry prisma generate to improve random CI errors See prisma/prisma#19124 — committed to GW2Treasures/gw2treasures.com by darthmaim a year ago
- fix(client_tests): Run the migrate.generation test as serial (#1141) There have been some instances of the CI for the client tests failing in the new migrate.generation.test.ts file. https://github... — committed to electric-sql/electric by davidmartos96 3 months ago
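For anyone wanting the retry approach from the first commit above, a small wrapper script could look like this (a sketch; the actual script from that commit is not shown in this thread):

```sh
#!/usr/bin/env bash
# Retry `prisma generate` up to 3 times and fail the job only if every
# attempt fails. Schema path taken from the original report above.
set -u
for attempt in 1 2 3; do
  prisma generate --schema=./prisma/schema.prisma && exit 0
  echo "prisma generate failed (attempt $attempt), retrying..." >&2
done
exit 1
```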
We are on version 5.5.2. We run multiple generates in parallel with the command

```sh
yarn workspaces foreach --parallel prisma generate
```

and the CI on GitHub is still flaky and fails regularly with a similar error:

```
Error: ENOENT: no such file or directory, stat '/home/runner/work/spendesk/spendesk/node_modules/prisma/libquery_engine-debian-openssl-1.1.x.so.node'
```
We managed to pinpoint the issue to a possible race condition when running multiple `prisma generate` commands in parallel. Setting `DEBUG='prisma*'` revealed that both `schema-1` and `schema-2` try to download and write to the same file at the same time. (Note: this only occurs if the engine binaries do not exist and need to be downloaded.)
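For context, a common way to make such concurrent downloads safe is to write to a per-process temp file and atomically rename it into place; a minimal sketch in Node, explicitly not Prisma’s actual code, with `engineUrl` and `enginePath` as hypothetical names:

```ts
import { writeFile, rename } from "node:fs/promises";

async function downloadEngine(engineUrl: string, enginePath: string): Promise<void> {
  // Uses the global fetch available in Node 18+.
  const res = await fetch(engineUrl);
  if (!res.ok) throw new Error(`engine download failed: ${res.status}`);

  // Write to a unique temp file on the same filesystem as the target,
  // so the rename below stays atomic (cross-filesystem renames fail).
  const tmpFile = `${enginePath}.${process.pid}.tmp`;
  await writeFile(tmpFile, Buffer.from(await res.arrayBuffer()));

  // rename(2) is atomic: a concurrent stat() sees either no file or the
  // complete file -- never a half-written one.
  await rename(tmpFile, enginePath);
}
```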
Changing our script to run `prisma generate` for each schema sequentially instead of in parallel seems to reliably fix the issue.

Note: This is potentially leading to problems on Ubuntu that we are currently investigating. We might need to revert. @Jolg42 is on this.
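For reference, the sequential variants of the parallel commands mentioned in this thread would be (a sketch; `--workspace-concurrency=1` caps pnpm at one workspace at a time):

```sh
# yarn berry: `workspaces foreach` runs sequentially once --parallel is dropped
yarn workspaces foreach prisma generate

# pnpm: cap the recursive run at one workspace at a time
pnpm run -r --workspace-concurrency=1 prisma-generate
```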
Thanks, I ran our CI three times with the new release and it stays green!
@lukahartwig & others: You can now try out the `5.5.0-dev.45` dev version, like so:

```sh
npm install --save-dev prisma@5.5.0-dev.45 && npm install @prisma/client@5.5.0-dev.45
```

Let us know if that makes things better 🙌🏼

Note that the official release, 5.5.0, is planned for October 24th.
now the bug fairy just needs to put a PR under my pillow 😃
Just got the error again, here are the debug logs:
This was the first time after maybe 30–40 CI runs; it still feels like enabling the debug output improved things…
This issue is really becoming painful for our team; it ruins most CI runs.
winrar
I just tested wrapping the line in an `os.platform() === "darwin"` check and ran our CI a couple of times. This seems to work. Should I create a PR?

The problem happens on Ubuntu on GitHub Actions for us. We don’t usually run Prisma in parallel locally, so making an exception for non-Mac platforms would improve our situation.
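Sketched out, the proposed guard would look something like this; `copyEngineIntoPlace` is a hypothetical stand-in, since the exact line in the Prisma CLI is not quoted in this thread:

```ts
import os from "node:os";

// Hypothetical stand-in for the file operation in the Prisma CLI that
// participates in the race on Linux (the real line is not quoted here).
function copyEngineIntoPlace(): void {
  // ... copy/replace the engine binary ...
}

if (os.platform() === "darwin") {
  // Run the operation only on macOS, where it is apparently needed,
  // and skip it on the Linux CI runners where it races.
  copyEngineIntoPlace();
}
```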
I ran into this as well, and I think the problem is in this line.
We are running multiple `prisma generate` processes in a monorepo using pnpm workspaces and turborepo. Setting the concurrency to 1 in turborepo resolves the issue.

It looks like the Prisma CLI deletes and replaces files, but not in a concurrency-safe way, so other processes that expect those files to be there fail on random fs calls.
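For anyone else hitting this, the concurrency cap can be set on the command line; a sketch, assuming the task is named `prisma-generate`:

```sh
# run at most one turborepo task at a time so the generate steps cannot race
turbo run prisma-generate --concurrency=1
```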
Got the other error case as well now (by running

```sh
while clear && find . -type d -name "node_modules" -exec rm -rf {} + && pnpm install; do :; done
```

in my reproduction repository until I got a different error):

I think that covers all errors that have been mentioned in this issue.
I know, that was meant about how to reproduce the error - try again until it happens in the reproduction repository. It does not happen that often to me.
I did not run into this issue over the last 2 weeks since making sure that `prisma generate` is not run in parallel anymore.

I’ve set up my generate tasks as dependencies in turborepo, so they should run in sequence. I will report back if the issue is gone over the next few days…
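One way to express that dependency in turbo 1.x is via `dependsOn` in `turbo.json` (a sketch; the task name is assumed, and `^prisma-generate` only serializes along the workspace dependency graph, so fully independent packages could still run in parallel):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "prisma-generate": {
      "dependsOn": ["^prisma-generate"],
      "cache": false
    }
  }
}
```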
Perhaps, but @janpio, we use turborepo, which implicitly parallelizes tasks; we would have to do some acrobatics to set everything up sequentially.
@Jolg42 it’s very flaky. As @darthmaim mentioned, I sometimes get this 5 times in a row, and sometimes 20 runs pass without it occurring.
I’ve added `DEBUG=prisma*` to my CI script.

I restarted the job 10 times and didn’t get a single error; usually I only get 3 runs or so in a row without the error… So either I’m really lucky (or unlucky), or the additional debug logs somehow prevent a race condition. I hit the error multiple times earlier today before adding the debug logs.
I will report back with some additional logs when the error comes back.