prisma: Intermittent failures of CLI commands for large schemas (M2 Mac - 4.10.1)

Bug description

When running prisma CLI commands via an npm script, I receive intermittent fatal errors:

> prisma migrate deploy

assertion failed [block != nullptr]: BasicBlock requested for unrecognized address
(BuilderBase.h:550 block_for_offset)
 sh: line 1: 61824 Abort trap: 6           npm run prisma:migrate

or

> prisma generate

Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
assertion failed [block != nullptr]: BasicBlock requested for unrecognized address
(BuilderBase.h:550 block_for_offset)
 sh: line 1: 62099 Abort trap: 6           npm run prisma:generate

If I rm -rf the generated client directory and downgrade back to 4.9.0, the errors stop occurring.

How to reproduce

  • On an M2 Macbook Pro
  • Starting with the https://github.com/prisma/prisma-examples/tree/latest/typescript/rest-nextjs-api-routes example project
  • I copied over the existing schema from the company project (sorry - it’s proprietary, but I can describe its qualities below)
  • Confirmed it failed
  • Simplified the schema
  • Confirmed it failed a lower percentage of the time

It seems to be related to:

  1. The use of extendedWhereUnique (#15837) - disabling this dropped the failure rate dramatically
  2. The number of models - the schema I’m using is 1000+ lines long and includes 54 interrelated models. Even dropping it to just 50 models caused the failure rate to fall from ~90% to ~50% (a simple way to measure this is sketched after this list).
  3. Prisma 4.10 - reverting to Prisma 4.9 (and deleting the codegen’d client, since that contains the new apple chip Rust binary) caused the failure rate to return to zero.
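
For reference, an intermittent failure rate like this can be measured by simply re-running the command in a loop - a rough sketch, assuming the prisma:generate npm script shown in the logs further below:

# run prisma generate 20 times and count how often it crashes
$ fails=0
$ for i in $(seq 1 20); do npm run -s prisma:generate >/dev/null 2>&1 || fails=$((fails+1)); done
$ echo "$fails failures out of 20 runs"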

Expected behavior

I would expect the command to predictably succeed or fail.

I would expect the command to succeed if there are no errors in the schema.

Prisma information


generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["extendedWhereUnique"]
}

datasource db {
  provider = "postgresql"
  url      = "postgresql://postgres:password@127.0.0.1:5434/addition_wealth_test"
}

// Apologies but I cannot share the full schema.prisma - about 1k lines of company code follows

Environment & setup

  • OS: Mac OS Ventura 13.2
  • Database: PostgreSQL
  • Node.js version: 18.13.0

Prisma Version

Environment variables loaded from .env
prisma                  : 4.10.1
@prisma/client          : 4.10.1
Current platform        : darwin
Query Engine (Node-API) : libquery-engine aead147aa326ccb985dcfed5b065b4fdabd44b19 (at node_modules/@prisma/engines/libquery_engine-darwin.dylib.node)
Migration Engine        : migration-engine-cli aead147aa326ccb985dcfed5b065b4fdabd44b19 (at node_modules/@prisma/engines/migration-engine-darwin)
Format Wasm             : @prisma/prisma-fmt-wasm 4.10.1-1.80b351cc7c06d352abe81be19b8a89e9c6b7c110
Default Engines Hash    : aead147aa326ccb985dcfed5b065b4fdabd44b19
Studio                  : 0.481.0
Preview Features        : extendedWhereUnique

About this issue

  • Original URL
  • State: open
  • Created a year ago
  • Reactions: 20
  • Comments: 39 (7 by maintainers)

Most upvoted comments

I THINK I SOLVED IT!

Looking through the issues you posted above @janpio, I noticed that the Quarto issue suggested that having the Rosetta x64 -> aarch64 translation layer inside your build stack was related to the crash.

I also looked at Apple’s Console.app and noticed that it had been capturing all of the crashes and, according to it, libRosetta was a parent task for the crash, which implied that something in my stack was still in x64.

I ran node -e 'console.log(process.arch)' and confirmed that node itself was running in an x64 environment! (On an M1/2 chip it should print out arm64.)

I wiped node from my system (removed the brew install, any manually installed copy, and the nvm-managed versions) and reinstalled it (in my case via nvm), making sure it selected the arm64 binary. I then wiped node_modules so that Prisma (and any other modules with compiled engines) would redownload the right binaries. One quick reinstall later, I have been able to run the problematic calls ~20 times in a row without a crash. As an added bonus, they’re also about 40% faster now!

I’ll leave this open in case there’s something you want to investigate still, but on my end I think the issue is resolved.
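
A rough sketch of the nvm-based steps described above (the Node version is just illustrative, and removing brew/manual installs is left out):

# confirm the problem: node reports x64 on an M1/M2 machine
$ node -e 'console.log(process.arch)'
x64

# remove the Rosetta/x64 install and reinstall from a native arm64 shell
$ nvm uninstall 18.13.0
$ arch -arm64 /bin/zsh
$ nvm install 18.13.0
$ node -e 'console.log(process.arch)'
arm64

# wipe node_modules so Prisma re-downloads the matching (arm64) engines
$ rm -rf node_modules && npm install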

I’m having this same issue in a monorepo, but only sometimes when running prisma generate. I have 15 models. There’s about a 50/50 chance of it occurring for me.

Prisma: v4.10.1 (also on an M2 Mac)
MacOS: Ventura 13.2
NodeJS: v18.14.0

Hi guys, I resolved this problem by re-installing arm64 node via nodenv! Thank you ❤️

# Change my terminal to arm64
$ arch -arm64e /bin/zsh  
$ uname -mp
arm64 arm

# Re-install my node on darwin-arm64 mode
$ nodenv uninstall 18.15.0
$ nodenv install 18.15.0
Downloading node-v18.15.0-darwin-arm64.tar.gz...
...

# Check node arch
$ node -e 'console.log(process.arch)'
arm64

# Re-install node_modules
$ rm -rf ./node_modules
$ npm i

# After that my Prisma commands succeed 100% of the time!

You can fix this issue when using Docker on Apple Silicon by disabling the "Use Rosetta for x86/amd64 emulation on Apple Silicon" setting in Docker Desktop (screenshot of the setting omitted).

Migrated from a previous Intel mac with homebrew -> fish -> asdf -> node and was running into this issue. Going and reinstalling everything to be arm64 solved the issue.

I don’t think this is necessarily Prisma’s responsibility - it’s just a gotcha for M1/M2 Mac owners who upgrade. The error message is particularly unfriendly, but as long as this issue comes up in search, people will discover that they need to fix their Node install (and its dependencies).

This issue is happening to me when I’m trying to cross-compile a Dockerfile that builds a Node project with Prisma (Next.js). It seems like this really is Prisma’s problem - cross-compilation on M1 with Rosetta for x64 fails consistently.


I also had this issue on an M2 Mac. The problem was that node was on x64. If you install it again via npm it should be on arm64.

Check with this command:

node -e 'console.log(process.arch)'

Another way I think this may happen is if you somehow have two copies of the same node version installed (which is the case for me).

iTerm2 output

$ ~: node -v
v18.14.0
$ ~: node -e 'console.log(process.arch)'
arm64

Output in VSCode in my project

$ kiai: node -v
v18.14.0
$ kiai: node -e 'console.log(process.arch)'
x64
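
One way to spot duplicate or mismatched node installs like this (just a sketch - paths depend on your setup) is to list every node on the PATH and check each binary’s architecture:

# list every node binary that is reachable on the PATH
$ which -a node

# print the architecture of each one - look for x86_64 vs arm64
$ for n in $(which -a node); do file "$n"; done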

> I ran node -e 'console.log(process.arch)' and confirmed that node itself was running in an x64 environment! (On an M1/2 chip it should print out arm64.)

Big thanks for the diagnosis @zackdotcomputer - gave me the immediate solution after I hit this while updating from Node 16 -> 18. Turns out the root cause was I somehow had the x86 version of VSCode installed on my M1 Mac 🤦‍♂️ - which in turn led to having the x86 version of node installed when I used nvm within a VSCode terminal.

Yeah agreed that this is not really Prisma’s bug - it’s a user confusion issue or an Apple migration issue, combined with an unfortunate interaction between Rust and the x64 Rosetta emulator. If you can think of a way to easily notify the user of this, then I think being able to intercept and warn about it would be helpful education for users. My first-thought ideas for how to detect this would be:

  1. process.arch not equal to the return value from the command arch
  2. Sysctl machdep.cpu.brand_string contains Apple but process.arch is not arm64 (sketched below)

Ultimately, though, I suspect that finding this thread via Google will help educate as well, and so the impact and lift of making this change is probably pretty low. If you want to mark this as resolved, I think that would be a reasonable prioritization.
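
As a shell-level sketch of detection idea 2 above (not something Prisma currently does - just the two probes run by hand):

# hardware brand string vs the architecture node reports
$ sysctl -n machdep.cpu.brand_string   # e.g. "Apple M2"
$ node -e 'console.log(process.arch)'  # should be arm64; x64 here means node runs under Rosetta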

@janpio I was able to go back to having the issue using the following steps, which someone on the Prisma team could use to reproduce (the same steps are condensed into commands at the end of this comment):

  1. Run arch -x86_64 bash to switch to bash in x64 mode.
  2. Install Node in this shell using nvm. It needs to be a different version than is already installed on your system, in my case I am using 18.14 normally, so I ran nvm install 18.13. By running this in the x64 shell, nvm will detect that and install an x64 version of node.
  3. You can now close the x64 terminal. Switching to your “tainted” version of node in nvm will run in x64 mode no matter where you run it. You can verify this by switching to it and running the above arch-print command.
  4. Delete your node_modules folder and reinstall while using tainted node. This is to clear out any arm64 binaries Prisma has downloaded.
  5. Make a package.json script to run prisma validate so you’re sure to use the local copy of prisma and don’t get tripped up by a global or npx cached copy of it.
  6. Repeatedly spam validate until it fails. For my ~1k line schema with several experimental features enabled, it fails about 1 in 4 times.

Or, as @net-tech has shown, apparently if you install the universal binary from Node’s website, it will just reflect the environment of the host program. This might allow you to switch into x64 node just using the arch command above but I haven’t tested that.
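
Condensed into commands, the reproduction above looks roughly like this (the Node version and the prisma:validate script name are only illustrative):

# steps 1-2: install an x64 node from a Rosetta shell
$ arch -x86_64 /bin/bash
$ nvm install 18.13
$ exit

# step 3: the "tainted" version now runs as x64 from any shell
$ nvm use 18.13
$ node -e 'console.log(process.arch)'
x64

# steps 4-6: reinstall modules under the x64 node, then spam validate until it crashes
$ rm -rf node_modules && npm install
$ npm run prisma:validate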

+1 I’m running into this issue building a docker image on an M1 Mac using --platform=linux/amd64; it happens about 50% of the time with or without Rosetta enabled.

@rskvazh Did you find any solution or workaround? Having the same problem on M1 when building docker images with "Use Rosetta for x86/amd64 emulation on Apple Silicon" enabled and passing the --platform linux/amd64 flag to the docker build command. We recently upgraded our docker images from FROM node:14-slim to FROM node:18-slim and upgraded prisma from 4.9.0 to 4.15.0. Calls to prisma generate randomly fail (at least 1 out of 4 calls). Running node -e 'console.log(process.arch)' inside the docker image correctly outputs x64, and prisma itself seems to have the correct binaries installed:

libquery_engine-debian-openssl-3.0.x.so.node
migration-engine-debian-openssl-3.0.x
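
For context, a cross-build like the one described above is typically invoked along these lines (image name and tag are placeholders):

# build the amd64 image on the Apple Silicon host and check what node reports inside it
$ docker build --platform linux/amd64 -t app:amd64 .
$ docker run --rm --platform linux/amd64 app:amd64 node -e 'console.log(process.arch)'
x64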

@victorhs98 for some reason it does not work for me when using 19.7.0

Had the same issue on an M1 Mac, fixed it by changing the node version to 19.7.0. The next time I ran it, it auto-downloaded the ARM-specific versions of some files.

Ok diving deeper - sorry for multiple posts, but I figured that was the best way to catalog what I find for the team - I was able to use the above stack traces and some console debug logs to pinpoint the last TS line touched before the crashes. In both cases it was a call to prismaFmt - in one case this call, and in the other case this one. In the compiled code, you can even follow these a couple steps deeper into the JS bootstrap code and find that the precise line that is crashing is the call into the WasmInstance object for get_dmmf or get_config (or some rare third crash I haven’t pinpointed yet because it doesn’t have any meaningful debug logs).

So with high confidence I can guess that something in the instantiation or the running of the WASM instance is causing a crash here. However, we’re at the edge of my ability to dive deeper, so I’ll have to hand this off to you @janpio.

If you can track down the underlying issue (which, I assume, is actually in Rust or wasm-bindgen) and get it fixed upstream, then that is great! If not, then perhaps we could at least get a quick fix that disables use of the WASM binaries for the getDmmf and getConfig steps on M1/M2 chips for now?
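
The full printout shown below can be reproduced by turning on Prisma’s debug logging - DEBUG is the standard switch for the Prisma CLI, though the exact namespaces emitted may differ by version:

$ DEBUG="prisma*" npm run prisma:generate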

@janpio here’s the full-printout from that run:

> addition-api@0.1.0 prisma:generate
> prisma generate

  prisma:engines  binaries to download libquery-engine, migration-engine +0ms
  prisma:loadEnv  project root found at /Users/zack/code/addition/api/package.json +0ms
  prisma:tryLoadEnv  Environment variables loaded from /Users/zack/code/addition/api/.env +0ms
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
  prisma:getConfig  Using getConfig Wasm +0ms
assertion failed [block != nullptr]: BasicBlock requested for unrecognized address
(BuilderBase.h:550 block_for_offset)
 [1]    84017 abort      npm run prisma:generate