prisma: Error: Failed to convert rust `String` into napi `string`

Bug description

I’m trying to query a large amount of data, and the query fails with an unknown error.

I was only able to find issue #13192 related to this matter.

Full error message:

"stack": "Error: Failed to convert rust `String` into napi `string`
 at RequestHandler.request (/app/node_modules/@prisma/client/runtime/index.js:49022:15)
 at async /app/build/services/data.service.js:8:20
 at async PrismaClient._request (/app/node_modules/@prisma/client/runtime/index.js:49919:18)
 at async getData (/app/build/services/data.service.js:16:25)
 at async basicStatisticsController (/app/build/controllers/statistics/index.js:16:48)"

How to reproduce

I’m not exactly sure; the error occurs only when I’m querying data that is huge in size, somewhere around ~500 MB.

Expected behavior

No response

Prisma information

query looks like this:

    await prisma.table.findMany({
      where: {
        dateTime: {
          gte: date.start,
          lte: date.end,
        },
      },
    });
    
    // Same error when using the queryRaw like this:
    await prisma.$queryRaw`
      SELECT * FROM "Table"
      WHERE "dateTime" BETWEEN ${date.start} AND ${date.end}
    `;

Environment & setup

  • OS: Windows/Linux
  • Database: PostgreSQL
  • Node.js version: 16.3
  • DB Hosted on: AWS Aurora

Prisma Version

Happens on 3.15.2 as well as 3.14

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 33
  • Comments: 64 (10 by maintainers)

Most upvoted comments

Is there any workaround for this?

The query runs fine with a small set but throws this error when I remove the take: 10 from the query.

What worked for me was being a better software engineer 😅

Instead of one giant findMany call, I made small but frequent calls with the same desired effect.
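The chunking approach described above can be sketched generically. This is a hypothetical helper, not part of the Prisma API: `fetchPage(skip, take)` stands in for a call like `prisma.table.findMany({ skip, take, where: ... })`.

```typescript
// Fetch all rows in fixed-size pages instead of one huge findMany.
// fetchPage(skip, take) is assumed to resolve to at most `take` rows.
async function fetchAll<T>(
  fetchPage: (skip: number, take: number) => Promise<T[]>,
  batchSize = 1000,
): Promise<T[]> {
  const all: T[] = [];
  for (let skip = 0; ; skip += batchSize) {
    const page = await fetchPage(skip, batchSize);
    all.push(...page);
    if (page.length < batchSize) break; // last (possibly partial) page
  }
  return all;
}
```

Note that this still accumulates everything in memory; if the combined result is what blows past the limit, process each page as it arrives instead of collecting them.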

Alright, I coded a reproduction example. You can find it at https://github.com/Quesstor/prisma-error

I had to run a few tests to boil the problem down. The JSON contains a very long string property (>500 kB), and you have to select a certain number of rows (in my case it worked with 1066 rows and broke with the 1067th).

You can run the example by cloning the repo and simply running docker compose up. The docker compose file includes the Postgres DB and the example app.

@janpio @Jolg42 I’m just curious, why has this critical bug been open for 1.5 years?

I’m experiencing this issue when querying a large number of records with lots of data. Dialing back the batch size is a workaround for me but not viable long-term. Watching for a solution.
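When batching as described above, deep skip/take offsets get slow; cursor-based pagination (which Prisma supports via `cursor` plus `skip: 1`) resumes after the last row seen instead. A generic sketch, with the hypothetical `fetchAfter` standing in for something like `prisma.table.findMany({ take, orderBy: { id: 'asc' }, ...(cursor !== undefined && { cursor: { id: cursor }, skip: 1 }) })`:

```typescript
type Row = { id: number };

// Yield pages of rows, resuming after the last id seen on each call,
// so no single query has to materialize the whole result set.
async function* paginate(
  fetchAfter: (cursor: number | undefined, take: number) => Promise<Row[]>,
  take = 1000,
): AsyncGenerator<Row[]> {
  let cursor: number | undefined;
  for (;;) {
    const page = await fetchAfter(cursor, take);
    if (page.length === 0) return;
    yield page;
    cursor = page[page.length - 1].id;
    if (page.length < take) return; // short page: nothing left
  }
}
```

Processing each yielded page immediately (rather than concatenating them) keeps both the engine response and the JS heap small.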

Bump, experiencing this issue on a table with ~200k rows.

We are going to try and refactor the query into chunks to see if that fixes the issue.

Specific Prisma error from our lambda logs:

Invalid `s.prisma.$transaction([s.prisma.account.aggregate()` invocation in
/var/task/src/components/xxx/routes.js:22:34103

  19         WHERE "Account".id = ${e}
  20         GROUP BY ("X".id, "Account".id)
  21     ) as overview;
→ 22 `))[0],t.getAll=async e=>{var t;const n=await s.prisma.$transaction([s.prisma.account.aggregate(
Failed to convert rust `String` into napi `string`
    at Pn.handleRequestError (/var/task/node_modules/@prisma/client/runtime/library.js:171:6929)
    at Pn.handleAndLogRequestError (/var/task/node_modules/@prisma/client/runtime/library.js:171:6358)
    at Pn.request (/var/task/node_modules/@prisma/client/runtime/library.js:171:6237)
    at async t.ourLambdaHandlerName (/var/task/src/components/xxx)
    at async t.ourLambdaHandlerName (/var/task/src/components/xxx)
    at async runRequest (/var/task/node_modules/@middy/core/index.cjs:124:32) {
  code: 'GenericFailure',
  clientVersion: '4.14.0',
  meta: undefined
}


Updated it in the repo; it keeps happening in version 5.5.2. The query is simply await DB.document.findMany({ take: 1067 })

@sunneydev ooh, could you elaborate please? I’ll try with a few different chunk sizes. Would be helpful to know, since the patch right now is to run the operation in batches; this info would at least help establish a safe batch size.

There’s a hard-coded string size limit in the V8 engine (the JS engine Node.js relies on). My thought is that it might be related to that. It’s somewhere between 256 MB and 1 GB; I don’t know exactly.

This Stack Overflow question might be useful.

@sunneydev don’t embarrass yourself

@slavaGanzin I think it’s because the error comes from a bad practice anyway. I don’t think you should be inserting a lot of rows at once.

We’re experiencing this as well with a complex query (with many includes) on a table with ~22K records (not that big) using latest prisma@5.7.1 on a Postgres 16.0 database.

prisma:client:request_handler  [Error: Failed to convert rust `String` into napi `string`] {
  code: 'GenericFailure'
} +0ms

It’s a cron-like job that started to fail suddenly, so we might have reached some kind of hard cap somewhere.

@tncn1122 @kulsbamby @lucas-coelho @alessandrofc @MrVhek @tadeumx1 Please add more information to your comments to make them more useful. In short, please don’t bump the issue if you have no information to share, you can use the 👍🏼 on the issue at the top to signal the same.

I’m definitely interested to know if this is happening with recent 5.X version of Prisma, like 5.5.2.

Example:

  • Add the Prisma packages versions you are using -> you can post the output of npx prisma version.
  • Add the Prisma Client query that triggered this
  • And more information is always welcome.

Thank you 💚

Hmm, the same problem here

I have also run into this issue. In my case I noticed that a combination of two factors triggers it:

  • A large amount of data, combined with
  • Circular joins ( user > category > types > user )


agreed @thugzook haha

After selecting only the useful columns, it worked here as well. 😃
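Selecting only the needed columns works because Prisma’s `select` (e.g. `findMany({ select: { id: true, dateTime: true } })`) keeps the huge text/JSON columns out of the engine response entirely. The size difference is easy to see on a plain object standing in for a row (the field names here are illustrative):

```typescript
// A fake row with a large text column, like the >500 kB JSON string
// from the reproduction repo.
const row = {
  id: 1,
  dateTime: "2024-01-01T00:00:00Z",
  payload: "x".repeat(500_000), // stand-in for a huge string column
};

const full = JSON.stringify(row).length;
const slim = JSON.stringify({ id: row.id, dateTime: row.dateTime }).length;
console.log({ full, slim }); // slim is a tiny fraction of full
```

Multiplied across a thousand rows, dropping one such column is the difference between a few kilobytes and half a gigabyte of serialized response.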

Running on a 32GB machine w/ 10GB page size on Node.js.

Same error for me. Was fine with ~170k entries, but I’m running into issues past that.

From your description I would assume it is mostly caused by the switch in versions between the branches. My guess would be that there is a rogue Prisma Client somewhere maybe in your node_modules or so.

It is important that you do not delete the node_modules folder of the original project that has this problem.

Can you maybe check out the same project in another folder somewhere else, run yarn install and then compare the new checkout with the old one, including all the files in node_modules? Maybe we can discover the difference this way.

@janpio I’m trying to reproduce the error in a simple project, but it doesn’t happen there… yet I can still reproduce it easily in my actual project, so I’ll keep trying to build a minimal reproduction.

The reproduction steps that currently work in my actual project are below.

[env]
MacBookAir M1
VS Code with dev container

[node & prisma]
branch A: node 14.20.1 & prisma 4.1.0
branch B: node 16.17.1 & prisma 4.4.0

[sequence]
1. Switch to branch B, build the dev container, and run "yarn install --force"
2. Run "prisma migrate reset" (with seed)
3. Switch to branch A and run "yarn install"
4. Rebuild the dev container
5. Run "prisma migrate reset" again (with seed)

I hope this gives you some insight.

I’m using Prisma 4.1.1 and PostgreSQL 12.12. I have a lot of entries in one of my tables, and I get this error when I try to findMany on it.

Hey @sunney-x, are you able to reliably reproduce this error?

Yes, If my device does not have a lot of RAM I get the error. I’m sure it’s some kind of memory leak, there’s no way the query could use 8 gigabytes of RAM.

Also, if you queried less, like you sliced your data into portions, do you encounter this error in one of the portions?

No, the issue only occurs if it’s a lot of data in a single query.

I can also add that I’m experiencing very high memory usage when this error occurs.