prisma: SQL Server: `The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request.`
Bug description
When requesting a model including its relations to another model, and more than 2100 related rows exist, SQL Server throws an error:
Error occurred during query execution:
ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(TokenError { code: 8003, state: 1, class: 16, message: "The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request.", server: "127c67c19c3c", procedure: "", line: 1 }) })
Even when I try to fetch them in batches with skip and take, I get this error no matter how small the batch is.
It seems that under the hood, Prisma translates these relation queries into a single query with thousands of query parameters. I think the client should handle this more carefully: we can't control how Prisma builds the query, and apparently there is no failsafe for this case when using SQL Server.
How to reproduce
- Have two models, AddOn and Door
- Have a many-to-many relation between them (compatibleDoors on AddOn, compatibleAddOns on Door)
- Make 10,000 relations from a single AddOn to 10,000 different Doors
- Query a single AddOn:
prismaClient.addOn
  .findUnique({
    where: {
      productId,
    },
    select: {
      compatibleDoors: true,
    },
  })
Expected behavior
Return an AddOn with 10,000 compatibleDoors.
Prisma information
model Door {
  id               String  @id @default(uuid())
  compatibleAddOns AddOn[]
  ...
}

model AddOn {
  id              String @id @default(uuid())
  productId       String @unique
  compatibleDoors Door[]
  ...
}
// (Assumed setup, not part of the original snippet: fetch the relation count first.)
const existingRelationshipsCount = await prismaClient.addOn.findUnique({
  where: { productId: addOnProductId },
  select: { _count: { select: { compatibleDoors: true } } },
});

for (let batch = 0; batch < existingRelationshipsCount._count.compatibleDoors; batch += batchSize) {
  const existingRelationships = await prismaClient.addOn
    .findUnique({
      where: {
        productId: addOnProductId,
      },
      select: {
        compatibleDoors: {
          take: batchSize,
          skip: batch,
        },
      },
    })
    .catch((e) => {
      console.log('compatibleDoorCount for ERROR:', existingRelationshipsCount._count.compatibleDoors);
      console.log(e);
    });
  ...
}
Environment & setup
- OS: macOS
- Database: SQL Server
- Node.js version: 16.14.0
Prisma Version
4.8.0
I believe this issue is due to the default_batch_size of 2099. I'm no expert in SQL Server internals, but according to this SO answer, the true limit should be 2098. It's possible to use the environment variable QUERY_BATCH_SIZE=2098 as a workaround.

Wanted to flag that this issue is also happening in groupBy.

This worked for me, thank you @dmitri-gb.
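For illustration, a sketch of a groupBy call that can hit the same cap (the color column and the doorIds list are assumptions for the example, not from the report above):

// Each value in `doorIds` becomes one bound SQL parameter, so a list
// longer than ~2098 entries trips the 2100-parameter limit.
const counts = await prismaClient.door.groupBy({
  by: ['color'], // assumes Door has a color column
  where: { id: { in: doorIds } },
  _count: true,
});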
Hi @janpio, I was able to reproduce this error in isolation. You can reproduce this yourself by cloning and running the project within the following repo:
https://github.com/JaapWeijland/prisma-mssql-issue
The case is a bit different than I initially explained, but I was able to reproduce the same error using a smaller conceptual domain nonetheless.
Happy to help. If there are any questions, please let me know.
Posting this without a reproducible example at hand to offer (apologies), but we did run into a situation today where keeping our batch size at 2050 works generally for most queries, but using include on a typical order headers/lines table that had a FK set up did not work and threw errors. We followed the advice to reduce our query size even lower; 1000 finally allows the query to run. However, curiously, when the query runs on a quiet dev DB with no activity aside from my application making this one query against it, it goes into status: stalled immediately and never recovers, hanging for what seems like forever (I lost patience after 10 minutes, so I can't actually confirm whether it will ever recover 🤷‍♂️). Checking the process log, there are no other blocking queries or activities, as I'm the only activity on the DB. For reference, if it matters, the header table has 12,533,371 rows and the lines table has 46,731,122.
(Edit: I should also note we aren't pulling back that many rows; I just wanted to note the table size, as maybe it's trying to search in an inefficient way? The actual result set we're pulling with a raw query is ~3,700 records, as it's filtered on a few different statuses/etc.)
Sadly it seems the workaround is only a partial workaround 🥲 So it's back to writing some ugly code to split things up and do it manually on our end 😅
Hope we have made progress/are closer to a resolution on this one! 🙏🕯️
This is definitely a good step and I’m sure it’ll fix most cases! Very happy to see this getting some action.
I've seen other ORMs do this without parameterizing the "join" field (just a comma-separated list), but that leaves you open to SQL injection.
There are two ways I could see to solve this:
1. Keep query batching but limit the batch size to fix the parameter issue.
   You'd have a MAX_BATCH_SIZE constant instead of QUERY_BATCH_SIZE, default 2098. Then you would work out the number of additional parameters (in the example above, 1 for color) and subtract that from the max. For the query you would use:
   batchSize = MAX_BATCH_SIZE - numAdditionalParameters
2. Do away with query batching and use temp tables to temporarily store join fields - my personal dream.
   I would guess this is a much more performant option for large datasets, and it's injection safe. Downsides are write operations and more complexity.
Say you have a Prisma query:
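For instance, something like this (the Person and Email models are assumptions for the example):

const people = await prisma.person.findMany({
  where: { name: { startsWith: 'A' } }, // arbitrary filter for the example
  include: { emails: true }, // one-to-many relation to Email
});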
You could aim for SQL like this. Person query:
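A sketch of the idea (illustrative T-SQL, not actual Prisma output; table and column names are assumed):

-- Stage the matching people in a temp table so the follow-up email query
-- can join against it instead of binding thousands of id parameters.
SELECT id, name
INTO #person_batch
FROM Person
WHERE name LIKE 'A%';

SELECT * FROM #person_batch;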
Email query:
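(Again illustrative, with the same assumed names:)

-- Join against the temp table rather than WHERE personId IN (@P1, ..., @Pn).
SELECT e.*
FROM Email e
JOIN #person_batch p ON p.id = e.personId;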
Multi-column relations and temp-table indexing are important considerations for this approach.
Setting the batch size to 2098 will work in some cases, but does not solve the root problem. This is because it is possible to add more parameters to the query.
A fixed batch size is a fundamentally flawed approach, and the batch size should instead be dynamically determined based on the number of other parameters.
For example, if batch size is 2098 and you wanted to get all compatible red doors:
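A hypothetical query for this example (it assumes Door has a color column):

const doors = await prismaClient.addOn.findUnique({
  where: { productId },
  select: {
    compatibleDoors: {
      where: { color: 'red' }, // the one additional parameter
      take: batchSize,
      skip: batch,
    },
  },
});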
For your 'compatible doors' batched query, you'll end up with SQL something like this:
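An illustrative shape, not Prisma's exact output:

-- The 2098 batched ids, the color filter, and the OFFSET/FETCH values are
-- all bound as parameters of a single sp_executesql call.
exec sp_executesql
  N'SELECT ... FROM [dbo].[Door]
    WHERE [dbo].[Door].[id] IN (@P1, @P2, /* ... */ @P2098)
      AND [dbo].[Door].[color] = @P2099
    ORDER BY [dbo].[Door].[id]
    OFFSET @P2100 ROWS FETCH NEXT @P2101 ROWS ONLY',
  ...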
Here you have sp_executesql with 2101 parameters: 2098 ids for the batch, one for the color filter, and two for take and skip.
As a workaround I’ve set QUERY_BATCH_SIZE=2000 in my .env
Any progress on this one? I have the exact same problem. I tried to work around it by fetching the relevant IDs from a single table and manually chunking the queries to the related tables. It seems like an anti-pattern… any suggestions, perhaps?
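For what it's worth, a minimal sketch of that manual chunking; the 2098 cap follows the earlier comments, and the helper itself is hypothetical, not a Prisma API:

import { Door, PrismaClient } from '@prisma/client';

const prismaClient = new PrismaClient();

// Fetch related rows in chunks so each query stays under SQL Server's
// parameter cap. `extraParams` counts the non-id parameters the query
// adds (filters, take/skip, ...).
async function findDoorsByIdInChunks(ids: string[], extraParams = 0): Promise<Door[]> {
  const limit = 2098 - extraParams;
  const results: Door[] = [];
  for (let i = 0; i < ids.length; i += limit) {
    const chunk = ids.slice(i, i + limit);
    results.push(...(await prismaClient.door.findMany({ where: { id: { in: chunk } } })));
  }
  return results;
}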
AFAIK the issue is not unique to MSSQL, although the parameter-count error message probably is. Prisma is not translating the query code into a SQL JOIN query; instead it makes multiple cascading queries and finally joins the results in code within the driver. One can easily see this by activating query logging in the client:

new PrismaClient({ log: ['query'] })

This behavior is far from optimal in any situation where the database is non-trivial: it usually ends up in unexpected errors when the system is live and the data grows past a certain point. I've seen this happen in both MySQL and MSSQL (not because other databases are immune, rather because these are the ones I routinely use).
While I firmly believe that Prisma is generally superior, in this one aspect it still trails well behind Sequelize, whose join queries are excellent.
The workaround I usually adopt for the typical application, where you have a list of rows and then do CRUD operations on the single record, is to create views for the lists, add them as entities to the schema, and perform the findMany operation on them (a sketch follows below).
Pros: findMany instructions become extremely simple to write and test.
Con:
I believe this is a good compromise, but I'd really like to know when the Prisma team plans to fix this problem.
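A minimal sketch of that view-based workaround; the view, its name, and its columns are illustrative, and the view itself is created directly in the database:

// schema.prisma: expose a database view as an entity. The view would be
// created by hand, e.g. CREATE VIEW AddOnList AS SELECT ... with the
// pre-joined columns the list screen needs.
model AddOnList {
  id        String @id
  productId String
  doorCount Int
}

Reading the list then stays trivial: prismaClient.addOnList.findMany({ where: { doorCount: { gt: 0 } } }).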
Amazing @JaapWeijland, I can reproduce with this:
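(A minimal sketch of such a repro, following the steps from the issue description; it assumes Door's remaining columns are optional or defaulted:)

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function main() {
  // Seed one AddOn connected to 10,000 Doors: enough related rows to
  // exceed the 2100-parameter limit when the relation is loaded.
  const addOn = await prisma.addOn.create({ data: { productId: 'addon-1' } });
  for (let i = 0; i < 20; i++) {
    await prisma.addOn.update({
      where: { id: addOn.id },
      data: { compatibleDoors: { create: Array.from({ length: 500 }, () => ({})) } },
    });
  }

  // This query fails with SQL Server error 8003 ("too many parameters").
  await prisma.addOn.findUnique({
    where: { productId: 'addon-1' },
    select: { compatibleDoors: true },
  });
}

main().finally(() => prisma.$disconnect());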