objection.js: MSSQL: joinEager within transactions fails because there is a request in progress
Hello! Recently, knex moved to mssql v4, which dropped support for queueing queries within transactions (and, from what I can tell, they aren’t willing to bring it back, either). This change means that only sequential operations can be performed inside a transaction. Now, onto the issue with Objection.js:
Here we have an explicit Promise.all which does things concurrently.
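To illustrate why that is now a problem on mssql, here is a minimal knex-only sketch (not Objection code; `users` and `pets` are placeholder table names and `knex` is assumed to be configured for mssql): two queries issued concurrently on the same transaction fail, while awaiting them one after the other works.

// Sketch of the mssql v4 constraint; `knex` is assumed to be an mssql-backed instance.
async function demo(knex) {
  return knex.transaction(async trx => {
    // Fails on mssql v4: two requests hit the same transaction connection at once.
    // await Promise.all([trx('users').select(), trx('pets').select()]);

    // Works: only one request is ever in flight at a time.
    const users = await trx('users').select();
    const pets = await trx('pets').select();
    return { users, pets };
  });
}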
Now, we have two ways of solving this:
- Everywhere we use Promise.all, we replace it with a series implementation (Bluebird made this quite counterintuitive…):
return Promise.reduce(allModelClasses, function (_, ModelClass) {
  const table = builder.tableNameFor(ModelClass);

  if (columnInfo[table]) {
    // Already cached (or already being fetched): reuse it.
    return columnInfo[table];
  } else {
    // Fetch the column info for this table and cache the promise immediately,
    // so a later iteration for the same table doesn't start a second query.
    columnInfo[table] = ModelClass.query()
      .childQueryOf(builder)
      .columnInfo()
      .then(info => {
        const result = {
          columns: Object.keys(info)
        };
        // Replace the cached promise with the resolved result.
        columnInfo[table] = result;
        return result;
      });

    return columnInfo[table];
  }
}, null);
(The code above works; I can open a PR if needed. For now I “solve” the problem by either patching this code or using another eager algorithm. A plain async/await version of the same series idea is sketched right after this list.)
- Or, in this specific case, it’s possible to cache everything beforehand (by calling it outside a transaction), perhaps through a static method (I didn’t see an easy way of doing this externally, so I didn’t try it).
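For reference, the same series approach written with plain async/await instead of Bluebird’s Promise.reduce. This is only a sketch and reuses the `allModelClasses`, `builder` and `columnInfo` names from the snippet above:

// Sequential column-info fetch using async/await; each query is awaited
// before the next one starts, so only one request is in flight on the
// transaction's connection at any time.
async function fetchColumnInfoInSeries(allModelClasses, builder, columnInfo) {
  for (const ModelClass of allModelClasses) {
    const table = builder.tableNameFor(ModelClass);

    if (!columnInfo[table]) {
      const info = await ModelClass.query()
        .childQueryOf(builder)
        .columnInfo();

      columnInfo[table] = { columns: Object.keys(info) };
    }
  }

  return columnInfo;
}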
Now, I haven’t looked much, but I suspect a similar Promise.all is part of another feature I’m not currently using, graph inserts, which will probably suffer from the same problem with this transaction/mssql combo.
About this issue
- State: closed
- Created 7 years ago
- Comments: 18 (3 by maintainers)
@mastermatt Yes, internal concurrency is disabled for now, but does that really matter? It increases the latency a little bit when your server is seeing little to no traffic, but with more traffic it should actually even out latency between similar requests, because the connection pool isn’t hogged by a single request doing a huge eager query. If you need the minimum latency and have relatively little traffic, you can get some speedup by setting concurrency to a bigger number.

@danigb I can’t reproduce that. Could you write a simple script I can run that reproduces the issue?
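If it helps anyone reading later, here is a minimal sketch of bumping that concurrency setting. It assumes the knob is exposed as a static `concurrency` property on the model base class; the exact property name and its default vary between Objection versions, so verify against the docs for your release before relying on this.

// Assumption-laden sketch: raising per-connection query concurrency via a
// static property on a shared base model (check your Objection version's docs).
const { Model } = require('objection');

class BaseModel extends Model {
  static get concurrency() {
    // How many queries Objection may run concurrently per connection.
    // Raising this trades connection-pool fairness for lower latency
    // when the server sees little traffic.
    return 4;
  }
}

module.exports = BaseModel;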
Unfortunately I have run into exactly this, using a transaction with an insert graph, with all of the models having various hooks that require different things to be patched on after I get the inserts back (because I need the ids…)
Is there an alternative solution that I might be able to hack together? The only thing I can think of at the moment is nixing all of my query hooks and then, when I get the returned graph back, running all of those queries in series (something like the sketch below)… it’s really inconvenient and messy though 😦
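For what it’s worth, a minimal sketch of that fallback. The names `graphToInsert` and `patchAfterInsert` are hypothetical placeholders, and it assumes the follow-up work is a simple patch per inserted model; every call is awaited so only one request is in flight on the transaction.

// Sketch of the "run the follow-up queries in series" fallback.
// `graphToInsert` and `patchAfterInsert` are hypothetical placeholders.
const { transaction } = require('objection');

async function insertGraphThenPatchInSeries(RootModel, graphToInsert, patchAfterInsert) {
  return transaction(RootModel.knex(), async trx => {
    // insertGraph itself must also run sequentially on mssql, so internal
    // concurrency needs to stay at 1 for this to work.
    const inserted = await RootModel.query(trx).insertGraph(graphToInsert);

    // Run the hook-like follow-up patches one at a time, now that the ids exist.
    for (const model of [].concat(inserted)) {
      await model.$query(trx).patch(patchAfterInsert(model));
    }

    return inserted;
  });
}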