sequelize: random SequelizeConnectionAcquireTimeoutError (DB connection pool error)
I’m running a very simple app that uses Sequelize basics. It worked for the past year without any issues on Sequelize v4. Recently, the app has started crashing once a day with the following error:
TimeoutError: ResourceRequest timed out
at ResourceRequest._fireTimeout (/usr/src/app/node_modules/generic-pool/lib/ResourceRequest.js:62:17)
at Timeout.bound (/usr/src/app/node_modules/generic-pool/lib/ResourceRequest.js:8:15)
at ontimeout (timers.js:498:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:290:5)
It was happening with sequelize 4.42.0, mysql2 v1.6.5, and Node carbon-alpine. After reading a lot about this issue, I figured that Sequelize v5 might handle the connection pool better, so I tried sequelize 5.7.6 with mariadb v2.0.3, but nothing changed; now the error I’m getting is:
SequelizeConnectionAcquireTimeoutError: Operation timeout
at pool.acquire.catch.err (/usr/src/app/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:281:53)
at tryCatcher (/usr/src/app/node_modules/bluebird/js/release/util.js:16:23)
at /usr/src/app/node_modules/bluebird/js/release/catch_filter.js:17:41
at tryCatcher (/usr/src/app/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/usr/src/app/node_modules/bluebird/js/release/promise.js:512:31)
at Promise._settlePromise (/usr/src/app/node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromise0 (/usr/src/app/node_modules/bluebird/js/release/promise.js:614:10)
at Promise._settlePromises (/usr/src/app/node_modules/bluebird/js/release/promise.js:690:18)
at _drainQueueStep (/usr/src/app/node_modules/bluebird/js/release/async.js:138:12)
at _drainQueue (/usr/src/app/node_modules/bluebird/js/release/async.js:131:9)
at Async._drainQueues (/usr/src/app/node_modules/bluebird/js/release/async.js:147:5)
at Immediate.Async.drainQueues (/usr/src/app/node_modules/bluebird/js/release/async.js:17:14)
at runCallback (timers.js:810:20)
at tryOnImmediate (timers.js:768:5)
at processImmediate [as _immediateCallback] (timers.js:745:5)
It’s important to note that I have other apps running on the same K8s cluster, against the same DB with the same credentials, and they are not crashing.
Any suggestions?
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 15
- Comments: 85 (19 by maintainers)
Commits related to this issue
- Run all queries in transaction to avoid connection errors https://github.com/sequelize/sequelize/issues/10858#issuecomment-549817032 — committed to pubpub/pubpub by kalilsn 10 months ago
@demian85 I was seeing this error as well, and in my case it happened when some Sequelize calls within a transaction weren’t using the parent transaction. So something like:
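The original snippet wasn’t captured in this export; a minimal sketch of the anti-pattern being described, with hypothetical User and Order models, could look like this:

```javascript
// Hypothetical sketch: the second query ignores the surrounding transaction.
async function createOrderForUser(sequelize, User, Order) {
  return sequelize.transaction(async (t) => {
    // Correct: runs on the transaction's pooled connection.
    const user = await User.findByPk(1, { transaction: t });

    // Bug: `{ transaction: t }` is missing, so this query checks out a
    // second connection while the transaction still holds the first one.
    // Under load these extra checkouts can exhaust the pool, and later
    // acquires fail with SequelizeConnectionAcquireTimeoutError.
    return Order.create({ userId: user.id });
  });
}
```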
It seems that these calls blocked the transaction for a longer amount of time, and other transactions then failed with SequelizeConnectionAcquireTimeoutError. I am not sure if this could be related though.

Increasing the pool configuration only delays when the error appears. The real cause of my error was that k8s sends a very high volume of HEAD probe requests, and because of my earlier improper setup, every HEAD request triggered a DB query; after a while the errors appeared in batches. My fix was to register app.head(‘/’, (req, res) => { res.send(“ok”) }); at the very front of the Express app’s handlers, intercepting HEAD requests so they never reach the later callbacks. That solved the problem.
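The fix described above, sketched as a tiny helper (the route and response body come from the comment; the helper name is mine):

```javascript
// Answer k8s HEAD probes immediately so they never reach handlers that
// query the database. Register this before any other routes/middleware.
function registerHeadIntercept(app) {
  app.head('/', (req, res) => {
    res.send('ok');
  });
}
```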
I’m using the latest version and this creepy error still appears randomly. I’m also using the default config values. The worst part is that my express app completely crashes and stays in a “zombie mode” where I need to reset the server and start from scratch or nothing will ever work again. Any ideas?
@demian85 You’re one of the first people I’ve seen who has been experiencing the same issue. Very accurate description.
Take a look at this: https://blog.logrocket.com/why-you-should-avoid-orms-with-examples-in-node-js-e0baab73fa5/
Not saying you should follow the headline, but I found it very insightful. Let me know what you think and whether you have resolved the issue; I’ll be sure to do the same.
@demian85 I couldn’t solve this error by any means. After a lot of investigation I went back to the root of the problem: “Why, without any changes, does this error now happen all the time?” After some research I noticed that I was getting the error because my volume of data had increased. So the error was probably related to that. I monitored all my queries and found a very nasty one… I changed it and modified my entities accordingly… now everything is fine, without any other change.
I don’t see how waiting 60 seconds instead of 30 seconds would be a solution. At least in my system even 2 seconds is a long time to wait. Besides, downgrading to pg 7.16.0 solved the issue.
We’re still monitoring this thread
SequelizeConnectionAcquireTimeoutError means all connections currently in the pool were in use and a request to acquire a connection took more than 60 seconds (the default timeout). This can typically happen if there are too many requests being made and the pool is too small.
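For reference, these are the pool options involved; the values below are illustrative, not recommendations (defaults shown are for Sequelize v5/v6):

```javascript
// Illustrative pool settings to pass to `new Sequelize(...)`.
const poolOptions = {
  pool: {
    max: 10,        // maximum number of connections in the pool (default 5)
    min: 0,         // minimum number of connections kept open
    acquire: 60000, // ms to wait for a free connection before throwing
                    // SequelizeConnectionAcquireTimeoutError (default 60000)
    idle: 10000,    // ms a connection may sit unused before being released
  },
};
```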
If the pool size is not the issue, unfortunately the stacktrace is not useful to us as it only lets us know that the pool is saturated, not why all connections are in-use.
Consider taking a look at what the connections are doing (if you’re using Postgres, pgAdmin has a dashboard that displays that nicely). If they’re not doing anything (idle), but Sequelize’s pool says they are all in use, then we likely have a bug in Sequelize.
If that is the case, please let us know which version of Sequelize you’re using, as well as the name of the database, its version, and the version of the npm library used to connect to it.
What do you mean by “optimized queries”? What does SQL optimization have to do with that error? I’ll try to find any transaction issues and report back.
Reopening indicates that there is something that needs to be fixed on our end. I don’t have any evidence of that currently. We’ll still monitor the thread, but I prefer to keep the issue closed unless we actively need to fix something.
All new comments I’ve seen so far fall into one of these two categories, neither of which is caused by Sequelize.
If you don’t await them in the loop (e.g. collect all the promises outside the loop, then use Promise.all on them), then yes, it’s going to run all of these operations in parallel, which requires a new connection for each of them. If you await each operation in the loop, it’s going to run them in series. You want the first one for performance. If your number of operations is too big, you’ll run out of connections, but you have a few options. It’s not a Node issue, just a parallelization issue: you’d have the same problem in Java if you tried to parallelize this work, and if you don’t parallelize it, it runs in series and is very slow. It’s the same thing; Node just makes it easier to parallelize by default.
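The strategies described above can be sketched as follows; `op` stands in for any async DB call, and the bounded-concurrency variant is the usual compromise when the operation count exceeds the pool size:

```javascript
// One at a time: uses a single connection, but slow for many items.
async function runInSeries(items, op) {
  const results = [];
  for (const item of items) {
    results.push(await op(item));
  }
  return results;
}

// All at once: fast, but each pending op may hold its own connection,
// which can exhaust the pool.
async function runInParallel(items, op) {
  return Promise.all(items.map(op));
}

// Middle ground: at most `batchSize` ops (and connections) in flight.
async function runInBatches(items, op, batchSize) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(op))));
  }
  return results;
}
```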
Same issue here; restarting the server solves it temporarily, but we need a real solution.
Config:
Keep in mind that if you use new Sequelize({ pool: { max: 1 }, /* (...) */ }) (a common config for AWS Lambda functions) and use Model.findAndCountAll(), you may experience SequelizeConnectionAcquireTimeoutError errors for expensive (latency-wise) queries. See #12633 for more details.
I found at least one place in my code that could lead to this error. Anyone facing the same issue: be sure that you commit or roll back all transactions first. In my case I forgot to commit a transaction in one of my if branches. To check whether some PostgreSQL transactions are idle, one can execute the following SQL query (geobase is the database name in the example):

So after some investigation I found out that my problem was that one of my queries caused multiple deadlocks in my MSSQL db, which eventually raised the
SequelizeConnectionAcquireTimeoutError error. I fixed the deadlock issue and the SequelizeConnectionAcquireTimeoutError error disappeared without any configuration changes.

I’ve applied this patch and no more timeouts until now (7 days and counting)
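The SQL query referenced a few comments up (the geobase example) wasn’t captured in this export; a sketch of an equivalent check against pg_stat_activity, runnable through an existing sequelize instance, might look like:

```javascript
// List PostgreSQL backends stuck in "idle in transaction": these hold a
// pooled connection open without doing work. 'geobase' is the database
// name from the comment above.
const IDLE_TX_SQL = `
  SELECT pid, state, query, state_change
  FROM pg_stat_activity
  WHERE datname = 'geobase'
    AND state = 'idle in transaction'`;

async function findIdleTransactions(sequelize) {
  // Raw queries resolve to [rows, metadata].
  const [rows] = await sequelize.query(IDLE_TX_SQL);
  return rows;
}
```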
thanks!
Make sure to roll back or commit your transaction. My case was initiating a transaction and, in a certain flow, returning a response without rolling back or committing the transaction.
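A sketch of the safe shape for unmanaged transactions, where every code path ends in commit or rollback (the helper names are mine, not from the thread):

```javascript
// Unmanaged transaction: every code path must end in commit() or
// rollback(). An early `return` that skips both, as described above,
// leaves the connection checked out of the pool forever.
async function safeUpdate(sequelize, doWork) {
  const t = await sequelize.transaction();
  try {
    const result = await doWork(t); // pass t to every query inside
    await t.commit();
    return result;
  } catch (err) {
    await t.rollback(); // without this, the connection stays checked out
    throw err;
  }
}
```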
The patch has been released https://github.com/sequelize/sequelize/releases/tag/v6.20.1 Please let me know if you’re still experiencing the issue with this change 😃
I can confirm that I managed to replicate the exact scenario I described above. I’ll write something that kills the connection if COMMIT TRANSACTION or ROLLBACK TRANSACTION errors (for any reason, be it a network issue or something else).

Can’t guarantee this solves the whole issue, but it may be part of it for some people.
I had this issue as well and solved it by optimizing my queries… I tried all the “hot fixes” and configs that I found without any result, then I tackled the situation and rethought my queries and entities… now everything runs well without any config change.
@TrustyMan Does a larger acquire value solve your issue?

And to reiterate:
Since I know many people reading this thread are going to try to manually track the transaction object and pass it to each query: this is indeed a correct solution, but it’s error-prone (especially in nested functions), and forgetting it in a single place can deadlock your database connections (if you hold a transaction’s connection, max out your DB connections, and then try to open a new connection).
Please see here to learn how to add transactions to all queries inside a transaction block automatically: https://sequelize.org/master/manual/transactions.html#automatically-pass-transactions-to-all-queries
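A sketch of what the linked docs describe, assuming cls-hooked is installed and a User model exists (both are assumptions on my part):

```javascript
// With CLS enabled, managed transactions are picked up automatically by
// queries inside the callback; no { transaction } option is needed.
function enableCls(Sequelize, clsHooked) {
  const namespace = clsHooked.createNamespace('my-app');
  Sequelize.useCLS(namespace); // Sequelize v5/v6 API
  return namespace;
}

async function renameUser(sequelize, User) {
  return sequelize.transaction(async () => {
    // CLS supplies the active transaction to this query automatically.
    return User.update({ name: 'new name' }, { where: { id: 1 } });
  });
}
```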
EDIT: And for learning purposes, since this has gotten a lot of attention, I’m copying a minimal reproduction snippet of mine from another thread.
This is not a bug but expected behaviour of transactions. Sequelize makes this mistake a bit too easy to make, but I digress.
Created a bug for this: https://github.com/sequelize/sequelize/issues/11995
After hours of debugging, I found a pattern that is not easy to spot when using manual transaction commit and rollback with async/await:
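The snippet itself wasn’t captured in this export; a hypothetical reconstruction of the kind of bug being described (both mistakes here are my guesses at the pattern, not the commenter’s actual code):

```javascript
// Hypothetical buggy flow: two easy-to-miss mistakes with manual
// commit/rollback in async/await code.
async function buggyFlow(sequelize, Account) {
  const t = await sequelize.transaction();
  try {
    // Mistake 1: `{ transaction: t }` is missing, so this runs on a
    // second connection while `t` still holds one.
    await Account.increment('balance', { by: 10, where: { id: 1 } });
    await t.commit();
  } catch (err) {
    // Mistake 2: rollback is not awaited, so a rejection here goes
    // unhandled and the connection can stay checked out.
    t.rollback();
    throw err;
  }
}
```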
@jzavisek, I finally realized that you were right. I forgot to pass my transaction to an inner function, and a transaction with the default identifier was created instead. As a result my transaction was idle for 60 seconds, and finally the aforementioned SequelizeConnectionAcquireTimeoutError was thrown. Now everything works for me, even after upgrading to pg@7.17.1.

@tapz, I faced a similar issue a few days ago, but I was upgrading pg from 7.14.0 to 7.15.1, so 7.17.0 is not the issue in my case for sure. Probably it’s caused by some code refactoring I made earlier, so I have to check that first. Is it possible that some unclosed transaction leads to this error?
The same error.
ConnectionAcquireTimeout errors are notoriously difficult to debug because they are usually a symptom of various problems that are hard to pin down.
The documentation has a section about the error that can help point you in the right direction. It’s written for v7, but is relevant to v6 as well: https://sequelize.org/docs/v7/other-topics/connection-pool/#connectionacquiretimeouterror
If we enable CLS, is it possible to add a hook to report when CLS automatically passes the transaction to our query? This could help us catch coding bugs.
Thanks @saqib-ahmed for the tips, though the same code was working fine with MySQL; since we moved to MariaDB, I am seeing these errors. Also, as the project has been live for more than a year with managed transactions, it would be a BIG change to update it to manual transactions and would require a lot of time to re-test everything. I’ll see if the pool settings help.
Thanks for your response @ephys. I’ll have a look at pgAdmin and let you know what I find.
@arbieo I think we should turn that on by default once we replace CLS with Node’s built-in AsyncLocalStorage.

@dijonkitchen I think I had the same issue. I could stop getting this error with one of the following 3 methods:
None of those seems entirely satisfying. I don’t understand why it works outside a transaction. If it’s only a matter of concurrent connections, what difference does it make that the queries are inside a transaction?
Since this is the top search result on Google: Sequelize has a way to automatically pass transactions to all your calls, documented here:
https://sequelize.org/master/manual/transactions.html#automatically-pass-transactions-to-all-queries
So you can make your code shoot-yourself-in-the-foot proof.
Neither downgrading nor increasing the timeout settings is an ideal solution; however, both solved the problem, so when someone finds this issue they can choose whichever way is better for their purpose…
I got this issue during the development of a task that involves several inserts (more than 20k), and I solved it by increasing the acquire attribute of the pool option (from 30 seconds to 60 seconds); my configuration is below:

I don’t know if this is the same issue… but sequelize v5 is not workable for us at all at the moment. We are using MySql.
I turned on logging: console.log, and for some reason now… Executing (default): SELECT 1+1 AS result is firing ALL over the place… it’s happening at least once or twice between EVERY db call… and it eventually gets logged 1000+ times… even though no database calls are executing… then it just dies with Operation Timeout and comes to a halt.

UPDATE: What’s extremely odd is that this SELECT 1+1 AS result log seems to be tied to the number of results that come back from my queries… for example, after my user search page loads (which has a limit of 25)… the SELECT 1+1 AS result is logged 25 times… if I search for nate and it returns 3 users… it gets logged 3 times at the end…

UPDATE 2: My issue was not really related… I had added this as part of the upgrade… however… it was happening whenever I grabbed my sequelize instance… instead of when I created it. (I feel dumb)…