sequelize: TimeoutError: ResourceRequest timed out
What are you doing?
Testing the connection to a postgres amazon rds instance
const sequelize = new Sequelize('dbname', 'username', 'password', {
host: 'example.rds.amazonaws.com',
dialect: 'postgres'
});
sequelize
.authenticate()
.then(() => {
console.log('Connection has been established successfully.');
})
.catch(err => {
console.error('Unable to connect to the database:', err);
});
What do you expect to happen?
App should connect and should output in console: “Connection has been established successfully”
What is actually happening?
Unable to connect to the database: { TimeoutError: ResourceRequest timed out
at ResourceRequest._fireTimeout (\myapp\node_modules\generic-pool\lib\ResourceRequest.js:58:17)
at Timeout.bound (\myapp\node_modules\generic-pool\lib\ResourceRequest.js:8:15)
at ontimeout (timers.js:380:14)
at tryOnTimeout (timers.js:244:5)
at Timer.listOnTimeout (timers.js:214:5) name: 'TimeoutError' }
Dialect: postgres
Database version: 9.6
Sequelize version: 4.2.1
About this issue
- Original URL: https://github.com/sequelize/sequelize/issues/7884
- State: closed
- Created 7 years ago
- Reactions: 15
- Comments: 134 (24 by maintainers)
Commits related to this issue
- fix(abstract/connection-manager): acquire call never finishs This is basically a hack but pooling library has given us no options. Pool acquire call now resolves even if there is an error. We simply ... — committed to sequelize/sequelize by sushantdhiman 7 years ago
- changed the acquire time as suggested by @crispkiwi https://github.com/sequelize/sequelize/issues/7884#issuecomment-341409715 — committed to opencollective/opencollective-api by xdamman 7 years ago
- Sequelize timeout issue There appears to be a timeout issue in the Sequelize ORM right now as discussed in https://github.com/sequelize/sequelize/issues/7884 This commit adjusts the configuration so... — committed to slifty/tvkitchen-datachecker by slifty 7 years ago
- fix #7884 — committed to leyserkids/sequelize by ewfian 2 years ago
- fix #7884 TimeoutError: ResourceRequest timed out — committed to leyserkids/sequelize by ewfian 2 years ago
I have fixed the issue by increasing the acquire option of the pool configuration in Sequelize.
I guess Amazon RDS runs a lot of checks when asking for resources.
Also I am running the Free Tier of Amazon RDS.
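For anyone who wants to try the same fix, here is a minimal sketch of such a configuration. The acquire value of 60000 is illustrative, not the value the commenter used:

const Sequelize = require('sequelize');

// Illustrative pool settings. acquire is the maximum time (ms) the pool
// will wait for a free connection before rejecting with
// "TimeoutError: ResourceRequest timed out".
const sequelize = new Sequelize('dbname', 'username', 'password', {
  host: 'example.rds.amazonaws.com',
  dialect: 'postgres',
  pool: {
    max: 5,         // upper bound on open connections
    min: 0,
    idle: 10000,    // ms a connection may sit idle before being released
    acquire: 60000  // raise this if ResourceRequest keeps timing out
  }
});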
I have the same exact issue!
Having same issue. Using postgres 9.6
Turns out Sequelize isn’t production ready yet after all. Doesn’t surprise me! Man will never be like the father in heaven, deny it all you want, but that’s the reality. youtube search “Shaking My Head Productions”. Can’t depend on any of this tech crap, it’s all garbage from the devil himself, that’s the real nature of the problem if you think about it deep enough.
I cannot reproduce the error. Obviously there was a timeout in connecting to the DB server, it doesn’t look like a bug in Sequelize to me.
My reason for this issue was unhandled transactions (making a bunch of transactions with no commit or rollback).
I found a way to reproduce this issue.
The way is as follows:
Why does the problem occur?
The problem is, I think, caused by the following mechanism:
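The commenter's reproduction steps did not survive extraction, but based on the mechanism described throughout this thread (more concurrent transactions than pool.max, plus a query that does not pass the transaction), a minimal sketch that should reproduce the timeout looks like this. The User model is hypothetical, and a pool of max 1 is used to make the deadlock immediate:

const sequelize = new Sequelize('dbname', 'username', 'password', {
  dialect: 'postgres',
  pool: { max: 1, acquire: 5000 }
});

sequelize.transaction(async (t) => {
  // This query does not pass { transaction: t }, so it requests a second
  // connection from the pool. The only connection is held by the open
  // transaction, so after 5s the request rejects with
  // TimeoutError: ResourceRequest timed out.
  await User.findOne(); // should be User.findOne({ transaction: t })
});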
I really want a pool:false option…
@iamakimmer yeah, basically had to binary search my code… I wrote an integration test that could duplicate the issue, then console.log()ed everywhere to see where I was freezing. After wasting the whole day trying to figure out what was going on I decided to let it be, and tackle it the next day with a fresh start. I literally put console.log()s after every function in my code path to see exactly where the app was freezing/timing out. Once I found the function, I noticed a database call was being made that wasn't part of the transactions. So with a connection pool of 5… we actually triggered the same long-running transaction's code path 5x… so each connection was in use by a transaction, and all other queries were stuck waiting for a transaction to finish so a connection could become available.
So I was basically doing this: each doThing() called a series of other functions… in doThing3(), probably 10 function calls down, we were making another query that wasn't part of the transaction… So like I said, the non-transaction query was waiting for a connection, and doThing3() wouldn't finish until that query was done. So the transaction never releases a connection, the query never gets a connection to finish to allow the transaction… yada yada… deadlocked.
Same issue here. The connection is established, but when any connection is lost the pool gets smaller, and after the death of all threads the application goes down. We had no trouble with this on 3.x of Sequelize, so I guess it's not a problem with the db connection. Also tried a downgrade from 4.2.1 (latest) to 4.1.0, same problem.
In the pool settings, just keep the minimum count at about 2 to 4. This solved the issue for me.
The ResourceRequest timed out error appears after prolonged usage of Sequelize on a production server. Does anyone know what the problem could be?
The error goes away when I restart the project. Am I handling something incorrectly?
I have acquire set to 20 s in my pool config.
@milesalex when you see the ResourceRequest timed out error, it tends to suggest that all the connections in your pool were busy being used and the waiting requests for a connection timed out. You probably have some slow or long-running queries. There's no generic way to fix this, but good things to look at include:
I see the same issue:
Bit of a drive-by comment, but in case it helps someone: we were getting hundreds of ResourceRequest errors. Naturally, we started trying to tweak the connection pool settings, etc. It turned out we had some expensive calculations completely outside of Sequelize that were starving the event loop, which meant that nothing else could happen in time. It just so happened that "anything else" was 95% interacting with the database, the first step of which is to grab a connection from the pool. So that's where we saw the failure.

var sequelize = new Sequelize(process.env[config.use_env_variable], {
  pool: { max: 5, min: 0, idle: 20000, acquire: 20000 }
});

This is what I have; acquire is set to 20000 and it's still happening. I didn't get any of the timeout issues until recently.

Actually we need this fixed on the generic-pool side; if the acquire call never resolves, we can't be sure whether it was a network lag or a factory create error. Old versions of generic-pool used to properly reject acquire calls. We have three options:
1. Fix the generic-pool issue, so acquire calls properly reject and there is no leftover acquire call waiting to get resolved, hence not keeping the pool busy forever.
2. …
3. Replace generic-pool (may be breaking, as we expose the pool API in the pool config).
I would prefer (1), but I don't have any extra time to dive into generic-pool; I am pretty busy with Sequelize already 😃

Edit: If someone wants to fix this, please check out these issues:
https://github.com/coopernurse/node-pool/issues/175
https://github.com/coopernurse/node-pool/issues/183
https://github.com/coopernurse/node-pool/issues/161
I’m having the exact same issue! The above pool configuration did NOT work. In our production environment, in a 24h period this error is logged ~5,000 times. It’s been consistent for the last 27 days. I too am on the latest v4 of sequelize and using AWS RDS. It would be great to have this looked into more closely as it’s a serious risk in the latest release.
Update:
Found this issue yesterday https://github.com/sequelize/sequelize/issues/10976
Turns out that we had a case of a concurrent nested transaction stalling a parent transaction, which caused a connection to hang on the parent transaction. When this case was reached n times (where n = our max pool size), our pool filled with hanging connections and we'd get bursts of time where our service was degraded. We have a health checker that kills degraded instances, so this didn't give us complete downtime; instantiating new instances put off the problem until we reached that all-connections-in-pool-hanging state again.
We solved this by refactoring to split out the concurrent nested transaction from the parent (they weren't actually dependent on each other). Our initial code would have the nested transaction fail silently, leaving the parent waiting for it to finish. We also did a pass through the codebase to ensure that anywhere we were doing nested transaction calls, we were passing transactions properly.
We were able to replicate the issue in staging doing something similar to the code in the issue above, so I'm fairly confident this was our issue; after shipping our fix we aren't seeing the same results.
Disclaimer: I may or may not understand the full intricacies of connections/transactions/pooling, so take this with a grain of salt!
We have upgraded to v5 and it seems to have helped, for some reason. But it was such a shot in the dark.
Just came to say that async.queue worked for us as well. Code:
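The commenter's snippet was lost in extraction; what follows is a sketch only, assuming the async v2 API, a hypothetical Product model, and a rows array:

const async = require('async');

// Queue the writes so that at most 5 run concurrently, instead of firing
// thousands of queries at the pool at once.
const q = async.queue((row, done) => {
  Product.upsert(row)
    .then(() => done())
    .catch(done);
}, 5); // concurrency: keep at or below pool.max

q.drain = () => console.log('All rows processed');
rows.forEach(row => q.push(row));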
I am experiencing this issue as well using sequelize v4.23.1 (newest version).
Steps to reproduce:
TimeoutError: ResourceRequest timed out
Here is my pool config:
There is always a chance that a connection isn't acquired in 20 sec; set acquire to a higher value if it's timing out. You could set it to 0 to get the v3 behavior.
@tiboprea I was able to solve it with async.queue. Check out this quick article: https://medium.com/the-node-js-collection/processing-a-large-dataset-in-less-than-100-lines-of-node-js-with-async-queue-9766a78fa088
Yeah, I’m having this same issue on a DigitalOcean server as well. I’ve tried the configuration from above with no luck.
For those of you still using v4 of Sequelize, this problem may have been fixed by v4.44.1.
See PR: https://github.com/sequelize/sequelize/pull/11140 See Issue: https://github.com/sequelize/sequelize/issues/11139
I am experiencing similar issues. The database's logging shows that there were never more than 6 out of 10 parallel connections. Furthermore, all queries from this process finished in less than 3 ms, so I do not think this issue is (only) caused by inefficient database queries. I am using the postgres driver.
Updated to 4.22.5 but still running into the ResourceRequest timed out issue sometimes.
My pool config
Please advise @sushantdhiman
@xdamman I noticed your pool config isn't using any acquire setting: https://github.com/opencollective/opencollective-api/blob/2194992a865d16bea9000cab5077488c61d302d5/config/production.json#L11
If you follow this thread, it's stated many times that v3 and v4 have different acquire times. For v4 it's 20 sec; are you sure 20 sec is enough for your case? If it's not, increase it.
In v3 you won't hit this timeout because acquire was set to 0, which means Sequelize will wait forever to get a connection, which is not OK either. You just need to increase the acquire time to something suitable for your use case.
@felixfbecker It looks like this is the same as https://github.com/sequelize/sequelize/issues/7882
Also https://github.com/sequelize/sequelize/issues/7616 is likely related to the same problem.
As @btroo and @mickhansen said, this issue happens when the count of concurrent transactions is greater than your pool.max. I'm trying to describe it in a simple way. Let's see what happens: what is the connection pool?
What are concurrent transactions? Put simply: if you are trying to run some CRUD command inside a nested transaction, you are using concurrent transactions. Let's assume our Sequelize config is like this:
We are trying to run this command:
The workaround is to just run db.user.findOne() in the t1 transaction, to avoid concurrent transactions, and it will be fixed:
db.user.findOne({transaction: t1})
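To make the workaround concrete, here is a sketch using an unmanaged transaction; db.user and the lastSeen column are hypothetical:

const t1 = await db.sequelize.transaction();
try {
  // Every query issued while t1 is open must receive { transaction: t1 },
  // otherwise it waits for a second pool connection that t1 may be holding.
  const user = await db.user.findOne({ transaction: t1 });
  await db.user.update(
    { lastSeen: new Date() },                      // hypothetical column
    { where: { id: user.id }, transaction: t1 }
  );
  await t1.commit();
} catch (err) {
  await t1.rollback();
  throw err;
}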
You can also use continuation-local-storage to pass the transaction automatically; see https://sequelize.org/master/manual/transactions.html#concurrent-partial-transactions
For reference, my company has been running Sequelize against RDS for 4+ years with no issues; if there were ever connection issues, it's because there were actually connection issues.
@lcandiago If I read your code correctly, you are syncing the Product model for every row, and you are actually running all the queries in parallel. This is where your timeout comes from. Just put a console.log before const product = … and you'll see what I mean. Your console.log will probably have time to execute 21497 times before a handful of queries complete.
Sync your Product model beforehand and change your forEach to a plain old for loop, such as the sketch below. You will then run all queries sequentially, and this should do the trick. You could try to run them in batches of 5, 10, 20, etc. to get better performance.
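The snippet referenced above did not survive extraction; a sketch of the sequential version, using the commenter's Product model and a rows array read from the file, might be:

await Product.sync(); // sync the model once, up front, not per row

for (const row of rows) {
  // Each upsert finishes before the next starts, so only one pool
  // connection is in use at a time.
  await Product.upsert(row);
}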
Has this bug ever been figured out? Still getting it…
I was getting the same error due to trying to write 1842 records to MySQL concurrently. I started using Promise.map with a concurrency option of 250 and I stopped getting the timeout.
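Promise.map with a concurrency option comes from Bluebird rather than native promises; a sketch, where records and Product are hypothetical:

const Promise = require('bluebird');

// At most 250 inserts are in flight at any moment; the remaining records
// queue up instead of all competing for pool connections at once.
await Promise.map(records, record => Product.create(record), { concurrency: 250 });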
I am encountering the same problem on a DigitalOcean server. Do you have any updates on this issue?
I have also tried swordfish444's solution, but it didn't work.
Thanks, Your solution solved my issue.
There have always been some issues with having N concurrent transactions where N is greater than your pool.max. I forgot the specific case, but you could end up with code waiting for a connection to finish up the transactions while the transactions held all the connections, etc.
We upgraded to v5 as well. With no further Sequelize (i.e. pooling) configuration other than that, we still get SequelizeConnectionAcquireTimeoutError: Operation timeout. Anecdotally, the timeouts seem to occur less often, but we haven't measured/monitored this closely in our deployment. Will update if we find a configuration or bugfix that eradicates our issues completely!
I have a .txt file with 21497 rows. I read this file with fs.readFile and, for each row, I sync with the Product table and insert the new records or update the existing ones.
A lot of records are inserted or updated in the Postgres database, but for most of them I get this error:
If I try to run again on the same file, new records are inserted, but the error continues on other records.
Maybe it's a lot of processing at the same time and some records time out. But how can I solve this? I tried to increase the max pool value, but that doesn't work.
Had the same problem with MySQL on AWS: it appeared that the issue was caused by high CPU utilization on the ECS nodes (up to 100%), as @milesalex suggested. Solved by adding more nodes to the ECS cluster (CPU utilization went down to ~10% avg).
We've bumped into the same error. It's weird, but we don't have any transactions in that piece of code. It looks like a bunch of asynchronous queries to the DB; two of them use sequelize.query() to query data and the other 8 are made with models. I don't know why exactly, but calling one of the .query() calls after the others resolved somehow fixed the issue. So, we first run those 9 queries simultaneously, and after they resolve we run the last one. Only this worked for us.
For anyone running against Azure Managed Postgres DB:
We have discovered that the PG driver does not close sockets properly when using the Azure PG DB, which can cause connection timeouts due to port exhaustion. More info here: https://twitter.com/Vratislav_/status/983249713825804288
Fix is pending here:
https://github.com/brianc/node-postgres/pull/1608
Hi guys. We've just been bitten by this one: experiencing a very high number of ResourceRequest timeouts when running under production load.
Putting this here to help anyone else who comes across this ticket when migrating to v4. It looks like there are still some others out there scratching their heads.
It turns out that the acquire times were a bit of a red herring. Increasing the acquire time would only slightly alleviate the issue. Benchmarking our v3 times against v4, it was clear that there was a significant performance impact when running under v4. Requests for connections from the pool were very likely to hit the 10 s default timeout because everything was running so slowly.
We have traced this to a change in how a paranoid model deletion is handled in v4.
Under v3 a select over a paranoid model would be
...WHERE "MyModel"."deleted" IS NULL
Under v4 a select over a paranoid model is
...WHERE ("MyModel"."deleted" > '2017-10-26 05:23:19.646 +00:00' OR "MyModel"."deleted" IS NULL)
This change in the paranoid lookup misses our existing indexes that worked under v3. Query plans that used to be quick index conditions are now full sequential scans. Everything slows down, and what do you know: ResourceRequest timeouts everywhere.
This change was introduced in https://github.com/sequelize/sequelize/pull/5897
Pool config:
{ max: 100, min: 0, idle: 10000, acquire: 10000 }
The main thing here for me was that I created swap space. When I initially added the acquire option, it still was not working. I guess I ran out of memory, because I was inserting about 19,000 records during that operation. But after creating the swap space and adding the acquire param, I was good to go. I am actually running on a 1 GB DigitalOcean server…
Still experiencing the bug on v4.44.1.
It might be worth just migrating to v5 and to see if it’s been resolved there.
For anyone coming to this thread later, check out the upgrade guide for v5. It seems to address this issue: https://github.com/sequelize/sequelize/blob/master/docs/upgrade-to-v5.md
If anyone out there is facing the issue and the pool settings (in the above comments) are not solving your problem, have a look at the following comment by @michaelwayman.
In a nutshell, what @michaelwayman is trying to bring up is that if you have multiple queries happening in a particular function and some of them use a transaction and some don't, try to manage them so that the queries not using transactions don't run inside a transaction already initialized by sequelize.transaction.
This analysis helped me solve my problem. Thanks @michaelwayman
@sandfox, I ran into the same problem. When I increased the max connections in the pool, the exception still happened with the same frequency.
@iamakimmer We're using pm2 (from keymetrics.io) as our process manager, which restarts the process as soon as an exception gets thrown, so I couldn't tell. We're running 8 processes, all of which now have 10 connections in their pool (was 5 before the changes), but I believe that fixing / better managing our long-running queries was the key here.
I deployed our new configuration in production 5 days ago and haven't had a timeout since, whereas we were getting 4-5 timeouts a day before the changes.
I played a bit more with this during the weekend and can now easily reproduce the issue by consuming all connections with somewhat long-running queries (1 second) and many concurrent requests. Taking some time to test different pool options and fixing problematic queries (using postgres, found queries with missing indexes in some places using the slow query log) helped a lot and gives me a bit more confidence about the future.
I think at this point, it’s about making sure your queries are optimized and finding the proper pool options for your setup.
@xdamman, no long term solution for us at this stage, the discussion has moved over here https://github.com/sequelize/sequelize/issues/8496
This will only be a problem for you if you currently exclude soft-deleted records from your indexes, e.g. you have a model set up like the sketch below.
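The original model snippet was lost in extraction; a sketch of the kind of setup that gets hit, assuming a paranoid model whose index excludes soft-deleted rows via a partial index:

const MyModel = sequelize.define('MyModel', {
  name: Sequelize.STRING,
  deleted: Sequelize.DATE
}, {
  paranoid: true,
  deletedAt: 'deleted', // store the deletion timestamp in "deleted"
  indexes: [{
    fields: ['name'],
    // Partial index covering only live rows. It matches the v3 predicate
    // ("deleted" IS NULL) but not the v4 OR-predicate, so v4 queries fall
    // back to sequential scans.
    where: { deleted: null }
  }]
});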
If you do not have a paranoid model and have not manually excluded soft-deleted records, you should not be experiencing a slowdown from this particular issue.
@iamakimmer I’m experiencing it with Postgres and v4.10. Heavy loads of queries running via cron jobs.