sequelize: TimeoutError: ResourceRequest timed out

What are you doing?

Testing the connection to a Postgres Amazon RDS instance

const sequelize = new Sequelize('dbname', 'username', 'password', {
  host: 'example.rds.amazonaws.com',
  dialect: 'postgres'
});


sequelize
  .authenticate()
  .then(() => {
    console.log('Connection has been established successfully.');
  })
  .catch(err => {
    console.error('Unable to connect to the database:', err);
  });

What do you expect to happen?

App should connect and should output in console: “Connection has been established successfully”

What is actually happening?

Unable to connect to the database: { TimeoutError: ResourceRequest timed out
    at ResourceRequest._fireTimeout (\myapp\node_modules\generic-pool\lib\ResourceRequest.js:58:17)
    at Timeout.bound (\myapp\node_modules\generic-pool\lib\ResourceRequest.js:8:15)
    at ontimeout (timers.js:380:14)
    at tryOnTimeout (timers.js:244:5)
    at Timer.listOnTimeout (timers.js:214:5) name: 'TimeoutError' }

Dialect: postgres
Database version: 9.6
Sequelize version: 4.2.1

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 15
  • Comments: 134 (24 by maintainers)

Commits related to this issue

Most upvoted comments

I have fixed the issue by increasing the acquire option of the pool configuration in Sequelize:

pool: {
    max: 5,
    min: 0,
    idle: 20000,
    acquire: 20000
}

I guess Amazon RDS runs a lot of checks when asking for resources.

Also I am running the Free Tier of Amazon RDS.
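For anyone wondering where those options go, here is a minimal sketch of the reporter's connection code with the pool block added (host and credentials are placeholders, and the numbers are this commenter's values, not recommended defaults):

```javascript
const Sequelize = require('sequelize');

const sequelize = new Sequelize('dbname', 'username', 'password', {
  host: 'example.rds.amazonaws.com',
  dialect: 'postgres',
  pool: {
    max: 5,         // upper bound on open connections
    min: 0,         // pool may drain to zero when idle
    idle: 20000,    // ms a connection may sit unused before being released
    acquire: 20000  // ms to wait for a free connection before TimeoutError
  }
});
```

This is a config fragment only; it needs a reachable database to actually authenticate against.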

I have the same exact issue!

Having same issue. Using postgres 9.6

(node:61927) UnhandledPromiseRejectionWarning: TimeoutError: ResourceRequest timed out
    at ResourceRequest._fireTimeout (/Users/McFlat/indexer/node_modules/generic-pool/lib/ResourceRequest.js:62:17)
    at Timeout.bound (/Users/McFlat/indexer/node_modules/generic-pool/lib/ResourceRequest.js:8:15)
    at ontimeout (timers.js:466:11)
    at tryOnTimeout (timers.js:304:5)
    at Timer.listOnTimeout (timers.js:267:5)

Turns out Sequelize isn’t production ready yet after all. Doesn’t surprise me! Man will never be like the father in heaven, deny it all you want, but that’s the reality. youtube search “Shaking My Head Productions”. Can’t depend on any of this tech crap, it’s all garbage from the devil himself, that’s the real nature of the problem if you think about it deep enough.

I cannot reproduce the error. Obviously there was a timeout in connecting to the DB server, it doesn’t look like a bug in Sequelize to me.

My reason for this issue was unhandled transactions (making a bunch of transactions with no commit or rollback).

I found the way to reproduce this issue

The way is as follows:

  1. Prepare MySQL Server
  2. Install sequelize and mysql2 packages
  3. Start MySQL Server
  4. Execute the following code. While executing, you must start and stop the MySQL Server according to the instructions in the comments.
'use strict';

const readline = require('readline');
const Sequelize = require('sequelize');

const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

async function main() {
  const sequelize = new Sequelize('test', 'root', '', { dialect: 'mysql' }); // Set your db params 

  // First, you need to acquire a connection successfully once.

  await q('Start mysql server and press return key');
  await sequelize.transaction(async () => {}); // try to acquire a connection

  // After successfully acquiring a connection, stop the MySQL server
  await q('Stop mysql and press return key');

  // Acquire a connection again, and fail it
  try {
    await sequelize.transaction(async () => {}); // try to acquire a connection
  }
  catch (err) {
    // and fail it
  }

  // Start mysql server again
  await q('Start mysql server and press return key');

  // Here, we will never acquire a connection again :(

  try {
    await sequelize.transaction(async () => {});
  }
  catch (err) {
    // ResourceRequest timed out error will occur
    console.error(err);
  }
}

function q(message) {
  return new Promise(resolve => rl.question(message, resolve));
}

main();

Why does the problem occur?

The problem is, I think, caused by the following mechanism:

  1. Although sequelize.connectionManager.dialect.connectionManager.connect(config) is rejected, the factory.create method (the first argument of Pooling.createPool) is never rejected or resolved.
  2. An operation that will never be deleted is added to pool._factoryCreateOperation.
  3. pool._potentiallyAllocableResourceCount is never less than pool._factoryCreateOperation.size.
  4. Thus, pool.acquire waits on a request that will never be resolved. This causes the ResourceRequest timed out error.
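The four steps above can be mimicked in plain Node without generic-pool itself. This is a toy model of the failure mode, not Sequelize's or generic-pool's actual code: a factory whose create() never settles leaves acquire() with nothing to do but time out.

```javascript
// Toy pool: acquire() reuses an idle connection or asks the factory for
// a new one, racing the request against an acquire timeout.
function makePool(factory, acquireTimeoutMs) {
  const idle = [];
  return {
    release(conn) { idle.push(conn); },
    acquire() {
      if (idle.length > 0) return Promise.resolve(idle.pop());
      return new Promise((resolve, reject) => {
        const timer = setTimeout(
          () => reject(new Error('ResourceRequest timed out')),
          acquireTimeoutMs
        );
        factory.create().then(
          conn => { clearTimeout(timer); resolve(conn); },
          err => { clearTimeout(timer); reject(err); }
        );
      });
    }
  };
}

// A broken factory, mirroring step 1 above: the connection attempt
// failed, but the create() promise is never rejected or resolved.
const brokenFactory = { create: () => new Promise(() => {}) };

makePool(brokenFactory, 100).acquire().catch(err => {
  console.log(err.message); // "ResourceRequest timed out"
});
```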

I really want a pool:false option…

@iamakimmer yeah basically had to binary search my code… I wrote an integration test that could duplicate the issue, then console.logged() everywhere to see where I was freezing. After wasting the whole day trying to figure out what was going on I decided to let it be, and tackle it the next day with a fresh start. I literally put console.logs() after every function in my code path to see exactly where the app was freezing/timing out at. Once I found the function, I noticed a database call was being made that wasn’t part of the transactions.

So with a connection pool of 5… we actually triggered the same long-running transaction’s code-path 5x… so each connection was in use by a transaction, and all other queries were stuck waiting for a transaction to finish so a connection could become available.

So I was basically doing this

transaction(() => {
   doThing1();
   doThing2();
   doThing3();
});

each doThing() called a series of other functions… in doThing3(), probably 10 function calls down, we were making another query that wasn’t part of the transaction… So like I said, the non-transaction query was waiting for a connection, and doThing3() wouldn’t finish until that query was done. So the transaction never releases a connection, the query never gets a connection so the transaction can finish… yadayada… deadlocked.
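The deadlock described above fits in a few lines with a stand-in pool (one connection for clarity; this is a sketch of the shape of the bug, not of Sequelize internals):

```javascript
// Minimal pool stand-in: hand out up to `size` connections, queue the rest.
function makeSimplePool(size) {
  let free = size;
  const waiters = [];
  return {
    acquire() {
      if (free > 0) { free--; return Promise.resolve(); }
      return new Promise(resolve => waiters.push(resolve));
    },
    release() {
      const next = waiters.shift();
      if (next) next(); else free++;
    }
  };
}

async function demo() {
  const pool = makeSimplePool(1); // pool.max = 1 for clarity
  const transaction = (async () => {
    await pool.acquire(); // the transaction takes the only connection
    await pool.acquire(); // a non-transaction query inside it wants another
    pool.release();       // never reached: deadlock
  })();
  const timeout = new Promise(resolve =>
    setTimeout(() => resolve('deadlocked'), 100));
  // The transaction can never finish, so the timeout always wins the race.
  return Promise.race([transaction, timeout]);
}

demo().then(result => console.log(result)); // "deadlocked"
```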

Same issue here. The connection is established, but when any connection is lost the pool gets smaller, and once all connections have died the application goes down. We had no trouble with this on 3.x of Sequelize, so I guess it’s not a problem with the db connection. Also tried downgrading from 4.2.1 (latest) to 4.1.0: same problem.

In the pool settings, just keep the minimum count at about 2 to 4. This solved the issue for me.

  pool: {
    max: 7,
    min: 2,
    acquire: 30000,
    idle: 10000,
  },

The ResourceRequest timed out error appears after prolonged usage of Sequelize on a production server. Does anyone know what could be the problem?

The error gets fixed when I restart the project. Am I handling something incorrectly?

I have the acquire 20s in my pool config.

@milesalex when you see a ResourceRequest timed out error, it tends to suggest that all the connections in your pool were busy and the waiting requests for a connection timed out. You probably have some slow or long-running queries. There’s no generic way to fix this, but good things to look at include:

  • increasing the max connections in the pool (you should check whether your database has an upper limit, though)
  • checking your database for a slow query log or any other way to surface long-running queries, then seeing if you have missing indexes or could rewrite the queries to be more efficient
  • increasing the cpu/ram of your database server
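On the slow-query point: one app-side way to surface them is Sequelize's benchmark option, which passes each query's elapsed time to the logging callback. A hedged sketch (the 50 ms threshold is an arbitrary choice, and connection details are placeholders):

```javascript
const Sequelize = require('sequelize');

// With benchmark: true, Sequelize calls `logging` with the SQL text and
// the query's elapsed time in milliseconds as a second argument.
const sequelize = new Sequelize('dbname', 'username', 'password', {
  dialect: 'postgres',
  benchmark: true,
  logging: (sql, timingMs) => {
    if (timingMs > 50) {
      console.warn(`slow query (${timingMs} ms): ${sql}`);
    }
  }
});
```

This is a config fragment; the database's own slow query log remains the more authoritative source.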

I see the same issue:

(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: timeout exceeded when trying to connect
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): Error: timeout exceeded when trying to connect
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 3): Error: timeout exceeded when trying to connect
....
Unable to connect to the database using sequelize: { TimeoutError: ResourceRequest timed out
    at ResourceRequest._fireTimeout (/srv/app/node_modules/generic-pool/lib/ResourceRequest.js:58:17)
    at Timeout.bound (/srv/app/node_modules/generic-pool/lib/ResourceRequest.js:8:15)
    at ontimeout (timers.js:386:11)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5) name: 'TimeoutError' }
...

Bit of a drive-by comment, but in case it helps someone: we were getting hundreds of ResourceRequest errors. Naturally, we started trying to tweak the connection pool settings, etc. It turned out we had some expensive calculations completely outside of Sequelize that were starving the event loop, which meant that nothing else could happen in time. It just so happened that “anything else” was 95% interacting with the database, the first step of which is to grab a connection from the pool. So, that’s where we saw the failure.
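A dependency-free way to check for this kind of event-loop starvation is to measure how late a timer fires. This sketch stands in for the expensive calculation with a synchronous busy loop:

```javascript
// Schedule a timer and report how much later than requested it fired.
// Large drift means something synchronous is hogging the event loop,
// and pool acquires (also timer/IO driven) will starve the same way.
function measureEventLoopLag(intervalMs) {
  return new Promise(resolve => {
    const start = Date.now();
    setTimeout(() => resolve(Date.now() - start - intervalMs), intervalMs);
  });
}

async function demo() {
  const idleLag = await measureEventLoopLag(20);

  // Simulate an expensive synchronous calculation (~100 ms busy loop).
  const lagPromise = measureEventLoopLag(20);
  const busyUntil = Date.now() + 100;
  while (Date.now() < busyUntil) { /* starving the loop */ }
  const busyLag = await lagPromise;

  console.log(`idle lag ${idleLag} ms, lag under load ${busyLag} ms`);
  return { idleLag, busyLag };
}

demo();
```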

var sequelize = new Sequelize(process.env[config.use_env_variable], { pool: { max: 5, min: 0, idle: 20000, acquire: 20000 } });

This is what I have; acquire is set to 20000 and it’s still happening. I didn’t get any of the timeout issues until recently.

Actually we need this fixed on the generic-pool side; if an acquire call never resolves, we can’t be sure whether it is network lag or a factory create error. The old version of generic-pool used to properly reject acquire calls.

We have three options

  1. Someone fixes that generic-pool issue so that acquire calls properly reject and no leftover acquire call is left waiting to be resolved, hence not keeping the pool busy forever.
  2. Revert to the old version of generic-pool (may be breaking, as we expose the pool API in the pool config)
  3. Switch pooling libraries or create a pooling library for Sequelize (lots of work)

I would prefer (1), but I don’t have any extra time to dive into generic-pool; I am pretty busy with Sequelize already 😃

Edit:

If someone wants to fix this, please check out these issues:

https://github.com/coopernurse/node-pool/issues/175
https://github.com/coopernurse/node-pool/issues/183
https://github.com/coopernurse/node-pool/issues/161

I’m having the exact same issue! The above pool configuration did NOT work. In our production environment, in a 24h period this error is logged ~5,000 times. It’s been consistent for the last 27 days. I too am on the latest v4 of sequelize and using AWS RDS. It would be great to have this looked into more closely as it’s a serious risk in the latest release.


Update:

Found this issue yesterday https://github.com/sequelize/sequelize/issues/10976

Turns out that we had a case of a concurrent nested transaction stalling a parent transaction, which caused a connection to hang on the parent transaction. When this case was reached n times (where n = our max pool size), our pool was filled with hanging connections and we’d get bursts of degraded service. We have a health checker that kills degraded instances, so this didn’t give us complete downtime; instantiating new instances put off the problem until we reached that all-connections-in-pool-hanging state again.

We solved this by refactoring to split out the concurrent nested transaction from the parent (they weren’t actually dependent on each other). Our initial code would have the nested transaction fail silently, leaving the parent waiting for it to finish. Also did a pass through the codebase to ensure that anywhere we were doing nested transaction calls, we were passing transactions properly.

We were able to replicate the issue in staging doing something similar to the code in the issue above, so I’m fairly confident this was our issue; after shipping our fix below we aren’t seeing the same results.

Disclaimer: may or may not understand the full intricacies of connections/transactions/pooling, so take with a grain of salt!

We have upgraded to v5 and seems like it helped for some reason. But it was such a shot in the dark.

Just came to say that async.queue worked for us as well. Code:

const queue = require('async').queue;

async function massUpdate() {
  const items = [/* ... BIG LIST ... */];
  return new Promise((resolve, reject) => {
    // We need async queue to prevent the DB acquire lock from timing out.
    // We have 5 workers, so leave 1 to handle requests while 4 work on this insert.
    const asyncQueue = queue(async (item) => (
      Model.insert(item)
    ), 4);

    // Set callbacks
    asyncQueue.drain = () => {
      winston.info(`INSERTED ${items.length} rows`);
      resolve();
    };

    asyncQueue.error = (err) => {
      reject(err);
    };

    // Add all items to the queue
    asyncQueue.push(items);
  });
}

I am experiencing this issue as well using sequelize v4.23.1 (newest version).

Steps to reproduce:

  1. use replication for 2 read instances
  2. start up server/sequelize and hit it a few times
  3. turn off one of the dbs (i.e. just shut down the db, not the whole machine that the db is running on)
  4. continue to hit the server/sequelize (it will intermittently return 500 due to TimeoutError: ResourceRequest timed out)
  5. Eventually, it seems to get completely stuck in an unrecoverable state (short of completely restarting it). It continuously returns the error even if the one db is still running.

Here is my pool config:

pool: {
  max: 5,
  min: 0,
  idle: 5000,
  acquire: 20000,
  evict: 30000,
  handleDisconnects: true
}

There is always a chance that a connection isn’t acquired in 20 sec; set acquire to a higher value if it’s timing out. You could set it to 0 to get the v3 behavior.

Yeah, I’m having this same issue on a DigitalOcean server as well. I’ve tried the configuration from above with no luck.

For those of you still using v4 of Sequelize, this problem may have been fixed by v4.44.1.

See PR: https://github.com/sequelize/sequelize/pull/11140 See Issue: https://github.com/sequelize/sequelize/issues/11139

I am experiencing similar issues. The database’s logging shows that there were never more than 6 out of 10 parallel connections. Furthermore, all queries from this process finished in less than 3 ms, so I do not think this issue is (only) caused by inefficient database queries. I am using the postgres driver.

Updated to 4.22.5 but still run into the ResourceRequest timed out issue sometimes.

My pool config

dialect: 'mssql',
pool: {
  max: 5,
  min: 1,
  idle: 20000,
  evict: 20000,
  acquire: 20000
},

Please advise @sushantdhiman

@xdamman I noticed your pool config isn’t using any acquire: https://github.com/opencollective/opencollective-api/blob/2194992a865d16bea9000cab5077488c61d302d5/config/production.json#L11

If you follow this thread, it’s stated many times that v3 and v4 have different acquire times. For v4 it’s 20 sec; are you sure 20 sec is enough for your case? If it’s not, increase it.

In v3 you won’t hit this timeout because acquire was set to 0, which means Sequelize will wait forever to get a connection, which is not ok either.

You just need to increase the acquire time to something suitable for your use case.

As @btroo and @mickhansen said, this issue happens when the count of concurrent transactions is greater than your pool.max. I’ll try to describe it in a simple way. Let’s see what happens:

What is the connection pool?

When your application needs to retrieve data from the database, it creates a database connection. Creating this connection involves some overhead of time and machine resources for both your application and the database. Many database libraries and ORMs will try to reuse connections when possible so that they do not incur the overhead of establishing that DB connection over and over again. The pool is the collection of these saved, reusable connections.
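That reuse can be illustrated in a few lines of plain Node (a toy pool, not what Sequelize or generic-pool actually do):

```javascript
// A pool that only "creates" a connection when none is idle, so
// sequential work pays the creation overhead once.
function makeReusingPool(create) {
  const idle = [];
  let created = 0;
  return {
    created: () => created,
    acquire() {
      if (idle.length > 0) return idle.pop();
      created++;
      return create();
    },
    release(conn) { idle.push(conn); }
  };
}

const pool = makeReusingPool(() => ({ id: Math.random() }));
const c1 = pool.acquire(); // creates a connection
pool.release(c1);
const c2 = pool.acquire(); // reuses c1, no new creation
console.log(c1 === c2, pool.created()); // true 1
```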

What are concurrent transactions? In a simple way: if you are trying to run some CRUD command in a nested transaction, it means you are using concurrent transactions. Let’s assume our Sequelize config is like this:

  pool: {
    max: 7,
    min: 0,
    acquire: 30000,
    idle: 10000,
  },

we are trying to run this command:

let nested = function () {
  // here we are making a new transaction:
  return db.sequelize.transaction(t1 => {
    // as you can see, the command below doesn't use the t1 transaction,
    // which means our DB (in my case postgres) runs it in another
    // transaction, and this is where the problem lies!
    // each call to nested() therefore creates a concurrent transaction
    return db.user.findOne();
  });
};
const arr = [];
for (let i = 0; i < 7; i++) {
  arr.push(nested());
}
Promise.all(arr)
  .then(() => {
    console.log('done');
  })
  .catch(err => {
    console.log(err);
  });

The workaround is to just run db.user.findOne() inside the t1 transaction to avoid concurrent transactions, and it will be fixed: db.user.findOne({ transaction: t1 }). You can also use continuation-local-storage to pass the transaction automatically; check this link: https://sequelize.org/master/manual/transactions.html#concurrent-partial-transactions

For reference, my company has been running Sequelize against RDS for 4+ years with no issues; if there were ever connection issues, it’s because there were actually connection issues.

@lcandiago If I read your code correctly, you are syncing the Product model for every row, and you are actually running all queries in parallel. This is where your timeout comes from. Just put a console.log before const product = … and you’ll see what I mean. Your console.log will probably have time to execute 21497 times before a handful of queries complete.

Sync your Product model beforehand and change your forEach for a plain old for loop such as this:

for (let i = 0; i < rows.length; ++i) {
  const product = {
    // ...
  };
  try {
    await Product.upsert(product);
    console.log(/* ... */);
  }
  catch (err) {
    // handle the error
  }
}

You will then run all queries sequentially, and this should do the trick. You could try to run them in batches of 5, 10, 20, etc. to get better performance.
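The batching idea generalizes; here is a sketch with a stand-in async task in place of the real Product.upsert (batch size and task are illustrative):

```javascript
// Run an async task over a big list in fixed-size chunks: each chunk in
// parallel, chunks in sequence. Only `batchSize` tasks are in flight at
// once, so a connection pool at least that large is never exhausted.
async function processInBatches(items, batchSize, task) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...await Promise.all(batch.map(task)));
  }
  return results;
}

// Usage sketch with a stand-in task instead of Product.upsert:
processInBatches([1, 2, 3, 4, 5, 6, 7], 3, async n => n * 2)
  .then(out => console.log(out)); // resolves to [2, 4, 6, 8, 10, 12, 14]
```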

Has this bug ever been figured out? Still getting it…

Error: { TimeoutError: ResourceRequest timed out
    at ResourceRequest._fireTimeout (/app/node_modules/generic-pool/lib/ResourceRequest.js:62:17)
    at Timeout.bound (/app/node_modules/generic-pool/lib/ResourceRequest.js:8:15)
    at ontimeout (timers.js:427:11)
    at tryOnTimeout (timers.js:289:5)
    at listOnTimeout (timers.js:252:5)
    at Timer.processTimers (timers.js:212:10) name: 'TimeoutError' }
  File "/app/node_modules/generic-pool/lib/ResourceRequest.js", line 62, col 17, in ResourceRequest._fireTimeout
    this.reject(new errors.TimeoutError("ResourceRequest timed out"));
  File "/app/node_modules/generic-pool/lib/ResourceRequest.js", line 8, col 15, in Timeout.bound
    return fn.apply(ctx, arguments);
  File "timers.js", line 427, col 11, in ontimeout
  File "timers.js", line 289, col 5, in tryOnTimeout
  File "timers.js", line 252, col 5, in listOnTimeout
  File "timers.js", line 212, col 10, in Timer.processTimers
  File "/app/node_modules/raven/lib/client.js", line 407, col 15, in Raven.captureException
    err = new Error(err);
  File "/app/node_modules/winston-raven-sentry/index.js", line 93, col 23, in Sentry.log
    return this.raven.captureException(message, context, function() {
  File "/app/node_modules/winston/lib/winston/logger.js", line 234, col 15, in transportLog
    transport.log(level, msg, meta, function (err) {
  File "/app/node_modules/winston/node_modules/async/lib/async.js", line 157, col 13, in null.<anonymous>
    iterator(x, only_once(done) );
  File "/app/node_modules/winston/node_modules/async/lib/async.js", line 57, col 9, in _each
    iterator(arr[index], index, arr);
  File "/app/node_modules/winston/node_modules/async/lib/async.js", line 156, col 9, in Object.async.each
    _each(arr, function (x) {
  File "/app/node_modules/winston/lib/winston/logger.js", line 246, col 9, in exports.Logger.Logger.log
    async.forEach(targets, transportLog, finish);
  File "/app/node_modules/winston/lib/winston/common.js", line 54, col 18, in exports.Logger.target.(anonymous function) [as error]
    target.log.apply(target, args);
  File "/app/src/lib/exchanges.js", line 38, col 12, in handleTransactionModel
    logger.error(err, {

I was getting the same error due to trying to write 1842 records to mysql concurrently. I started using Promise.map with concurrency option of 250 and I stopped getting the timeout.
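Promise.map with a concurrency option comes from Bluebird; for reference, the core idea can be sketched without the dependency (a hedged sketch, not Bluebird's implementation):

```javascript
// Map over `items` with at most `concurrency` mapper calls in flight:
// start that many workers, each pulling the next unclaimed index.
async function mapWithConcurrency(items, mapper, concurrency) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // claim an index before awaiting
      results[i] = await mapper(items[i], i);
    }
  }
  const workers = Array.from(
    { length: Math.min(concurrency, items.length) },
    worker
  );
  await Promise.all(workers);
  return results;
}
```

With the commenter's numbers, something like mapWithConcurrency(records, insertRecord, 250) would keep at most 250 inserts in flight at once instead of all 1842.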

I am encountering the same problem on a DigitalOcean server. Do you have any updates on this issue?

I have also tried swordfish444’s solution, but it didn’t work.

We’ve bumped into the same error. It’s weird, but we don’t have any transactions in that piece of code. It looks like a bunch of asynchronous queries to the DB; two of them use sequelize.query() to query data and the other 8 are made with models. I don’t know exactly why, but calling one of the .query() calls after the others resolved somehow fixed the issue. So we first run those 9 queries simultaneously, and after they resolve we run the last one. Only this worked for us.

Thanks, Your solution solved my issue.

There’s always been some issues with having N concurrent transactions where N is greater than your pool.max, i forgot the specific case but you could end up with code waiting for a connection to do something to finish up the transactions but the transactions had all the connection etc.

We upgraded to v5 as well. With no further sequelize (i.e. pooling) configuration other than that, we still get SequelizeConnectionAcquireTimeoutError: Operation timeout. Anecdotally, these timeouts seem to occur less often, but we haven’t measured/monitored closely in our deployment.

Will update if we find a configuration or bugfix that eradicates our issues completely!

I have a .txt file with 21497 rows. I read this file with fs.readFile and, for each row, I sync with the Product table and insert new records or update existing ones.


A lot of records are inserted or updated on the Postgres database, but for most of them I get this error:

[screenshot of the error]

If I try to run again on the same file, new records are inserted, but the error continues on other records.

Maybe it’s a lot of processing at the same time and some records time out. But how can I solve this? I tried to increase the max pool value, but it doesn’t work.

Had the same problem for MySQL on AWS. It appeared that the issue was caused by high CPU utilization on the ECS nodes (up to 100%), as @milesalex suggested. Solved by adding more nodes to the ECS cluster (CPU utilization went down to 10% avg).


For anyone running against Azure Managed Postgre DB:

We have discovered that the PG driver does not close sockets properly when using the Azure PG DB, which can cause connection timeouts due to port exhaustion. More info here: https://twitter.com/Vratislav_/status/983249713825804288

Fix is pending here:

https://github.com/brianc/node-postgres/pull/1608

Hi guys. We’ve just been bitten by this one. Experiencing a very high number of ResourceRequest time outs when running under production load.

Putting this here to help anyone else out that comes across this ticket when migrating to v4. It looks like there are still some others out there scratching their head.

It turns out that the acquire times were a bit of a red herring. Increasing the acquire time would only slightly alleviate the issue. Benchmarking our v3 times against v4, it was clear that there was a significant performance impact when running under v4. Requests for connections from the pool were very likely to hit the 10s default timeout because everything was running so slowly.

We have traced this to a change in how a paranoid model deletion is handled in v4.

Under v3 a select over a paranoid model would be ...WHERE "MyModel"."deleted" IS NULL

Under v4 a select over a paranoid model is ...WHERE ("MyModel"."deleted" > '2017-10-26 05:23:19.646 +00:00' OR "MyModel"."deleted" IS NULL)

This change in the paranoid lookup misses our existing indexes that worked under v3. Query plans that used to be quick index conditions are now full sequential scans. Everything slows down and, what do you know, ResourceRequest timeouts everywhere.

This change was introduced in https://github.com/sequelize/sequelize/pull/5897

pool config:

{ max: 100, min: 0, idle: 10000, acquire: 10000 }

The main thing here for me was that I created swap space. When I initially added the acquire option, it still was not working. I guess I ran out of memory because I was inserting about 19,000 records during that operation. But after creating the swap space and adding the acquire param, I was good to go. I am actually running on a 1 GB memory DigitalOcean server…

Still experiencing the bug on v4.44.1.

It might be worth just migrating to v5 and to see if it’s been resolved there.

For anyone coming to this thread later, checkout the changelist for v5. It seems to address this issue: https://github.com/sequelize/sequelize/blob/master/docs/upgrade-to-v5.md

If anyone out there is facing the issue and the pool thing(in above comments) is not solving your problem. Have a look at the following comment by @michaelwayman

https://github.com/sequelize/sequelize/issues/7884#issuecomment-338778283

In a nutshell, what @michaelwayman is trying to bring up is that if you have multiple queries happening in a particular function and some of them use a transaction and some don’t, try to manage them so that the queries not using transactions don’t run inside a transaction already initialized by sequelize.transaction.

This analysis helped me solve my problem. Thanks @michaelwayman

@sandfox, I met the same problem. When I increased the max connections in the pool, the exception still happened with the same frequency.

@iamakimmer We’re using pm2 (from keymetrics.io) as our process manager, which restarts the process as soon as an exception gets thrown, so I couldn’t tell. We’re running 8 processes, all of which now have 10 connections in their pool (it was 5 before the changes), but I believe that fixing / better managing our long-running queries was the key here.

I deployed our new configuration in production 5 days ago and haven’t had a timeout since, whereas we were getting 4-5 timeouts a day before the changes.

I played a bit more with this during the weekend and can now easily reproduce the issue by consuming all connections with somewhat long-running queries (1 second) and many concurrent requests. Taking some time to test different pool options and fixing problematic queries (using postgres, found queries with missing indexes in some places using the slow query log) helped a lot and gives me a bit more confidence about the future.

I think at this point, it’s about making sure your queries are optimized and finding the proper pool options for your setup.

@xdamman, no long term solution for us at this stage, the discussion has moved over here https://github.com/sequelize/sequelize/issues/8496

This will only be a problem for you if you currently exclude soft deleted records from your indexes. E.g you have a model set up like this

paranoid: true,
indexes: [{
    fields: [ 'email' ],
    where: {
        deletedAt: null,
    },
}],

If you do not have a paranoid model and you have not manually excluded soft deleted records you should not be experiencing a slowdown from this particular issue.

@iamakimmer I’m experiencing it with Postgres and v4.10. Heavy loads of queries running via cron jobs.