sequelize: random SequelizeConnectionAcquireTimeoutError (DB connection pool error)

I’m running a very simple app that uses Sequelize basics. It worked for the past year without any issues on Sequelize v4. Recently, the app has been crashing once a day with the following error:

TimeoutError: ResourceRequest timed out
    at ResourceRequest._fireTimeout (/usr/src/app/node_modules/generic-pool/lib/ResourceRequest.js:62:17)
    at Timeout.bound (/usr/src/app/node_modules/generic-pool/lib/ResourceRequest.js:8:15)
    at ontimeout (timers.js:498:11)
    at tryOnTimeout (timers.js:323:5)
    at Timer.listOnTimeout (timers.js:290:5)

It was happening with sequelize 4.42.0, mysql2 v1.6.5, node carbon-alpine. After reading a lot about this issue, I figured that Sequelize v5 might handle the connection pool better, so I tried sequelize 5.7.6 with mariadb v2.0.3, but no luck; now the error I’m getting is:

SequelizeConnectionAcquireTimeoutError: Operation timeout
    at pool.acquire.catch.err (/usr/src/app/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:281:53)
    at tryCatcher (/usr/src/app/node_modules/bluebird/js/release/util.js:16:23)
    at /usr/src/app/node_modules/bluebird/js/release/catch_filter.js:17:41
    at tryCatcher (/usr/src/app/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/usr/src/app/node_modules/bluebird/js/release/promise.js:512:31)
    at Promise._settlePromise (/usr/src/app/node_modules/bluebird/js/release/promise.js:569:18)
    at Promise._settlePromise0 (/usr/src/app/node_modules/bluebird/js/release/promise.js:614:10)
    at Promise._settlePromises (/usr/src/app/node_modules/bluebird/js/release/promise.js:690:18)
    at _drainQueueStep (/usr/src/app/node_modules/bluebird/js/release/async.js:138:12)
    at _drainQueue (/usr/src/app/node_modules/bluebird/js/release/async.js:131:9)
    at Async._drainQueues (/usr/src/app/node_modules/bluebird/js/release/async.js:147:5)
    at Immediate.Async.drainQueues (/usr/src/app/node_modules/bluebird/js/release/async.js:17:14)
    at runCallback (timers.js:810:20)
    at tryOnImmediate (timers.js:768:5)
    at processImmediate [as _immediateCallback] (timers.js:745:5)

It’s important to mention that I have other apps running on the same K8s cluster against the same DB with the same auth, and they don’t crash.

Any suggestions?

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 15
  • Comments: 85 (19 by maintainers)

Most upvoted comments

@demian85 I was seeing this error as well and it seems that in my case it happened when some sequelize calls within a transaction weren’t using the parent transaction. So something like:

sequelize.transaction(async transaction => {
  await Model.create({ name: 'Test' }, { transaction })
  await DifferentModel.findAll() // missing transaction
})

It seems that these calls blocked the transaction for larger amount of time and other transactions failed with SequelizeConnectionAcquireTimeoutError. I am not sure if this could be related though.
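
A minimal sketch of the fix for the snippet above (assuming the same model names and the standard Sequelize v5+ API): pass the parent transaction to every call inside the callback, so each query reuses the transaction's pooled connection.

```javascript
// Hypothetical fixed version of the snippet above: DifferentModel.findAll
// now receives the parent transaction, so it reuses the same pooled
// connection instead of trying to acquire a second one.
async function createWithLookup(sequelize, Model, DifferentModel) {
  return sequelize.transaction(async (transaction) => {
    await Model.create({ name: 'Test' }, { transaction });
    await DifferentModel.findAll({ transaction }); // transaction passed
  });
}
```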

I’m hitting this problem on a k8s cluster as well. After raising the CPU/memory limits of a sidecar container in the same pod to match the main container, things went back to normal.

Raising the resources only delayed the error, though. The real cause in my case was that k8s sends a very high volume of HEAD probe requests, and because of my improper usage every HEAD request triggered a DB query; after a while the errors started appearing in batches. My fix was to register app.head(‘/’, (req, res) => { res.send(“ok”) }); at the very top of the Express app’s handlers, intercepting HEAD requests so they never reach the later callbacks. That solved the problem.
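
The interception described above can be sketched as a tiny Express handler (a sketch assuming Express; `headProbeHandler` is a hypothetical name, and registration on `app` is shown in a comment):

```javascript
// Sketch of the HEAD-probe interception described above: answer k8s
// HEAD probes immediately so they never reach route handlers that
// query the database.
const headProbeHandler = (req, res) => {
  res.send('ok'); // respond right away, no DB query
};

// Register it before any DB-backed routes, e.g.:
// app.head('/', headProbeHandler);
```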


I’m using the latest version and this creepy error still appears randomly. I’m also using the default config values. The worst part is that my Express app completely crashes and stays in a “zombie mode” where I need to restart the server and start from scratch, or nothing will ever work again. Any ideas?

@demian85 You’re one of the first persons I’ve seen that has been experiencing the same issue. Very accurate description.

Take a look at this: https://blog.logrocket.com/why-you-should-avoid-orms-with-examples-in-node-js-e0baab73fa5/

Not saying you should follow the headlines however I found it very insightful. Let me know what you think and if you have resolved the issue. I’ll be sure to do the same

@demian85 I couldn’t solve this error by any means. After a lot of investigation I went back to the root of the problem: “Why, without any changes, does this error now happen all the time?” After some research I noticed that I had this error because my volume of data had increased. So this error is probably related to that. I monitored all my queries and found a very nasty one, changed it, and modified my entities accordingly… now everything is fine, without any other change.

I don’t see how waiting 60 seconds instead of 30 seconds would be a solution. At least in my system even 2 seconds is a long time to wait. Besides, downgrading to pg 7.16.0 solved the issue.

We’re still monitoring this thread

SequelizeConnectionAcquireTimeoutError means all connections currently in the pool were in-use and a request to access a connection took more than 60 seconds (default timeout).

This can typically happen if there are too many requests being made and the pool is too small.

If the pool size is not the issue, unfortunately the stacktrace is not useful to us as it only lets us know that the pool is saturated, not why all connections are in-use.

Consider taking a look at what the connections are doing (if using postgres, pgAdmin has a dashboard that displays that nicely). If they’re not doing anything (idle), but Sequelize’s pool says they are all in-use, then we likely have a bug in Sequelize.

If that is the case, please let us know which version of Sequelize you’re using, as well as the name of the database, its version, and the version of the npm library you use to connect to it.

What do you mean by “optimized queries”? What does SQL optimization have to do with that error? I’ll try to find any transaction issues and report back.

On Tue, Nov 5, 2019 at 1:00 PM Iszlai Lorand-Edmond wrote:

I had this issue as well and solved it by optimizing my queries… I tried all the “hot fixes” and configs that I found without any result, then I tackled the situation and rethought my queries and entities… now everything runs well without any config change.


Reopening indicates that there is something that needs to be fixed on our end. I don’t have any evidence of that currently. We’ll still monitor the thread, but I prefer to keep the issue closed unless we actively need to fix something.

  • If you use unmanaged transactions, you’re fully responsible for making sure they’re properly committed / rolled back. Managed transactions are much safer
  • If you run too many connections concurrently (such as by looping and running a query for 8mb worth of data), you’re going to run out of available connections for long enough to have a timeout. You need to batch your operations as much as possible, chunk the data, or queue your operations

All new comments I’ve seen so far fall into one of these two categories, neither of which is caused by Sequelize.

I would like to know if each insert/update which I perform within the loop needs a separate DB connection? If so, that’s unacceptable.

If you don’t await them in the loop (e.g. collect all promises, then use Promise.all on them outside the loop), yes, it’s going to run all of these operations in parallel, which requires a new connection for each of them. If you await each operation in the loop, they run in series. You want the first for performance. If your number of operations is too big, you’ll run out of connections, but you have a few options (it’s not a Node issue, just a parallelization issue; you’d have it in Java too, but Java would run these operations in series by default):

  • increase the maximum number of concurrent connections sequelize can open, and your DB can accept
  • use batch queries when possible to reduce the number of necessary queries
  • process your data in chunks instead of all at once
  • use a queue which only processes a few entries in parallel at a time

You’d have the same issue in Java if you tried to parallelize this work. If you don’t, then it’s going to run in series and will be very slow. It’s the same thing, just that Node makes it easier to parallelize by default.
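
The “process your data in chunks” option above can be sketched as a small helper (a sketch; `runQuery` stands in for whatever Sequelize call you make per item): only one chunk’s queries run in parallel at a time, which bounds the number of pool connections in use.

```javascript
// Split items into fixed-size chunks.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Run queries chunk by chunk: at most `size` queries (and therefore at
// most `size` pool connections) are in flight at any moment.
async function processInChunks(items, size, runQuery) {
  const results = [];
  for (const batch of chunk(items, size)) {
    results.push(...(await Promise.all(batch.map(runQuery))));
  }
  return results;
}
```

With a pool of e.g. `max: 10`, calling `processInChunks(rows, 10, row => Model.create(row))` keeps the pool from being exhausted while still getting most of the parallel speedup.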

Same issue here; temporarily solved by restarting the server, but we need a real solution.

Config:

const config: Options = {
  pool: {
    max: 5,
    min: 0,
    acquire: 60000,
    idle: 60000,
  },
};


Keep in mind that if you use new Sequelize({ pool: { max: 1 }, /* (...) */ }) (common config for AWS Lambda functions) and use Model.findAndCountAll(), you may experience SequelizeConnectionAcquireTimeoutError errors for expensive (latency-wise) queries.

See #12633 for more details.

I found at least one place in my code that could lead to this error. Anyone facing the same issue: be sure you commit or roll back all transactions first. In my case I forgot to commit a transaction in one of my if branches.

const transaction = await connection.transaction();
try {
  task = await resolveTask(key, { excludeMetadata, transaction, withProducts: true });
  if (task === null) {
    await transaction.commit();
    return new ApiResponse(API_RESPONSE_CODE_NOT_FOUND, 'Task is not found.');
  }
  if (task.status !== STATUS_PENDING && (task.status !== STATUS_DONE || task.aggregatedJobStatus !== STATUS_PAUSED)) {
    // throw instead of returning Promise.reject: a returned rejection would
    // bypass the catch block below and leak the transaction without a rollback
    throw 'Task must be pending or done with paused jobs.';
  }
  task.priority = parsePriority(priority);
  await task.save({ transaction });
  await transaction.commit();
}
catch (e) {
  await transaction.rollback();
  if (typeof e === 'string') {
    return new ApiResponse(API_RESPONSE_CODE_INVALID, e);
  }
  throw e;
}

In order to check whether some PostgreSQL transactions are idle, one can execute the following SQL query (geobase is the database name in the example):

select * from pg_stat_activity where datname = 'geobase' and state = 'idle in transaction';

I am also facing this issue: the SequelizeConnectionAcquireTimeoutError appears and the process gets stuck; only a restart fixes it until the next time.

Node 12.13.0, Sequelize 5.21.2, MSSQL.

Any suggestions on how to fix this issue?

So after some investigation I found out that my problem was that one of my queries caused multiple deadlocks in my MSSQL db, which eventually raised the SequelizeConnectionAcquireTimeoutError.

I fixed the deadlock issue and the SequelizeConnectionAcquireTimeoutError disappeared without any configuration changes.

The patch has been released https://github.com/sequelize/sequelize/releases/tag/v6.20.1 Please let me know if you’re still experiencing the issue with this change 😃

I’ve applied this patch and no more timeouts so far (7 days and counting).

thanks!

Make sure to roll back or commit your transaction. In my case I initiated a transaction and, in a certain flow, returned a response without rolling back or committing it.

The patch has been released https://github.com/sequelize/sequelize/releases/tag/v6.20.1 Please let me know if you’re still experiencing the issue with this change 😃

I can confirm that I managed to replicate the exact scenario I described above. I’ll write something that kills the connection if COMMIT TRANSACTION or ROLLBACK TRANSACTION errors (for any reason, be it a network issue, or something else).

Can’t guarantee this solves the whole issue, but it may be part of it for some people.

I had this issue as well and solved it by optimizing my queries… I tried all the “hot fixes” and configs that I found without any result, then I tackled the situation and rethought my queries and entities… now everything runs well without any config change.

@TrustyMan Does a larger acquire solve your issue?

And to reiterate

Since I know many people reading this thread are going to try to manually track the transaction object and pass it to each query: this would indeed be a correct solution, however it’s error prone (especially in nested functions), and forgetting it in a single place can deadlock your database connections (if you are holding a transaction connection, max out your DB connections, and then try to open a new connection).

Please see here to learn how to add transactions to all queries inside a transaction block automatically: https://sequelize.org/master/manual/transactions.html#automatically-pass-transactions-to-all-queries

EDIT: And for learning purposes since this has gotten a lot of attention. I’m copying a minimal reproduction snippet of mine from another thread.

const sequelize = new Sequelize(database, user, password, { pool: { max: 1 } } );

sequelize.transaction(t => {
    return sequelize.query("SELECT 1+1 AS result") // note: the transaction t is not passed
});

This

  1. sets the connection pool to 1
  2. opens a transaction
  3. tries to execute another command which tries to open a second connection
  4. is unable to open any further connections and fails with the error at the start of this thread (after X timeout)

This is not a bug but expected behaviour of transactions. Sequelize makes this mistake a bit easy to make, but I digress.
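
For contrast, a sketch of the corrected version of the snippet above (same setup assumed): passing `t` to the inner query makes it run on the transaction’s own connection, so no second connection is needed and no acquire timeout occurs.

```javascript
// Sketch: same pool of 1, but the query participates in the transaction,
// so it reuses the transaction's connection instead of waiting for one.
function runInTransaction(sequelize) {
  return sequelize.transaction(t => {
    return sequelize.query("SELECT 1+1 AS result", { transaction: t });
  });
}
```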

@demian85 I was seeing this error as well, and it seems that in my case it happened when some sequelize calls within a transaction weren’t using the parent transaction, blocking the transaction for a longer time so that other transactions failed with SequelizeConnectionAcquireTimeoutError.

Created a bug for this: https://github.com/sequelize/sequelize/issues/11995

After hours of debugging, I found a pattern that is not easy to spot when using manual transaction commit and rollback with async/await:

const dbtx = await db.transaction();

if (_records_materials.length === 0) {
  return true; // but no commit or rollback yet
}

await dbtx.commit()
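
A sketch of a safer shape for the pattern above (assumes a Sequelize instance `db`; `processMaterials` and the work inside it are placeholders): every path out of the function either commits or rolls back, so the pooled connection is always released.

```javascript
// Sketch (not the commenter's actual code): a manual transaction where
// every exit path commits or rolls back, releasing the pooled connection.
async function processMaterials(db, records) {
  const dbtx = await db.transaction();
  try {
    if (records.length === 0) {
      await dbtx.commit(); // commit before the early return, not after it
      return true;
    }
    // ... perform inserts/updates, passing { transaction: dbtx } to each ...
    await dbtx.commit();
    return true;
  } catch (err) {
    await dbtx.rollback(); // always release the connection on failure
    throw err;
  }
}
```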

@jzavisek, I finally realized that you were right. I forgot to pass my transaction to an inner function, and a transaction with the default identifier was created instead. As a result my transaction was idle for 60 seconds, and finally the SequelizeConnectionAcquireTimeoutError was thrown. Now everything works for me, even after upgrading to pg@7.17.1.

@tapz, I faced a similar issue a few days ago, but I was upgrading pg from 7.14.0 to 7.15.1, so 7.17.0 is not the issue in my case for sure. It’s probably caused by some code refactoring I did earlier, so I have to check that first. Is it possible that some unclosed transaction leads to this error?

The same error.

ConnectionAcquireTimeout errors are notoriously difficult to debug because they are usually a symptom of various problems that are hard to pin down.

The documentation has a section about the error that can help point you in the right direction. It’s written for v7, but is relevant to v6 as well: https://sequelize.org/docs/v7/other-topics/connection-pool/#connectionacquiretimeouterror

If we enable CLS, is it possible to add a hook that reports when CLS automatically passes the transaction to our query?

This can help us catch coding bugs.

Thanks @saqib-ahmed for the tips, though the same code was working fine with MySQL; since we moved to MariaDB, I am seeing these errors. Also, as the project has been live for more than a year with managed transactions, it would be a BIG change to update it to manual transactions and would require a lot of time to re-test everything. I’ll see if the pool settings help.

Thanks for your response @ephys. I’ll have a look at pgAdmin and let you know what I find.

@arbieo I think we should turn that on by default once we replace CLS with node’s built-in AsyncLocalStorage.

@dijonkitchen I think I had the same issue. I could stop this error with one of the following 3 methods:

  • increasing the pool configuration (allowing only 1 connection makes it break, allowing 100 makes it pass; increasing the acquire time also makes it work, with some waiting)
  • getting the queries out of a transaction
  • running the queries one by one instead of in parallel

None of these seems entirely satisfying:

  • increasing the config may just allow the break to happen again later
  • getting the queries out of a transaction loses the benefits of using a transaction in the first place
  • running the queries one by one makes things slower

I don’t understand why it works outside a transaction. If it’s only a matter of concurrent connections, what difference does it make that the queries are inside a transaction?

Since this is the top search result on Google: Sequelize has a way to automatically pass transactions to all your calls, documented here:

https://sequelize.org/master/manual/transactions.html#automatically-pass-transactions-to-all-queries

So you can make your code shoot-yourself-in-the-foot proof

Neither downgrading nor increasing the timeout settings is an ideal solution; however, both solved the problem, so when someone finds this issue they can choose whichever approach suits their case better…

I got this issue during the development of a task that involves many inserts (more than 20k), and I solved it by increasing the acquire attribute of the pool option (from 30 seconds to 60 seconds); my configuration is below:

return new Sequelize(
    'DATABASE_URL',
    {
        logging: false,
        pool: {
            max: 30,
            min: 0,
            acquire: 60000,
            idle: 5000
        } 
    }
);

I don’t know if this is the same issue… but Sequelize v5 is not workable for us at all at the moment. We are using MySQL:

 logging:console.log,
 pool: {
     handleDisconnects: true,
     max: 13,
     min: 1, 
     idle: 10000, 
     acquire: 20000 // i also tried 50000
 },
 dialectOptions: {
     ssl: 'Amazon RDS'
 },

I turned on logging: console.log, and for some reason Executing (default): SELECT 1+1 AS result is now firing ALL over the place… it’s happening at least once or twice between EVERY db call, and it eventually gets logged 1000+ times even though no database calls are executing… then it just dies with Operation timeout and comes to a halt.

sequelize.js:1187 Executing (default): SELECT 1+1 AS result
sequelize.js:1187 Executing (default): SELECT 1+1 AS result
sequelize.js:1187 Executing (default): SELECT 1+1 AS result
.... hundreds of repetitions ...
sequelize.js:1187 Executing (default): SELECT 1+1 AS result
sequelize.js:1187 Executing (default): SELECT 1+1 AS result
SequelizeConnectionAcquireTimeoutError: Operation timeout
    at pool.acquire.catch.error (/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:282:52)
    at <anonymous>

... server is pretty much dead now ...

UPDATE: What’s extremely odd is that this SELECT 1+1 AS result log seems to be tied to the number of results that come back from my queries… for example, after my user search page loads (which has a limit of 25), SELECT 1+1 AS result is logged 25 times… if I search for nate and it returns 3 users, it gets logged 3 times at the end…

UPDATE 2: My issue was not really related… I had added this as part of the upgrade… however, it was running whenever I grabbed my sequelize instance, instead of only when I created it (I feel dumb)…

    // check connection -- this ran on every retrieval of the sequelize
    // instance, issuing a "SELECT 1+1 AS result" each time
    sequelize.authenticate().catch(err => {
      sentry.error(err);
    });
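
A sketch of the fix implied by UPDATE 2: run authenticate() once when the instance is created, not on every retrieval. Here `makeSequelize` is a hypothetical factory standing in for `new Sequelize(...)`; adapt it to your setup.

```javascript
// Sketch: memoize the Sequelize instance so authenticate() (which issues
// "SELECT 1+1 AS result") runs only once, at creation time.
let instance = null;

function getSequelize(makeSequelize) {
  if (!instance) {
    instance = makeSequelize();
    instance.authenticate().catch(err => console.error(err));
  }
  return instance;
}
```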