knex: TimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?

Environment

Knex version: 0.15.2
Database: sqlite3
OS: Ubuntu 18

I’m getting lots of errors like TimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?

Any ideas? All my code lives in this one file: https://github.com/umaar/wiki-globe/blob/master/index.js - that code powers a globe visualisation which I posted in a Hacker News comment, which in turn led to the discovery of this error.

An example query looks like this:

await knex('edits').insert([{
	field1: JSON.stringify(data),
	field2: 'field2'
}]);

Does sqlite3 only allow a single connection to it because it’s file-based? I’ll try adding .transacting() calls to see if that makes a difference.

I guess it comes down to either a technical limitation with sqlite3 (maybe a lack of pooling?) or some badly written DB code/queries on my part! Any insight you have would be much appreciated, thanks!

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 6
  • Comments: 56 (9 by maintainers)

Commits related to this issue

Most upvoted comments

I solved this problem with these versions:

"knex": "^0.21.1",
"objection": "^2.1.3",
"pg": "^8.0.3"

@batadamnjanovic Thank you so much! I’ve been pulling my hair out trying to understand what the hell happened. Upgrading to the latest pg did the trick.

I had the same problem with the pg connection. It took me a while to figure out that with Node v14.9.0, pg version ^8.0.3 is required. I had pg version ^7.4.1, which caused problems.

life saving comment.

@joaotanaca A good way to avoid that error is to use the implicit commit/rollback style of transactions:

await knex.transaction(async trx => {
  // use trx here... if the handler rejects, the transaction is rolled back;
  // if it returns/resolves, the transaction is committed automatically
});

+1 having this issue too

This error can be caused by a great many issues; today I found out a new one the hard way, after scrolling up and down countless threads like this one to no avail.

When setting up the pool, knex lets you optionally register an afterCreate callback. If this callback is added, it is imperative that you call the done callback that is passed as the last parameter to your registered callback, or else no connection will ever be acquired, leading to the timeout.

.....
    pool: {
      afterCreate: (conn, done) => {
        // .... add logic here ....
        // you must call done with the new connection
        done(null, conn);
      },
    }
.....
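
For illustration, here is a minimal sketch of a complete pool block (a fragment of the knex config) whose afterCreate hook always calls done. The PRAGMA is just an illustrative example, assuming a sqlite3 connection where conn.run is available:

// Sketch only: a pool config fragment whose afterCreate hook always calls done.
// The PRAGMA below is an illustrative example for a sqlite3 connection.
pool: {
  min: 1,
  max: 1,
  afterCreate: (conn, done) => {
    conn.run('PRAGMA foreign_keys = ON', (err) => {
      // Forward any error; if done is never called, the connection is never
      // handed back to the pool and every acquire will eventually time out.
      done(err, conn);
    });
  },
}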

Knex version: 0.15.2
Database: MySQL 8
OS: MacOSX

I’m having the same connection issue when I run “knex migrate:latest”, and have scoured Google with no luck. I’ve tested the connection between Node and MySQL with a simple SQL script, and that connects fine. Initially there was an authentication issue with the connection, which I was able to resolve by altering the user privileges with ‘mysql_native_password’ (which I believe has to be done in MySQL 8 to support legacy connections). However, I was unable to get the Knex connection going.

Does Knex support MySQL 8? The examples in the documentation show MySQL 5.6 and there is no explicit mention of MySQL 8 support.

I know this can be anything, so here’s all the code I have for this project.

knexfile.js

require('dotenv').config();

module.exports = {

  development: {
    client: 'mysql2',
    connection: {
      port: process.env.db_port,
      host: process.env.db_host,
      database: process.env.db_name,
      user: process.env.db_user,
      password: process.env.db_key,
      connectTimeout: 90000
    },
    debug: true,
    pool: {
      min: 1,
      max: 20,
    },
    migrations: {
      directory: __dirname + '/knex/migrations',
    },
    seeds: {
      directory: __dirname + '/knex/seeds',
    },
  }
};

knex.js

require('dotenv').config();
const env = process.env.environment || 'development'
const config = require('../knexfile.js')[env];
const knex = require('knex')(config);

module.exports = knex;

server.js

require('dotenv').config();
const express = require('express');
const bodyParser = require("body-parser");
const path = require('path');
const PORT = process.env.db_port || 3002;
const knex = require('../knex/knex.js');

const app = express();

app.use((req, res, next) => {
  res.header("Access-Control-Allow-Origin", "*");
  res.header(
    "Access-Control-Allow-Headers",
    "Origin, X-Requested-With, Content-Type, Accept"
  );
  res.header("Access-Control-Allow-Methods", "GET,PUT,POST,DELETE");
  next();
});

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(express.static(`${__dirname}/../client/dist`, { maxAge: "365d" }));



app.listen(PORT, () => {
  console.log(`listening on port: ${PORT}`);
});

migration file

exports.up = function(knex, Promise) {
  return knex.schema.createTable('sdcapstone', (t) => {
    t.increments('_id').primary().unique();
    t.string('name').notNullable();
    t.integer('rating');
    t.integer('reviewCount');
    t.integer('itemNum').notNullable();
    t.integer('price').notNullable();
    t.string('mainImage');
    t.json('images');
  })
};

exports.down = function(knex, Promise) {
  return knex.schema.dropTable('sdcapstone');
};

Here is the result of a console.log of “knex” from knex.js:

{ [Function: knex]
  Promise: 
   { [Function: Promise]
     TypeError: [Function: TypeError],
     RangeError: [Function: RangeError],
     CancellationError: [Function: SubError],
     TimeoutError: [Function: SubError],
     OperationalError: [Function: OperationalError],
     RejectionError: [Function: OperationalError],
     AggregateError: [Function: SubError],
     _peekContext: [Function],
     onPossiblyUnhandledRejection: [Function],
     onUnhandledRejectionHandled: [Function],
     longStackTraces: [Function],
     hasLongStackTraces: [Function],
     config: [Function],
     getNewLibraryCopy: [Function],
     is: [Function],
     fromCallback: [Function],
     fromNode: [Function],
     all: [Function],
     cast: [Function],
     fulfilled: [Function],
     resolve: [Function],
     rejected: [Function],
     reject: [Function],
     setScheduler: [Function],
     pending: [Function],
     defer: [Function],
     method: [Function],
     try: [Function],
     attempt: [Function],
     bind: [Function],
     PromiseInspection: [Function: PromiseInspection],
     join: [Function],
     Promise: [Circular],
     version: '3.5.2',
     map: [Function],
     using: [Function],
     delay: [Function],
     coroutine: { [Function] addYieldHandler: [Function] },
     spawn: [Function],
     promisify: [Function],
     promisifyAll: [Function],
     props: [Function],
     race: [Function],
     reduce: [Function],
     settle: [Function],
     some: [Function],
     _SomePromiseArray: [Function: SomePromiseArray],
     filter: [Function],
     each: [Function],
     mapSeries: [Function: PromiseMapSeries],
     any: [Function],
     noConflict: [Function: noConflict] },
  queryBuilder: [Function: queryBuilder],
  raw: [Function: raw],
  batchInsert: [Function: batchInsert],
  transaction: [Function: transaction],
  initialize: [Function: initialize],
  destroy: [Function: destroy],
  ref: [Function: ref],
  domain: null,
  _events: {},
  _eventsCount: 0,
  _maxListeners: undefined,
  setMaxListeners: [Function: setMaxListeners],
  getMaxListeners: [Function: getMaxListeners],
  emit: [Function: emit],
  addListener: [Function: addListener],
  on: [Function: addListener],
  prependListener: [Function: prependListener],
  once: [Function: once],
  prependOnceListener: [Function: prependOnceListener],
  removeListener: [Function: removeListener],
  removeAllListeners: [Function: removeAllListeners],
  listeners: [Function: listeners],
  listenerCount: [Function: listenerCount],
  eventNames: [Function: eventNames],
  with: [Function],
  select: [Function],
  as: [Function],
  columns: [Function],
  column: [Function],
  from: [Function],
  fromJS: [Function],
  into: [Function],
  withSchema: [Function],
  table: [Function],
  distinct: [Function],
  join: [Function],
  joinRaw: [Function],
  innerJoin: [Function],
  leftJoin: [Function],
  leftOuterJoin: [Function],
  rightJoin: [Function],
  rightOuterJoin: [Function],
  outerJoin: [Function],
  fullOuterJoin: [Function],
  crossJoin: [Function],
  where: [Function],
  andWhere: [Function],
  orWhere: [Function],
  whereNot: [Function],
  orWhereNot: [Function],
  whereRaw: [Function],
  whereWrapped: [Function],
  havingWrapped: [Function],
  orWhereRaw: [Function],
  whereExists: [Function],
  orWhereExists: [Function],
  whereNotExists: [Function],
  orWhereNotExists: [Function],
  whereIn: [Function],
  orWhereIn: [Function],
  whereNotIn: [Function],
  orWhereNotIn: [Function],
  whereNull: [Function],
  orWhereNull: [Function],
  whereNotNull: [Function],
  orWhereNotNull: [Function],
  whereBetween: [Function],
  whereNotBetween: [Function],
  andWhereBetween: [Function],
  andWhereNotBetween: [Function],
  orWhereBetween: [Function],
  orWhereNotBetween: [Function],
  groupBy: [Function],
  groupByRaw: [Function],
  orderBy: [Function],
  orderByRaw: [Function],
  union: [Function],
  unionAll: [Function],
  having: [Function],
  havingRaw: [Function],
  orHaving: [Function],
  orHavingRaw: [Function],
  offset: [Function],
  limit: [Function],
  count: [Function],
  countDistinct: [Function],
  min: [Function],
  max: [Function],
  sum: [Function],
  sumDistinct: [Function],
  avg: [Function],
  avgDistinct: [Function],
  increment: [Function],
  decrement: [Function],
  first: [Function],
  debug: [Function],
  pluck: [Function],
  clearSelect: [Function],
  clearWhere: [Function],
  clearOrder: [Function],
  insert: [Function],
  update: [Function],
  returning: [Function],
  del: [Function],
  delete: [Function],
  truncate: [Function],
  transacting: [Function],
  connection: [Function],
  client: 
   Client_MySQL {
     config: 
      { client: 'mysql',
        connection: [Object],
        debug: true,
        pool: [Object],
        migrations: [Object],
        seeds: [Object],
        log: [Object] },
     logger: 
      Logger {
        _debug: [Function: debug],
        _warn: [Function: warn],
        _error: [Function: error],
        _deprecate: [Function: deprecate] },
     connectionSettings: 
      { port: '3001',
        host: 'localhost',
        database: redacted,
        user: redacted,
        password: redacted },
     driver: 
      { createConnection: [Function: createConnection],
        createPool: [Function: createPool],
        createPoolCluster: [Function: createPoolCluster],
        createQuery: [Function: createQuery],
        escape: [Function: escape],
        escapeId: [Function: escapeId],
        format: [Function: format],
        raw: [Function: raw] },
     pool: 
      Pool {
        creator: [Function: create],
        destroyer: [Function: destroy],
        validate: [Function: validate],
        log: [Function],
        acquireTimeoutMillis: 60000,
        createTimeoutMillis: 30000,
        idleTimeoutMillis: 30000,
        reapIntervalMillis: 1000,
        createRetryIntervalMillis: 200,
        propagateCreateError: true,
        min: 0,
        max: 20,
        used: [],
        free: [],
        pendingCreates: [],
        pendingAcquires: [],
        destroyed: false,
        interval: null },
     valueForUndefined: 
      Raw {
        client: [Circular],
        sql: 'DEFAULT',
        bindings: undefined,
        _wrappedBefore: undefined,
        _wrappedAfter: undefined,
        _debug: true },
     _events: 
      { start: [Function],
        query: [Function],
        'query-error': [Function],
        'query-response': [Function] },
     _eventsCount: 4,
     makeKnex: [Function: makeKnex] } }

I solved this problem with these versions:

"knex": "^0.21.1",
"objection": "^2.1.3",
"pg": "^8.0.3"

@batadamnjanovic Thank you so much; after a week-long nightmare… you finally saved me.

From the documentation:

The client created by the configuration initializes a connection pool, using the generic-pool library. This connection pool has a default setting of a min: 2, max: 10 for the MySQL and PG libraries, and a single connection for sqlite3 (due to issues with utilizing multiple connections on a single file). To change the config settings for the pool, pass a pool option as one of the keys in the initialize block.

So if you’re using sqlite3 you will run into issues with more than 1 connection.
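
For example, a minimal sketch (the filename and values are illustrative, not taken from this thread) of passing an explicit pool option when initializing knex for sqlite3:

// Sketch: explicit single-connection pool for a file-based sqlite3 database.
const knex = require('knex')({
  client: 'sqlite3',
  connection: { filename: './dev.sqlite3' }, // illustrative filename
  useNullAsDefault: true,
  pool: { min: 1, max: 1 }, // sqlite3 works best with a single connection per file
});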

@evanrosa We have been running our web application on port 8080 with Cloud Run for several years and are having the same problem. Could you please tell me why changing the port to 1337 would stop this problem?

This feeling when you think you’ll spend the entire day trying to fix such a damn issue, but you find the right spot and the right people that ease the pain in a flash 😃

I solved this problem with these versions:

"knex": "^0.21.1",
"objection": "^2.1.3",
"pg": "^8.0.3"

Updating the versions is what worked for me.

FWIW, I was getting the same error after upgrading from Node 10 to Node 14 with pg 7.12.1. Upgrading to pg 8.3.3 fixed it.

For me it was a silly error: I ended up not putting await trx.commit() before the return.
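
For reference, a minimal sketch of the explicit transaction style where that mistake can happen (assuming a knex version that supports handler-less transactions; the table and column names are borrowed from the original post):

// Explicit transaction style: you are responsible for commit/rollback.
const trx = await knex.transaction();
try {
  await trx('edits').insert({ field1: 'value' });
  await trx.commit(); // forgetting to call (and await) this leaks the connection
} catch (err) {
  await trx.rollback();
  throw err;
}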

@ashanker2 I fixed the problem by using config.host = /cloudsql/${process.env.INSTANCE_CONNECTION_NAME} and not an IP and port. GAE mounts a Unix socket into the container when you have a Cloud SQL instance configured. There may be a problem if the DB is in a different region than the service that uses it, and you may need to use the beta_settings flag in your .yaml as in the docs.
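
A minimal sketch of what that connection config can look like (assuming the pg client; DB_USER, DB_PASS and DB_NAME are placeholder variable names, only INSTANCE_CONNECTION_NAME is from the comment above):

// Sketch: connecting to Cloud SQL from GAE via the mounted Unix socket.
const knex = require('knex')({
  client: 'pg',
  connection: {
    host: `/cloudsql/${process.env.INSTANCE_CONNECTION_NAME}`, // socket path, not IP:port
    user: process.env.DB_USER,         // placeholder
    password: process.env.DB_PASS,     // placeholder
    database: process.env.DB_NAME,     // placeholder
  },
});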

@tahv0 There are 100 different reasons why that might happen, so you need to find out how to reproduce the problem. By then you will probably also see why it is happening.

For me this was a Google Cloud specific problem, because one cannot connect from GAE to Cloud SQL over a public IP. I fixed the problem by reading some more docs.