node-mssql: ResourceRequest timed out

Node: 6.9.1. Pool configuration:

pool: {
    max: 100,
    min: 1,
    idleTimeoutMillis: 15000
}

I am getting this error when traffic is low or there are gaps between requests.

error:  { TimeoutError: ResourceRequest timed out
        at ResourceRequest._fireTimeout (/home/munir/node-workspace/server/node_modules/generic-pool/lib/ResourceRequest.js:62:17)
        at Timeout.bound (/home/munir/node-workspace/server/node_modules/generic-pool/lib/ResourceRequest.js:8:15)
        at ontimeout (timers.js:365:14)
        at tryOnTimeout (timers.js:237:5)
        at Timer.listOnTimeout (timers.js:207:5)
  name: 'TimeoutError' }

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 13
  • Comments: 74 (7 by maintainers)

Most upvoted comments

If you’re using transactions in your calls, that seems to be the issue. Improper usage of them makes the pool get stuck. Try to get rid of them.
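
For illustration only (not code from this thread), the hygiene that avoids a stuck pool looks roughly like this, assuming a connected pool and node-mssql’s Transaction API:

const tx = new sql.Transaction(pool);
await tx.begin();
try {
    await new sql.Request(tx).query('UPDATE t SET x = 1'); // placeholder query
    await tx.commit();
} catch (err) {
    await tx.rollback(); // always settle the transaction so the connection returns to the pool
    throw err;
}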

I’m also getting this error. It affects our production environment rather badly every two days or so (and then we have to restart our service). Is there any more news on this?

There was an alleged fix for this but I still get the error 😦

  1. I added some code to your if statement

It shouldn’t matter whether the connectionPool is connected or not, because it could be in the process of connecting when this is called, and you’d still be creating many pools that you lose the reference to and thus can’t close…

Perhaps you should just un-assign the variable in the catch block?

@dhensby Thank you for your advice! I agree with you, so I changed my code.

// assumes module-level declarations elsewhere: const sql = require("mssql"), dbConfig, and let connectionPool
const getConnectionPool = async () => {
	try {
+		if (connectionPool) return
		connectionPool = await new sql.ConnectionPool(dbConfig).connect()
		connectionPool.on("error", async err => {
			await closeConnectionPool()
			onDbError(err, "getConnectionPool error")
		})
	} catch (err) {
+		connectionPool = null
		onDbError(err, "getConnectionPool error in catch")
	}
}

const closeConnectionPool = async () => {
	try {
		await connectionPool.close()
	} catch (err) {
+		connectionPool = null
		onDbError(err, "closeConnectionPool error")
	}
}

And so far there is no connection problem in my app. I’ve been keeping an eye on it for 4 days!

@willmorgan

Here is the stack trace.

[2018-07-25T13:02:50.259] [ERROR] default - Database connection pool error { error: 
   { ConnectionError: Failed to connect to 1.1.1.1:1433 in 15000ms
    at Connection.tedious.once.err (C:\projects\host\node_modules\mssql\lib\tedious.js:216:17)
    at Object.onceWrapper (events.js:255:19)
    at Connection.emit (events.js:160:13)
    at Connection.connectTimeout (C:\projects\host\node_modules\tedious\lib\connection.js:936:12)
    at Timeout._onTimeout (C:\projects\host\node_modules\tedious\lib\connection.js:906:16)
    at ontimeout (timers.js:458:11)
    at tryOnTimeout (timers.js:296:5)
    at Timer.listOnTimeout (timers.js:259:5)
     code: 'ETIMEOUT',
     originalError: 
      { ConnectionError: Failed to connect to 1.1.1.1:1433 in 15000ms
    at ConnectionError (C:\projects\host\node_modules\tedious\lib\errors.js:12:12)
    at Connection.connectTimeout (C:\projects\host\node_modules\tedious\lib\connection.js:936:28)
    at Timeout._onTimeout (C:\projects\host\node_modules\tedious\lib\connection.js:906:16)
    at ontimeout (timers.js:458:11)
    at tryOnTimeout (timers.js:296:5)
    at Timer.listOnTimeout (timers.js:259:5)
        message: 'Failed to connect to 1.1.1.1:1433 in 15000ms',
        code: 'ETIMEOUT' },
     name: 'ConnectionError' } }

I believe I was able to isolate the issue. Try adding this to your connection configuration:

{
    pool: {
        acquireTimeoutMillis: 0
    }
}
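
(If I read generic-pool v3 correctly, a falsy acquireTimeoutMillis means no timer is set on pool.acquire(), so waiting requests block indefinitely instead of throwing ResourceRequest timed out; note that this hides the symptom rather than freeing stuck connections.)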

Awaiting your feedback. Thank you!

I had a really hard time with this on a customer’s system too. I managed to reduce the damage somewhat with the config below, but it’s urgent to fix this one.

pool: {
    max: 100000,
    min: 1,
    idleTimeoutMillis: 50,
    evictionRunIntervalMillis: 5,
    softIdleTimeoutMillis: 5
}

@dhensby Finally I got the same error Failed to connect to 165.246.17.29:3361 in 15000ms after 6 days of running. There weren’t too many connections there. So I edited my code: when the error occurs, the app is restarted by process.exit() without trying to reconnect to the DB.

  1. I added some code to your if statement

It shouldn’t matter whether the connectionPool is connected or not, because it could be in the process of connecting when this is called, and you’d still be creating many pools that you lose the reference to and thus can’t close…

Perhaps you should just un-assign the variable in the catch block?

@dhensby Thank you for your guidance! I changed the code as you recommended. Now 3 different versions of the app are running, one on each server.

  1. Old one (always reassigning connectionPool)
  2. New one, as you recommended
  3. Another one (getting the connection once at app start)

Until now, the number of active connections on all the servers has been stable.

I’m keeping an eye out for that.

P.S.

  1. I added some code to your if statement
const getConnectionPool = async () => {
	try {
+		if (connectionPool && connectionPool.connected) return
		connectionPool = await new sql.ConnectionPool(dbConfig).connect()
		connectionPool.on("error", async err => {
			await closeConnectionPool()
			onDbError(err, "getConnectionPool error")
		})
	} catch (err) {
		onDbError(err, "getConnectionPool error in catch")
	}
}
  2. I used the query below to check the number of active connections
SELECT PROGRAM_NAME, HOSTNAME, STATUS, COUNT(DBID) AS NumberOfConnections
FROM sys.sysprocesses
WHERE PROGRAM_NAME = 'node-mssql'
GROUP BY PROGRAM_NAME, HOSTNAME, STATUS

Thanks, though I wasn’t looking for an explanation of why poolCreate is called when it’s called; I wanted to understand what it is that you fixed by adding that reject in there.

I managed to “hackfix” it so that it recovers by replacing the whole connection pool when there are errors. I had to do it in a timeout, since doing it immediately would somehow result in no 'error' handler being attached and unhandled rejections being issued. Note that this will lose previously issued queries; if someone wants to keep those, a queue would have to sit between the repo method and the connection pool.

const sql = require('mssql');

class Repo {
    constructor(cfg) {
        const self = this;

        self.config = cfg;
        self.timerNewConn = null;
        self.connectionPool = self.newConn();
    }

    newConn() {
        const self = this;

        const pool = new sql.ConnectionPool(self.config);
        pool.on('error', err => {
            console.error('onPoolError listener');
            //console.error(err);

            if (self.timerNewConn) {
                clearTimeout(self.timerNewConn);
            }

            // replace the whole pool after a delay; doing it immediately left no 'error' handler attached
            self.timerNewConn = setTimeout(() => {
                pool.close().then(() => self.connectionPool = self.newConn());
            }, 7000);
        });

        return pool.connect(err => {
            if (err) {
                console.error('onPoolConnectError listener');
                console.error(err);

                if (self.timerNewConn) {
                    clearTimeout(self.timerNewConn);
                }

                self.timerNewConn = setTimeout(() => {
                    pool.close().then(() => self.connectionPool = self.newConn());
                }, 7000)
            }
        });
    }

    async names() {
        const pool = await this.connectionPool;

        const queryRes = await pool
            .request()
            .query(
                'select name from names'
            );

        return queryRes['recordset'].map(r => r.name);
    }
}

function connCfg() {
    const user = 'sa';
    const password = 'P@55w0rd';
    const server = 'localhost';
    const database = 'test';

    return {user, password, server, database/*, debug: true*/, pool: {
            min: 1,
            max: 5,
            acquireTimeoutMillis: 5000,
            evictionRunIntervalMillis: 500
        }};
}

module.exports = (
    intervalSeconds,
    repo = new Repo(connCfg())
) => {
    const main = () => run().catch(console.error);
    main();
    setInterval(
        main,
        intervalSeconds * 1000
    );

    async function run() {
        console.log("next");

        await repo.names()
            .then(res => console.log(`names res: ${res}`))
            .catch(err => console.error(`names err: ${err.stack}`));
    }
};
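
(For what it’s worth: calling the exported function, e.g. require('./repo')(10) if you save this as repo.js, starts the loop and polls names() every 10 seconds.)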

@dsbert That’s interesting. I haven’t seen an ETIMEOUT error crash our Node services yet, but I’ll keep an eye out for that. Do you have a stack trace you can share? If there’s a defect in my PR then I’d be keen to fix it…

@willmorgan

I’m not sure if this is the same problem that you are trying to fix, but after running on your code for a while I eventually hit an ETIMEOUT error from the connection pool. This caused the app to get stuck until I restarted it. The only work-around I have been able to come up with is to kill the app if any error is emitted by the connection pool object. It looks like when an error occurs in the connection pool while running a request, the request is never rejected. Here is an example.

// assumes a connected ConnectionPool `pool`, const sql = require("mssql"), and a string `message`
pool.on("error", (err) => {
  // ETIMEOUT error occurs
});

try {
  const results = await pool
    .request()
    .input("message", sql.NVarChar(50), message.substr(0, 50))
    .output("result", sql.NVarChar(1000))
    .execute("db.procedure");
  return results;
} catch (err) {
  // this never gets hit
}
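
Concretely, the work-around is to treat any pool-level error as fatal (a sketch; the exit code and logging are illustrative):

pool.on("error", (err) => {
  // we cannot tie the error back to the stuck request, so bail out and let
  // the process manager (pm2, systemd, Kubernetes, ...) restart the app
  console.error("connection pool error", err);
  process.exit(1);
});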

One thing I noticed is that we don’t reject the factory create promise on error; we just emit an error event:

https://github.com/tediousjs/node-mssql/blob/master/lib/tedious.js#L245-L252

Going to try rejecting on error, in tandem with running the explicit timeout code (from @jacqt ), and see if that solves the problem for us: https://github.com/tediousjs/node-mssql/compare/master...DigitalIdentification:pool-fix?expand=1
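
Roughly, the idea is this (a sketch of the approach, not the actual diff; see the compare link above):

// inside the pool factory's create(): reject as well as emit, so generic-pool
// stops counting this create attempt as a pending resource
return new Promise((resolve, reject) => {
  const connection = new tds.Connection(config);
  connection.once('connect', err => err ? reject(err) : resolve(connection));
  connection.on('error', err => {
    this.emit('error', err); // current behaviour: emit only
    reject(err);             // proposed: also reject (a no-op if already settled)
  });
});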

I also noticed that node-pool swallows all errors, which is going to make any sort of timeout or unexpected error extremely difficult to notice and diagnose.

I’ll need that stack because I have no idea what else could cause the issue. Maybe I don’t understand the code of node-pool correctly.

Any help with that is appreciated.

Update: @patriksimek, the munir131 fix doesn’t work after all. It actually masked an underlying memory-leak issue that I tracked down to transactions that were opened but never properly committed or rolled back. That “fix” also produces the error “Resource not currently part of this pool”.

If you are using transactions, make sure that you commit or roll back under all conditions, whether those are exceptions or your own logic’s decision tree. Don’t just return from a function, otherwise the connection associated with that transaction will be left “open” and never get returned to the pool.

Here is an example of what caused my memory leak and ultimately the “ResourceRequest timed out” error:

export default async (req: any, args: any, res: any) => {
  const logger = req.logger;
  const connectionPool = req.connectionPool;
  const transaction = new sql.Transaction(connectionPool);

  try {
    await transaction.begin(); // we started a transaction

    const decodeResult = decode(args.customerId.value);
    if (decodeResult.error) {
      httpUtils.setError(res, new Error(decodeResult.error));
      return; // this just caused a memory leak! Our connection will never go back into the pool!! Always roll back or commit the transaction before returning from the method; eventually you will run out of connections in your pool
    }
    
    ... // more logic
  }
  catch (e) {
    await handleRollback(transaction, logger); // good, we rolled back during unhandled conditions
    httpUtils.setError(res, e);
  }
}
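
A corrected version of that early return, using the same application helpers, rolls back first:

    const decodeResult = decode(args.customerId.value);
    if (decodeResult.error) {
      await handleRollback(transaction, logger); // settle the transaction so the connection returns to the pool
      httpUtils.setError(res, new Error(decodeResult.error));
      return; // now safe: nothing is left checked out
    }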

In addition, I forked this repo and updated the references to generic-pool to 3.4.2. I haven’t seen the “ResourceRequest timed out” error since then. I believe the owners of generic-pool have made changes regarding this same error message on their end, unrelated to my memory leak detailed above.


Original: I’ve tried all the posted “fixes” here and none of them work EXCEPT for @munir131’s fork at https://github.com/munir131/node-mssql.

To install the latest, use this command:

npm i https://github.com/munir131/node-mssql/tarball/e539a725eed2161e7d0f4cd0b0eda05680b8564c -S

You can install a specific commit by replacing the hash after tarball/ with any commit ID found at: https://github.com/munir131/node-mssql/commits/master

The issue for us was that _poolCreate can sometimes return a promise that never resolves under high load. https://github.com/tediousjs/node-mssql/blob/dee9086569ea007d632813a1f209e6b2a716c2e3/lib/tedious.js#L176

This is a problem because of the way the generic node-pool counts how many TDS connections are potentially available: it adds the number of currently pending promises returned by the _poolCreate method to the number of potentially available TDS connections - see https://github.com/coopernurse/node-pool/blob/e73247728432f2a681f87b3228ccf10ebe67d5aa/lib/Pool.js#L629

As a result, whenever the _poolCreate method fails to return a promise that resolves, the generic node-pool will over-estimate the number of potentially available TDS connections and wait indefinitely for the promise returned by _poolCreate to resolve. (https://github.com/coopernurse/node-pool/blob/e73247728432f2a681f87b3228ccf10ebe67d5aa/lib/Pool.js#L218)

The simple fix for this was to just add the following lines to the tedious.js file:

https://github.com/tediousjs/node-mssql/compare/master...jacqt:master
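
The gist (my paraphrase of the approach, not the exact diff) is to race the connect promise against an explicit timeout, so the promise returned by _poolCreate always settles:

// sketch: make the create promise reject after `ms` instead of letting
// generic-pool wait forever on a silently stalled connection attempt
const withTimeout = (promise, ms) =>
    Promise.race([
        promise,
        new Promise((resolve, reject) =>
            setTimeout(() => reject(new Error('poolCreate timed out')), ms)),
    ]);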

I don’t think this is the best way to encode this logic, but perhaps it will be helpful to other people in this thread.

Well, at least on my customer’s server it seems to have reduced the problem.