node-mssql: ResourceRequest timed out
Node: 6.9.1

Configuration:
pool: {
max: 100,
min: 1,
idleTimeoutMillis: 15000
}
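For context, this pool block sits under the `pool` key of the node-mssql config object. A minimal sketch, assuming placeholder connection fields (only the `pool` block comes from the report):

```javascript
// Sketch of the full node-mssql config the reporter's pool settings would
// live in. The connection fields are placeholders, not from the thread.
const config = {
  user: 'app_user',          // placeholder
  password: 'secret',        // placeholder
  server: 'localhost',       // placeholder
  database: 'mydb',          // placeholder
  pool: {
    max: 100,                // up to 100 pooled connections
    min: 1,                  // keep at least one connection open
    idleTimeoutMillis: 15000 // evict connections idle for 15 seconds
  }
};

module.exports = config;
```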
I am getting this error when there is low traffic or there is a gap between requests.
error: { TimeoutError: ResourceRequest timed out
at ResourceRequest._fireTimeout (/home/munir/node-workspace/server/node_modules/generic-pool/lib/ResourceRequest.js:62:17)
at Timeout.bound (/home/munir/node-workspace/server/node_modules/generic-pool/lib/ResourceRequest.js:8:15)
at ontimeout (timers.js:365:14)
at tryOnTimeout (timers.js:237:5)
at Timer.listOnTimeout (timers.js:207:5)
name: 'TimeoutError' }
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 13
- Comments: 74 (7 by maintainers)
Commits related to this issue
- added "acquireTimeoutMillis: 0" as it seems to fix a connection pool error - added status indicator to show "Connected" state - added possibility to allow sql query being inserted via msg object ins... — committed to W0w3/node-red-contrib-mssql-ps by deleted user 6 years ago
- Update package.json upgrade mssql dependency to 4.2.1. mssql package has been updated to resolve an issue related to resource timeout error. Loopback connector is dependent on this package and can fa... — committed to learnmews/loopback-connector-mssql by learnmews 6 years ago
If you’re using transactions in your calls, that seems to be the issue. Improper usage of them makes the pool get stuck. Try to get rid of them.
I’m also getting this error. It affects our production environment rather badly every 2 days or so (and then we have to restart our service). Is there any more news on this?
There was an alleged fix for this, but I still get the error 😦
@dhensby Thank you for your advice! I agree with you, and I have changed my code.
So far, there is no connection problem in my app. I have been keeping an eye on it for 4 days!
@willmorgan
Here is the stack trace.
I believe I was able to isolate the issue. Try adding this to your connection configuration:
Awaiting your feedback. Thank you!
I had a really hard time with this on my customer's system too. I managed to reduce the damage somewhat with the config below, but it's urgent to fix this one.
pool: { max: 100000, min: 1, idleTimeoutMillis: 50, evictionRunIntervalMillis: 5, softIdleTimeoutMillis: 5 }

@dhensby Finally I got the same error
Failed to connect to 165.246.17.29:3361 in 15000ms
after 6 days of running. There were not too many connections at the time. So I edited my code: when the error occurs, the app is restarted via process.exit() without trying to reconnect to the DB.

It shouldn't matter if the connectionPool is connected or not, because it could be in the process of connecting when it's called, and you'll still be creating many pools that you lose reference to and thus can't close…
perhaps you should just un-assign the variable in the catch block?
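The suggestion above (cache one pool and un-assign the variable in the catch block so a failed connect does not poison later calls) can be sketched generically. This is an assumption about the intended pattern, not the commenter's actual code; `createPool` stands in for something like `new sql.ConnectionPool(config).connect()`:

```javascript
// Singleton-pool sketch: keep one shared pool promise so every caller
// reuses the same pool, and clear it in the catch block so the next
// caller retries after a failed connect instead of reusing the failure.
let poolPromise = null;

function getPool(createPool) {
  if (!poolPromise) {
    poolPromise = createPool().catch((err) => {
      poolPromise = null; // un-assign so the next caller retries
      throw err;          // still surface the failure to this caller
    });
  }
  return poolPromise;
}

module.exports = { getPool };
```

The key point is that only one pool ever exists at a time, so nothing is leaked when connecting fails.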
@dhensby Thank you for your guidance! I changed the code as you recommended, and now 3 different versions of the app are running, one on each server.
So far, the number of active connections on all the servers is stable.
I’m keeping an eye out for that.
P.S.
Thanks, though I wasn’t looking for an explanation of why poolCreate is called when it’s called; I wanted to understand what it is that you fixed by adding that reject in there.
I managed to “hack-fix” it so that it recovers by replacing the whole connection pool when there are errors. I had to do it in a timeout, since doing it immediately would somehow result in no error handler being attached and would cause unhandled rejections. This will lose previously issued queries, though. If someone wanted to keep those, a queue would have to sit between the repo method and the connection pool.
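The replace-the-pool-on-error workaround described above can be sketched as follows. This is a hedged reconstruction, not the commenter's code; `makePool` stands in for creating and connecting a real sql.ConnectionPool, and the pool is only assumed to be an event emitter:

```javascript
// "Self-healing" pool sketch: when the current pool emits 'error',
// schedule a replacement in a timeout (replacing immediately reportedly
// caused unhandled rejections). In-flight queries on the old pool are lost.
function createSelfHealingPool(makePool) {
  const holder = { pool: null }; // callers read holder.pool each time

  function build() {
    const pool = makePool();
    pool.on('error', () => {
      setTimeout(build, 50); // defer replacement, per the comment above
    });
    holder.pool = pool;
  }

  build();
  return holder;
}

module.exports = { createSelfHealingPool };
```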
@dsbert That’s interesting. I haven’t seen an ETIMEOUT error crash our Node services yet, but I’ll keep an eye out for that. Do you have a stack trace you can share? If there’s a defect in my PR then I would be keen to fix it…

@willmorgan I’m not sure if this is the same problem that you are trying to fix, but after running on your code for a while, I eventually hit an ETIMEOUT error from the connection pool. This caused the app to get stuck until I restarted it. The only workaround I have been able to come up with is to kill the app if any error is emitted by the connection pool object. It looks like when an error occurs in the connection pool while running a request, the request is never rejected. Here is an example.

One thing I noticed is that we don’t reject the factory create promise on error; we just emit an error event:
https://github.com/tediousjs/node-mssql/blob/master/lib/tedious.js#L245-L252
Going to try rejecting on error, in tandem with running the explicit timeout code (from @jacqt ), and see if that solves the problem for us: https://github.com/tediousjs/node-mssql/compare/master...DigitalIdentification:pool-fix?expand=1
I also noticed that node-pool swallows all errors which is going to make any sort of timeout / unexpected error extremely difficult to notice and diagnose.
I’ll need that stack trace, because I have no idea what else could cause the issue. Maybe I don’t understand the code of node-pool correctly.
The ttl parameter - https://github.com/coopernurse/node-pool/blob/71fc5582712dc5982d2b3987b84f9fbc93fe8501/lib/ResourceRequest.js#L27

The ttl parameter is linked to the pool’s acquireTimeoutMillis parameter. acquireTimeoutMillis was removed from the defaults in 4.2 - https://github.com/tediousjs/node-mssql/commit/b353b31af320927a591275479bd837e361f54205#diff-86b75ecfbd25aa771c8e9e0dff8eb00f

Any help with that is appreciated.
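Since generic-pool derives the ResourceRequest ttl from the pool's acquireTimeoutMillis, one hedged illustration of the point above is to set it explicitly so acquires fail fast with a clear error rather than relying on library defaults. The values here are illustrative, not taken from the thread:

```javascript
// Illustrative only: acquireTimeoutMillis bounds how long pool.acquire()
// waits for a connection before failing with "ResourceRequest timed out".
// Setting it explicitly makes the timeout behaviour independent of
// whatever defaults the installed version ships with.
const poolConfig = {
  max: 10,
  min: 1,
  idleTimeoutMillis: 15000,
  acquireTimeoutMillis: 15000 // fail acquires after 15s instead of hanging
};

module.exports = poolConfig;
```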
Update: @patriksimek, the munir131 fix doesn’t work after all. It actually masked an underlying memory leak issue that I tracked down to not properly committing or rolling back transactions that had been opened. That “fix” also produces the error “Resource not currently part of this pool”.
If you are using transactions, make sure that you commit or roll back under any condition, whether that be an exception or your own logic’s decision tree. Don’t simply return from a function, otherwise the connection associated with that transaction will be left “open” and never get returned to the pool.
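The rule above (every transaction ends in commit or rollback on every code path) can be captured in a small wrapper. This is a sketch of the pattern, not node-mssql's own API; `makeTx` stands in for something like `() => new sql.Transaction(pool)`, assumed to expose promise-returning `begin`/`commit`/`rollback`:

```javascript
// Wrapper enforcing commit-or-rollback on every path, so the transaction's
// connection always returns to the pool. `work` receives the transaction
// and does the actual queries.
async function withTransaction(makeTx, work) {
  const tx = makeTx();
  await tx.begin();
  try {
    const result = await work(tx);
    await tx.commit();   // success path: commit
    return result;
  } catch (err) {
    await tx.rollback(); // failure path: never leave the tx open
    throw err;           // still surface the original error
  }
}

module.exports = { withTransaction };
```

Callers can then return early or throw inside `work` without leaking a connection.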
Here is an example of what caused my memory leak and ultimately the “ResourceRequest timed out” error:
In addition, I forked this repo and updated the references to generic-pool to 3.4.2. I haven’t seen the “ResourceRequest timed out” error since then. I believe the owners of generic-pool have made changes regarding this same error message on their end, unrelated to my memory leak detailed above.
Original: I’ve tried all the posted “fixes” here and none of them work EXCEPT for @munir131’s fork at https://github.com/munir131/node-mssql.
To install latest use this command:
You can install specific commits by replacing the hash after the tarball with any commit-id found at: https://github.com/munir131/node-mssql/commits/master
The issue for us was that _poolCreate will sometimes randomly return a promise that never resolves when under high load. https://github.com/tediousjs/node-mssql/blob/dee9086569ea007d632813a1f209e6b2a716c2e3/lib/tedious.js#L176

This is a problem because of the way the generic node-pool counts how many TDS connections are potentially available. The generic pool adds the number of currently pending promises returned by the _poolCreate method to the number of potentially available TDS connections - see https://github.com/coopernurse/node-pool/blob/e73247728432f2a681f87b3228ccf10ebe67d5aa/lib/Pool.js#L629

As a result, whenever _poolCreate fails to return a promise that resolves, the generic node-pool will overestimate the number of potentially available TDS connections and will wait indefinitely for the promise returned by _poolCreate to resolve. (https://github.com/coopernurse/node-pool/blob/e73247728432f2a681f87b3228ccf10ebe67d5aa/lib/Pool.js#L218)

The simple fix for this was to just add the following lines to the tedious.js file:
https://github.com/tediousjs/node-mssql/compare/master...jacqt:master
I don’t think this is the best way to encode this logic, but perhaps it will be helpful to other people in this thread.
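The compare link above contains the actual change; as a rough, generic illustration of the idea (not node-mssql's real internals), the factory's create promise can be made to reject when the connection errors, instead of only emitting an event and leaving generic-pool waiting forever. `makeConnection` here is a hypothetical stand-in for constructing a tedious connection, assumed only to emit 'connect' or 'error':

```javascript
// Generic sketch of "reject the factory create promise on error": without
// the 'error' handler calling reject, a failed connection would leave the
// promise pending and the pool would count it as in-flight forever.
function createWithRejection(makeConnection) {
  return new Promise((resolve, reject) => {
    const conn = makeConnection();
    conn.once('connect', () => resolve(conn));
    conn.once('error', (err) => reject(err)); // the missing reject
  });
}

module.exports = { createWithRejection };
```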
@catcher-in-the-try @labs20 It has been working fine for me since I switched to https://github.com/munir131/node-mssql
Well, at least on my customer’s server it seems to have reduced the problem.