node-ldapjs: ECONNRESET errors when idling
I’m trying to figure out the source of the error below, or how to catch and mitigate it properly. I am using LDAPJS as part of an identity management pipeline that moves user accounts from our ERP into Active Directory (connected over TLS with a self-signed cert). There are often idle periods in the database when no events are being dispatched, and LDAPJS raises these exceptions during those stretches, when I assume it has nothing to do.
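For context, the client setup is along these lines (a rough sketch; the host, bind DN, and credentials are placeholders, and rejectUnauthorized: false stands in for however the self-signed cert is actually trusted):

```js
const ldap = require('ldapjs');

// Sketch of the setup described above; host and credentials are placeholders.
const client = ldap.createClient({
  url: 'ldaps://ad.example.internal:636',
  // Self-signed cert on the AD side, so TLS verification is relaxed here
  // (alternatively, pass the CA cert via tlsOptions.ca).
  tlsOptions: { rejectUnauthorized: false }
});

client.bind('CN=svc-idm,OU=Service Accounts,DC=example,DC=internal', 'secret', (err) => {
  if (err) throw err;
  // ... dispatch provisioning events from the ERP as they arrive ...
});
```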
12:20:54 bsis-0 Error: read ECONNRESET
at exports._errnoException (util.js:812:11)
at TLSWrap.onread (net.js:542:26)
2015-11-03 12:20:54: App name:bsis id:0 exited with code 1
12:20:54 PM2 App name:bsis id:0 exited with code 1
2015-11-03 12:21:54: Starting execution sequence in -fork mode- for app name:bsis id:0
12:21:54 PM2 Starting execution sequence in -fork mode- for app name:bsis id:0
2015-11-03 12:21:54: App name:bsis id:0 online
12:21:54 PM2 App name:bsis id:0 online
12:21:54 bsis-0 WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
12:21:54 bsis-0 WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
12:37:54 bsis-0 Error: read ECONNRESET
at exports._errnoException (util.js:812:11)
at TLSWrap.onread (net.js:542:26)
2015-11-03 12:37:54: App name:bsis id:0 exited with code 1
12:37:54 PM2 App name:bsis id:0 exited with code 1
2015-11-03 12:38:54: Starting execution sequence in -fork mode- for app name:bsis id:0
12:38:54 PM2 Starting execution sequence in -fork mode- for app name:bsis id:0
2015-11-03 12:38:54: App name:bsis id:0 online
12:38:54 PM2 App name:bsis id:0 online
12:38:54 bsis-0 WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
12:38:54 bsis-0 WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
12:54:55 bsis-0 Error: read ECONNRESET
at exports._errnoException (util.js:812:11)
at TLSWrap.onread (net.js:542:26)
2015-11-03 12:54:55: App name:bsis id:0 exited with code 1
12:54:55 PM2 App name:bsis id:0 exited with code 1
They only show up because of my process.on('uncaughtException') handler, which in turn logs the event and then auto-restarts the broker.
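That handler is roughly the following (a sketch; `logger` is a placeholder, and PM2 does the actual restart once the process exits):

```js
// Sketch of the uncaughtException handler mentioned above; `logger` is a placeholder.
process.on('uncaughtException', (err) => {
  logger.error('Uncaught exception, restarting broker', err);
  // Exit so the process manager (PM2 here) brings the broker back up.
  process.exit(1);
});
```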
Ideas?
About this issue
- Original URL
- State: closed
- Created 9 years ago
- Comments: 31 (4 by maintainers)
Thanks very much for doing the digging @tapmodo.
reconnect: true does indeed auto-reconnect after this failure. This strikes me as a really important resilience feature, so it's a bit surprising it isn't documented. You do still need the client.on('error', ...) handler to stop the disconnection from bringing the process down first, as @pfmooney pointed out. So an over-simplified solution looks like:
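Something along these lines (a sketch with a placeholder URL; only the reconnect option and the error listener are the essential parts):

```js
const ldap = require('ldapjs');

// Placeholder URL; reconnect: true tells ldapjs to re-establish the
// connection after the server drops or resets it.
const client = ldap.createClient({
  url: 'ldaps://ad.example.internal:636',
  reconnect: true
});

// Without this listener the socket error becomes an uncaughtException and
// takes the process down before the reconnect logic can kick in.
client.on('error', (err) => {
  console.warn('LDAP connection error (will reconnect):', err.message);
});
```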
Job done - thanks guys!
As noted above, the documentation for the Client API doesn’t mention auto-reconnecting or how to handle an eventual ECONNRESET from the server. I can’t imagine an LDAP server that never closes or resets the connection, or never goes down. It appears the option for this is reconnect: true?

The documentation also doesn’t mention how to close a connection, which is necessary if you are writing a small script that should gracefully shut down after doing its work. I looked in the source, and it appears this can be done with client.destroy().
Both of these points would be useful to mention in the documentation. Thanks!
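For a small script, that shutdown looks roughly like this (a sketch; base DN, filter, and credentials are placeholders, with unbind as the polite close and destroy as the hard stop):

```js
const ldap = require('ldapjs');

const client = ldap.createClient({ url: 'ldaps://ad.example.internal:636' });

client.bind('CN=svc-idm,DC=example,DC=internal', 'secret', (err) => {
  if (err) throw err;

  const opts = { filter: '(sAMAccountName=jdoe)', scope: 'sub' };
  client.search('DC=example,DC=internal', opts, (err, res) => {
    if (err) throw err;
    res.on('searchEntry', (entry) => console.log(entry.object));
    res.on('end', () => {
      // Politely close the connection so the script can exit on its own.
      client.unbind((err) => {
        if (err) client.destroy(); // hard stop if the polite close fails
      });
    });
  });
});
```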
I’ve solved the problem by moving the client definition out of the main node loop. I build the client only when it’s needed, and after the call I destroy it.
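In sketch form, that create-use-destroy pattern looks something like this (the helper name, URL, credentials, and query are placeholders):

```js
const ldap = require('ldapjs');

// Hypothetical helper: build a short-lived client per lookup, tear it down afterwards.
function findUser(username, done) {
  const client = ldap.createClient({ url: 'ldaps://ad.example.internal:636' });
  client.on('error', (err) => console.warn('LDAP client error:', err.message));

  client.bind('CN=svc-idm,DC=example,DC=internal', 'secret', (err) => {
    if (err) { client.destroy(); return done(err); }

    const opts = { filter: `(sAMAccountName=${username})`, scope: 'sub' };
    client.search('DC=example,DC=internal', opts, (err, res) => {
      if (err) { client.destroy(); return done(err); }
      const entries = [];
      res.on('searchEntry', (entry) => entries.push(entry.object));
      res.on('end', () => {
        client.destroy(); // no idle connection left around to be reset
        done(null, entries);
      });
    });
  });
}
```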
+1. What’s the proper way to close a client? The proper way to reconnect? Those sound like essential points that need to be documented, please. Thanks.
I recently experimented with setting up my own LDAP connection pool using the npm package pool2, and so far it’s going very well. After weeks of testing, we rolled the update to production with no incidents.

Basically, I have the pool hand me connections and then automatically unbind them if they age out, thus completely avoiding the idle timeout issue that I see with Active Directory.
I have never found that reconnect: true worked, so prior to this I would just process.exit(0) if I got an idle timeout, forcing my app worker to regenerate and pick up a fresh new connection.

This issue was opened a year ago; has this changed?
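The pool looks roughly like this (a sketch assuming pool2's acquire/dispose constructor options and its acquire/release methods; URL, credentials, and timeouts are placeholders):

```js
const ldap = require('ldapjs');
const Pool = require('pool2');

const pool = new Pool({
  // Create and bind a fresh client for the pool.
  acquire: (cb) => {
    const client = ldap.createClient({ url: 'ldaps://ad.example.internal:636' });
    client.bind('CN=svc-idm,DC=example,DC=internal', 'secret', (err) => cb(err, client));
  },
  // Unbind clients as they age out, before AD's idle timeout can reset them.
  dispose: (client, cb) => client.unbind(cb),
  min: 1,
  max: 5,
  idleTimeout: 60 * 1000 // placeholder; keep it below the server's idle timeout
});

pool.acquire((err, client) => {
  if (err) throw err;
  // ... run searches with `client` ...
  pool.release(client);
});
```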
edit:
My current workaround is simply this:
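Presumably something along these lines, i.e. a single long-lived client with reconnect enabled and an error listener that only logs (a guess at the shape, not the original snippet; the URL is a placeholder):

```js
const ldap = require('ldapjs');

// Sketch of a likely workaround shape: one shared client with reconnect
// enabled, plus an error listener that only logs.
const client = ldap.createClient({
  url: 'ldaps://ad.example.internal:636',
  reconnect: true
});

client.on('error', (err) => {
  // Occasional read ECONNRESETs land here; the client keeps re-creating
  // connections as needed, so just log and carry on.
  console.warn('LDAP error:', err.message);
});
```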
Once in a while I’ll get a read ECONNRESET error, but the client is still usable and seems to re-create connections as they are needed.

destroy calls unbind internally.
@dustinsmith1024 - I’ve been having ECONNRESET errors as well. I’ve tried adding the reconnect: true option and using client.destroy() one minute after every query. This only started happening after I migrated from an AD on an Azure server to one on an AWS server, so I think it has something to do with what @pfmooney mentioned - the server is timing out idle connections. I just can’t get the error to be handled anywhere.