serverless-mysql: Error: Connection lost: The server closed the connection.

Hi @jeremydaly,

I'm using AWS Lambda with Aurora Serverless (RDS). The error `Connection lost: The server closed the connection.` occurs intermittently.

I can't find any pattern to it…

If I don't call `await mysql.end()`, do you think it will be OK? Any reply or suggestion is welcome. Thanks!
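
My handler follows the README's pattern, roughly like this (a minimal sketch; the config values and table name are placeholders):

const mysql = require('serverless-mysql')({
  config: {
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME
  }
})

exports.handler = async (event) => {
  const results = await mysql.query('SELECT * FROM my_table')
  // The call in question: it runs the library's connection management
  // rather than closing the connection outright.
  await mysql.end()
  return results
}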

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 31 (6 by maintainers)

Most upvoted comments

Is there a race condition here? I was able to replicate the error

Connection lost: The server closed the connection

using an Express server, by killing the process via the MySQL console. The error was caught correctly by the library and the connection was reset. Could the error be caused by one thread killing a connection whilst another thread is about to use it? The sequence would look like this (a sketch of the reaping step follows the list):

  • Thread 1: request received, Connection 1 created, query done, cleanup done.
  • Thread 1's connection exceeds zombieMaxTimeout.
  • Thread 2: request received, Connection 2 created, query done.
  • Thread 1: request received, wakes up, picks up its existing DB connection.
  • Thread 2: cleans up and kills Thread 1's MySQL process ID.
  • Thread 1: attempts to query on a killed MySQL connection.
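
For illustration, here's a rough sketch of the kind of zombie-reaping step that would trigger this race. This is not serverless-mysql's actual code (the helper names and threshold parameter are assumptions), but it follows the SHOW FULL PROCESSLIST / KILL approach that idle-connection reaping is based on:

const mysql = require('mysql')

// Placeholder connection; credentials are assumptions.
const conn = mysql.createConnection({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD
})

// Promisified query helper (hypothetical; shown only to make the sketch runnable).
const query = (c, sql, values) =>
  new Promise((resolve, reject) =>
    c.query(sql, values, (err, rows) => (err ? reject(err) : resolve(rows))))

// Kill any connection that has been idle longer than the timeout. The race:
// the server can't tell whether a warm Lambda container still holds one of
// these "sleeping" connections.
async function killZombieConnections(c, zombieMaxTimeoutSecs) {
  const processes = await query(c, 'SHOW FULL PROCESSLIST')
  for (const proc of processes) {
    if (proc.Command === 'Sleep' && proc.Time >= zombieMaxTimeoutSecs && proc.Id !== c.threadId) {
      // This KILL is what yanks the connection out from under another
      // container that is about to reuse it.
      await query(c, 'KILL ?', [proc.Id])
    }
  }
}

// e.g. killZombieConnections(conn, 900) // reap connections idle > 15 min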

I’ve seen this issue come up a lot both before and after the upgrade from node 8, so I don’t think it is node version related (although it might be).

In my case there are AWS lambda functions that are called about 2-3 times per hour. That’s enough for AWS to keep them “warm”, but their connections’ idle time easily passes zombieMaxTimeout and these connections get killed by other, more frequently invoked, lambdas that use this library. On the next invocation, the rarely invoked (but still warm) lambda only realizes its connection was killed when it tries a query() and it gets this fabled PROTOCOL_CONNECTION_LOST error.
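
One mitigation that follows from this: raise zombieMaxTimeout on the rarely-invoked function past the gap between its invocations, so busier lambdas stop reaping its connection first. A sketch (the option names are from the library's docs; the values are assumptions for a 2-3 invocations/hour function):

const mysql = require('serverless-mysql')({
  config: { host: process.env.DB_HOST /* ...credentials... */ },
  // Idle seconds before a connection becomes eligible for recycling.
  zombieMinTimeout: 3,
  // Idle seconds after which a connection is always recycled; set this past
  // the ~20-30 minute gap between the rare function's invocations.
  zombieMaxTimeout: 45 * 60
})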

Beyond tuning timeouts, I patched serverless-mysql locally to retry queries on connection loss; the PR is here in case anyone wants it: https://github.com/jeremydaly/serverless-mysql/pull/68
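
If you can't apply the PR, here's a userland approximation of the same idea (a sketch only: it assumes the serverless-mysql instance re-establishes its connection on the next query() after end(), and queryWithRetry is a made-up name):

// Retry a query once if it fails with PROTOCOL_CONNECTION_LOST.
// `db` is assumed to be a serverless-mysql instance.
async function queryWithRetry(db, sql, params, retries = 1) {
  try {
    return await db.query(sql, params)
  } catch (err) {
    if (err.code === 'PROTOCOL_CONNECTION_LOST' && retries > 0) {
      await db.end() // dispose of the dead connection before retrying
      return queryWithRetry(db, sql, params, retries - 1)
    }
    throw err
  }
}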

~~Also seeing this error on node 10.x trying to connect to RDS within a VPC~~

Setting our lambda’s security group as an inbound source for the RDS security group resolved this issue for us.
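
For reference, that corresponds to an ingress rule like this (AWS CLI; both security group IDs are placeholders):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa1111bbb2222cc \
  --protocol tcp \
  --port 3306 \
  --source-group sg-0ddd3333eee4444ff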

I got the same error too:

{
    "errorType": "Error",
    "errorMessage": "Connection lost: The server closed the connection.",
    "code": "PROTOCOL_CONNECTION_LOST",
    "fatal": true,
    "stack": [
        "Error: Connection lost: The server closed the connection.",
        "    at Protocol.end (webpack:////Users//Desktop/dev/code//node_modules/mysql/lib/protocol/Protocol.js?:112:13)",
        "    at Socket.eval (webpack:////Users//Desktop/dev/code//node_modules/mysql/lib/Connection.js?:97:28)",
        "    at Socket.eval (webpack:////Users//Desktop/dev/code//node_modules/mysql/lib/Connection.js?:525:10)",
        "    at Socket.emit (events.js:215:7)",
        "    at Socket.EventEmitter.emit (domain.js:476:20)",
        "    at endReadableNT (_stream_readable.js:1183:12)",
        "    at processTicksAndRejections (internal/process/task_queues.js:80:21)",
        "    --------------------",
        "    at Protocol._enqueue (webpack:////Users//Desktop/dev/code//node_modules/mysql/lib/protocol/Protocol.js?:144:48)",
        "    at Connection.query (webpack:////Users//Desktop/dev/code//node_modules/mysql/lib/Connection.js?:201:25)",
        "    at eval (webpack:////Users//Desktop/dev/code//node_modules/serverless-mysql/index.js?:188:25)",
        "    at new Promise (<anonymous>)",
        "    at Object.query (webpack:////Users//Desktop/dev/code//node_modules/serverless-mysql/index.js?:184:12)"
    ]
}

It happens when we have multiple queries running at the same time. We have a GraphQL resolver inside a Lambda, and some fields are also resolved via this Lambda (called via AppSync), so we can have something like 700 simultaneous queries when this error appears.
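
In case it helps anyone, here's a sketch of capping how many queries are in flight at once, so hundreds of parallel field resolvers don't pile onto the connection simultaneously. It uses the third-party p-limit package (v2, which is CommonJS); `db`, `ids`, and the query are placeholders:

const pLimit = require('p-limit') // p-limit v2 (CommonJS)
const limit = pLimit(10) // at most 10 queries in flight at once

// Resolve many fields without issuing all ~700 queries at the same time.
async function resolveItems(db, ids) {
  return Promise.all(
    ids.map((id) => limit(() => db.query('SELECT * FROM items WHERE id = ?', [id])))
  )
}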

I implemented the patch but am still getting the issue. See the AWS log below:

[Screenshot: AWS CloudWatch log, 2020-02-04]

Anyone else in the same boat 🚣‍♀️?

Marking this as resolved with @itrendtrekhunt’s patch.

I’ve been having a hard time replicating this, but I’ve slotted more time tomorrow to get this resolved.