node-postgres 7.18.2: "Connection terminated unexpectedly" when using client.query with a pool that has been idle for 10 minutes (running in AWS Lambda)

Code is running in a lambda. When we run the function below it runs fine. But if we run it and then wait 10 minutes then run it again, we get an error every time. We are querying Aurora Postgres on AWS.

Code:

const {Pool} = require('pg');

// Global Connection, can be re-used!!
const pgPool = new Pool({
    user: process.env.PG_USER,
    host: process.env.PG_HOST,
    database: process.env.PG_DATABASE,
    password: process.env.PG_PASSWORD,
    port: process.env.PG_PORT,
    max: process.env.MAX_CLIENTS
});

pgPool.on('error', (err, client) => {
    console.error('Unexpected error in Postgres connection pool', err);
});

async function performQuery(event) {

    let queryString = null;
    let args = null;

    try {
        // some setup code here...  
        const client = await pgPool.connect();
        try {
            const res = await client.query(queryString, args);
            return res.rows;
        } finally {
            client.release();
        }
    } catch (err) {
        console.error('Problem executing export query:');
        console.error(err); // <-- this line is in the log below  
        throw err;
    }
}

This is what I see in the cloudwatch logs:

{
    "errorType": "Error",
    "errorMessage": "Connection terminated unexpectedly",
    "stack": [
    "Error: Connection terminated unexpectedly",
    "    at Connection.<anonymous> (/var/task/node_modules/pg/lib/client.js:255:9)",
    "    at Object.onceWrapper (events.js:312:28)",
    "    at Connection.emit (events.js:223:5)",
    "    at Connection.EventEmitter.emit (domain.js:475:20)",
    "    at Socket.<anonymous> (/var/task/node_modules/pg/lib/connection.js:78:10)",
    "    at Socket.emit (events.js:223:5)",
    "    at Socket.EventEmitter.emit (domain.js:475:20)",
    "    at TCP.<anonymous> (net.js:664:12)"
]
}

I’ve tried a few variations on this but the constant is the 10 minutes and the use of the Pool. To me this code is almost identical to the code in https://node-postgres.com/features/pooling.

So far it looks like the problem has been solved by using a Client instead:

const client = new Client({
    user: process.env.PG_USER,
    host: process.env.PG_HOST,
    database: process.env.PG_DATABASE,
    password: process.env.PG_PASSWORD,
    port: process.env.PG_PORT
});

await client.connect();
const res = await client.query(queryString, args);
await client.end();
return res.rows;

About this issue

  • State: open
  • Created 4 years ago
  • Reactions: 20
  • Comments: 67 (10 by maintainers)

Most upvoted comments

still a thing

People praise AWS Lambda without knowing that it is the only hosting environment in the world that implements such an aggressive connection-recycle policy.

Nobody else, it seems, is greedy enough to inconvenience their clients this badly by dropping live connections instead of doing what everyone else does - extending the I/O capacity. It’s just dumb corporate greed, backed by self-assured market dominance, maximizing profit by cutting cost without increasing capacity.

That’s why issues like this one keep polluting the Internet. That’s why I do not use AWS Lambda.

If you are using AWS Lambda for long-running processes, you’re using it wrong.

@brianc thanks for building such an important library. Your efforts are truly appreciated. Cheers!

Hi everyone, I’m coming from knex/knex#3636

I experienced this behaviour with a Postgres + NLB setup. If I understand correctly, this is not only a Lambda problem but connection loss in general. NLB has this broken behaviour: it times out idle connections after 350s, silently, without telling the client. At that point knex/pg doesn’t know the connection is already closed, so when the next query comes in, knex/pg tries to use the previous connection, realizes it is already broken, and throws the connection terminated unexpectedly error. This blog sums up the behaviour clearly.

If I’m not mistaken, PR #2121 was put on hold because @brianc is not familiar with Lambda and this was also hard to reproduce. But in my case, since the NLB consistently times out after 350s, it should be easier to reproduce.

Steps to reproduce:

  1. Put NLB in front of your Postgres server
  2. Connect your script to the NLB instead of to the Postgres server directly
  3. Make one successful request/query to initialize the pool
  4. Wait 350+ seconds
  5. Make another request/query; you will see the connection terminated unexpectedly error

Oh, and in my case psql faces the same problem, which is not a big deal for an interactive tool like psql. But for knex/pg, my expectation is that it retries internally once it realizes the connection is already broken.
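
For anyone wanting to paper over this at the application level while the pool behaviour is discussed, here is a minimal retry sketch (the queryWithRetry helper is hypothetical, not part of pg): evict the client that failed and retry the query once on a freshly checked-out client.

const { Pool } = require('pg');

const pool = new Pool({ /* connection settings */ });

async function queryWithRetry(text, params) {
  const client = await pool.connect();
  try {
    const res = await client.query(text, params);
    client.release();
    return res;
  } catch (err) {
    // Assume the pooled connection was silently dropped: evict it and retry
    // once on a fresh client. A real implementation would inspect the error
    // to avoid re-running queries that genuinely failed server-side.
    client.release(err);
    const retryClient = await pool.connect();
    try {
      const res = await retryClient.query(text, params);
      retryClient.release();
      return res;
    } catch (retryErr) {
      retryClient.release(retryErr);
      throw retryErr;
    }
  }
}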

As for the .release(..) part: JavaScript does not support try / catch / else syntax. So, expecting the developer to call .release(..) differently depending upon whether or not an exception was raised seems like a usability issue.

Yes, it’s a bit tricky, and hopefully when the pool interface gets improved it’ll be easier to use. That “pass an error to evict” logic for the pool has been there for all the years I’ve used this driver. All my usage of the driver goes through a local wrapper that handles that logic so it’s in one place in each app, but you’re right that everybody kind of has to handle it themselves.

(Okay, fine. It’s technically possible, but it forces the developer to create/check didErrorOccur variables)

Yes that’s the logic to which I’m referring. Though rather than checking if an error occurred the usual approach is to have different release(...) calls based on whether the task resolved or rejected:

const { Pool } = require('pg');
const pool = new Pool({ /* ... */ });

const withClient = async (task) => {
    const client = await pool.connect();
    try {
        const result = await task(client);
        // If we get here then the task was successful so assume the client is fine
        client.release();
        return result;
    } catch (err) {
        // If we get here then the task failed so assume the client is broken
        client.release(err);
        throw err;
    }
};

That logic is a bit overzealous, as it’ll evict clients after potentially salvageable errors (e.g. after a successful ROLLBACK we’d expect the client to be usable again), but it’s a decent default. If you expect to have lots of constraint violations that could thrash your pool, you can make it a bit more intelligent to deal with that situation or have a transaction wrapper handle it.
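
As a rough illustration of that “more intelligent” variant, here is a hedged sketch of a transaction wrapper (withTransaction is a hypothetical helper, not a pg API) that only evicts the client when the ROLLBACK itself fails, which is a reasonable sign that the connection, not just the statement, is broken:

const withTransaction = async (pool, task) => {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await task(client);
    await client.query('COMMIT');
    client.release();
    return result;
  } catch (err) {
    try {
      // If ROLLBACK succeeds, the connection is still usable: keep it pooled.
      await client.query('ROLLBACK');
      client.release();
    } catch (rollbackErr) {
      // ROLLBACK itself failed: the connection is likely broken, so evict it.
      client.release(rollbackErr);
    }
    throw err;
  }
};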

In either case: the snippet’s error is actually triggered by the first call to client.query(..). So, although the detail you pointed out about client.release(..) appears to be true, I’m not sure if it really impacts this snippet in any way.

It’s important to evict broken connections or your pool will never heal and you’ll keep getting the same broken connections. Evicting them will force your pool to (eventually) create new working connections.

(Side note: the README.md for pg-pool doesn’t seem to mention this detail about the call to .release(..), but the code itself looks like it aligns w/ what you’re saying. https://github.com/brianc/node-postgres/tree/master/packages/pg-pool )

Yes we should fix that. The docs site (https://node-postgres.com/api/pool#releaseCallback) has the right usage but the README must be really out of date.

The main concept is: it’s possible for Clients that are already in the Pool to lose their Connections. It’s the Pool’s responsibility to double-check the validity of a Client before handing it off to a caller. Based upon the issue reported here, it looks like there might be a corner case where this is not occurring.

The end usage always needs to deal with broken connections as even if the pool tests the connection, it’s a race condition between the pool testing the connection and the end usage actually doing some work with the connection:

  1. Client asks to check out connection
  2. Pool tests connection and verifies it’s okay
  3. Connection is killed
  4. Client receives connection from pool

(Or, alternatively: perhaps several people missed this detail about .release(..), and everybody has been inserting disconnected Clients back into the pool???)

That’s possible. For something like Lambda, which can freeze your app and effectively kill any open sockets in the background, I’d suggest not using a pool at all if that’s an option. Or you can add your own wrapper that repeatedly tries to fetch and test the connection prior to giving it to the rest of your code. Also, setting idleTimeoutMillis on the pool to something lower than the Lambda freeze-after-inactive-seconds should limit the problem a bit, as you’ll be less likely to have any clients in the pool when the Lambda gets frozen.
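
A minimal sketch of such a wrapper, assuming a connectWithCheck helper of our own (not a pg API) that probes each checked-out client with a trivial query and evicts the ones that fail; it narrows the race described above but cannot eliminate it:

async function connectWithCheck(pool, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    const client = await pool.connect();
    try {
      await client.query('SELECT 1'); // cheap liveness probe
      return client;                  // caller must still call client.release()
    } catch (err) {
      client.release(err);            // broken client: evict it and try again
    }
  }
  throw new Error('Could not obtain a working client from the pool');
}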

that’s perfect I’ll get it looked at soon!

@brianc : Here’s a snippet that seems to confirm my suspicion:

const {Pool} = require('pg');

function createPool() {
  return new Pool({
      user: process.env.PG_USER,
      host: process.env.PG_HOST,
      database: process.env.PG_DATABASE,
      password: process.env.PG_PASSWORD,
      port: process.env.PG_PORT,
      max: process.env.MAX_CLIENTS
  });
}

async function run() {
  const pool = createPool();

  const c1 = await pool.connect();
  // Place the Client back into the pool...
  await c1.release();

  // Now intentionally end the lower-level Connection while Client is already in the pool
  c1.connection.end();

  // Attempt to obtain the connection again...
  const c2 = await pool.connect();
  try {
    // Explodes
    const res = await c2.query("select * from accounts", null);
    console.log("Yay");
  } catch(err) {
    console.log("Initial error")
    console.log("----------------")
    console.log(err);
  } finally {
    c2.release();
  }


  // Attempt to obtain the connection again...
  const c3 = await pool.connect();
  try {
    // Surprisingly, it succeeds this time.  This is because the pool was already
    // "fixed" thanks to the 'error' event that it overheard.
    const res = await c3.query("select * from accounts", null);
    console.log("Yay");
  } catch(err) {
    console.log("Second error")
    console.log("----------------")
    console.log(err);
  } finally {
    c3.release();
  }
}

run();

Sample output:

Initial error
----------------
Error: write after end
    at writeAfterEnd (_stream_writable.js:236:12)
    at Socket.Writable.write (_stream_writable.js:287:5)
    at Socket.write (net.js:711:40)
    at Connection.query (/Users/brianlauber/Documents/src/knex/node_modules/pg/lib/connection.js:234:15)
    at Query.submit (/Users/brianlauber/Documents/src/knex/node_modules/pg/lib/query.js:164:16)
    at Client._pulseQueryQueue (/Users/brianlauber/Documents/src/knex/node_modules/pg/lib/client.js:446:43)
    at Client.query (/Users/brianlauber/Documents/src/knex/node_modules/pg/lib/client.js:541:8)
    at run (/Users/brianlauber/Documents/src/knex/oops.js:43:26)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:189:7)
(node:57118) UnhandledPromiseRejectionWarning: Error: write after end
    at writeAfterEnd (_stream_writable.js:236:12)
    at Socket.Writable.write (_stream_writable.js:287:5)
    at Socket.write (net.js:711:40)
    at Connection.query (/Users/brianlauber/Documents/src/knex/node_modules/pg/lib/connection.js:234:15)
    at Query.submit (/Users/brianlauber/Documents/src/knex/node_modules/pg/lib/query.js:164:16)
    at Client._pulseQueryQueue (/Users/brianlauber/Documents/src/knex/node_modules/pg/lib/client.js:446:43)
    at Client.query (/Users/brianlauber/Documents/src/knex/node_modules/pg/lib/client.js:541:8)
    at run (/Users/brianlauber/Documents/src/knex/oops.js:43:26)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:189:7)
(node:57118) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:57118) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Yay

Hi, the connection issue still seems to appear in the latest version: 8.11.3

I have encountered something very similar to what was described above and managed to create a simple code snippet that reproduces the problem. Maybe this will help with resolving the issue.

I’m just using pg-pool to connect to a local Postgres and insert a simple JSON object. When the object contains a BigInt, the JSON.stringify call in the pg lib fails, but the connection is already established. Here is what I’m doing.

Simple table to store the jsonb.

create table test (
    value jsonb
);

JS inserting the objects:

async function insertUsingPool() {
    // following query works fine
    await pgPool.query(
        `INSERT INTO test (value) VALUES ($1)`,
        [{'test' : 1}]
    ).catch((e) => { console.error(e) });

    // this fails with error: TypeError: Do not know how to serialize a BigInt
    await pgPool.query(
        `INSERT INTO test (value) VALUES ($1)`,
        [{'test' : 2n}]
    ).catch((e) => { console.error(e) });

    // this fails with error: Error: Connection terminated unexpectedly
    await pgPool.query(
        `INSERT INTO test (value) VALUES ($1)`,
        [{'test' : 3}]
    ).catch((e) => { console.error(e) });
}
async function insertUsingClientPool() {
    const pgPoolClient = await pgPool.connect();

    // following query works fine
    await pgPoolClient.query(
        `INSERT INTO test (value) VALUES ($1)`,
        [{'test' : 4}]
    ).catch((e) => { console.error(e) });

    // this throws exception and hangs... TypeError: Do not know how to serialize a BigInt
    await pgPoolClient.query(
        `INSERT INTO test (value) VALUES ($1)`,
        [{'test' : 5n}]
    ).catch((e) => { console.error(e) });

    // not even executed...
    await pgPoolClient.query(
        `INSERT INTO test (value) VALUES ($1)`,
        [{'test' : 6}]
    ).catch((e) => { console.error(e) });
}

After running the insertUsingClientPool function, the test table contains only two objects, {"test": 1} and {"test": 4}, and the process just hangs. pg_stat_activity shows there is an active connection with the insert query still ongoing.

I don’t know, maybe I’m doing something stupid and not using the pg lib correctly, but maybe the JSON.stringify should be called before establishing the connection?
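
One possible workaround (an assumption on my part, not a pg feature) is to serialize the value yourself before calling query(), so a serialization failure happens before anything is written to the connection, and to convert BigInt values explicitly since JSON.stringify cannot handle them:

const { Pool } = require('pg');

const pgPool = new Pool({ /* connection settings */ });

// Stringify up front, turning BigInt values into strings so JSON.stringify
// does not throw in the middle of dispatching a query.
function toJsonParam(value) {
  return JSON.stringify(value, (_key, v) =>
    typeof v === 'bigint' ? v.toString() : v
  );
}

async function insertSafely() {
  await pgPool.query(
    `INSERT INTO test (value) VALUES ($1::jsonb)`,
    [toJsonParam({ test: 2n })]
  );
}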

I think I got to the bottom of this with at least one particular case.

This is only applicable to Aurora Serverless v1, and maybe v2. I will test v2 later and report back.

Here’s what happens:

A scaling event is triggered.

About 2 minutes after the scaling event is triggered (cluster status = scaling-capacity), RDS runs this query:

-- 1: long string, looks like base64 encoded, but not
-- 2: The value of `SecondsBeforeTimeout` from `ScalingConfigurationInfo`
-- 3: Not sure what this is, seems to be always `2`

select * from aurora_serverless_prepare_seamless_scaling('long string', 60000, 2)

This query is blocking until all user queries are finished running.

As soon as the last query ends, it will drop all of the active connections within milliseconds.

I have tested this by running select pg_sleep(60). As soon as the query would return, TablePlus would throw:

Query 1 ERROR: SSL connection has been closed unexpectedly

So what actually happens in the Lambda environment?

It’s a race condition!

  1. Cluster entered a scaling-capacity state.
  2. But the aurora_serverless_prepare_seamless_scaling query has not started yet. Remember - it runs in ~ 2 minutes.
  3. Lambda gets invoked and an instance of Pool is created
  4. Maybe some database activity happens for a while (> 2m), let’s say it’s a semi-long job Lambda, not a web request.
  5. The function aurora_serverless_prepare_seamless_scaling is now patiently waiting for its moment to pounce 🐈
  6. All of the transactions and queries are finished for whatever reason (e.g. maybe they run in a loop: fetch data -> save data), and there is a few millisecond window of opportunity.
  7. But the Pool object is still alive and well, and Lambda is still running, just not querying the database.
  8. aurora_serverless_prepare_seamless_scaling triggers scaling
  9. All connections drop. 💥
Error: Connection terminated unexpectedly
    at Connection.<anonymous> (/node_modules/.pnpm/pg@8.11.2/node_modules/pg/lib/client.js:132:73)
    at Object.onceWrapper (node:events:627:28)
    at Connection.emit (node:events:513:28)
    at Socket.<anonymous> (/node_modules/.pnpm/pg@8.11.2/node_modules/pg/lib/connection.js:63:12)
    at Socket.emit (node:events:513:28)
    at TCP.<anonymous> (node:net:301:12)

I have corroborated this with a massive amount of Log Insight queries and a CloudWatch dashboard.

[Screenshot: CloudWatch dashboard correlating error count with cluster scaling events]


  • Error Count = number of errors in the log at the given time. The value is hard to see, but it’s at 20.
  • Scaling Events = Cluster uptime (seconds). This is the recommended metric to measure scaling 🤷🏼

Any insight about this? I am facing the same problem; my app throws an error when a connection sits idle for a long time (using a connection pool, no AWS Lambda).

If your app isn’t in Lambda then it’s not related to this issue. If you can write a self-contained script to reproduce the problem, please feel free to open a new issue. Note: in node, keeping connections open to backends for long periods of time will often result in your app throwing an error if/when the backend disconnects unexpectedly. You need to add something like

pool.on('error', () => {
  console.log('something bad happened')
  // re-initialize pool here
})

or some other kind of error handling.

I’d suggest instead of that, make sure your idleTimeoutMillis is set to something reasonable like 30 seconds. Then after 30 seconds of your app sitting idle the clients will disconnect.
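
Something along these lines, as a minimal sketch (the exact values are illustrative):

const { Pool } = require('pg');

const pool = new Pool({
  // ...connection settings...
  idleTimeoutMillis: 30000, // close clients after 30s of sitting idle in the pool
});

pool.on('error', (err) => {
  // Errors emitted by idle clients land here instead of crashing the process.
  console.error('Idle client error', err);
});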

We’ve just moved the database connection inside the Lambda handler function and removed it from the state that gets re-used across Lambda invocations. This is a small performance hit, since we need to create a new DB connection for each Lambda invocation, but it mitigates the issue for now.

I’ll stick around and try to support you guys nevertheless ✌️
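
For illustration, a hedged sketch of that handler-scoped approach (the handler shape and env var name are assumptions, not from the original post):

const { Client } = require('pg');

exports.handler = async (event) => {
  // A fresh connection per invocation: nothing is carried across Lambda freezes.
  const client = new Client({
    connectionString: process.env.DATABASE_URL, // assumed env var
  });
  await client.connect();
  try {
    const res = await client.query('SELECT 1'); // placeholder query
    return res.rows;
  } finally {
    await client.end(); // always close before the invocation finishes
  }
};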

@brianc : Here’s a variation on the earlier script that provides some more evidence around the issue:

const {Pool} = require('pg');

function delay(t) {
  return new Promise(function(resolve) {
    setTimeout(resolve, t);
  });
}


function createPool() {
  return new Pool({
      user: process.env.PG_USER,
      host: process.env.PG_HOST,
      database: process.env.PG_DATABASE,
      password: process.env.PG_PASSWORD,
      port: process.env.PG_PORT,
      max: process.env.MAX_CLIENTS
  });
}


async function run() {
  const pool = createPool();

  async function doStuff(label) {
    const client = await pool.connect();
    try {
      console.log(`${label}: START`)
      const res = await client.query("select 1", null);
      console.log(`${label}: SUCCEEDED`);
    } catch(err) {
      console.log(`${label}: ERROR  ->  ${err}`)
    } finally {
      client.release();
    }
  }

  const c1 = await pool.connect();
  await c1.query("select 1", null);
  await c1.release();


  // !!!
  // FYI: If you comment out this event handler, then the script
  //      will fail 100% of the time.  (Unhandled 'error' event)
  pool.on('error', function() {
    console.log("Pool handled error cleanly!")
  })


  // c1 is in the pool.  We're intentionally ending the
  // lower-level connection.
  c1.connection.stream.end();


  // !!!
  // FYI: With a 1 second delay, everything appears to work fine.
  //      However, if you reduce this delay too much, then you'll
  //      observe the errors that were reported.
  console.log("Delaying...");
  await delay(1000);
  console.log("Finished delay");


  await doStuff("ONE");
  await doStuff("TWO");
}

run();

There are 2 key things you can play around with here:

  1. Removing the pool.on('error', ...) expression (which will cause the script to explode)
  2. Adjusting / removing the delay. (Errors will begin to occur when the delay is set too low)

Even with a delay of 1 second, the problem seems to disappear. So, now I’m unsure if this script is accurately recreating the scenario described by this issue.

For anyone else still struggling with this I think I may have found a resolution.

It seems to be largely an infrastructure provider issue. As others have mentioned, AWS has a common practice of setting a 350-second timeout on idle TCP connections; Azure and GCP have different timeouts but similar issues.

From a cursory search ELB and NAT Gateway were the culprits for me.

The keepAlive flag implemented in node-pg follows the right setup from what I can see, but it could be that the OS level and Node aren’t playing nicely, as the default keepalive timer on Linux seems to be 7200 seconds.
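
For reference, node-pg lets you enable TCP keepalive from the pool config; a minimal sketch (the chosen delay is illustrative, and whether probes go out early enough still depends on OS settings):

const { Pool } = require('pg');

const pool = new Pool({
  // ...connection settings...
  keepAlive: true,
  keepAliveInitialDelayMillis: 300 * 1000, // start probing after ~5 minutes idle
});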

I ended up resolving it by setting up pgbouncer and adding a keepalive timer of 300 seconds to the config:

pgbouncer.ini

tcp_keepalive = 1
tcp_keepidle = 300

Thank you all for helping me through the day

My issue was that the Aurora reader I was connecting to was rebooting from falling behind the writer too much. Figured I’d post in case someone else was running into this.

I’m impressed with how quickly this is being looked at

I’ve been trying to work on pg at least 5 days a week this year. I’ll be looking at this first thing tomorrow morning. I’m not 100% sure that making the pool evict closed clients will fix the actual specific issue in Lambdas but… it’s good behavior to have either way.

@brianc : So, it looks like the "pg" pool has some logic for removing idle clients; however, I suspect that it does not get invoked unless the underlying Client instance actually emits an "error". For ex:

https://github.com/brianc/node-postgres/blob/1c8b6b93cfa108a0ad0e0940f1bb26ecd101087b/packages/pg-pool/index.js#L216

However, it looks like the Client doesn’t necessarily emit the "error" event when it detects that the connection has been closed:

https://github.com/brianc/node-postgres/blob/1c8b6b93cfa108a0ad0e0940f1bb26ecd101087b/packages/pg/lib/client.js#L252-L279

So, if this analysis is correct, then it means there is a way for the Pool to contain Client instances that have already been disconnected. (Or rather: their underlying connection instance has already been ended)

@jcollum : My guess is that the issue isn’t actually “fixed”, but rather “avoided”. As in: whenever your code needs to perform a query, it is:

  1. Establishing a new connection (via the freshly-created Client instance)
  2. Performing the query
  3. Ending the connection cleanly. (by invoking client.end() )

BTW: it looks like there is a connection leak in that snippet. It should probably do this:

try {
    // trimmed code here

    const client = new Client({
        user: process.env.PG_USER,
        host: process.env.PG_HOST,
        database: process.env.PG_DATABASE,
        password: process.env.PG_PASSWORD,
        port: process.env.PG_PORT
    });

    await client.connect();
    try {
        const res = await client.query(queryString, args);
        return res.rows;
    } finally {
        await client.end();
    }

} catch (err) {
    console.error('Problem executing export query:');
    console.error(err);
    throw err;
}

This will ensure that the connection will be closed even if an error occurs.

@jamesdixon : Howdy! Just to clarify: what I meant was that Knex did not appear to be directly related to the pg-specific control path. However, I suspect that the same lower-level issue is affecting both "pg" and "mysql".

My guess is that either the server or the network is aggressively closing the connections. This is then causing the client-side libraries to “get confused” when they discover that the connection is already closed. (At least, that appears to be the case for "pg". The Client class checks the this._ending flag to see whether or not it was expecting the connection to be closed)

It looks like it’s related to the Lambda environment or something, and not pg. According to this it’s hitting both pg & mysql… which makes it somewhat unlikely to be caused by this library. I’m not sure there’s much I can do w/o having steps to reproduce, unfortunately. Looking at @jcollum’s example it looks like it’s somehow related to the Lambda going idle and killing the open connections to the db. Weirdly, if you only run the Lambda once every 10 minutes that should be well outside the idle timeout and the pool should have closed all its connections. Anyways, I can make lots of assumptions about what’s happening, but w/o steps to reproduce I’m not gonna be able to provide much concrete support. I’ll follow along; if more info comes up I’ll look into it. Also: PRs are always welcome for issues like this if you figure out what it is! 😉

@jamesdixon : Ah – I was just about to point you to this thread! lol

@jcollum this is also happening for us and a number of others (see this thread https://github.com/knex/knex/issues/3636).

We’re seeing this in Node 10 / 12 in the Lambda environment. Only solution at the moment was to run a custom Node 8 runtime for our Lambda functions.