node-postgres: Connection timing out on aws lambda

I am trying to connect to a Postgres database from an AWS Lambda function using the code below:

var dbConfig = { host: {endpoint}, user: {dbuser}, database: {dbname}, password: {password}, port: {port} };

var pg = require('pg');
var client = new pg.Client(dbConfig);

exports.handler = function(event, context, callback) {
  client.connect(function (err) {
    client.end();
    if (err) {
      return context.fail(err);
    }
    context.succeed('connected');
  });
};

If I run this via Node locally, it works fine and returns within a second or two. In Lambda it times out, even when allotted up to 60 seconds.

The Lambda timeout error message is:

{
  "errorMessage": "2016-12-12T17:18:18.617Z f8406e80-c08e-11e6-a0d3-8bc89c563312 Task timed out after 60.00 seconds"
}

The database I am trying to reach is on Amazon RDS, and I am using the endpoint they provided as the host in the config.

Any thoughts on why this might be failing to connect?

EDIT: I upped the Lambda execution time to 3 minutes.

It finally failed after 128 seconds with:

{
  "errorMessage": "connect ETIMEDOUT 35.165.66.116:5432",
  "errorType": "Error",
  "stackTrace": [
    "Object.exports._errnoException (util.js:870:11)",
    "exports._exceptionWithHostPort (util.js:893:20)",
    "TCPConnectWrap.afterConnect [as oncomplete] (net.js:1062:14)"
  ]
}

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 23 (3 by maintainers)

Most upvoted comments

@vitaly-t Actually it is relevant to the library, and it’s getting a bit annoying hearing that it isn’t. All Node Postgres libraries assume that they will be running on long-lived processes, whereas on Lambda that is explicitly NOT the case.

@Andrew-Max You’re having this problem because the pg-promise connection doesn’t get closed, and you will have to do it yourself:

var pgp = require('pg-promise')();
var db = pgp(dbConfig); // dbConfig and query as defined elsewhere

db.any(query)
  .then((response) => {
    pgp.end(); // shut down the pg-promise pool so the event loop can drain
    return response;
  })
  .catch((error) => {
    pgp.end();
    throw error;
  });

Sorry, I don’t mean to be rude; it’s just that your package is one of the few libraries of decent quality on npm, and it was super frustrating because it worked in a dev environment but not on Lambda.

So, I just ran into this, and my understanding is that even though your Lambda gets to call the final callback and you expect it to respond, some custom code in the Lambda implementation checks whether there are any open handles (like these open database connections) in the event loop before assuming your function is “done”.

With the context object passed into the handler function, setting context.callbackWaitsForEmptyEventLoop = false makes everything behave as expected for me. The pool is left at the top scope with open connections, and the functions “finish”.
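A minimal sketch of that pattern, assuming the standard pg pooling API; the SELECT NOW() query is just a placeholder:

var pg = require('pg');
var pool = new pg.Pool(); // created once per container, outside the handler

exports.handler = function (event, context, callback) {
  // Tell Lambda not to wait for the event loop (and the pool's idle
  // connections) to drain before returning the response.
  context.callbackWaitsForEmptyEventLoop = false;
  pool.query('SELECT NOW()', function (err, result) {
    if (err) return callback(err);
    callback(null, result.rows[0]);
  });
};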

The API is here (https://github.com/brianc/node-pg-pool#plays-nice-with-asyncawait). If you pass true to release, it will also destroy the client, so your Lambda process won’t keep it. Though if you have heavy traffic, regardless of whether you use a pool or not, you might exhaust your database’s memory due to the number of concurrent clients your server needs to handle.

client.release(true);
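A sketch of how that fits into a handler, assuming the pool API linked above; the query is a placeholder:

var pg = require('pg');
var pool = new pg.Pool();

exports.handler = async function (event) {
  var client = await pool.connect();
  try {
    var result = await client.query('SELECT NOW()');
    return result.rows[0];
  } finally {
    // Passing true destroys the client instead of returning it to the
    // pool, so no open socket keeps the Lambda event loop alive.
    client.release(true);
  }
};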

I am also running into this issue. I am not sure if I’m setting up my VPC correctly.

It seems strange to me that running the code locally works, but running it on Lambda does not. So I am not sure if it’s a VPC issue, because logically I would expect the VPC to block a remote IP from accessing the database, not a Lambda that’s in the cloud next door…

I am curious whether others are still having this issue, or if it’s mostly solved. Can anyone provide a working example?

Thanks in advance.


Edited: Finally got it working. It was indeed a VPC issue.

// Generated by CoffeeScript 2.2.4
var Client;

({Client} = require('pg'));

exports.handler = async function(event, context) {
  var client;
  client = new Client();
  await client.connect();
  await client.end(); // close the connection so the event loop can drain
  return 'connected';
};

Also, callback style:

// Generated by CoffeeScript 2.2.4
var Client;

({Client} = require('pg'));

exports.handler = function(event, context, callback) {
  var client;
  //context.callbackWaitsForEmptyEventLoop = false
  client = new Client();
  return client.connect(function(err) {
    client.end();
    if (err) {
      return callback(err);
    } else {
      return callback(null, 'connected');
    }
  });
};

The way my VPC is setup (though there might be a better way):

  • Lambda & RDS same VPC, security group & subnets
  • Had to manually add an inbound rule to the security group (this was the last thing I had to do before it worked)


A more in-depth tutorial on how to set up the VPC: https://blog.shikisoft.com/running-aws-lambda-in-vpc-accessing-rds/

Ran into the same issue, and after a lot of digging figured out that I had a combination of two problems. First, the event loop isn’t empty unless you end the pool, which kind of defeats the purpose of using a pool, as @charmander said. The solution is to set context.callbackWaitsForEmptyEventLoop = false, or to use a client and destroy it before the callback. Second, the Lambda needs access to the RDS instance’s VPC. Weirdly enough, it can “connect” even without having access, but it gets blocked on queries. Thanks to everyone in this thread for sharing their info.

It’s about your access to RDS from Lambda; you may want to check these things first, depending on your setup:

  • RDS is publicly accessible and your Lambda has internet access
  • If your RDS is in a VPC, then your Lambda needs to have access to that VPC, or be in it
  • Each time Lambda containers are created, they run in a subnet; make sure your subnet in the VPC has access to the internet or to the other VPC resources

I ran into something akin to this, and figured I’d comment after getting it worked out rather than filing a separate ticket.

I had to connect from a Lambda running inside a VPC. If I ran the Lambda on a subnet that could not reach the open internet, the client.connect call would time out. If I ran the Lambda on a subnet that had internet access, the connect call worked fine.

I don’t know the reason, and am not digging into it as I don’t have time (if any maintainers of the code want to weigh in, I’d be interested, though). But basically, it looks like part of the client.connect call requires being able to talk to the outside internet. If that’s a known requirement, it’d be nice to document it.

Also, setting callbackWaitsForEmptyEventLoop to false is not a solution, because then you need to place await on every async operation, as AWS could freeze the Lambda in the middle of that operation. For example, when you want to put data into the database but don’t need the result and want to do something else in the meantime.

IIUC, AWS will not freeze the Lambda if you haven’t returned yet. It only freezes it when it thinks it’s idle.

You can do multiple things concurrently, but if the work for your Lambda involves some DB interaction, you can’t pretend you completed early when it isn’t done. There’s no guarantee your Lambda would continue running.

Without setting it to true, you need to remember every promise and wait for its resolution manually.

You have to do that anyway if you actually care about success or failure; that’s not even specific to Lambdas. At the very least, you should be handling and ignoring rejections, as an unhandled rejection could crash your process in the future:

$ node -e 'Promise.reject(new Error())'
(node:9978) UnhandledPromiseRejectionWarning: Error
    at [eval]:1:16
    at ContextifyScript.Script.runInThisContext (vm.js:50:33)
    at Object.runInThisContext (vm.js:139:38)
    at Object.<anonymous> ([eval]-wrapper:6:22)
    at Module._compile (module.js:653:30)
    at evalScript (bootstrap_node.js:479:27)
    at startup (bootstrap_node.js:180:9)
    at bootstrap_node.js:625:3
(node:9978) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:9978) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
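For instance, a fire-and-forget promise can be defused up front and still awaited later. A generic sketch (the helper name is made up, not part of any library):

function fireAndRemember(promise) {
  // Attach the rejection handler immediately, so the promise can never
  // become an unhandled rejection, and return a promise that is safe
  // to await later.
  return promise.catch((err) => {
    console.error('background task failed:', err);
  });
}

// Inside a handler:
//   const logged = fireAndRemember(client.query('INSERT ...'));
//   ... do other work ...
//   await logged; // wait for it before declaring the Lambda finished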

Best to embrace the async nature of Node.js, await any promises along the way, and have your final handler return a Promise that resolves or rejects with the result.

The only pg-specific advice here is to not use a Pool, as it’ll get broken if the Lambda is frozen and thawed, due to any outstanding TCP connections being dropped. Use a Client directly and do await client.end() before you indicate that your Lambda is complete. Or, if you want something Pool-like, try something like https://github.com/brianc/node-postgres/issues/1938.
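A minimal sketch of that shape, assuming the async handler style of Node 8.10+; the query is a placeholder, and the connection settings are assumed to come from the standard PG* environment variables:

const { Client } = require('pg');

exports.handler = async function (event) {
  const client = new Client(); // one short-lived client per invocation
  await client.connect();
  try {
    const result = await client.query('SELECT NOW()');
    return result.rows[0];
  } finally {
    // Close the socket before the handler resolves, so nothing is left
    // open when Lambda freezes the container.
    await client.end();
  }
};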

Hello folks, I am running into this issue and I’d like to understand why context.callbackWaitsForEmptyEventLoop = false solves it. If it is true that a new Client does not create a pool, then once .end() is called, things should be all good.

Also, I see no .destroy() on the client other than this.

Am I missing something?