node-postgres: Connection timing out on aws lambda
I am trying to connect to a Postgres database from an AWS Lambda function using the code below:
```js
var dbConfig = {
  host: {endpoint},
  user: {dbuser},
  database: {dbname},
  password: {password},
  port: {port}
};
var pg = require('pg');
var client = new pg.Client(dbConfig);

exports.handler = function (event, context, callback) {
  client.connect(function (err) {
    client.end();
    context.succeed(context);
  });
};
```
If I run this locally via node it works fine and returns within a second or two. On Lambda it times out even when allotted up to 60 seconds.
The Lambda timeout error message is:

```json
{
  "errorMessage": "2016-12-12T17:18:18.617Z f8406e80-c08e-11e6-a0d3-8bc89c563312 Task timed out after 60.00 seconds"
}
```
The DB I am trying to reach is on Amazon RDS, and I am using the endpoint they provided as the host in the config.
Any thoughts on why this might be failing to connect?
EDIT: I upped the Lambda execution timeout to 3 minutes. It finally failed after 128 seconds with:

```json
{
  "errorMessage": "connect ETIMEDOUT 35.165.66.116:5432",
  "errorType": "Error",
  "stackTrace": [
    "Object.exports._errnoException (util.js:870:11)",
    "exports._exceptionWithHostPort (util.js:893:20)",
    "TCPConnectWrap.afterConnect [as oncomplete] (net.js:1062:14)"
  ]
}
```
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Comments: 23 (3 by maintainers)
@vitaly-t Actually it is relevant to the library, and it’s getting a bit annoying hearing that it isn’t. All Node Postgres libraries assume that they will be running on long-lived processes, whereas on Lambda that is explicitly NOT the case.
@Andrew-Max You’re having this problem because the connection to Postgres doesn’t get closed, and you will have to do it yourself:
Sorry, I don’t mean to be rude; it’s just that your package is one of the few libraries of decent quality on npm, and it was super frustrating because it worked in a dev environment but not on Lambda.
So, I just ran into this, and my understanding is that even though your Lambda gets to call the final callback and you expect it to respond, the Lambda runtime checks whether any handles (like these open database connections) remain in the event loop before assuming your function is “done”.
With the context object passed into the handler function, setting

```js
context.callbackWaitsForEmptyEventLoop = false;
```

makes everything behave as expected for me. The pool is left in the top-level scope with open connections, and the functions “finish”.

The API is here: https://github.com/brianc/node-pg-pool#plays-nice-with-asyncawait. If you pass `true` to `release`, it also destroys the client, so your Lambda process won’t keep it. Though if you have heavy traffic, pool or no pool, you might exhaust your database’s memory because of the number of concurrent clients the server has to handle.
I am also running into this issue. I am not sure if I’m setting up my VPC correctly.

It seems strange to me that the code works locally but not on Lambda. So I am not sure it’s a VPC issue, because logically I would expect the VPC to block a remote IP while still allowing a Lambda that’s in the same cloud next door…
I am curious if others are having this issue, or is this mostly solved. Can anyone provide a working example?
Thanks in advance.
Edited: Finally got it working. It was indeed a VPC issue.
Also callback style:
The way my VPC is set up (though there might be a better way):

https://blog.shikisoft.com/running-aws-lambda-in-vpc-accessing-rds/ - a more in-depth tutorial on how to set up the VPC.
Ran into the same issue, and after a lot of digging figured out that I had a combination of two problems. First, the event loop isn’t empty unless you end the pool, which somewhat defeats the purpose of using a pool, as @charmander said. The solution is to add

```js
context.callbackWaitsForEmptyEventLoop = false;
```

or to use a client and destroy it before the callback. Second, the Lambda needs access to the RDS instance’s VPC. Weirdly enough, it can “connect” even without access, but it then gets blocked on queries. Thanks everyone in this thread for sharing their info.

It’s about your access to RDS from Lambda; you may want to check these things first, based on your setup:
I ran into something akin to this, and figured I’d comment after getting it worked out, rather than file a separate ticket.
I had to connect from a lambda running inside a VPC. If I ran the lambda on a subnet that could not hit the wide open internet, the client.connect call would timeout. If I ran the lambda on a subnet that had internet access, the connect call worked fine.
I don’t know the reason, and am not digging into it as I don’t have time (though if any maintainers of the code want to weigh in, I’d be interested). But basically, it looks like part of the client.connect call requires being able to talk to the outside internet. If that’s a known requirement, it’d be nice to document it.
IIUC, AWS will not freeze the Lambda if you haven’t returned yet. It only freezes when it thinks the function is idle.

You can do multiple things concurrently, but if the work for your Lambda involves DB interaction, you can’t pretend you completed early when it isn’t done. There’s no guarantee your Lambda would continue running.

You have to do that anyway if you actually care about success or failure; that’s not even specific to Lambdas. At the very least you should be handling and deliberately ignoring rejections, as an unhandled rejection could crash your process in the future:
Best to embrace the async nature of Node.js, await any promises along the way, and have your final handler return a Promise that resolves or rejects with the result.
The only pg-specific advice here is to not use a Pool, as it will be broken if the Lambda is frozen and thawed, because any outstanding TCP connections get dropped. Use a `Client` directly and `await client.end()` before you indicate that your Lambda is complete. Or, if you want something Pool-like, try something like https://github.com/brianc/node-postgres/issues/1938.

Hello folks, I am running into this issue and I’d like to understand why
```js
context.callbackWaitsForEmptyEventLoop = false
```

solves it. If it is true that a new `Client` does not create a pool, then once `.end()` is called things should be all good. Also, I see no `.destroy()` other than this on the client. Am I missing something?