aws-sdk-js: Memory leak when putting records to Kinesis
Using Kinesis with Node.js (in CoffeeScript), I isolated the issue to the basic putting of events to Kinesis:
```coffeescript
AWS = require 'aws-sdk'

kinesis = new AWS.Kinesis
  region: 'us-east-1'

# `event` comes from the surrounding request handler
kinesis.putRecord
  Data: JSON.stringify(event)
  PartitionKey: event.sessionId
  StreamName: "eventStream"
, (err, data) ->
  if err
    throw JSON.stringify err
```
Looking at process memory usage: on cold boot the processes use roughly 2-3% of memory; after the first `ab -t 30 -c 20` run, per-process memory usage goes to ~10%, after the second to ~20%, after the third to ~30%, and so on. With just the init code (`new AWS.Kinesis`) the issue goes away. I tried Node 0.8.22 and Node 0.10.30, and both exhibit the same behaviour; if anything, memory usage spikes even faster on 0.10.30 (25% after the first run).
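For concreteness, the code under load has roughly the following shape (a minimal sketch in plain JavaScript; the port, event shape, and handler are illustrative stand-ins for my actual app, which is what the `ab` runs above are hitting):

```js
var AWS = require('aws-sdk');
var http = require('http');

var kinesis = new AWS.Kinesis({ region: 'us-east-1' });

// Each incoming request puts one record, which is what ab exercises.
http.createServer(function (req, res) {
  var event = { sessionId: 'session-123', path: req.url };
  kinesis.putRecord({
    Data: JSON.stringify(event),
    PartitionKey: event.sessionId,
    StreamName: 'eventStream'
  }, function (err) {
    if (err) {
      res.writeHead(500);
      return res.end(JSON.stringify(err));
    }
    res.end('ok');
  });
}).listen(3000);
```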
I've seen some issues related to timeouts and suspect this is related. If so, what are the current recommendations for resolving it?
The aws-sdk version I'm using is 2.0.9.
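For reference, the growth can also be watched from inside the process rather than via OS tools (a small observation sketch, not part of the repro itself):

```js
// Log RSS and heap usage periodically to watch growth between ab runs.
setInterval(function () {
  var m = process.memoryUsage();
  console.log('rss: ' + (m.rss / 1048576).toFixed(1) + ' MB, heapUsed: ' +
              (m.heapUsed / 1048576).toFixed(1) + ' MB');
}, 5000);
```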
About this issue
- State: closed
- Created 10 years ago
- Comments: 22 (10 by maintainers)
Hi @chrisradek, basically we experienced multiple connection timeouts when load increased. As it turned out, the aws-sdk-js implementation doesn't enable connection keep-alive by default, so under stress Node keeps opening new sockets and is pushed toward its memory and resource limits until you lose connections or the process dies. The fix is rather simple; I suggest documenting it better, or even making it the default:
```js
var https = require('https');

// Reuse TCP connections across requests instead of opening a new socket each time.
AWS.config.update({ httpOptions: { agent: new https.Agent({ keepAlive: true }) } });
```
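The same agent can also be scoped to a single client instead of mutating the global config (a sketch; the `maxSockets` value is an illustrative assumption):

```js
var AWS = require('aws-sdk');
var https = require('https');

// One keep-alive agent shared only by this client's requests.
var agent = new https.Agent({ keepAlive: true, maxSockets: 50 });

var kinesis = new AWS.Kinesis({
  region: 'us-east-1',
  httpOptions: { agent: agent }
});
```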