winston: logging methods no longer take a callback, can't reliably use it in some environments (AWS Lambda)
In Winston 3, the logging functions no longer take a callback. Previously this could be used to wait until logging had completed:
logger.info('some message', { foo: 42 }, callback); // Winston 2.x
Winston 3 does allow you to listen to the `logged` event on your transports, but that doesn't give me any easy way to tell when the message I am currently logging has completed. Since everything is async, the next `logged` event that occurs after you write your message may not be related to the message you just logged. And it's more complex if you are listening for the event on multiple transports.
That makes Winston difficult to use in certain environments. For example, in AWS Lambda we set the `callbackWaitsForEmptyEventLoop` parameter to `false` (documentation). (Sidenote: you do this if you have a database connection or something that cannot be `unref`ed, because otherwise Lambda will never freeze your process.) If set to `false`, Lambda freezes your process as soon as you return results to the caller and may even terminate your process. If your logging transport hasn't finished writing by the time that happens, then you either lose logs or (worse) the logs get written later when Lambda unfreezes your process.
TL;DR: Our Lambda process would normally await (or the callback equivalent) on Winston before returning results to the caller, guaranteeing that logging is done before the process is frozen.
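For illustration, a minimal sketch of that pattern, assuming a Winston 2.x logger and an async Lambda handler (`doWork` is a placeholder for the real business logic):

```js
const logger = require('winston'); // Winston 2.x default logger: .info(msg, meta, callback)

exports.handler = async (event) => {
  const result = await doWork(event); // doWork: hypothetical business logic

  // Wrap the Winston 2.x logging callback in a promise and await it before
  // returning, so nothing is left in flight when Lambda freezes the process.
  await new Promise((resolve, reject) => {
    logger.info('request handled', { requestId: event.requestId }, (err) =>
      err ? reject(err) : resolve()
    );
  });

  return result; // logging has completed before Lambda can freeze us
};
```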
Is there another way you’d recommend to detect when logging is complete?
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 14
- Comments: 55 (13 by maintainers)
Links to this issue
Commits related to this issue
- Transports streams emit 'finish' event, not 'finished' Per my tests I had to listen to the `finish` event as `finished` was never called. I believe this is inline per this issue: https://github.co... — committed to brian-mann/winston by brian-mann 6 years ago
- Transports streams emit 'finish' event, not 'finished' (#1386) Per my tests I had to listen to the `finish` event as `finished` was never called. I believe this is inline per this issue: https://g... — committed to winstonjs/winston by brian-mann 6 years ago
- [doc] Add an example illustrating the "finish" event for AWS Lambda scenarios. Fixes #1250. — committed to Mizumaki/winston by indexzero 6 years ago
- Transports streams emit 'finish' event, not 'finished' (#1386) Per my tests I had to listen to the `finish` event as `finished` was never called. I believe this is inline per this issue: https://g... — committed to Mizumaki/winston by brian-mann 6 years ago
- switch logger from Winston to Pino (#3424) The main reason we leave Winston is that log messages are gettings lost. (See https://github.com/winstonjs/winston/issues/1250 ). We have tried many workaro... — committed to teambit/bit by davidfirst 4 years ago
I keep getting alerts on this thread, so figure I’ll share why I think this still isn’t working for a lot of people. Background: our use-case (and the context wherein this issue was opened) is AWS Lambda, so what I talk about here only applies there.
Lambda runs Node under the Lambda Execution Context. The pertinent point from its documentation: to speed up launches, Lambda "freezes" and "thaws" environments instead of fully shutting them down. To do this, Lambda doesn't wait for Node to exit; it waits for your handler function to exit. Any asynchronous processes run outside of the handler function, and thus are apt to be frozen with the rest of the Lambda Execution Context if they are not waited for.
Now let’s look at the suggested solution for waiting for Winston - adapted from UPGRADE-3.0.md, assuming we’re running in Lambda:
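(The snippet from that comment isn't reproduced in this dump; below is a sketch of the pattern as it is typically adapted for Lambda, with `CustomAsyncTransport` standing in for any transport that does async work.)

```js
// Sketch of the 'finish'-listener pattern from UPGRADE-3.0.md, dropped
// naively into a Lambda handler.
const winston = require('winston');

const logger = winston.createLogger({
  transports: [new CustomAsyncTransport()], // placeholder async transport
});

exports.lambdaHandler = async (event) => {
  logger.on('finish', () => {
    // all log messages have been flushed... but note *when* this runs
  });

  logger.info('handling event', { event });
  logger.end(); // called inside the handler...

  return { statusCode: 200 }; // ...which returns without waiting for 'finish'
};
```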
Spot the problem?
`logger.end()` fires within the handler function context, but the function fired by `logger.on('finish')` runs outside the handler context. Any async processes tied up by `CustomAsyncTransport` will stall the `finish` event from firing, making it likely that the Execution Context freezes before that event fires.

To solve this, `lambdaHandler` must wait for the logger to exit before resolving:
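(Again the original snippet isn't shown; a sketch of the fix under the same assumptions:)

```js
exports.lambdaHandler = async (event) => {
  logger.info('handling event', { event });

  // Block the handler until the logger has flushed, so the Execution Context
  // isn't frozen while CustomAsyncTransport still has work in flight.
  await new Promise((resolve) => {
    logger.on('finish', resolve);
    logger.end();
  });

  return { statusCode: 200 };
};
```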
Since `lambdaHandler` doesn't exit until `logger` fires the `finish` event, our `CustomAsyncTransport` should close before our lambda handler, saving those processes from being frozen (assuming the `finish` event is correctly implemented by @indexzero).

This can be generalized to something similar to the code I've shared previously:
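(A sketch of what that generalization might look like, waiting on each transport's `finish` rather than only the logger's, per the workaround described elsewhere in this thread; the helper name is hypothetical:)

```js
// waitForLogger: end the logger and resolve once every transport has
// emitted its own 'finish' event.
const waitForLogger = (logger) => {
  const transportsFinished = logger.transports.map(
    (transport) => new Promise((resolve) => transport.once('finish', resolve))
  );
  logger.end();
  return Promise.all(transportsFinished);
};

// Usage at the end of the handler:
//   await waitForLogger(logger);
//   return result;
```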
Hope this helps some people.
@tsaockham I have a solution 😃
After 2 hours of searching for a solution I implemented this; even the `waitForLogger` function by @dpraul didn't work. I am done.
Does not make it not an issue. Thanks for bringing it up.
Expected output:
Actual output:
The `finish` event is firing before all the transports have finished. The same behavior occurs when using `.on('close', fn)` and the `.close()` event at the end.

Thanks to some input from @mcollina, @davidmarkclements & @mafintosh I learned that my understanding of `pipe` semantics was flawed. `'error'` and `'finish'` events do not back-propagate through the pipe chain; `'error'` events cause unpipe to occur.

Going to explore how to either back-propagate these events or expose an explicit method on `Logger` that implements the technique you're using now @dpraul (maybe using end-of-stream).

@natesilva going to leave this issue open until the explicit fix is in, but in the meantime you should be able to use the technique above:
- Listen for `'finish'` events on all transports
- Call `logger.end()`
- Once all `'finish'` events have been emitted, you are done logging.
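(A minimal sketch of those steps, here using Node core's `stream.finished` as a stand-in for the end-of-stream package mentioned above; the helper name is hypothetical:)

```js
const { finished } = require('stream');
const { promisify } = require('util');
const streamFinished = promisify(finished);

// Wait for every transport stream to finish after ending the logger.
async function flushLogs(logger) {
  const pending = logger.transports.map((transport) => streamFinished(transport));
  logger.end();
  await Promise.all(pending);
}
```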
I still don't know what's causing the issue, but I found a workaround that waits for all the transports to finish!
Output:
It seems that `logger.end()` properly propagates to the transports, but fires the `finish` event before all the transports have fired theirs. The workaround is to wait on each `Transport` to fire its own `finish` event and not rely on the `Logger` at all.

I think this issue should be given first priority. For a process with code like the following:
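(The code block from that comment isn't shown; presumably something along these lines, where a graceful-shutdown hook waits on a `finish` that never arrives:)

```js
// Hypothetical graceful-shutdown hook: if 'finish' is never emitted, the
// promise never resolves and the process hangs instead of restarting.
process.on('SIGTERM', () => {
  new Promise((resolve) => {
    logger.on('finish', resolve);
    logger.end();
  }).then(() => process.exit(0)); // never reached if 'finish' never fires
});
```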
It will cause the process to hang forever, unable to restart gracefully.
Will this issue be fixed, or will it just be kept closed?
@dpraul’s solution here (https://github.com/winstonjs/winston/issues/1250#issuecomment-452128291) explained the problem and workaround very well. This worked for me in AWS lambda.
One note that he called out earlier in the thread that I’ll re-iterate. You will need to create your logger AND transport objects at the beginning of the lambda handler. If you try to re-use them you’ll get errors since the transport stream will be closed when the lambda execution environment is frozen.
EDIT: this does not work
============
I'm not sure if this is the best way, nor whether it works in all cases, but so far we are handling this by doing the following:
I've created a `wait-for-logger.js` function as such:
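(The file itself isn't reproduced in this dump; a sketch of what a `wait-for-logger.js` along these lines would contain:)

```js
// wait-for-logger.js (sketch): resolve once the logger emits 'finish'
// after being ended.
module.exports = (logger) =>
  new Promise((resolve) => {
    logger.once('finish', resolve);
    logger.end();
  });
```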
At the end of our handler, we wait for the function output before returning:
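(Hypothetical usage on an async handler; `logger` and `doWork` are placeholders:)

```js
const waitForLogger = require('./wait-for-logger');

exports.handler = async (event) => {
  const result = await doWork(event);
  await waitForLogger(logger); // flush logs before Lambda freezes the context
  return result;
};
```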
We are using the nodejs8.10 Lambda runtime - if you are using an older runtime, you can probably do something like the following:
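(A sketch of the callback-style equivalent for older runtimes:)

```js
exports.handler = (event, context, callback) => {
  doWork(event)
    .then((result) => waitForLogger(logger).then(() => callback(null, result)))
    .catch((err) => waitForLogger(logger).then(() => callback(err)));
};
```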
After much digging, debugging and attempting to get the Loggly winston plugin to work in Lambda…our solution was to write our own simple logger that talks directly to the Loggly API and the console, and dump Winston. 😛 Sorry, it's not currently open source, but it's separable from our product so I could probably publish it if there's interest.
I was pretty disappointed this didn't work. I just couldn't find a way to reliably block waiting for logging to complete with a timeout. We'd end up getting errors instead when calling finish, and logs would get dropped, which made it a mostly useless solution for Lambda. You couldn't be sure whether there was an error (and hence you didn't get logs) or the logs were just lost. After rolling our own (which I usually try to avoid) it was clear there were problems in the logging.
I’m not sure if the issue was in the Loggly Winston plugin in our case or with Winston itself. But, it seemed to boil down to Winston. Either way, it was disappointing.
-Johnny
Oh, the problem with the above is that I try to close an already closed logger, so the promise never resolves; this fixes it:
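(The fix itself isn't shown; one way to express the idea, assuming the `writableEnded` flag available on Node writable streams:)

```js
// Guarded version: if the logger was already ended, resolve immediately
// instead of waiting on a 'finish' event that will never fire again.
const waitForLogger = (logger) =>
  new Promise((resolve) => {
    if (logger.writableEnded) return resolve();
    logger.once('finish', resolve);
    logger.end();
  });
```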
@natesilva thought about this some more. In `winston@3` both the `Logger` and the `Transport` are Node.js streams, so each has an `.end` method and a `'finish'` event. Therefore in theory this should work, but I have not tested it yet:
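(The untested snippet isn't included here; a minimal sketch of the idea, since both the logger and each transport can be ended and each emits `'finish'`:)

```js
logger.info('last message before shutdown');

logger.transports.forEach((transport) =>
  transport.once('finish', () => console.log('transport flushed'))
);
logger.once('finish', () => console.log('logger flushed'));

logger.end(); // ends the logger, which in turn ends the piped transports
```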
I know this is a long, old thread, but I will share how I could fix it. First, I want to thank @kasvtv and @dpraul for their shared approach, which helped me define the following implementation:
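(The implementation isn't reproduced in this dump; a sketch of a custom transport built around `_final`, with `deliver` as a placeholder for the real async shipping logic:)

```js
const Transport = require('winston-transport');

class AsyncTransport extends Transport {
  constructor(opts) {
    super(opts);
    this.pending = []; // promises for log entries still being shipped
  }

  log(info, callback) {
    // Ship asynchronously and acknowledge the write right away.
    this.pending.push(deliver(info)); // deliver: hypothetical async sender
    callback();
  }

  // _final runs after end(), once buffered writes are processed, and holds
  // back the 'finish' event until every pending delivery has settled.
  _final(callback) {
    Promise.allSettled(this.pending).then(() => callback());
  }
}
```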
The key here is to define the `_final` function.
And in order to wait for the logger before closing the process, you can:
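(Again a sketch rather than the original snippet; the `map` mentioned below is the map over `logger.transports`:)

```js
async function closeGracefully() {
  const allFinished = logger.transports.map(
    (transport) => new Promise((resolve) => transport.once('finish', resolve))
  );
  logger.end();
  await Promise.all(allFinished); // wait for every transport, then exit
  process.exit(0);
}
```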
The `map` is still necessary because the `finish` event is fired once `log.end()` is called and the first pending `callback` function inside the `log` function fires. If you have multiple log calls still waiting to call their `callback` (async logging), you have to wait for every pending log, and this is where the `_final` function comes into action.

I hope this helps!
I'm not certain. At the time of writing (which was in 2019, so YMMV), the logger wouldn't fire the `finish` event until `logger.end()` was called. The `waitForLogger` function calls this after setting up the listener, so all is good. If you're ending the logger elsewhere, I'm not sure what the behaviour will be like.

@hdpa that's probably because after you close it your function is reused; our current solution is this:
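(The snippet isn't shown; a sketch of the `getLogger()`/`closeLogger()` pair described below, assuming a Console transport whose stdout CloudWatch captures:)

```js
const winston = require('winston');

let logger;

function getLogger() {
  // Re-create the logger if a previous invocation closed it.
  if (!logger || logger.writableEnded) {
    logger = winston.createLogger({
      transports: [new winston.transports.Console()],
    });
  }
  return logger;
}

function closeLogger() {
  const flushed = new Promise((resolve) => logger.once('finish', resolve));
  logger.end();
  return flushed;
}
```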
Then in our function we just call `getLogger()` to get the logger, which automatically reopens it if it's closed, and at the end of the function we use `closeLogger()` to make sure it flushes all the logs to CloudWatch.

Tell me please, in winston@3 can I not use callbacks like in version 2? (There is no answer in the documentation.)
And if I don't use a callback, my app closes faster than the message gets logged =(
@indexzero yes, that would be correct.
We’re using a different solution.