serilog-sinks-elasticsearch: Allow to forget Elasticsearch shipping errors.
Hi.
In the context of a particular application, where I am using this sink to store log events in durable mode, if there is any error on my Elasticsearch server I don’t want my app to be affected by it in any way. If the Elasticsearch server stays down for a long time, this sink keeps saving the messages to disk and retrying to send them all; when the Elasticsearch server comes back up, all the pending messages are sent at once, which consumes CPU and ultimately affects my app. For me it is preferable to lose messages rather than guarantee delivery of all of them at the cost of a CPU consumption spike.
Would it be possible to add a new option saying that I prefer to lose messages rather than keep retrying?
It should be as easy as modifying the file “ElasticSearchLogShipper.cs” in the following piece of code:
```csharp
if (count > 0)
{
    var response = _state.Client.Bulk<DynamicResponse>(payload);

    if (response.Success) // <---- here would come the change
    {
        WriteBookmark(bookmark, nextLineBeginsAtOffset, currentFilePath);
        _connectionSchedule.MarkSuccess();
    }
    else
    {
        _connectionSchedule.MarkFailure();
        SelfLog.WriteLine(
            "Received failed ElasticSearch shipping result {0}: {1}",
            response.HttpStatusCode, response.OriginalException);
        break;
    }
}
```
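To illustrate, one way the change could look, sketched against the shipper internals shown above; `DropOnFailure` is a hypothetical option name invented here, and this fragment is not compilable on its own:

```csharp
if (count > 0)
{
    var response = _state.Client.Bulk<DynamicResponse>(payload);

    // Hypothetical DropOnFailure option: advance the bookmark even when the
    // bulk request fails, so the batch is dropped instead of retried.
    if (response.Success || _state.Options.DropOnFailure)
    {
        WriteBookmark(bookmark, nextLineBeginsAtOffset, currentFilePath);

        if (response.Success)
            _connectionSchedule.MarkSuccess();
        else
            SelfLog.WriteLine(
                "Dropped failed ElasticSearch batch with status {0}",
                response.HttpStatusCode);
    }
    else
    {
        _connectionSchedule.MarkFailure();
        SelfLog.WriteLine(
            "Received failed ElasticSearch shipping result {0}: {1}",
            response.HttpStatusCode, response.OriginalException);
        break;
    }
}
```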
Thank you very much for your time. Jose
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 1
- Comments: 16 (6 by maintainers)
Thanks @nblumhardt. Will have a look at it. My biggest concern with the current implementation is that the durable sink reimplements its own logic to submit items to ES, so there is a lot of duplication.
Howdy, friends 😃
@mivano a couple of related issues have come through the Seq sink, and recent changes improve behavior in both durable and non-durable scenarios. Just making some notes here for you, in case it’s useful.
First, in non-durable mode, the Seq sink now accepts a `queueSizeLimit` parameter: https://github.com/serilog/serilog-sinks-seq/blob/dev/src/Serilog.Sinks.Seq/SeqLoggerConfigurationExtensions.cs#L82
This caps the number of events that will be buffered in memory, to avoid unpredictable memory usage. The feature is actually implemented under-the-hood by Serilog.Sinks.PeriodicBatching, so should be easy to integrate into the ES sink if it hasn’t been done already.
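A minimal configuration sketch of what this looks like on the Seq sink side, assuming the `queueSizeLimit` parameter linked above:

```csharp
using Serilog;

// Non-durable mode with a bounded in-memory queue: once the queue is
// full, further events are dropped rather than growing memory usage
// while the server is unreachable.
var log = new LoggerConfiguration()
    .WriteTo.Seq("http://localhost:5341", queueSizeLimit: 100_000)
    .CreateLogger();
```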
Second, in durable mode, it’s now possible to limit the amount of disk space the buffer files can consume, instead of just limiting the buffered data per day. This one is quite recent:
https://github.com/serilog/serilog-sinks-seq/pull/92
There’s more information in the PR on how it’s implemented.
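For durable mode, a configuration sketch assuming the `bufferSizeLimitBytes` parameter introduced by that PR (parameter name per the PR; treat it as an assumption):

```csharp
using Serilog;

// Durable mode: bufferBaseFilename enables the disk buffer, and
// bufferSizeLimitBytes caps total disk usage, so the oldest buffered
// events are dropped once the cap is reached instead of the buffer
// growing without bound while the server is down.
var log = new LoggerConfiguration()
    .WriteTo.Seq("http://localhost:5341",
        bufferBaseFilename: "./logs/seq-buffer",
        bufferSizeLimitBytes: 100 * 1024 * 1024) // ~100 MB cap
    .CreateLogger();
```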
Hope this helps!
That does not make sense to me: you are using durable messaging precisely so you don’t lose messages, but you are saying you want to lose messages when Elasticsearch goes down, which is exactly what durable mode is meant to guard against. Why not just log in non-durable mode instead? Perhaps I am missing something…