fluent-bit: error initializing backend file (Cannot allocate memory)

Bug Report

I caught the error below:

```
[2019/01/18 11:28:15] [error] [storage] [cio chunk] error initializing backend file
[2019/01/18 11:28:15] [error] [input chunk] could not create chunk file
[2019/01/18 11:28:15] [error] [input chunk] no available chunk
[lib/chunkio/src/cio_file.c:254 errno=12] Cannot allocate memory
[2019/01/18 11:28:15] [error] [storage] cannot mmap file /var/log/fluent-bit//tail.1/145563-1547782095.129903982.flb
```

To Reproduce

  1. storage.type=filesystem
  2. run fluent-bit for a long time

Your Environment

  • Version used: v1.0.1
  • Configuration:

```

@INCLUDE input_*.conf
@SET DBFILE=/etc/fluent-bit/tailfile.db
[SERVICE]
    Flush        5
    Daemon       Off
    Log_Level    info
    HTTP_Server  Off
    storage.path /var/log/fluent-bit/
    storage.sync normal
    Parsers_File parser.conf

[OUTPUT]
    Name http
    Match *
    Host 127.0.0.1
    Port 56924
    URI /api/v1/logs
    Format json
    json_date_key time
    json_date_format iso8601
    Retry_Limit False

```

  • Server type and version:
  • Operating System and version: CentOS Linux release 7.4.1708 (Core)
  • Filters and plugins:

```

[FILTER]
    Name grep
    Match *
    Regex name .+

# input files

[INPUT]
    Name tail
    storage.type filesystem
    DB ${DBFILE}
    Path /var/log/xx/xxx.log
    Parser parse1

[INPUT]
    Name tail
    storage.type filesystem
    DB ${DBFILE}
    Path /var/log/yyy/yyy.log
    Parser parser2

```

thanks a lot

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 15 (12 by maintainers)

Commits related to this issue

Most upvoted comments

thanks everyone for helping troubleshoot this issue.

The fix (https://github.com/fluent/fluent-bit/commit/19c24380855266c739b623e60eb756b872356331) is already in place in GIT Master and backported for v1.0.4 release.

Let me guess: you run the program on an Elasticsearch host? Elasticsearch sets this parameter for itself.

That was my first bet but no: this value is the same on all our nodes. ulimit -n, however, returns 65536 on all the nodes.

OK, it seems I confused the max open files (the result of `ulimit -n`) and max map count (the result of `sysctl vm.max_map_count`) values. Max map count is set to 262144 on all our nodes (I don’t know why, though).
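For reference (my own summary, not part of the original thread), the two limits that got conflated above can be checked like this. Note that 262144 is the value the Elasticsearch documentation requires, which would explain both the guess above and the mystery setting:

```shell
# Two distinct per-process limits that are easy to conflate:
ulimit -n                        # max open file descriptors (e.g. 65536)
cat /proc/sys/vm/max_map_count   # max memory mappings (kernel default 65530)

# Raising the mapping limit only postpones the error if maps are leaked:
#   sudo sysctl -w vm.max_map_count=262144
```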

I’m also hit by this issue. It seems the program doesn’t free its maps; when it hits the kernel limit it doesn’t crash, it just sits there, unable to mmap new files. However, it segfaults on shutdown.