fluent-bit: From fluent-bit to es: [ warn] [engine] failed to flush chunk
Bug Report
Describe the bug
The fluent-bit pod fluent-bit-84pj9 keeps logging the following warnings:
[2022/03/22 03:48:51] [ warn] [engine] failed to flush chunk '1-1647920930.175942635.flb', retry in 11 seconds: task_id=735, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:48:51] [ warn] [engine] failed to flush chunk '1-1647920894.173241698.flb', retry in 58 seconds: task_id=700, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:46] [ warn] [engine] failed to flush chunk '1-1647920587.172892529.flb', retry in 92 seconds: task_id=394, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:47] [ warn] [engine] failed to flush chunk '1-1647920384.178898202.flb', retry in 181 seconds: task_id=190, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:47] [ warn] [engine] failed to flush chunk '1-1647920812.174022994.flb', retry in 746 seconds: task_id=619, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920205.172447077.flb', retry in 912 seconds: task_id=12, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920426.171646994.flb', retry in 632 seconds: task_id=233, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920802.180669296.flb', retry in 1160 seconds: task_id=608, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920969.178403746.flb', retry in 130 seconds: task_id=774, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920657.177210280.flb', retry in 1048 seconds: task_id=464, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:49] [ warn] [engine] failed to flush chunk '1-1647920670.171006292.flb', retry in 1657 seconds: task_id=477, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:49] [ warn] [engine] failed to flush chunk '1-1647920934.181870214.flb', retry in 786 seconds: task_id=739, input=tail.0 > output=es.0 (out_id=0)
To Reproduce
- Steps to reproduce the problem:
  - Use Helm to install helm-charts-fluent-bit-0.19.19 (fluent/fluent-bit 1.8.12).
  - Edit values.yaml and point the es outputs at the Elasticsearch host (10.3.4.84 is the es IP address):

        [OUTPUT]
            Name            es
            Match           kube.*
            Host            10.3.4.84
            Logstash_Format On
            Retry_Limit     False

        [OUTPUT]
            Name            es
            Match           host.*
            Host            10.3.4.84
            Logstash_Format On
            Logstash_Prefix node
            Retry_Limit     False

    Keep the other settings in values.yaml at their defaults.
  - Run helm install helm-charts-fluent-bit-0.19.19 with the updated values.yaml (see the sketch after this list).
  - Wait and watch the fluent-bit pod logs.
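For reference, a minimal sketch of the override and install commands. The config.outputs key and the fluent Helm repository URL are assumptions about the fluent/fluent-bit chart layout, not something stated in this issue; the [OUTPUT] blocks themselves are the ones above.

    # custom-values.yaml (sketch; assumes the chart exposes the raw outputs under config.outputs)
    config:
      outputs: |
        [OUTPUT]
            Name            es
            Match           kube.*
            Host            10.3.4.84
            Logstash_Format On
            Retry_Limit     False

        [OUTPUT]
            Name            es
            Match           host.*
            Host            10.3.4.84
            Logstash_Format On
            Logstash_Prefix node
            Retry_Limit     False

    # install chart version 0.19.19 with the override
    helm repo add fluent https://fluent.github.io/helm-charts
    helm install fluent-bit fluent/fluent-bit --version 0.19.19 -f custom-values.yaml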
Expected behavior
All logs are sent to Elasticsearch and displayed in Kibana.
Screenshots
Your Environment
- Version used: helm-charts-fluent-bit-0.19.19
- Configuration:

      [OUTPUT]
          Name            es
          Match           kube.*
          Host            10.3.4.84
          Logstash_Format On
          Retry_Limit     False

      [OUTPUT]
          Name            es
          Match           host.*
          Host            10.3.4.84
          Logstash_Format On
          Logstash_Prefix node
          Retry_Limit     False

- Environment name and version (e.g. Kubernetes? What version?): k3s 1.19.8 with the docker-ce backend (20.10.12); Kibana 7.6.2; Elasticsearch 7.6.2; fluent/fluent-bit 1.8.12
- Server type and version:
- Operating System and version: CentOS 7.9, kernel 5.4 LTS
- Filters and plugins: edited values.yaml to point the es outputs at the Elasticsearch host (10.3.4.84, no TLS required for es); same two [OUTPUT] blocks as in Configuration above; other settings in values.yaml left at their defaults.
Additional context
No new logs are being sent to Elasticsearch.
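A quick way to confirm whether Elasticsearch is receiving anything at all is to list its indices; this is a sketch assuming the es host from this issue and the default port 9200:

    curl "http://10.3.4.84:9200/_cat/indices?v"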
About this issue
- State: closed
- Created 2 years ago
- Comments: 20 (6 by maintainers)
I had similar issues with "failed to flush chunk" in the fluent-bit logs, and eventually figured out that the index I was trying to send logs to already had _type set to doc, while fluent-bit was trying to send with _type set to _doc (which is the default). Setting Type doc in the es [OUTPUT] helped in my case.
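A minimal sketch of that workaround, reusing the es output values from this issue (only the Type line is the change being described; adjust the rest for your setup):

    [OUTPUT]
        Name            es
        Match           kube.*
        Host            10.3.4.84
        Logstash_Format On
        Retry_Limit     False
        Type            doc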
I am getting the same error. I have deployed the official Helm chart, version 0.19.23, and only changed the output config since it is a subchart. I have also set Replace_Dots On. Docker image version: fluent/fluent-bit:1.9.0-debug.
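In case it helps others compare, a minimal sketch of how that output block might look with Replace_Dots enabled (Host and Match values are carried over from this issue, not from that comment):

    [OUTPUT]
        Name            es
        Match           kube.*
        Host            10.3.4.84
        Logstash_Format On
        Replace_Dots    On
        Retry_Limit     False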
Same issue here. After setting Trace_Error On, the error logs show that the index ks-logstash-log-2022.03.22 already exists. Is there any way to skip creating the index if it already exists?
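For reference, a minimal sketch of enabling Trace_Error on the es output so the underlying Elasticsearch API error is printed; everything other than the Trace_Error line is carried over from this issue as an assumption:

    [OUTPUT]
        Name            es
        Match           kube.*
        Host            10.3.4.84
        Logstash_Format On
        Trace_Error     On
        Retry_Limit     False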
As https://github.com/fluent/fluent-bit/issues/3301#issuecomment-1308120843 said, I added Trace_Error On to show more logs and then found that the reason is https://github.com/fluent/fluent-bit/issues/4386: you must delete the existing index, otherwise you will still see the warning even if you add Replace_Dots.
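A minimal sketch of that deletion step, assuming the index name mentioned earlier in this thread, the es host from this issue, and the default port 9200; note that this permanently removes the index and its data:

    curl -X DELETE "http://10.3.4.84:9200/ks-logstash-log-2022.03.22"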
With the es [OUTPUT] configured this way, it works on fluent/fluent-bit:1.9.0.