fluent-bit: spurious field mapping errors in ES output
Bug Report
**Describe the bug**
I see the error messages shown below in the log of my fluent-bit pods. According to `kubectl get pods -l app --all-namespaces --show-labels`, we only use values for the `app` label that match `/[[:alpha:]][-[:alnum:]]*[[:alpha:]]/`. I have no idea where the `app` object is coming from.
```
[2019/02/25 09:31:35] [ warn] [out_es] Elasticsearch error
{"took":2,"errors":true,"items":[{"index":{"_index":"kubernetes-2019.02.25","_type":"flb_type","_id":"nt_-I2kBNo3QnVfwzT9R","status":400,"error":{"type":"mapper_parsing_exception","reason":"object mapping for [kubernetes.labels.app] tried to parse field [app] as object, but found a concrete value"}}},{"index":{"_index":"kubernetes-2019.02.25","_type":"flb_type","_id":"n9_-I2kBNo3QnVfwzT9R","status":400,"error":{"type":"mapper_parsing_exception","reason":"object mapping for [kubernetes.labels.app] tried to parse field [app] as object, but found a concrete value"}}},{"index":{"_index":"kubernetes-2019.02.25","_type":"flb_type","_id":"oN_-I2kBNo3QnVfwzT9R","status":400,"error":{"type":"mapper_parsing_exception","reason":"object mapping for [kubernetes.labels.app] tried to parse field [app] as object, but found a concrete value"}}},{"index":{"_index":"kubernetes-2019.02.25","_type":"flb_type","_id":"od_-I2kBNo3QnVfwzT9R","status":400,"error":{"
```
**Expected behavior**
All values of the `app` label to be indexed as unstructured strings.
**Your Environment**
- Version used: fluent-bit 1.0.4 via helm 1.7.0, ES 6.6.0 via helm chart 1.21.0
- Configuration:
My config map is:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logging-fluent-bit-config
  labels:
    app: fluent-bit
    chart: fluent-bit-1.8.0
    heritage: Tiller
    release: logging
data:
  fluent-bit-service.conf: |-
    [SERVICE]
        Flush            1
        Daemon           Off
        Log_Level        info
        Parsers_File     parsers.conf
        Parsers_File     parsers_custom.conf
        HTTP_Server      On
        HTTP_Listen      0.0.0.0
        HTTP_Port        2020
  fluent-bit-input.conf: |-
    [INPUT]
        Name             tail
        Path             /var/log/containers/*.log
        Parser           docker
        Tag              kube.*
        Refresh_Interval 5
        Mem_Buf_Limit    5MB
        Skip_Long_Lines  On
        DB               /tail-db/tail-containers-state.db
        DB.Sync          Normal
    [INPUT]
        Name             systemd
        Tag              systemd.*
        Systemd_Filter   _SYSTEMD_UNIT="docker.service"
        Systemd_Filter   _SYSTEMD_UNIT="kubelet.service"
        Systemd_Filter   _SYSTEMD_UNIT="etcd-member.service"
        Systemd_Filter   _SYSTEMD_UNIT="etcd-backup.service"
        Systemd_Filter   _SYSTEMD_UNIT="consul.service"
        Systemd_Filter   _SYSTEMD_UNIT="consul-backup.service"
        Systemd_Filter   _SYSTEMD_UNIT="vault.service"
        Max_Entries      1000
        Read_From_Tail   true
    [INPUT]
        Name             syslog
        Mode             udp
        Port             5140
        Tag              neuvector.*
        Parser           syslog-neuvector
        Chunk_Size       32
        Buffer_Size      64
    [INPUT]
        Name             systemd
        Tag              audit.system.*
        Systemd_Filter   _TRANSPORT=audit
        DB               /tail-db/tail-audit-system-state.db
        DB.Sync          Normal
    [INPUT]
        Name             systemd
        Tag              audit.vault.*
        Systemd_Filter   SYSLOG_IDENTIFIER=vault-audit
        DB               /tail-db/tail-audit-vault-state.db
        DB.Sync          Normal
    [INPUT]
        Name             tail
        Path             /var/log/audit/apiserver/apiserver.log
        Tag              audit.apiserver.*
        Parser           parse-audit-apiserver
        Refresh_Interval 5
        Mem_Buf_Limit    5MB
        Skip_Long_Lines  On
        DB               /tail-db/tail-audit-apiserver-state.db
        DB.Sync          Normal
  fluent-bit-filter.conf: |-
    [FILTER]
        Name             kubernetes
        Match            kube.*
        Kube_URL         https://kubernetes.default.svc:443
        Kube_CA_File     /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File  /var/run/secrets/kubernetes.io/serviceaccount/token
        Merge_Log        On
    # suppress structured app labels
    [FILTER]
        Name             record_modifier
        Match            kube.*
        # this section is wrongly indented by helm chart
  fluent-bit-output.conf: |-
    [OUTPUT]
        Name             es-kubernetes
        Match            kube.*
        Host             logging-elasticsearch-client
        Port             9200
        Logstash_Format  On
        Retry_Limit      False
        Type             flb_type
        Logstash_Prefix  kubernetes
    [OUTPUT]
        Name             es-systemd
        Match            systemd.*
        Host             logging-elasticsearch-client
        Port             9200
        Logstash_Format  On
        Retry_Limit      False
        Type             flb_type
        Logstash_Prefix  systemd
    [OUTPUT]
        Name             es-audit-system
        Match            audit.system.*
        Host             logging-elasticsearch-client
        Port             9200
        Logstash_Format  On
        Retry_Limit      False
        Type             flb_type
        Logstash_Prefix  audit-system
    [OUTPUT]
        Name             es-audit-vault
        Match            audit.vault.*
        Host             logging-elasticsearch-client
        Port             9200
        Logstash_Format  On
        Retry_Limit      False
        Type             flb_type
        Logstash_Prefix  audit-vault
    [OUTPUT]
        Name             es-audit-apiserver
        Match            audit.apiserver.*
        Host             logging-elasticsearch-client
        Port             9200
        Logstash_Format  On
        Retry_Limit      False
        Type             flb_type
        Logstash_Prefix  audit-apiserver
    [OUTPUT]
        Name             es-neuvector
        Match            neuvector.*
        Host             logging-elasticsearch-client
        Port             9200
        Logstash_Format  On
        Retry_Limit      False
        Type             flb_type
        Logstash_Prefix  neuvector
  fluent-bit.conf: |-
    @INCLUDE fluent-bit-service.conf
    @INCLUDE fluent-bit-input.conf
    @INCLUDE fluent-bit-filter.conf
    @INCLUDE fluent-bit-output.conf
  parsers.conf: |-
    [PARSER]
        Name        syslog-neuvector
        Format      regex
        Regex       /^\<(?<priority>[0-9]+)\>(?<time>[^ Z]*)Z (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?\:? *(?<full_message>(?:(?:notification=(?<notification>[^,]*))|(?:name=(?<name>[^,]*))|(?:level=(?<level>[^,]*))|(?:reported_timestamp=(?<reported_timestamp>[^,]*))|(?:reported_at=(?<reported_at>[^,]*))|(?:cluster_name=(?<cluster_name>[^,]*))|(?:host_id=(?<host_id>[^,]*))|(?:host_name=(?<host_name>[^,]*))|(?:controller_id=(?<controller_id>[^,]*))|(?:controller_name=(?<controller_name>[^,]*))|(?:enforcer_id=(?<enforcer_id>[^,]*))|(?:enforcer_name=(?<enforcer_name>[^,]*))|(?:workload_id=(?<workload_id>[^,]*))|(?:workload_name=(?<workload_name>[^,]*))|(?:workload_domain=(?<workload_domain>[^,]*))|(?:workload_image=(?<workload_image>[^,]*))|(?:category=(?<category>[^,]*))|(?:user=(?<user>[^,]*))|(?:user_roles=(?<user_roles>[^,]*))|(?:user_addr=(?<user_addr>[^,]*))|(?:user_session=(?<user_session>[^,]*))|(?:message=(?<message>[^,]*))|[^,=]+=[^,=]*|,+)*)$/
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S
    [PARSER]
        Name         parse-audit-apiserver
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S %z
        Decode_Field json log
```
**About this issue**
- Original URL
- State: closed
- Created 5 years ago
- Comments: 28 (7 by maintainers)
@mtparet I rewrote it into this:

> please include a native dedot option in fluent-bit (and not only in the ES plugin)

We solved this by adding a Lua filter after the Kubernetes filter:
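The commenter's actual script was not captured in this thread; a minimal sketch of such a Lua dedot filter (the script path and function name `dedot` are illustrative, not the original code):

```lua
-- dedot.lua: replace dots in Kubernetes label keys with underscores so that
-- e.g. "app.kubernetes.io/name" no longer collides with a plain "app" label.
function dedot(tag, timestamp, record)
    local k8s = record["kubernetes"]
    if k8s ~= nil and k8s["labels"] ~= nil then
        local fixed = {}
        for key, value in pairs(k8s["labels"]) do
            fixed[string.gsub(key, "%.", "_")] = value
        end
        k8s["labels"] = fixed
        return 1, timestamp, record  -- 1 = record was modified
    end
    return 0, timestamp, record      -- 0 = keep record unmodified
end
```

Wired in after the kubernetes filter:

```
[FILTER]
    Name   lua
    Match  kube.*
    script /fluent-bit/etc/dedot.lua
    call   dedot
```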
It would be cool to have this support natively though.
What should users who use S3 or Kafka as a transport do? Can anybody add Replace_Dots as a filter?
But we're still waiting for the dedot option to be included upstream.
Fluent Bit v1.6.10 with Replace_Dots in ES output solved the problem for me without using Lua.
Fluent Bit has an ES output config setting `Replace_Dots`; will that solve the problem? https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch says: "Replace_Dots: When enabled, replace field name dots with underscore, required by Elasticsearch 2.0-2.3."
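For reference, a minimal sketch of an ES output stanza using that option (host and prefix values taken from the config posted above; `Replace_Dots` is documented for the `es` output plugin):

```
[OUTPUT]
    Name            es
    Match           kube.*
    Host            logging-elasticsearch-client
    Port            9200
    Logstash_Format On
    Logstash_Prefix kubernetes
    Replace_Dots    On
```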
@tirelibirefe
k8s questions are a bit out of scope here, and it seems that you just don't know how to do it.
You can simply mount a ConfigMap with the Lua script and fluent-bit config, and load it via the rawConfig chart value.
There are a few details from my own config here: https://gist.github.com/BarthV/17521ff8cc5c5b3b704c9e2f491c7e60
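For anyone following along, a minimal sketch of the rawConfig approach (the `rawConfig` value name comes from the stable/fluent-bit chart mentioned in this thread; how the Lua script's ConfigMap is mounted is chart-specific, so verify against your chart version):

```yaml
# values.yaml sketch: rawConfig replaces the chart-generated fluent-bit.conf
rawConfig: |-
  @INCLUDE fluent-bit-service.conf
  @INCLUDE fluent-bit-input.conf
  @INCLUDE fluent-bit-filter.conf
  @INCLUDE fluent-bit-output.conf
```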