fluentd-kubernetes-daemonset: [in_tail_container_logs] pattern not matched - tried everything, not sure what I am missing
Hi,
I am using https://github.com/fluent/fluentd-kubernetes-daemonset/tree/master/docker-image/v1.9/debian-cloudwatch as a reference to install this image as a sidecar in a Kubernetes pod.
I added these additional ENVs:
ENV FLUENTD_SYSTEMD_CONF disable
ENV FLUENTD_PROMETHEUS_CONF disable
ENV FLUENT_ELASTICSEARCH_SED_DISABLE "true"
along with Log group name and region for AWS Cloudwatch
fluent.conf
@include "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf"
@include "#{ENV['FLUENTD_PROMETHEUS_CONF'] || 'prometheus'}.conf"
@include kubernetes.conf
@include conf.d/*.conf
<match **>
@type cloudwatch_logs
@id out_cloudwatch_logs
log_group_name "#{ENV['LOG_GROUP_NAME']}"
auto_create_stream true
use_tag_as_stream true
retention_in_days "7"
json_handler yajl # To avoid UndefinedConversionError
log_rejected_request "true" # Log rejected request for missing parts
</match>
kubernetes.conf
<label @FLUENT_LOG>
  <match fluent.**>
    @type null
  </match>
</label>
<source>
@type tail
@id in_tail_container_logs
path /root/.pm2/logs/*.log
pos_file /root/.pm2/sys.log.pos
tag kubernetes.*
exclude_path "#{ENV['FLUENT_CONTAINER_TAIL_EXCLUDE_PATH'] || use_default}"
read_from_head true
<parse>
@type json
time_format %Y-%m-%dT%H:%M:%S.%NZ
</parse>
</source>
Fluentd is tailing the logs, but no matter what config I try, I get:
2020-04-30 13:08:40 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "\e[0mGET / \e[32m200 \e[0m45.213 ms - 170\e[0m"
2020-04-30 13:08:41 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "{"
2020-04-30 13:08:41 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: " \"level\": \"info\","
2020-04-30 13:08:41 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: " \"message\": \"hello from slash\","
2020-04-30 13:08:41 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: " \"timestamp\": \"2020-04-30T13:08:41.075Z\""
2020-04-30 13:08:41 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "}"
2020-04-30 13:08:41 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "logloglog"
2020-04-30 13:08:41 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "\e[0mGET / \e[32m200 \e[0m10.925 ms - 170\e[0m"
What am I missing? If you look closely, the output has both plain text and JSON. I have tried everything.
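The warnings above show why the `json` parser fails: the application writes both plain-text lines (with ANSI color codes) and pretty-printed JSON spread over multiple lines, while `in_tail` parses one line at a time. A parse block that tolerates mixed formats might look like this — a sketch, assuming the `fluent-plugin-multi-format-parser` gem is installed in the image (it is not part of every daemonset variant):

```
<parse>
  @type multi_format
  # Try single-line JSON first...
  <pattern>
    format json
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </pattern>
  # ...then fall back to passing the line through as raw text.
  <pattern>
    format none
  </pattern>
</parse>
```

Note that even this cannot reassemble a JSON object pretty-printed across several lines; those lines would only come through individually via the `none` fallback, so the cleanest fix is to log single-line JSON from the application.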
About this issue
- State: closed
- Created 4 years ago
- Comments: 28 (3 by maintainers)
Hi @dyipon, regarding your comment on https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-697307365, how exactly do you install/save this configuration on k3s?
I am currently using this tutorial https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes but for some reason I have the same behavior as yours. I am also on K3s…
I searched for kubernetes.conf and I see the following files, but I'm not sure which one is relevant here.

Update:
Okay, after further fumbling around the internet, I seem to have figured out how to get past this. This is still in the context of following the guide above.
I edited the fluentd.yaml according to this. FLUENT_CONTAINER_TAIL_EXCLUDE_PATH prevents the circular logging; FLUENT_CONTAINER_TAIL_PARSER_TYPE ensures it can read the logs.

Same here with a k3s cluster:
2020-09-23 07:48:03 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "2020-09-23T09:47:45.307936186+02:00 stdout F 2020-09-23 07:47:45 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: \"2020-09-23T09:47:34.334985467+02:00 stdout F \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\""
I’ve struggled a lot to get it to work and I have no idea if it’s even the proper way. @agup006
I stumbled upon this semi-working config on a separate issue here.
I’ll paste a copy of the working conf here, but for reference this is my post in the other issue about it
works for me:
env:
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: cri
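For context, on containerd/CRI-O clusters (including k3s) the per-container log files use the CRI format `<timestamp> <stdout|stderr> <F|P> <message>` rather than Docker's JSON format, which is why the default parser reports "pattern not matched". The `cri` parser type handles this; a roughly equivalent `regexp` parse block, as a sketch:

```
<parse>
  @type regexp
  # CRI log line: timestamp, stream, partial/full flag, then the message
  expression /^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[FP]) (?<log>.*)$/
  time_format %Y-%m-%dT%H:%M:%S.%N%:z
</parse>
```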
For someone like me, I'm using fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch and this config does not work. But the same config in the source works for me.

Geez, I spent three days resolving several issues, then finally found the env values in your comment! Thank you so much!
In v1.12 this works (the earlier setting https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-752813739 is no longer present from v1.12).
In case someone wants the files for EFK for testing - https://medium.com/techlogs/a-working-efk-configuration-for-gcp-kubernetes-aa727bc173da
I can confirm the parser is in recent versions of the image, but you have to override tail_container_parse.conf with the following (using a ConfigMap and volumeMount):

While I did have an issue with fluentd perpetually killing its own pods (from ingesting their own logs), it still doesn't work.
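The snippet referenced above did not survive the thread; a sketch of such an override follows. The ConfigMap name and mount details are my own assumptions, not the original poster's config — the daemonset images read their configuration from /fluentd/etc:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-tail-parse   # hypothetical name
data:
  tail_container_parse.conf: |
    <parse>
      @type cri
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </parse>
```

And in the DaemonSet pod spec, mount just that one file over the default:

```yaml
        volumeMounts:
          - name: tail-parse
            mountPath: /fluentd/etc/tail_container_parse.conf
            subPath: tail_container_parse.conf
      volumes:
        - name: tail-parse
          configMap:
            name: fluentd-tail-parse
```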
My outputs are single-line JSON objects, but fluentd still fails to parse them as JSON.
Source definition
The logs as-is directly from the container
This is how fluentd picks them up.
Is the date prefix injected by something?
Found my way here trying to get fluentd working with Elasticsearch, and so far not a single log from anything other than the kubelet makes it to the Elasticsearch cluster.
(I suspect the lack of answers to the OP since April is a good sign this will also be ignored.)
Edit: It seems to be a mismatch between the logs emitted by the container and what is written to the log file. Something is prefixing all logs with the format:
<timestamp> <stdout/stderr> <?> <message>
Adding
exclude_path ["/var/log/containers/fluent*"]
in kubernetes.conf solved it.

For me, I needed to change the time format from:
time_format '%Y-%m-%dT%H:%M:%S.%NZ'
to:
time_format '%Y-%m-%dT%H:%M:%S.%N%:z'
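That is, the literal `Z` in the first format only matches UTC timestamps, while `%:z` matches a numeric offset with a colon, such as the `+02:00` visible in the k3s logs earlier in this thread. Both timestamp shapes appear above:

```
# matched by %Y-%m-%dT%H:%M:%S.%NZ   (UTC, literal trailing Z)
2020-04-30T13:08:41.075Z

# matched by %Y-%m-%dT%H:%M:%S.%N%:z (numeric offset, local timezone)
2020-09-23T09:47:45.307936186+02:00
```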
I’m using the fluentd helm chart, so I added this section:
Sorry, I didn't get enough time to read about your setup, but I hope this is helpful for others. The root cause seems to be that the default config expects a 'Z' for the timezone (UTC, if I understand correctly), while containerd (or whatever is creating your logs) logs in your local timezone.