fluent-operator: help request: Tail and systemd ClusterInputs not working.

Describe the issue

I have a simple set of manifests, based on the quickstart, that configures Fluent Bit to take input from a dummy source, from container logs (tail), and from the Kubernetes system logs (systemd), and to write everything to stdout.

The dummy input works great; however, the systemd and tail inputs are not working.

---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBit
metadata:
  name: fluent-bit
  namespace: fluent
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  image: kubesphere/fluent-bit:v2.1.7
  fluentBitConfigName: fluent-bit-config
  positionDB:
    hostPath:
      path: /var/lib/fluent-bit/
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
  name: fluent-bit-config
  namespace: fluent
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  inputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
  outputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: docker
  namespace: fluent
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  systemd:
    tag: service.*
    path: /var/log/journal
    db: /fluent-bit/tail/systemd.db
    dbSync: Normal
    systemdFilter:
      - _SYSTEMD_UNIT=docker.service
      - _SYSTEMD_UNIT=kubelet.service
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: tail
  namespace: fluent
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  tail:
    tag: kube.*
    path: /var/log/containers/*.log
    refreshIntervalSeconds: 10
    memBufLimit: 5MB
    skipLongLines: true
    db: /fluent-bit/tail/pos.db
    dbSync: Normal
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: dummy
  namespace: fluent
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  dummy:
    tag: my_dummy
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: stdout
  namespace: fluent
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  match: "*"
  stdout: {}

The output looks like this even after killing pods and confirming that they are writing to stdout (via kubectl logs ...):

[0] my_dummy: [[1690707109.298142698, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707110.298840841, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707111.299267477, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707112.298684587, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707113.299942396, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707114.299799594, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707115.299161966, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707116.298993463, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707117.298655826, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707118.298470887, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707119.299251398, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707120.298838334, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707121.298885289, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707122.298978125, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707123.298685272, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707124.298576172, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707125.298208337, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707126.298342160, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707127.298894887, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707128.298762534, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707129.298756377, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707130.298633375, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707131.298095696, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707132.300729192, {}], {"message"=>"dummy"}]
[0] my_dummy: [[1690707133.299339113, {}], {"message"=>"dummy"}]

Any idea how to troubleshoot this?

How did you install fluent operator?

Helm chart

Additional context

2.4.0

@wenchajun @benjaminhuo

About this issue

  • State: closed
  • Created a year ago
  • Comments: 35 (9 by maintainers)

Most upvoted comments

This error means the file being tailed does not exist or is not a regular file, so make sure you set the right containerLogRealPath. On most clusters the files under /var/log/containers/*.log are symlinks into /var/log/pods and, depending on the runtime, from there into the runtime's data directory (e.g. /var/lib/docker/containers); if the final directory is not mounted into the fluent-bit pod, the tail input cannot follow the symlinks and reports exactly this error.
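A minimal sketch of how that could look on the FluentBit CR from this report, assuming the containerLogRealPath field mentioned above (available on the 2.x FluentBit CRD). The value shown is only an illustration for a Docker data root; resolve the real directory on your nodes first, as the next step shows:

apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBit
metadata:
  name: fluent-bit
  namespace: fluent
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  image: kubesphere/fluent-bit:v2.1.7
  fluentBitConfigName: fluent-bit-config
  positionDB:
    hostPath:
      path: /var/lib/fluent-bit/
  # Illustrative value only: point this at the directory that the
  # /var/log/containers -> /var/log/pods symlink chain resolves to
  # on your nodes (runtime- and distro-specific).
  containerLogRealPath: /var/lib/docker/containers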

(screenshot: fluent-bit logs showing the tail input's "not a regular file" error)

Run ls -al /var/log/pods/kube-system_kube-apiserver-pa-drs-cluster_aff2d73431c0215ccd9809ea48d93ff6/kube-apiserver/0.log to find the real path of the log.
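If you want the whole chain resolved in one step, running readlink -f /var/log/containers/<pod>.log on the node prints the final target file; the directory containing that target is what containerLogRealPath has to cover (and what the placeholder value in the sketch above stands in for).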