trivy-operator: "invalid character 'Q' looking for beginning of value"

Hello, after deploying the trivy-operator chart we are seeing quite a lot of errors from the trivy-operator pod:

{"level":"error","ts":"2023-03-08T03:18:05Z","msg":"Reconciler error","controller":"job","controllerGroup":"batch","controllerKind":"Job","Job":{"name":"scan-vulnerabilityreport-7d6f4d5d7d","namespace":"trivy-operator"},"namespace":"trivy-operator","name":"scan-vulnerabilityreport-7d6f4d5d7d","reconcileID":"894c8f5a-0b6e-4603-a4b9-79ab34dffafa","error":"invalid character 'Q' looking for beginning of value","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:235"}
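The leading `'Q'` is a strong hint that the operator is trying to JSON-decode a compressed scan-job log: bzip2's magic bytes `BZh` base64-encode to `Qlpo…`, so a bzip2+base64 blob always starts with `Q`. A minimal sketch of that failure mode (this assumes scan-job logs are bzip2-compressed and base64-encoded, which appears to be what `scanJobCompressLogs` produces; it is not the operator's actual code path):

```python
import base64
import bz2
import json

# Simulate a compressed scan-job log: bzip2 + base64 (assumed encoding).
blob = base64.b64encode(bz2.compress(b'{"Results": []}')).decode()

# bzip2's magic bytes "BZh" base64-encode to "Qlpo...", so the blob
# always begins with 'Q' -- the character the JSON decoder trips on.
print(blob[:4])  # Qlpo

try:
    json.loads(blob)  # what a decoder expecting plain JSON would do
except json.JSONDecodeError as exc:
    print(exc)
```

Go's `encoding/json` reports the same situation as `invalid character 'Q' looking for beginning of value`, matching the operator's error, so a compressed-vs-plain-log mismatch is one plausible explanation.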

What steps did you take and what happened:

Our trivy-operator config looks like this:

trivy:
  mode: ClientServer
  serverURL: http://trivy-service.trivy-operator:4954
  serverInsecure: true
  ignoreUnfixed: true
  additionalVulnerabilityReportFields: "Target,Class"
  timeout: "10m0s"
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 400m
      memory: 1024Mi

trivyOperator:
  scanJobCompressLogs: false

resources:
  requests:
    cpu: 200m
    memory: 512Mi
  limits:
    cpu: 400m
    memory: 1024Mi

operator:
  rbacAssessmentScannerEnabled: false
  infraAssessmentScannerEnabled: false
  builtInTrivyServer: true
  metricsFindingsEnabled: false
  exposedSecretScannerEnabled: false
  webhookBroadcastURL: "http://postee.trivy-operator:8082"

serviceMonitor:
  enabled: false
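With `scanJobCompressLogs: false` we would expect the scan jobs to emit plain JSON. To check what a failed scan job actually produced, a small helper like this can decode either form (hypothetical, assuming the compressed form is bzip2 + base64; `decode_scan_log` is not part of trivy-operator):

```python
import base64
import bz2
import json


def decode_scan_log(raw: str) -> dict:
    """Hypothetical helper: parse a scan-job log blob, whether it is
    plain JSON or bzip2-compressed and base64-encoded (assumed format)."""
    if raw.startswith("Qlpo"):  # base64 of the bzip2 magic "BZh"
        raw = bz2.decompress(base64.b64decode(raw)).decode()
    return json.loads(raw)
```

If the blob from `kubectl logs` on the scan job starts with `Qlpo`, the job is still compressing its output despite the setting.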

Environment:

  • Trivy-Operator version (use trivy-operator version): 0.12.0
  • Kubernetes version (use kubectl version): 1.23.15

About this issue

  • State: closed
  • Created a year ago
  • Comments: 39 (22 by maintainers)

Most upvoted comments

@d-mankowski-synerise thank you for this update, I'll take a look and get back to you.

Sure, no problem.

$ k describe pod burrow-kafka-logs-p2-7c7bc66c66-f6zhn
Name:         burrow-kafka-logs-p2-7c7bc66c66-f6zhn
Namespace:    monitoring
Priority:     0
Node:         xxx/172.26.46.19
Start Time:   Tue, 28 Feb 2023 17:22:40 +0000
Labels:       app=burrow-kafka-logs-p2
              pod-template-hash=7c7bc66c66
              tier=api
Status:       Running
IP:           172.26.46.160
IPs:
  IP:           172.26.46.160
Controlled By:  ReplicaSet/burrow-kafka-logs-p2-7c7bc66c66
Containers:
  burrow:
    Container ID:   containerd://9c2cbe3f6edd8c7522fc27911b4d177445cbcf6ebe8b323afce938d81fa15f18
    Image:          pgacek/burrow:v1.3.0
    Image ID:       docker.io/pgacek/burrow@sha256:88904ec24506cc9fd5dec5471aec9ebe399aaf05fee0d2e60634e5f01e145bf1
    Port:           8000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 28 Feb 2023 17:22:44 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8000/burrow/admin delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8000/burrow/admin delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /etc/burrow from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dzqhp (ro)
  telegraf:
    Container ID:   containerd://5f52b002c6b71f34e276e6af0bcd67b9f365dddbca5939aba4d0ac6c566e390a
    Image:          library/telegraf:1.7
    Image ID:       docker.io/library/telegraf@sha256:a47acd8de5e04c99ab6f0f45a786e380463eea640bb9a4b21ecfcae2eda135c1
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 28 Feb 2023 17:22:56 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/telegraf from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dzqhp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      burrow-kafka-logs-p2-config
    Optional:  false
  kube-api-access-dzqhp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

The k8s manifest we use is something like this (though I didn't deploy and test this exact copy):

---
apiVersion: v1
data:
  burrow.toml: |-
    [logging]
    level="warn"

    [storage.inmemory]
    class-name="inmemory"
    workers=50

    [zookeeper]
    servers=['zk-svc.jsons:2181']
    timeout=6
    root-path="/burrow"

    [cluster.kafka2-p-vm]
    class-name="kafka"
    client-profile="default-client"
    servers=['kafka-int-0.jsons:9092', 'kafka-int-1.jsons:9092', 'kafka-int-2.jsons:9092', 'kafka-int-3.jsons:9092']

    [consumer.kafka2-p-vm]
    class-name="kafka"
    cluster="kafka2-p-vm"
    client-profile="default-client"
    servers=['kafka-int-0.jsons:9092', 'kafka-int-1.jsons:9092', 'kafka-int-2.jsons:9092', 'kafka-int-3.jsons:9092']
    start-latest=true
    group-denylist="xyz"
    group-allowlist=""

    [httpserver.default]
    address=":8000"

    [client-profile.default-client]
    kafka-version="2.8.1"
  telegraf.conf: |-
    [agent]
    interval = "15s"
    [[inputs.burrow]]
    servers = ["http://localhost:8000"]
    topics_exclude = ["*"]
    response_timeout = "10s"
    concurrent_connections = 64

    [[outputs.prometheus_client]]
    listen = ":8080"
    path = "/"
    expiration_interval = "60s"
    string_as_label = true
    tagexclude = ["host"]
kind: ConfigMap
metadata:
  name: burrow-config
  namespace: monitoring
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: burrow
  name: burrow
  namespace: monitoring
spec:
  ports:
    - name: api
      port: 80
      protocol: TCP
      targetPort: 8000
  selector:
    app: burrow
    tier: api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: burrow-metrics
  name: burrow-metrics
  namespace: monitoring
spec:
  ports:
    - name: metrics
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: burrow
    tier: api
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: burrow
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: burrow
      color: blue
      tier: api
  strategy:
    rollingUpdate:
      maxSurge: 60%
      maxUnavailable: 30%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: burrow
        color: blue
        tier: api
    spec:
      containers:
      - image: pgacek/burrow:v1.3.0
        livenessProbe:
          httpGet:
            path: /burrow/admin
            port: 8000
        name: burrow
        ports:
        - containerPort: 8000
          name: api
        readinessProbe:
          httpGet:
            path: /burrow/admin
            port: 8000
        volumeMounts:
        - mountPath: /etc/burrow
          name: config
      - image: docker.io/library/telegraf:1.7
        name: telegraf
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/telegraf
          name: config
      volumes:
      - configMap:
          name: burrow-config
        name: config
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: burrow
  name: burrow
  namespace: monitoring
spec:
  endpoints:
    - honorLabels: true
      path: /
      port: metrics
  jobLabel: app
  selector:
    matchLabels:
      app: burrow-metrics