opentelemetry-collector-contrib: [exporter/loki] Getting error message: "Permanent error: failed to transform logs into Loki log streams"

Describe the bug

I am trying to use the Loki exporter together with the windowseventlog receiver to ship the system logs of a Windows VM to Grafana. While running \cmd\otelcontribcol\main.exe, I get the error {"error": "Permanent error: failed to transform logs into Loki log streams"}. I am also writing the logs to a JSON file via the file exporter and can view them there.

What config did you use?

receivers:
  windowseventlog:
    start_at: beginning
    channel: system

processors:

  batch:
    send_batch_size: 50
    timeout: 5s
  memory_limiter:
    check_interval: 2s
    limit_mib: 1800
    spike_limit_mib: 500


exporters:
  file:
    path: C:/Users/..../logfilename.json

  loki:
    endpoint: "https://234372:(****APIKEY****)@logs-prod3.grafana.net/loki/api/v1/push"
    format: "json"
    labels:
      record:
        severity: "SeverityText"

service:
  pipelines:
    logs:
      receivers: [windowseventlog]
      processors: [batch]
      exporters: [file,loki]

Environment

OS: Windows Server 2022 Datacenter

Error Message:

2022-07-04T17:24:54.846Z        error   exporterhelper/queued_retry.go:183      Exporting failed. The error is not retryable. Dropping data.    {"kind": "exporter", "data_type": "logs", "name": "loki", "error": "Permanent error: failed to transform logs into Loki log streams", "dropped_items": 82}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
        C:/Users/oatazureuser/go/pkg/mod/go.opentelemetry.io/collector@v0.54.0/exporter/exporterhelper/queued_retry.go:183
go.opentelemetry.io/collector/exporter/exporterhelper.(*logsExporterWithObservability).send
        C:/Users/oatazureuser/go/pkg/mod/go.opentelemetry.io/collector@v0.54.0/exporter/exporterhelper/logs.go:131
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
        C:/Users/oatazureuser/go/pkg/mod/go.opentelemetry.io/collector@v0.54.0/exporter/exporterhelper/queued_retry_inmemory.go:119
go.opentelemetry.io/collector/exporter/exporterhelper/internal.consumerFunc.consume
        C:/Users/oatazureuser/go/pkg/mod/go.opentelemetry.io/collector@v0.54.0/exporter/exporterhelper/internal/bounded_memory_queue.go:82
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func2
        C:/Users/oatazureuser/go/pkg/mod/go.opentelemetry.io/collector@v0.54.0/exporter/exporterhelper/internal/bounded_memory_queue.go:69

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 20 (10 by maintainers)

Most upvoted comments

Glad to hear it’s working! I’m still investigating what can be done for sources we don’t control, such as journald or logs in files.

@jpkrohling based on your feedback about the missing resources/attributes, I was able to get my system up and running by adjusting my client code and my otelcol configuration.

After re-reading the README.md, I can see where I went wrong and, in my case specifically, this isn’t a bug. It’s user error.

In terms of details, I added an attribute to my ::Log() method call (note the addition of the "key": "value" attribute).

// Note the added {{"key", "value"}} attribute map as the third argument.
logger->Log(opentelemetry::logs::Severity::kDebug, "body", {{"key", "value"}},
            ctx.trace_id(), ctx.span_id(), ctx.trace_flags(),
            opentelemetry::common::SystemTimestamp(std::chrono::system_clock::now()));

And I needed to adjust my otelcol configuration to map the "key" attribute from the opentelemetry-cpp client to a label that Loki will use.

exporters:
  loki:
    endpoint: "http://localhost:3100/loki/api/v1/push"
    format: json
    labels:
      attributes:
        key: "loki_key"
    tls:
      insecure: true

Everything is basically working for me now. Thanks for all the help @jpkrohling !
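
If changing the client code is not an option, the same attribute can presumably be attached on the collector side instead. A minimal sketch, assuming the attributes processor is included in your collector build; the processor instance name attributes/example is illustrative, not something from this thread:

processors:
  attributes/example:
    actions:
      # insert the attribute that the Loki exporter maps to the loki_key label above
      - key: key
        value: "value"
        action: insert

For this to take effect, the processor also has to be listed in the logs pipeline, e.g. processors: [attributes/example, batch].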

I was able to get the same error on a simple setup with the journald receiver:

receivers:
  journald:
    directory: /var/log/journal/a04e3a44cdd740f88d6a7ae3bb8c70cf/

exporters:
  logging:
    loglevel: debug
  loki:
    endpoint: http://localhost:3100
    tls:
      insecure: true
    labels:
      attributes:
        host: localhost

processors:
extensions:

service:
  extensions:
  pipelines:
    logs:
      receivers: [journald]
      processors: []
      exporters: [logging, loki]

My initial investigation shows that the problem is that the incoming data points (i.e., log entries) do not have any attributes or resource attributes, causing them to be dropped.

I'll get in touch with the Loki team here at Grafana Labs to see what Promtail's behavior is in this case, so that we can align the two. I'll post updates here as soon as I have them.
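
Until the exporter handles attribute-less records differently, a workaround consistent with this diagnosis is to attach an attribute in the collector and map it to a Loki label, which is essentially what the working config later in this thread does. A minimal sketch, assuming the resource processor is available in the build; the key and label names are illustrative:

processors:
  resource:
    attributes:
      # insert a resource attribute so every record has at least one
      - key: service.host
        value: "localhost"
        action: insert

exporters:
  loki:
    labels:
      resource:
        # exported to Loki as the label service_host
        service.host: "service_host"

The resource processor also needs to appear in the logs pipeline's processors list for the attribute to be added.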

My bad. I was thinking the labels were a key/value mapping, but they are actually a key translation applied to values already attached to the log record.

I appreciate the help everyone.

Here is the dummy config which worked.

receivers:
  filelog:
    include_file_name: false
    start_at: beginning
    include: ['path/to/file']

processors:
  batch:
  resource:
    attributes:
      - key: service.host
        value: "host"
        action: insert

exporters:
  logging:
    loglevel: debug

  loki:
    endpoint: "http://localhost:3100/loki/api/v1/push"
    tls:
      insecure: true
    labels:
      resource:
        service.host: "service_host"

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [resource,batch]
      exporters: [logging,loki]
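
To spell out the key-translation point with this config: under labels, the left-hand key must already exist on the log record (as an attribute or resource attribute), and the right-hand value is only the Loki label name it is exported under; it does not assign a value. A commented fragment of the exporter section above, as I understand this version of the exporter:

exporters:
  loki:
    labels:
      resource:
        # "service.host" must already be present as a resource attribute;
        # here the resource processor above inserts it with the value "host".
        # "service_host" is just the Loki label name it is exported as.
        service.host: "service_host"

With that mapping, each entry should show up in Loki under a stream selector along the lines of {service_host="host"}.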

@Tornhoof, your problem seems related to the comment I made earlier in this thread:

My initial investigation shows that the problem is that the incoming data points (i.e., log entries) do not have any attributes or resource attributes, causing them to be dropped.

This won’t be necessary anymore after the refactoring of the Loki exporter that is underway, see https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/12873