opentelemetry-collector-contrib: failed to translate metric from otel-python histogram using prometheusexporter
Seeing this error message:
otel-collector_1 | 2022-08-20T22:03:26.430Z error prometheusexporter@v0.58.0/accumulator.go:105 failed to translate metric {"kind": "exporter", "data_type": "metrics", "name": "prometheus", "data_type": "\u0000", "metric_name": "graphql.api.request.time"}
otel-collector_1 | github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*lastValueAccumulator).addMetric
otel-collector_1 | github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter@v0.58.0/accumulator.go:105
otel-collector_1 | github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*lastValueAccumulator).Accumulate
otel-collector_1 | github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter@v0.58.0/accumulator.go:82
otel-collector_1 | github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*collector).processMetrics
otel-collector_1 | github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter@v0.58.0/collector.go:66
otel-collector_1 | github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*prometheusExporter).ConsumeMetrics
otel-collector_1 | github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter@v0.58.0/prometheus.go:88
And it looks like this is due to the PeriodicExportingMetricReader exporting my histogram metric even though it hasn't had any data reported in the past period, so the metric reaches the collector with no data points and an empty data type (note the "data_type": "\u0000" in the log above). Below is the output from a file exporter for some more debugging information. I believe this will be fixed by #9006, but let me know if that is incorrect.
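For reference, here is a minimal sketch of the instrumentation side (not my exact application code; the endpoint, export interval, and recorded values are placeholders): the histogram is recorded once at startup, and the PeriodicExportingMetricReader keeps exporting on its interval even though no new measurements arrive afterwards.

```python
from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource

# Periodic reader pushing OTLP over gRPC to the collector (placeholder endpoint/interval).
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="localhost:4317", insecure=True),
    export_interval_millis=5000,
)
metrics.set_meter_provider(
    MeterProvider(
        resource=Resource.create({"service.name": "test"}),
        metric_readers=[reader],
    )
)

meter = metrics.get_meter("graphql-api", "1.0.0")
request_time = meter.create_histogram(
    "graphql.api.request.time",
    unit="ms",
    description="Request time metrics for GraphQL API.",
)

# Values are recorded only once; later export intervals have no new data for this
# histogram, which is when the collector starts logging "failed to translate metric".
request_time.record(0.0000011920928955078125, {"status": "Success"})
request_time.record(0.0000019073486328125, {"status": "Success"})
```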
Metrics dump
[
{
"resourceMetrics":
[
{
"resource":
{
"attributes":
[
{
"key": "telemetry.sdk.language",
"value":
{
"stringValue": "python"
}
},
{
"key": "telemetry.sdk.name",
"value":
{
"stringValue": "opentelemetry"
}
},
{
"key": "telemetry.sdk.version",
"value":
{
"stringValue": "1.12.0"
}
},
{
"key": "service.name",
"value":
{
"stringValue": "test"
}
}
]
},
"scopeMetrics":
[
{
"scope":
{
"name": "graphql-api",
"version": "1.0.0"
},
"metrics":
[
{
"name": "graphql.api.request.time",
"description": "Request time metrics for GraphQL API.",
"unit": "ms",
"histogram":
{
"dataPoints":
[
{
"attributes":
[
{
"key": "status",
"value":
{
"stringValue": "Success"
}
}
],
"startTimeUnixNano": "1660706168040247000",
"timeUnixNano": "1660706170294892000",
"count": "2",
"sum": 0.0000030994415283203125,
"bucketCounts":
[
"0",
"2",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0"
],
"explicitBounds":
[
0,
5,
10,
25,
50,
75,
100,
250,
500,
1000
],
"min": 0.0000011920928955078125,
"max": 0.0000019073486328125
}
],
"aggregationTemporality": "AGGREGATION_TEMPORALITY_CUMULATIVE"
}
},
{
"name": "graphql.api.requests",
"description": "Usage metrics for GraphQL API.",
"unit": "1",
"sum":
{
"dataPoints":
[
{
"attributes":
[
{
"key": "status",
"value":
{
"stringValue": "Success"
}
}
],
"startTimeUnixNano": "1660706168040307000",
"timeUnixNano": "1660706170294892000",
"asInt": "2"
}
],
"aggregationTemporality": "AGGREGATION_TEMPORALITY_CUMULATIVE",
"isMonotonic": true
}
}
]
}
]
}
]
},
{
"resourceMetrics":
[
{
"resource":
{
"attributes":
[
{
"key": "telemetry.sdk.language",
"value":
{
"stringValue": "python"
}
},
{
"key": "telemetry.sdk.name",
"value":
{
"stringValue": "opentelemetry"
}
},
{
"key": "telemetry.sdk.version",
"value":
{
"stringValue": "1.12.0"
}
},
{
"key": "service.name",
"value":
{
"stringValue": "test"
}
}
]
},
"scopeMetrics":
[
{
"scope":
{
"name": "graphql-api",
"version": "1.0.0"
},
"metrics":
[
{
"name": "graphql.api.request.time",
"description": "Request time metrics for GraphQL API.",
"unit": "ms"
},
{
"name": "graphql.api.requests",
"description": "Usage metrics for GraphQL API.",
"unit": "1",
"sum":
{
"dataPoints":
[
{
"attributes":
[
{
"key": "status",
"value":
{
"stringValue": "Success"
}
}
],
"startTimeUnixNano": "1660706168040307000",
"timeUnixNano": "1660706185345493000",
"asInt": "2"
}
],
"aggregationTemporality": "AGGREGATION_TEMPORALITY_CUMULATIVE",
"isMonotonic": true
}
}
]
}
]
}
]
}
]
About this issue
- State: open
- Created 2 years ago
- Reactions: 1
- Comments: 17 (9 by maintainers)
Any update on this?
Still getting this error when using a gRPC receiver and a Prometheus exporter.
I wanted to mention, in case it may help in any way, that I'm also seeing this "failed to translate metric" error when using a different version of the prometheusexporter against metric data coming through an otlp-grpc receiver, which is being fed by an app using go.opentelemetry.io/otel/exporters/otlp/otlpgrpc. If this should be reported through a new issue, or there are any other details y'all may be interested in, let me know.
Here’s an example of the error: