prometheus: Error: http: superfluous response.WriteHeader call
Bug Report
What did you see?
Multiple "http: superfluous response.WriteHeader call" errors in the logs during normal operation.
level=info ts=2019-10-14T11:01:57.535Z caller=head.go:666 component=tsdb msg="WAL checkpoint complete" first=90900 last=90995 duration=58.045866315s
level=error ts=2019-10-14T11:02:03.505Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19)" msg=
level=error ts=2019-10-14T11:02:06.599Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19)" msg=
level=error ts=2019-10-14T11:02:07.848Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19)" msg=
[...]
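For context: Go's net/http logs "http: superfluous response.WriteHeader call from …" whenever WriteHeader is called more than once on the same ResponseWriter; the first call commits the status line, and any later call is ignored. The statusCodeTracker from opentracing-contrib/go-stdlib named in the log is just a wrapper that records the status code before delegating, which is why it appears as the caller. A minimal sketch that reproduces the message (the handler below is illustrative, not Prometheus code):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // commits the status line and headers
		// This second call is ignored, and net/http logs:
		//   http: superfluous response.WriteHeader call from main.main.func1 (...)
		w.WriteHeader(http.StatusInternalServerError)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Any request against this server writes the error to the http.Server's error log, which Prometheus routes into its own logger (hence the caller=stdlib.go:89 frames in the excerpt above).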
Environment
- System information: busybox Docker image running the Prometheus binary.
- Prometheus version: level=info ts=2019-10-09T12:24:15.591Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.13.0, branch=HEAD, revision=6ea4252299f542669aca11860abc2192bdc7bede)"
About this issue
- Original URL
- State: open
- Created 5 years ago
- Reactions: 17
- Comments: 59 (22 by maintainers)
Commits related to this issue
- Update client_golang to fix promhttp error handling Fixes #6139 Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu> — committed to roidelapluie/prometheus by roidelapluie 4 years ago
- Update client_golang to fix promhttp error handling (#6986) Fixes #6139 Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu> — committed to prometheus/prometheus by roidelapluie 4 years ago
- Update client_golang to fix promhttp error handling (#6986) Fixes #6139 Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu> — committed to scope-demo/prometheus by roidelapluie 4 years ago
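The client_golang update above changes promhttp's error handling; the usual way to avoid the duplicate call is a ResponseWriter wrapper that remembers whether a status was already written and swallows later ones. A hedged sketch of that pattern (illustrative only, not client_golang's actual code; guardedWriter and Guard are made-up names):

```go
package middleware

import "net/http"

// guardedWriter suppresses duplicate WriteHeader calls so a late error
// path cannot trigger the "superfluous response.WriteHeader call" log.
type guardedWriter struct {
	http.ResponseWriter
	wroteHeader bool
}

func (w *guardedWriter) WriteHeader(code int) {
	if w.wroteHeader {
		return // header already committed; drop the duplicate
	}
	w.wroteHeader = true
	w.ResponseWriter.WriteHeader(code)
}

func (w *guardedWriter) Write(b []byte) (int, error) {
	if !w.wroteHeader {
		w.WriteHeader(http.StatusOK) // mirror net/http's implicit 200
	}
	return w.ResponseWriter.Write(b)
}

// Guard wraps a handler with the guarded writer.
func Guard(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		next.ServeHTTP(&guardedWriter{ResponseWriter: w}, r)
	})
}
```

One caveat with this sketch: wrapping the ResponseWriter hides optional interfaces such as http.Flusher and http.Hijacker, which is why real middleware implements this more carefully.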
I'm seeing this error also with Prometheus (v2.32.0) and Thanos (v0.22.0). It always coincides with a large drop-off in traffic from my Prometheus server (presumably scrapes stop happening? There are gaps in my graphs from this server during those times), and its HA pair fails to scrape it as well.
Thanos Sidecar has heartbeat timeouts during this period to http://localhost:9090/api/v1/status/config, so it seems that Prometheus just stops responding to any HTTP requests (even the metrics handler, as seen above) when these log entries are generated.

I can consistently replicate the issue in my Thanos + Prometheus setup by setting the --store.response-timeout flag of Thanos Query to something absurdly low (e.g. 1s) and then running a query that will take longer than that (e.g. max(node_cpu_seconds_total)). The cancelled context results in the log line we're seeing.

Unfortunately, I think this indicates that this is more of a symptom than the cause of an issue. In other words, my issues described above almost certainly aren't caused by whatever's causing this error to print. Instead, some other issue is happening which is causing requests to be interrupted (probably by timeouts), and then this log line gets printed.
I got the same error just now: prometheus:v2.27.1, thanos:v0.20.2. Metricbeat showed high CPU usage on thanos-querier.
Just curious: has the superfluous WriteHeader issue been fixed in any of the latest versions of Prometheus?
We are also experiencing this issue (also using Thanos). The only thing I can add to this conversation is that Thanos Query cannot even find the Prometheus instance around the time we get this error message. Other than that, we are also still debugging…
I have also encountered the same issue with my HA/redundant Prometheus deployment, where I have two identically configured Prometheus instances federated via Thanos. Interestingly, this error occurred on both Prometheus instances at the same time (within a couple of seconds).
Experiencing the same issue: bunches of http: superfluous response.WriteHeader call lines on version 2.17.1. (I'll try to find out more info…)