prometheus: Error: http: superfluous response.WriteHeader call

Bug Report

What did you see?

Multiple http: superfluous response.WriteHeader call errors in the logs during normal operation.

level=info ts=2019-10-14T11:01:57.535Z caller=head.go:666 component=tsdb msg="WAL checkpoint complete" first=90900 last=90995 duration=58.045866315s
level=error ts=2019-10-14T11:02:03.505Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-10-14T11:02:06.599Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-10-14T11:02:07.848Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
[...]
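
Go’s net/http emits this message whenever WriteHeader is called more than once for the same request. The statusCodeTracker named in the trace is just the opentracing wrapper that forwards the call, so it gets reported as the caller even when the duplicate call originates elsewhere in the handler chain. A minimal sketch that reproduces the log line (the statusTracker type below is only an illustration, not Prometheus's actual wrapper):

package main

import (
    "log"
    "net/http"
)

// statusTracker mimics a status-code-recording wrapper such as the opentracing
// statusCodeTracker: it stores the code and forwards WriteHeader to the wrapped
// ResponseWriter, so the net/http warning gets attributed to it.
type statusTracker struct {
    http.ResponseWriter
    status int
}

func (t *statusTracker) WriteHeader(code int) {
    t.status = code
    t.ResponseWriter.WriteHeader(code)
}

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        tw := &statusTracker{ResponseWriter: w}
        tw.WriteHeader(http.StatusOK) // first, legitimate call
        // Any later attempt to set a status, e.g. from an error path, makes net/http log:
        //   http: superfluous response.WriteHeader call from main.(*statusTracker).WriteHeader (...)
        tw.WriteHeader(http.StatusInternalServerError)
    })
    log.Fatal(http.ListenAndServe("localhost:8080", nil))
}

Hitting the endpoint once (e.g. curl localhost:8080) prints the warning on the server’s log output; the open question in this issue is which error path in Prometheus makes the second call.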

Environment

  • System information:

    busybox Docker image running the Prometheus binary.

  • Prometheus version:

    level=info ts=2019-10-09T12:24:15.591Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.13.0, branch=HEAD, revision=6ea4252299f542669aca11860abc2192bdc7bede)"

About this issue

  • Original URL
  • State: open
  • Created 5 years ago
  • Reactions: 17
  • Comments: 59 (22 by maintainers)

Most upvoted comments

I’m seeing this error also with Prometheus (v2.32.0) and Thanos (v0.22.0). It always coincides with a large drop-off in traffic from my Prometheus server (presumably scrapes stop happening; there are gaps in my graphs from this server during those times), and its HA pair fails to scrape it as well.

Thanos Sidecar has heartbeat timeouts to http://localhost:9090/api/v1/status/config during this period, so it seems that Prometheus just stops responding to any HTTP requests (even the metrics handler, as seen above) while these log entries are generated.

I can consistently replicate the issue in my Thanos + Prometheus setup by setting the --store.response-timeout flag of Thanos Query to something absurdly low (e.g. 1s) and then running a query that takes longer than that (e.g. max(node_cpu_seconds_total)). The cancelled context results in the log line we’re seeing.

Unfortunately, I think this is more of a symptom than the cause of an issue. In other words, my problems described above almost certainly aren’t caused by whatever makes this error print. Instead, some other issue is interrupting requests (probably timeouts), and then this log line gets printed.
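
A rough sketch of that symptom, assuming the same mechanism: a handler has already committed its response header when the client’s context is cancelled by a timeout, and the error path then writes a second header, which net/http reports as superfluous. The handler, ports, and timeouts below are illustrative only, not Prometheus or Thanos code:

package main

import (
    "context"
    "log"
    "net/http"
    "time"
)

// slowHandler writes its header, then simulates a long-running query. If the
// client's context is cancelled first, the error path calls http.Error, which
// calls WriteHeader a second time and triggers the "superfluous" warning.
func slowHandler(w http.ResponseWriter, r *http.Request) {
    w.WriteHeader(http.StatusOK) // status is committed before the slow work
    select {
    case <-time.After(5 * time.Second): // stand-in for a slow query
        w.Write([]byte("done\n"))
    case <-r.Context().Done():
        // Too late: the status is already written, so the second WriteHeader
        // inside http.Error is logged as superfluous.
        http.Error(w, r.Context().Err().Error(), http.StatusServiceUnavailable)
    }
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", slowHandler)
    go http.ListenAndServe("localhost:9091", mux)
    time.Sleep(200 * time.Millisecond) // give the listener a moment to start

    // A client that gives up after 1s, mimicking a querier configured with a
    // very low response timeout (like --store.response-timeout=1s above).
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()
    req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://localhost:9091/", nil)
    _, err := http.DefaultClient.Do(req)
    log.Printf("client gave up: %v", err)

    time.Sleep(time.Second) // let the server hit its error path and log the warning
}

Running this prints the same http: superfluous response.WriteHeader call line on the server side right after the client times out, which matches the observation above that the message is a side effect of interrupted requests rather than their cause.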

I got the same error just now

prometheus:v2.27.1 thanos:v0.20.2

level=error ts=2021-06-10T03:04:22.256Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17" msg=)
level=error ts=2021-06-10T03:04:22.259Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17" msg=)
level=warn ts=2021-06-10T03:04:22.199Z caller=main.go:653 msg="Received SIGTERM, exiting gracefully..."
level=info ts=2021-06-10T03:04:22.278Z caller=main.go:676 msg="Stopping scrape discovery manager..."
level=info ts=2021-06-10T03:04:22.278Z caller=main.go:690 msg="Stopping notify discovery manager..."
level=info ts=2021-06-10T03:04:22.278Z caller=main.go:712 msg="Stopping scrape manager..."
level=info ts=2021-06-10T03:04:22.279Z caller=main.go:672 msg="Scrape discovery manager stopped"
level=info ts=2021-06-10T03:04:22.279Z caller=main.go:686 msg="Notify discovery manager stopped"
level=error ts=2021-06-10T03:04:22.393Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17" msg=)
level=error ts=2021-06-10T03:04:22.765Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17" msg=)
level=info ts=2021-06-10T03:04:23.284Z caller=main.go:957 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=error ts=2021-06-10T03:04:24.828Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17" msg=)
level=info ts=2021-06-10T03:04:25.284Z caller=main.go:706 msg="Scrape manager stopped"
level=info ts=2021-06-10T03:04:25.288Z caller=main.go:988 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=2.003510045s remote_storage=2.496µs web_handler=5.067µs query_engine=1.253µs scrape=1.951652936s scrape_sd=1.64614ms notify=19.449µs notify_sd=13.61µs rules=10.101µs
level=info ts=2021-06-10T03:04:25.288Z caller=manager.go:934 component="rule manager" msg="Stopping rule manager..."
level=info ts=2021-06-10T03:04:25.288Z caller=manager.go:944 component="rule manager" msg="Rule manager stopped"
level=error ts=2021-06-10T03:04:28.262Z caller=db.go:780 component=tsdb msg="compaction failed" err="compact head: persist head block: 2 errors: populate block: add symbol: context canceled; context canceled"
level=info ts=2021-06-10T03:04:28.369Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."
level=info ts=2021-06-10T03:04:28.369Z caller=main.go:885 msg="Notifier manager stopped"
level=info ts=2021-06-10T03:04:28.370Z caller=main.go:897 msg="See you next time!"

Metricbeat showed high CPU usage on thanos-querier

[screenshot: Metricbeat CPU usage on thanos-querier]

Just curious: has the superfluous WriteHeader issue been fixed in any of the latest versions of Prometheus?

We also experience this issue (also using Thanos). The only thing I can add to this conversation is that Thanos Query cannot even find the Prometheus instance around the time we get this error message. Other than that, we are also still debugging…

I have also encountered the same issue with my HA/redundant Prometheus deployment, where I have two identically configured Prometheus instances federated via Thanos. Interestingly, the error occurred on both Prometheus instances at the same time (within a couple of seconds).

Experiencing the same issue - bunches of

level=error ts=... caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/prometheus/prometheus/vendor/github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)

lines with version 2.17.1.

(I’ll try to find out more info…)