prometheus: checksum mismatch breaking Grafana queries
Bug Report
What did you do? Upgraded Prometheus from 2.12.0 to 2.15.0, then viewed dashboards in Grafana. Graphs intermittently show a checksum mismatch error.
It usually seems to happen only with long time windows, and the same error occurs intermittently across all queries.
What did you expect to see?
No errors.
What did you see instead? Under which circumstances?
Checksum mismatch error
Environment
Prometheus 2.15.0, deployed via the Prometheus Operator on Kubernetes.
Tested with Grafana 6.4.3 and 6.5.2; the problem appears independent of the Grafana version. It's possible this is a Grafana issue, but because it only happens with Prometheus 2.15.0, Prometheus seems the more likely cause.
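To help separate the two suspects, the same kind of long-range query can be issued directly against the Prometheus HTTP API, bypassing Grafana entirely; if the checksum mismatch shows up in the raw API response too, Grafana is out of the picture. This is a minimal sketch, assuming the pod has been port-forwarded to localhost:9090 (e.g. `kubectl port-forward prometheus-prometheus-0 9090`); the `up` metric and the 24-hour window are only illustrative.

```shell
# Query /api/v1/query_range directly, mirroring the long time windows
# where Grafana shows the error. PROM is an assumed local address for
# the port-forwarded Prometheus pod.
PROM="${PROM:-http://localhost:9090}"

# A wide 24-hour range with a coarse step.
end=$(date +%s)
start=$((end - 24*3600))

curl -sG "$PROM/api/v1/query_range" \
  --data-urlencode 'query=up' \
  --data-urlencode "start=$start" \
  --data-urlencode "end=$end" \
  --data-urlencode 'step=60'
```

If the JSON response comes back with `"status":"error"` and the checksum mismatch message, the error originates in Prometheus (or its TSDB) rather than in Grafana.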
Prometheus version: 2.15.0
Prometheus configuration (Prometheus Operator custom resource spec):
enableAdminAPI: false
image: docker.io/prom/prometheus:v2.15.0
podMetadata:
  annotations:
    sidecar.istio.io/inject: "false"
  labels:
    app: prometheus
resources:
  requests:
    memory: 32Gi
retention: 72h
scrapeInterval: 15s
secrets:
  - istio.prometheus
securityContext:
  fsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
serviceAccountName: prometheus
serviceMonitorNamespaceSelector:
  any: true
serviceMonitorSelector:
  any: true
storage:
  volumeClaimTemplate:
    selector: {}
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Gi
      storageClassName: ssd
version: v2.15.0
Logs:
level=info ts=2019-12-23T15:23:36.036Z caller=main.go:330 msg="Starting Prometheus" version="(version=2.15.0, branch=HEAD, revision=ec1868b0267d13cb5967286fd5ec6afff507905b)"
level=info ts=2019-12-23T15:23:36.036Z caller=main.go:331 build_context="(go=go1.13.5, user=root@240f2f89177f, date=20191223-12:03:32)"
level=info ts=2019-12-23T15:23:36.036Z caller=main.go:332 host_details="(Linux 4.19.76+ #1 SMP Tue Oct 8 23:17:06 PDT 2019 x86_64 prometheus-prometheus-0 (none))"
level=info ts=2019-12-23T15:23:36.036Z caller=main.go:333 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2019-12-23T15:23:36.036Z caller=main.go:334 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2019-12-23T15:23:36.041Z caller=main.go:648 msg="Starting TSDB ..."
level=info ts=2019-12-23T15:23:36.041Z caller=web.go:506 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2019-12-23T15:23:36.043Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1576843200000 maxt=1576864800000 ulid=01DWJHSC5B25FB060C872VS5KT
level=info ts=2019-12-23T15:23:36.044Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1576864800000 maxt=1576886400000 ulid=01DWK6D2EKFE2C3TP797GX5WZR
level=info ts=2019-12-23T15:23:36.044Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1576886400000 maxt=1576908000000 ulid=01DWKTZKVDGYQPN3ZEPMXQB1S2
level=info ts=2019-12-23T15:23:36.045Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1576908000000 maxt=1576929600000 ulid=01DWMFJMFK89AZD58SC98CT2P6
level=info ts=2019-12-23T15:23:36.046Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1576929600000 maxt=1576951200000 ulid=01DWN45V7F53PBN9883SRSQ3DZ
level=info ts=2019-12-23T15:23:36.046Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1576951200000 maxt=1576972800000 ulid=01DWNRS0P03AXW9ECVT65DMQKH
level=info ts=2019-12-23T15:23:36.047Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1576972800000 maxt=1576994400000 ulid=01DWPDC6X4S0MHH5WQ1WZPYYT5
level=info ts=2019-12-23T15:23:36.048Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1576994400000 maxt=1577016000000 ulid=01DWQ1ZDCQRJCKKA1KDK7RFHP2
level=info ts=2019-12-23T15:23:36.048Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1577016000000 maxt=1577037600000 ulid=01DWQPJKNHN14E0B2QTY7587XR
level=info ts=2019-12-23T15:23:36.049Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1577037600000 maxt=1577059200000 ulid=01DWRB5RV4RG7DEG82JXJXJ5XR
level=info ts=2019-12-23T15:23:36.050Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1577059200000 maxt=1577080800000 ulid=01DWRZRYETPY9GEMC1P9BV4ZHM
level=info ts=2019-12-23T15:23:36.050Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1577102400000 maxt=1577109600000 ulid=01DWSMB9CEQ2VNBBBG9PBN1GJ6
level=info ts=2019-12-23T15:23:36.051Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1577080800000 maxt=1577102400000 ulid=01DWSMC5BKR6C9DFXZT1B5M4KW
level=info ts=2019-12-23T15:23:37.101Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"
level=info ts=2019-12-23T15:23:44.127Z caller=head.go:608 component=tsdb msg="WAL checkpoint loaded"
level=info ts=2019-12-23T15:23:44.352Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1282 maxSegment=1323
level=info ts=2019-12-23T15:23:45.849Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1283 maxSegment=1323
level=info ts=2019-12-23T15:23:47.110Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1284 maxSegment=1323
level=info ts=2019-12-23T15:23:48.315Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1285 maxSegment=1323
level=info ts=2019-12-23T15:23:49.544Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1286 maxSegment=1323
level=info ts=2019-12-23T15:23:50.758Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1287 maxSegment=1323
level=info ts=2019-12-23T15:23:51.984Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1288 maxSegment=1323
level=info ts=2019-12-23T15:23:53.204Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1289 maxSegment=1323
level=info ts=2019-12-23T15:23:54.577Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1290 maxSegment=1323
level=info ts=2019-12-23T15:23:55.747Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1291 maxSegment=1323
level=info ts=2019-12-23T15:23:56.912Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1292 maxSegment=1323
level=info ts=2019-12-23T15:23:58.220Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1293 maxSegment=1323
level=info ts=2019-12-23T15:23:59.402Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1294 maxSegment=1323
level=info ts=2019-12-23T15:24:00.585Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1295 maxSegment=1323
level=info ts=2019-12-23T15:24:01.731Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1296 maxSegment=1323
level=info ts=2019-12-23T15:24:02.903Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1297 maxSegment=1323
level=info ts=2019-12-23T15:24:04.170Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1298 maxSegment=1323
level=info ts=2019-12-23T15:24:05.360Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1299 maxSegment=1323
level=info ts=2019-12-23T15:24:05.550Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1300 maxSegment=1323
level=info ts=2019-12-23T15:24:06.791Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1301 maxSegment=1323
level=info ts=2019-12-23T15:24:08.068Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1302 maxSegment=1323
level=info ts=2019-12-23T15:24:09.541Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1303 maxSegment=1323
level=info ts=2019-12-23T15:24:10.773Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1304 maxSegment=1323
level=info ts=2019-12-23T15:24:11.991Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1305 maxSegment=1323
level=info ts=2019-12-23T15:24:13.230Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1306 maxSegment=1323
level=info ts=2019-12-23T15:24:14.451Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1307 maxSegment=1323
level=info ts=2019-12-23T15:24:15.741Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1308 maxSegment=1323
level=info ts=2019-12-23T15:24:17.176Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1309 maxSegment=1323
level=info ts=2019-12-23T15:24:18.505Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1310 maxSegment=1323
level=info ts=2019-12-23T15:24:19.870Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1311 maxSegment=1323
level=info ts=2019-12-23T15:24:21.211Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1312 maxSegment=1323
level=info ts=2019-12-23T15:24:22.727Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1313 maxSegment=1323
level=info ts=2019-12-23T15:24:24.101Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1314 maxSegment=1323
level=info ts=2019-12-23T15:24:26.065Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1315 maxSegment=1323
level=info ts=2019-12-23T15:24:27.722Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1316 maxSegment=1323
level=info ts=2019-12-23T15:24:29.508Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1317 maxSegment=1323
level=info ts=2019-12-23T15:24:29.789Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1318 maxSegment=1323
level=info ts=2019-12-23T15:24:31.181Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1319 maxSegment=1323
level=info ts=2019-12-23T15:24:32.527Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1320 maxSegment=1323
level=info ts=2019-12-23T15:24:33.866Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1321 maxSegment=1323
level=info ts=2019-12-23T15:24:34.288Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1322 maxSegment=1323
level=info ts=2019-12-23T15:24:34.288Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1323 maxSegment=1323
level=info ts=2019-12-23T15:24:37.781Z caller=main.go:663 fs_type=EXT4_SUPER_MAGIC
level=info ts=2019-12-23T15:24:37.781Z caller=main.go:664 msg="TSDB started"
level=info ts=2019-12-23T15:24:37.781Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-12-23T15:24:37.787Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-12-23T15:24:37.789Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-12-23T15:24:37.790Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-12-23T15:24:37.791Z caller=main.go:762 msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-12-23T15:24:37.791Z caller=main.go:617 msg="Server is ready to receive web requests."
level=info ts=2019-12-23T15:24:37.791Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-12-23T15:24:37.794Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-12-23T15:24:37.796Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-12-23T15:24:37.796Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-12-23T15:24:37.797Z caller=main.go:762 msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=warn ts=2019-12-23T15:31:48.831Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3713677 (3713846)"
level=warn ts=2019-12-23T15:37:48.858Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3713678 (3716173)"
level=warn ts=2019-12-23T15:38:24.836Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3717216 (3717369)"
level=warn ts=2019-12-23T15:45:40.870Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3720129 (3721529)"
level=warn ts=2019-12-23T15:50:39.847Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3720377 (3723860)"
level=warn ts=2019-12-23T15:58:10.887Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3724663 (3727656)"
level=warn ts=2019-12-23T15:58:23.858Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3727195 (3728213)"
level=warn ts=2019-12-23T16:03:40.901Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3731293 (3732447)"
level=warn ts=2019-12-23T16:12:23.911Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3734939 (3738293)"
level=warn ts=2019-12-23T16:12:46.867Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3731392 (3736591)"
level=warn ts=2019-12-23T16:21:14.889Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3741370 (3743051)"
level=warn ts=2019-12-23T16:22:02.924Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3741069 (3742902)"
level=warn ts=2019-12-23T16:30:39.937Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3747440 (3748262)"
level=warn ts=2019-12-23T16:31:10.899Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3746672 (3750325)"
level=warn ts=2019-12-23T16:38:14.909Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3753680 (3755651)"
level=warn ts=2019-12-23T16:38:48.951Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3753085 (3755272)"
level=warn ts=2019-12-23T16:45:04.905Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3761588 (3762011)"
level=warn ts=2019-12-23T16:45:14.918Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3759420 (3760943)"
level=warn ts=2019-12-23T16:46:50.962Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3758711 (3762783)"
level=warn ts=2019-12-23T16:53:05.973Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3766286 (3767718)"
level=warn ts=2019-12-23T16:54:38.929Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3765637 (3767760)"
level=info ts=2019-12-23T17:00:27.397Z caller=compact.go:496 component=tsdb msg="write block" mint=1577109600000 maxt=1577116800000 ulid=01DWSV70PMSRSRMGVACWMYHASP duration=27.312990828s
level=info ts=2019-12-23T17:00:29.893Z caller=head.go:668 component=tsdb msg="head GC completed" duration=1.876544572s
level=info ts=2019-12-23T17:00:44.087Z caller=head.go:738 component=tsdb msg="WAL checkpoint complete" first=1282 last=1300 duration=14.193873259s
level=warn ts=2019-12-23T17:02:59.019Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3770941 (3775017)"
level=warn ts=2019-12-23T17:03:24.939Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3772527 (3774247)"
level=warn ts=2019-12-23T17:10:56.031Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3778153 (3778411)"
level=warn ts=2019-12-23T17:11:16.949Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3778532 (3780292)"
level=warn ts=2019-12-23T17:17:14.960Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3783606 (3784448)"
level=warn ts=2019-12-23T17:17:54.043Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3783199 (3784799)"
level=warn ts=2019-12-23T17:24:31.057Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3788579 (3791413)"
level=warn ts=2019-12-23T17:30:14.973Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3788604 (3794280)"
level=warn ts=2019-12-23T17:36:19.073Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3794656 (3799972)"
level=warn ts=2019-12-23T17:36:30.985Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3797542 (3799119)"
level=warn ts=2019-12-23T17:44:50.086Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3803470 (3806690)"
level=warn ts=2019-12-23T17:46:30.997Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3803543 (3806639)"
level=warn ts=2019-12-23T17:53:05.006Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3811253 (3811651)"
level=warn ts=2019-12-23T17:58:18.102Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3809896 (3814758)"
level=warn ts=2019-12-23T17:59:41.016Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3815773 (3816361)"
level=warn ts=2019-12-23T18:04:29.113Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3818997 (3821098)"
level=warn ts=2019-12-23T18:04:52.025Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3819575 (3820109)"
level=warn ts=2019-12-23T18:13:39.125Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3824457 (3828039)"
level=warn ts=2019-12-23T18:14:27.050Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3824901 (3827401)"
level=warn ts=2019-12-23T18:24:23.061Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3832252 (3834691)"
level=warn ts=2019-12-23T18:24:55.142Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3831299 (3836598)"
level=warn ts=2019-12-23T18:31:29.071Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3839345 (3840664)"
level=warn ts=2019-12-23T18:34:24.157Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3839954 (3844042)"
level=warn ts=2019-12-23T18:40:46.081Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3844122 (3847661)"
level=warn ts=2019-12-23T18:41:56.169Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3847305 (3847622)"
level=warn ts=2019-12-23T18:45:49.091Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3850970 (3851413)"
level=warn ts=2019-12-23T18:50:47.186Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3852273 (3853641)"
level=warn ts=2019-12-23T18:54:56.102Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3856120 (3858073)"
level=warn ts=2019-12-23T18:56:04.200Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3858418 (3860240)"
level=info ts=2019-12-23T19:01:02.749Z caller=compact.go:496 component=tsdb msg="write block" mint=1577116800000 maxt=1577124000000 ulid=01DWT22QX3D5J9WX8DCSXYHY9T duration=1m2.714219493s
level=info ts=2019-12-23T19:01:08.124Z caller=head.go:668 component=tsdb msg="head GC completed" duration=4.501931613s
level=info ts=2019-12-23T19:01:23.102Z caller=head.go:738 component=tsdb msg="WAL checkpoint complete" first=1301 last=1319 duration=14.9778659s
level=warn ts=2019-12-23T19:01:28.111Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3862818 (3863289)"
level=warn ts=2019-12-23T19:08:29.217Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3863658 (3867972)"
level=warn ts=2019-12-23T19:10:31.122Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3866665 (3869994)"
level=warn ts=2019-12-23T19:16:58.132Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3873248 (3874898)"
level=warn ts=2019-12-23T19:18:02.230Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3871350 (3875326)"
level=warn ts=2019-12-23T19:25:19.142Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3879122 (3881389)"
level=warn ts=2019-12-23T19:25:20.241Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3878866 (3883101)"
level=warn ts=2019-12-23T19:31:38.153Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3886348 (3886604)"
level=warn ts=2019-12-23T19:38:29.258Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3886368 (3890678)"
level=warn ts=2019-12-23T19:40:55.163Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3889958 (3893287)"
level=warn ts=2019-12-23T19:45:57.173Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3896579 (3896933)"
level=warn ts=2019-12-23T19:47:01.271Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3894215 (3897853)"
level=warn ts=2019-12-23T19:54:24.283Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3901389 (3904144)"
level=warn ts=2019-12-23T19:54:31.183Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3901427 (3902769)"
level=warn ts=2019-12-23T19:55:34.025Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3903442 (3905278)"
level=warn ts=2019-12-23T20:03:18.193Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3907610 (3909709)"
level=warn ts=2019-12-23T20:05:30.059Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3911452 (3913189)"
level=warn ts=2019-12-23T20:05:47.300Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3907448 (3913313)"
level=warn ts=2019-12-23T20:11:36.203Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3914117 (3916437)"
level=warn ts=2019-12-23T20:13:04.313Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3916558 (3918081)"
level=warn ts=2019-12-23T20:17:06.213Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3919846 (3920249)"
level=warn ts=2019-12-23T20:23:48.224Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3924339 (3924884)"
level=warn ts=2019-12-23T20:27:32.327Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3921309 (3927978)"
level=warn ts=2019-12-23T20:31:05.234Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3929398 (3930592)"
level=warn ts=2019-12-23T20:35:57.339Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3931590 (3935537)"
level=warn ts=2019-12-23T20:39:56.245Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3933911 (3937073)"
level=warn ts=2019-12-23T20:44:08.353Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3938878 (3942078)"
level=warn ts=2019-12-23T20:46:24.255Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3940269 (3942367)"
level=warn ts=2019-12-23T20:46:43.106Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3942550 (3943581)"
level=warn ts=2019-12-23T20:51:58.367Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3945143 (3945788)"
level=warn ts=2019-12-23T20:54:23.266Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3947013 (3948264)"
level=warn ts=2019-12-23T20:55:52.146Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3950604 (3951586)"
level=warn ts=2019-12-23T21:00:07.380Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3950354 (3951832)"
level=info ts=2019-12-23T21:01:40.915Z caller=compact.go:496 component=tsdb msg="write block" mint=1577124000000 maxt=1577131200000 ulid=01DWT8YF4M7RG79E7JTEF8PRHE duration=1m40.895291774s
level=info ts=2019-12-23T21:01:49.633Z caller=head.go:668 component=tsdb msg="head GC completed" duration=7.149187217s
level=info ts=2019-12-23T21:02:09.420Z caller=head.go:738 component=tsdb msg="WAL checkpoint complete" first=1320 last=1341 duration=19.787341869s
level=info ts=2019-12-23T21:03:26.982Z caller=compact.go:441 component=tsdb msg="compact blocks" count=3 mint=1577102400000 maxt=1577124000000 ulid=01DWT92F5JBGS2XM7K8T5HZXP0 sources="[01DWSMB9CEQ2VNBBBG9PBN1GJ6 01DWSV70PMSRSRMGVACWMYHASP 01DWT22QX3D5J9WX8DCSXYHY9T]" duration=1m15.8605082s
level=warn ts=2019-12-23T21:05:57.279Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3953363 (3957393)"
level=warn ts=2019-12-23T21:06:29.392Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3956314 (3958479)"
level=warn ts=2019-12-23T21:11:59.291Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3961967 (3962052)"
level=warn ts=2019-12-23T21:15:19.406Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3961985 (3965879)"
level=warn ts=2019-12-23T21:17:02.301Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3965541 (3965630)"
level=warn ts=2019-12-23T21:24:15.311Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3969893 (3970656)"
level=warn ts=2019-12-23T21:27:20.422Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3969088 (3973187)"
level=warn ts=2019-12-23T21:33:03.437Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3976701 (3978388)"
level=warn ts=2019-12-23T21:33:54.321Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3975192 (3977714)"
level=warn ts=2019-12-23T21:40:45.331Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3982346 (3983135)"
level=warn ts=2019-12-23T21:47:27.341Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3986425 (3988160)"
level=warn ts=2019-12-23T21:47:44.456Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3981553 (3988395)"
level=warn ts=2019-12-23T21:53:21.354Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3992257 (3992465)"
level=warn ts=2019-12-23T21:53:41.468Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3991793 (3993815)"
level=warn ts=2019-12-23T21:59:17.365Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 3996667 (3996933)"
level=warn ts=2019-12-23T22:02:21.481Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 3997073 (3998563)"
level=warn ts=2019-12-23T22:06:29.375Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 4000363 (4002476)"
level=warn ts=2019-12-23T22:07:48.494Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 4003165 (4003424)"
level=warn ts=2019-12-23T22:15:32.506Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:263: watch of *v1.Pod ended with: too old resource version: 4007848 (4010734)"
ts=2019-12-23T15:23:36.287559283Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.31.1'."
level=info ts=2019-12-23T15:24:37.798968486Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=
level=info ts=2019-12-23T15:24:37.799088489Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=
About this issue
- State: closed
- Created 5 years ago
- Reactions: 11
- Comments: 22 (12 by maintainers)
Commits related to this issue
- tsdb: Do not reuse the same crc32 hash between Chunk() calls The same crc32 object was called concurrently between chunks. As this object is only used in that function, create a new one each time the... — committed to roidelapluie/prometheus by roidelapluie 5 years ago
- Fixed race in Chunks method. Added regression test. Fixes #6512 Before (not deterministic result due to concurrency): ``` === RUN TestChunkReader_ConcurrentRead --- FAIL: TestChunkReader_Concurre... — committed to prometheus/prometheus by bwplotka 5 years ago
- Fixed race in Chunks method. Added regression test. Fixes #6512 Before (not deterministic result due to concurrency): ``` === RUN TestChunkReader_ConcurrentRead --- FAIL: TestChunkReader_Concurre... — committed to prometheus/prometheus by bwplotka 5 years ago
- Fixed race in Chunks method. (#6515) Added regression test. Fixes #6512 Before (not deterministic result due to concurrency): ``` === RUN TestChunkReader_ConcurrentRead --- FAIL: TestChunk... — committed to prometheus/prometheus by bwplotka 5 years ago
- Fixed race in Chunks method. (#6515) Added regression test. Fixes #6512 Before (not deterministic result due to concurrency): ``` === RUN TestChunkReader_ConcurrentRead --- FAIL: TestChunk... — committed to prometheus/prometheus by bwplotka 5 years ago
- Update prom to 2.15.1 Fixes https://github.com/prometheus/prometheus/issues/6512 — committed to howardjohn/istio-installer by howardjohn 5 years ago
- Update prom to 2.15.1 (#627) Fixes https://github.com/prometheus/prometheus/issues/6512 — committed to istio/installer by howardjohn 5 years ago
I get the feeling there is a problem with concurrency.
I have the following two loops:
Both loops run through multiple times without any error when only one runs at a time. As soon as I run both in parallel, I get the checksum mismatch errors.
EDIT: a concurrency problem also matches another observation of mine: bigger Grafana dashboards, with more panels and therefore more concurrent requests, are affected more often than smaller ones.