thanos: Can not see downsampled data
Hi @all,
I have some problems seeing downsampled data.
Software involved
thanos, version 0.6.0-rc.0 (branch: HEAD, revision: 7f2200906ae112035ba4c973cf545a28f74cc9d5) build user: root@8b35e55d6107 build date: 20190712-11:25:23 go version: go1.12.5
prometheus, version 2.10.0 (branch: HEAD, revision: d20e84d0fb64aff2f62a977adc8cfb656da4e286) build user: root@a49185acd9b0 build date: 20190525-12:28:13 go version: go1.12.5
I try to downsample data on a regular basis (every 15 minutes) with this command:
./thanos downsample --objstore.config-file=s3.yaml --http-address="0.0.0.0:10982"
Additionally, I do a compact run once a day:
./thanos compact --data-dir /tmp/thanos-compact --objstore.config-file=s3.yaml --retention.resolution-raw=5d --retention.resolution-5m=31d --retention.resolution-1h=365d --http-address="0.0.0.0:10910" --log.level=debug
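For completeness, the s3.yaml referenced by both commands follows the standard Thanos objstore format, roughly like this (bucket, endpoint, and credentials below are placeholders):
type: S3
config:
  bucket: "my-thanos-bucket"
  endpoint: "s3.example.com"
  access_key: "<ACCESS_KEY>"
  secret_key: "<SECRET_KEY>"
  insecure: false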
My expectation is that I can still see data older than 5 days, but that those data are downsampled to a 5-minute resolution.
If I do a bucket inspect I get:
| ULID | FROM | UNTIL | RANGE | UNTIL-COMP | #SERIES | #SAMPLES | #CHUNKS | COMP-LEVEL | COMP-FAILED | LABELS | RESOLUTION | SOURCE |
|----------------------------|---------------------|---------------------|---------|------------|---------|-------------|------------|------------|-------------|--------------------------------------------------|------------|-----------|
| 01DFT8QA2HEJ0D56BXFC4G2JHX | 13-07-2019 02:00:00 | 15-07-2019 02:00:00 | 48h0m0s | 192h0m0s | 10,436 | 6,004,455 | 52,132 | 4 | false | monitor=infrastructure,region=xxx,replica=A | 5m0s | compactor |
| 01DFZJJZQ6F2ZF4F7JMKTB83FP | 15-07-2019 02:00:00 | 17-07-2019 02:00:00 | 48h0m0s | 192h0m0s | 10,772 | 6,039,189 | 52,783 | 4 | false | monitor=infrastructure,region=xxx,replica=A | 5m0s | compactor |
| 01DGFKN51Z900EJG969CB7TQ9K | 17-07-2019 02:00:00 | 19-07-2019 02:00:00 | 48h0m0s | 192h0m0s | 11,474 | 6,057,053 | 52,874 | 4 | false | monitor=infrastructure,region=xxx,replica=A | 5m0s | compactor |
| 01DGFKNSMRMCM55TEV1ZW291M3 | 19-07-2019 02:00:00 | 21-07-2019 02:00:00 | 48h0m0s | 192h0m0s | 10,484 | 6,027,606 | 52,341 | 4 | false | monitor=infrastructure,region=xxx,replica=A | 5m0s | compactor |
| 01DGFKMJQ85CBF88A4YYRP69H2 | 21-07-2019 02:00:00 | 23-07-2019 02:00:00 | 48h0m0s | -8h0m0s | 10,485 | 148,893,233 | 15,068,445 | 4 | false | monitor=infrastructure,region=xxx,replica=A | 0s | compactor |
| 01DGFKPEEDNNS4FJV0KJXDHVFP | 21-07-2019 02:00:00 | 23-07-2019 02:00:00 | 48h0m0s | 192h0m0s | 10,485 | 6,027,407 | 52,342 | 4 | false | monitor=infrastructure,region=xxx,replica=A | 5m0s | compactor |
| 01DGFKKDM7WFRV9QY6NSKM305D | 23-07-2019 02:00:00 | 23-07-2019 10:00:00 | 8h0m0s | 32h0m0s | 10,474 | 24,816,728 | 2,511,507 | 3 | false | monitor=infrastructure,region=xxx,replica=A | 0s | compactor |
| 01DGHYNAVCFB8QGB0YBWZYTBE3 | 23-07-2019 10:00:00 | 23-07-2019 18:00:00 | 8h0m0s | 32h0m0s | 10,505 | 24,694,878 | 2,501,100 | 3 | false | monitor=infrastructure,region=xxx,replica=A | 0s | compactor |
| 01DGHYNEBTQJD0K946KV9MC032 | 23-07-2019 18:00:00 | 24-07-2019 02:00:00 | 8h0m0s | 32h0m0s | 10,422 | 24,721,020 | 2,499,849 | 3 | false | monitor=infrastructure,region=xxx,replica=A | 0s | compactor |
| 01DGHYNHSFNQPGZ5JZ5R1F6QWH | 24-07-2019 02:00:00 | 24-07-2019 10:00:00 | 8h0m0s | 32h0m0s | 10,418 | 24,720,981 | 2,499,846 | 3 | false | monitor=infrastructure,region=xxx,replica=A | 0s | compactor |
| 01DGHYN7GPW95CVS8DE33RQCSP | 24-07-2019 10:00:00 | 24-07-2019 12:00:00 | 2h0m0s | 38h0m0s | 10,508 | 6,098,310 | 622,197 | 2 | false | monitor=infrastructure,region=xxx,replica=A | 0s | compactor |
| 01DGHZTG7FJ09ZECW3KC1P2648 | 24-07-2019 12:00:00 | 24-07-2019 14:00:00 | 2h0m0s | 38h0m0s | 10,526 | 6,229,326 | 630,894 | 2 | false | monitor=infrastructure,region=xxx,replica=A | 0s | compactor |
| 01DGHXTCB6ERSVGD38A3K7DPJX | 24-07-2019 14:00:00 | 24-07-2019 14:02:00 | 2m0s | 39h58m0s | 6,664 | 54,372 | 6,664 | 1 | false | monitor=infrastructure,region=xxx,replica=A | 0s | sidecar |
| 01DGHXXP576V3ZNJE1VF5KBPS8 | 24-07-2019 14:02:00 | 24-07-2019 14:04:00 | 2m0s | 39h58m0s | 10,473 | 39,960 | 10,473 | 1 | false | monitor=infrastructure,region=xxx,replica=A | 0s | sidecar |
| 01DGHY1BBDBD6MFB5NVR3E2E9X | 24-07-2019 14:04:00 | 24-07-2019 14:06:00 | 2m0s | 39h58m0s | 10,473 | 103,504 | 10,473 | 1 | false | monitor=infrastructure,region=xxx,replica=A | 0s | sidecar |
| 01DGHY50H7SFKNWZ65HXQY9NXB | 24-07-2019 14:06:00 | 24-07-2019 14:08:00 | 2m0s | 39h58m0s | 10,473 | 103,504 | 10,473 | 1 | false | monitor=infrastructure,region=xxx,replica=A | 0s | sidecar |
At that point it looks to me as if there is downsampled data going back to July 13th.
If I connect to the Thanos query daemon via browser and run a query, the oldest data I get is from July 18th. In Grafana I also cannot see any data older than July 18th.
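For illustration, a range query against the Thanos query HTTP API looks roughly like this (host, port, metric, and time range are placeholders):
curl 'http://thanos-query.example.com:10902/api/v1/query_range?query=up&start=2019-07-13T00:00:00Z&end=2019-07-24T00:00:00Z&step=1h'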
Next I tried the new web UI feature, but honestly, I have no idea what the colors mean.
Can anyone give me a hint why I can’t see the downsampled data, or where/how I can investigate further?
Thanks a lot !
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 38 (19 by maintainers)
Ah thanks - I got it 😃
Well, if I choose a step size of 20m or below I get no datapoints, but with 25m and higher I do get datapoints.
Interestingly, if I paste the URL Grafana sends to Thanos into my browser, I get no results with a step size of 20m, but I do get results as JSON output with a step size of 25m.
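For illustration, the two requests differ only in the step parameter (host, metric, and time range are placeholders):
http://thanos-query.example.com:10902/api/v1/query_range?query=up&start=2019-07-13T00:00:00Z&end=2019-07-24T00:00:00Z&step=20m → empty result set
http://thanos-query.example.com:10902/api/v1/query_range?query=up&start=2019-07-13T00:00:00Z&end=2019-07-24T00:00:00Z&step=25m → returns data points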
I’ve opened a PR on Grafana to add the Thanos max_source_resolution option to thanos query: https://github.com/grafana/grafana/pull/19121. Like in the Thanos query UI, there is an auto value, which means max_source_resolution="", but it only returns raw data 😦
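For illustration, the only difference on the Thanos query API is the max_source_resolution parameter (host, metric, and time range are placeholders; as far as I understand, an empty value restricts the query to raw data):
http://thanos-query.example.com:10902/api/v1/query_range?query=up&start=2019-07-13T00:00:00Z&end=2019-07-24T00:00:00Z&step=25m&max_source_resolution= → raw data only
http://thanos-query.example.com:10902/api/v1/query_range?query=up&start=2019-07-13T00:00:00Z&end=2019-07-24T00:00:00Z&step=25m&max_source_resolution=5m → may use the 5m downsampled blocks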