thanos: store: Store gateway consuming lots of memory / OOMing

Thanos, Prometheus and Golang version used: thanos v0.1.0rc2

What happened: Thanos-store is consuming 50 GB of memory during startup.

What you expected to happen: Thanos-store should not consume this much memory during startup.

Full logs to relevant components (store):

level=debug ts=2018-07-27T15:51:21.415788856Z caller=cluster.go:132 component=cluster msg="resolved peers to following addresses" peers=100.96.232.51:10900,100.99.70.149:10900,100.110.182.241:10900,100.126.12.148:10900
level=debug ts=2018-07-27T15:51:21.416254389Z caller=store.go:112 msg="initializing bucket store"
level=warn ts=2018-07-27T15:52:05.28837034Z caller=bucket.go:240 msg="loading block failed" id=01CKE41VDSJMSAJMN6N6K8SABE err="new bucket block: load index cache: download index file: copy object to file: write /var/thanos/store/01CKE41VDSJMSAJMN6N6K8SABE/index: cannot allocate memory"
level=warn ts=2018-07-27T15:52:05.293692332Z caller=bucket.go:240 msg="loading block failed" id=01CKE41VE4XXTN9N55YPCJSPP2 err="new bucket block: load index cache: download index file: copy object to file: write /var/thanos/store/01CKE41VE4XXTN9N55YPCJSPP2/index: cannot allocate memory"

Anything else we need to know: Some time after initialization the RAM usage goes down to normal levels, around 8 GB.

Another thing that’s happening is that my thanos-compactor consumes way too much RAM as well; the last time it ran, it used up to 60 GB of memory.

I run store with these args:

      containers:
      - args:
        - store
        - --log.level=debug
        - --tsdb.path=/var/thanos/store
        - --s3.endpoint=s3.amazonaws.com
        - --s3.access-key=xxx
        - --s3.bucket=xxx
        - --cluster.peers=thanos-peers.monitoring.svc.cluster.local:10900
        - --index-cache-size=2GB
        - --chunk-pool-size=8GB

Environment:

  • OS (e.g. from /etc/os-release): Kubernetes running on Debian

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 9
  • Comments: 41 (24 by maintainers)

Most upvoted comments

I think things would already be a lot better with guidance on sizing the chunk pool and the index cache. If the provided Grafana dashboards also included enough to figure out what is going on and how close one is to the limits, that would be helpful as well.

☝️ Deleted the comment as it does not help to resolve this particular issue for the community (:

Let’s get back to this.

We need better OOM flow for our store gateway. Some improvements that need to be done:

  • Move off summaries for the metric tracking the size of data fetched from the bucket
  • Enable gRPC server msg size histograms
  • Document how to use the sample limit for OOM prevention (see the sketch after this list)
  • Have a clear understanding of the edge cases for the chunk pool and index cache. Currently we can see that memory usage goes way beyond chunk pool size + index cache size, which is unexpected. This means a “leak” somewhere else, or byte ranges falling outside the chunk pool’s hardcoded ranges. We need to take a look at this as well.
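
To make the sample-limit item concrete, here is a minimal sketch of a per-request sample limiter that fails a query early instead of letting it grow until the kernel OOM-kills the store. This is only an illustration: the Limiter type, its fields, and the flag wording in the comments are assumptions, not the actual Thanos code.

    // limiter.go: illustrative per-request sample limiter (not the Thanos implementation).
    package main

    import (
        "errors"
        "fmt"
        "sync/atomic"
    )

    // Limiter tracks how many samples a single Series request has touched and
    // rejects the request once a configured ceiling is crossed.
    type Limiter struct {
        limit   uint64 // 0 means "no limit"
        current uint64
    }

    func NewLimiter(limit uint64) *Limiter { return &Limiter{limit: limit} }

    // Reserve accounts for n more samples and errors out as soon as the running
    // total exceeds the limit.
    func (l *Limiter) Reserve(n uint64) error {
        if l.limit == 0 {
            return nil
        }
        if atomic.AddUint64(&l.current, n) > l.limit {
            return errors.New("exceeded configured sample limit")
        }
        return nil
    }

    func main() {
        l := NewLimiter(1000) // e.g. wired to a hypothetical sample-limit flag
        if err := l.Reserve(1500); err != nil {
            fmt.Println("aborting request:", err)
        }
    }

Once the limit is hit, a silent memory blow-up becomes a visible, retryable query error, which is the OOM-prevention trade-off that list item is about.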

Lots of work, so help is wanted (: In a separate thread we are working on the Querier cache, but that’s just hiding the actual problem (:

cc @mjd95 @devnev

In case people still need this, you can now test with the container v0.11.0-rc.1. It’s working correctly for us on AWS.

Got this master-2020-01-25-cf4e4500 running for some time. 50% memory improvement. Great work and thanks to all the people involved. [memory usage chart]

We need to move Thanos to Go 1.12.5: https://github.com/prometheus/prometheus/issues/5524

TL;DR - We are currently seeing thanos-store consuming incredibly large amounts of memory during the initial sync and then being OOM killed. It is not releasing any memory as it performs the initial sync, and there is very likely a memory leak. The leak is likely occurring in https://github.com/improbable-eng/thanos/blob/v0.1.0/pkg/block/index.go#L105-L154

Thanos, Prometheus and Golang version used: thanos-store 0.1.0, Golang 1.11 (built with quay.io/prometheus/golang-builder:1.11-base)

What happened: thanos-store is consuming 32 GB of memory during the initial sync, then being OOM (out of memory) killed.

What you expected to happen: thanos-store not to use this much memory on the initial sync and to progress past it.

Full logs to relevant components: No logs are emitted whilst the initial sync is occurring; see the graphs below.

Anything else we need to know: Here is a graph of the total memory usage (cache + RSS), RSS memory usage and cache memory usage: [screenshot: memory usage graph, 2018-10-30] We have Kubernetes memory limits on the thanos-store container set to 32 GB, which is why it is eventually killed when it reaches that point.

Our Thanos S3 bucket is currently 488.54404481872916 GB and 15078 objects in size.

We’ve noticed that thanos-store doesn’t progress past the InitialSync function - https://github.com/improbable-eng/thanos/blob/v0.1.0/cmd/thanos/store.go#L113 and exceeds the memory limits of the container before finishing.

We’ve modified the goroutine count for how many blocks are processed concurrently. It is currently hardcoded to 20, but by changing it to a much lower number, e.g. 1, we can have thanos-store last longer before being OOM killed, although it does take longer to do the InitialSync - https://github.com/improbable-eng/thanos/blob/v0.1.0/pkg/store/bucket.go#L231

The goroutine count for SyncBlocks should really be a configurable option as well; hard-coding it to 20 is not ideal.
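
As a rough illustration of what a configurable limit could look like, here is a minimal sketch that bounds the sync concurrency with a semaphore channel. syncBlock and the flag mentioned in the comments are placeholders, not Thanos internals.

    // syncpool.go: sketch of bounded block-sync concurrency (placeholder code, not the Thanos implementation).
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // syncBlock stands in for downloading and indexing one block.
    func syncBlock(id string) {
        time.Sleep(100 * time.Millisecond)
        fmt.Println("synced block", id)
    }

    // syncAll keeps at most `concurrency` block syncs in flight, so peak memory
    // during the initial sync scales with the configured value instead of a
    // hardcoded constant such as 20.
    func syncAll(blocks []string, concurrency int) {
        sem := make(chan struct{}, concurrency)
        var wg sync.WaitGroup
        for _, id := range blocks {
            wg.Add(1)
            sem <- struct{}{} // acquire a slot before starting work
            go func(id string) {
                defer wg.Done()
                defer func() { <-sem }() // release the slot when done
                syncBlock(id)
            }(id)
        }
        wg.Wait()
    }

    func main() {
        blocks := []string{"01CKE41VDSJMSAJMN6N6K8SABE", "01CKE41VE4XXTN9N55YPCJSPP2"}
        // The value 1 mirrors the experiment above; in a real build this would
        // come from a hypothetical block-sync-concurrency style flag.
        syncAll(blocks, 1)
    }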

Through some debugging, we’ve identified the loading of the index cache as the location of the memory leak - https://github.com/improbable-eng/thanos/blob/v0.1.0/pkg/store/bucket.go#L1070

By commenting out that call in the newBucketBlock function, thanos-store is able to progress past the InitialSync (albeit without any index caches) and consumes very little memory.

We then ran some pprof heap analysis on thanos-store as the memory leak was occurring, and it identified block.ReadIndexCache as consuming a lot of memory; see the pprof heap graph below. [pprof heap graph]

The function in question: https://github.com/improbable-eng/thanos/blob/v0.1.0/pkg/block/index.go#L105-L154. The heap graph above suggests that the leak is in the JSON encoding/decoding of the index file, which for some reason is not releasing memory.
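
For context on why decoding a large JSON index file can dominate a heap profile, here is a minimal sketch contrasting a one-shot decode with a streaming json.Decoder walk. The flat map-of-string-arrays layout is a hypothetical stand-in, not the real index-cache.json schema, and this illustrates the general technique rather than claiming where the Thanos leak actually is.

    // streamdecode.go: sketch of streaming a large JSON file instead of decoding it in one shot.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // decodeAll materialises the whole document at once: peak memory is roughly
    // all decoded structures plus decoder buffers, held until the map is dropped.
    func decodeAll(path string) (map[string][]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        var out map[string][]string
        if err := json.NewDecoder(f).Decode(&out); err != nil {
            return nil, err
        }
        return out, nil
    }

    // decodeStreaming walks the top-level object token by token, so only one
    // entry is resident at a time and each can be converted to a compact
    // in-memory form (or discarded) before the next one is read.
    func decodeStreaming(path string, handle func(key string, vals []string) error) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        dec := json.NewDecoder(f)
        if _, err := dec.Token(); err != nil { // consume the opening '{'
            return err
        }
        for dec.More() {
            keyTok, err := dec.Token() // next object key
            if err != nil {
                return err
            }
            var vals []string
            if err := dec.Decode(&vals); err != nil { // decode just this entry's value
                return err
            }
            if err := handle(keyTok.(string), vals); err != nil {
                return err
            }
        }
        _, err = dec.Token() // consume the closing '}'
        return err
    }

    func main() {
        err := decodeStreaming("index-cache.json", func(k string, v []string) error {
            fmt.Printf("key %q: %d values\n", k, len(v))
            return nil
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }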

Any update on this? We can’t use Thanos at the scale that we want to because of this.

Hi @Bplotka, I do actually; it also used tons of memory (~60 GB) in its last run. Is this normal?

Try just a new release without the flag. This error, which is really the client not being able to talk to S3, does not have anything to do with the experimental feature (: It might be a misconfiguration.
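
If it helps to rule Thanos out, here is a minimal standalone sketch that checks whether the credentials can list the bucket, using minio-go (the library Thanos’s S3 client builds on). The endpoint, bucket name, and credential strings are placeholders to replace with your own values.

    // bucketcheck.go: quick sanity check that the S3 credentials can list the bucket.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/minio/minio-go/v7"
        "github.com/minio/minio-go/v7/pkg/credentials"
    )

    func main() {
        // Placeholder endpoint and credentials; use the same values you give Thanos.
        client, err := minio.New("s3.amazonaws.com", &minio.Options{
            Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
            Secure: true,
        })
        if err != nil {
            log.Fatal(err)
        }

        // Listing objects is roughly the first thing the store's MetaFetcher does,
        // so an Access Denied here points at the bucket policy or credentials
        // rather than at the Thanos version or the index-header feature.
        for obj := range client.ListObjects(context.Background(), "my-thanos-bucket",
            minio.ListObjectsOptions{Recursive: false}) {
            if obj.Err != nil {
                log.Fatal(obj.Err)
            }
            fmt.Println(obj.Key)
        }
    }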

I am getting the below error with thanos store when using the latest master branch Docker image (quay.io/thanos/thanos:master-2020-01-25-cf4e4500) and enabling the --experimental.enable-index-header flag. I am deploying Thanos on Kubernetes.

level=debug ts=2020-02-13T12:52:08.756136456Z caller=main.go:101 msg="maxprocs: Updating GOMAXPROCS=[4]: determined from CPU quota"
level=info ts=2020-02-13T12:52:08.75644186Z caller=main.go:149 msg="Tracing will be disabled"
level=info ts=2020-02-13T12:52:08.756582213Z caller=factory.go:43 msg="loading bucket configuration"
level=info ts=2020-02-13T12:52:08.757133644Z caller=inmemory.go:167 msg="created in-memory index cache" maxItemSizeBytes=131072000 maxSizeBytes=2147483648 maxItems=math.MaxInt64
level=info ts=2020-02-13T12:52:08.757292099Z caller=store.go:223 msg="index-header instead of index-cache.json enabled"
level=info ts=2020-02-13T12:52:08.757417647Z caller=options.go:20 protocol=gRPC msg="disabled TLS, key and cert must be set to enable"
level=info ts=2020-02-13T12:52:08.757656237Z caller=store.go:297 msg="starting store node"
level=info ts=2020-02-13T12:52:08.757767835Z caller=prober.go:127 msg="changing probe status" status=healthy
level=info ts=2020-02-13T12:52:08.757806998Z caller=http.go:53 service=http/server component=store msg="listening for requests and metrics" address=0.0.0.0:10902
level=info ts=2020-02-13T12:52:08.757797309Z caller=store.go:252 msg="initializing bucket store"
level=info ts=2020-02-13T12:52:08.868409206Z caller=prober.go:107 msg="changing probe status" status=ready
level=info ts=2020-02-13T12:52:08.868446453Z caller=http.go:78 service=http/server component=store msg="internal server shutdown" err="bucket store initial sync: sync block: MetaFetcher: iter bucket: Access Denied"
level=info ts=2020-02-13T12:52:08.868478867Z caller=prober.go:137 msg="changing probe status" status=not-healthy reason="bucket store initial sync: sync block: MetaFetcher: iter bucket: Access Denied"
level=warn ts=2020-02-13T12:52:08.86849345Z caller=prober.go:117 msg="changing probe status" status=not-ready reason="bucket store initial sync: sync block: MetaFetcher: iter bucket: Access Denied"
level=info ts=2020-02-13T12:52:08.8685051Z caller=grpc.go:98 service=gRPC/server component=store msg="listening for StoreAPI gRPC" address=0.0.0.0:10901
level=info ts=2020-02-13T12:52:08.868521799Z caller=grpc.go:117 service=gRPC/server component=store msg="gracefully stopping internal server"
level=info ts=2020-02-13T12:52:08.868604202Z caller=grpc.go:129 service=gRPC/server component=store msg="internal server shutdown" err="bucket store initial sync: sync block: MetaFetcher: iter bucket: Access Denied"
level=error ts=2020-02-13T12:52:08.86863483Z caller=main.go:194 msg="running command failed" err="bucket store initial sync: sync block: MetaFetcher: iter bucket: Access Denied"

I was using the same bucket before with the thanos-store Docker image improbable/thanos:v0.3.2 and there was no Access Denied error, but the initial sync got stuck and eventually the pod got OOM killed. 😦

@caarlos0 Hi, this feature is not included in the v0.10.1 release. You can use the latest master branch Docker image to try it.

docker pull quay.io/thanos/thanos:master-2020-01-25-cf4e4500

FYI: This issue was closed, as the major rewrite happened on master after 0.10.0. It’s still experimental, but you can enable it via https://github.com/thanos-io/thanos/blob/master/cmd/thanos/store.go#L78 (--experimental.enable-index-header).

We are still working on various benchmarks, especially around query resource usage, but functionally it should work! (:

Please try it out on dev/testing/staging environments and give us feedback! ❤️