ClickHouse: Memory limit (total) exceeded issues with wait_end_of_query=1

Using version a2ed8267add1cfc28fa2ca17f7227a816921e7d7: if I run a query like “SELECT 1” in a loop with wait_end_of_query=1, after about 7000 iterations all queries start failing with:

DynamicQueryHandler: Code: 241, e.displayText() = DB::Exception: Memory limit (total) exceeded: would use 14.40 GiB (attempt to allocate chunk of 4198133 bytes), maximum: 14.40 GiB, Stack trace (when copying this message, always include the lines below):

If wait_end_of_query is not specified the problem doesn’t occur.
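The reproduction can be sketched as a plain HTTP loop. This is an illustrative sketch, not code from the issue: it assumes a ClickHouse server on localhost with the default HTTP port 8123, and the helper names `build_query_url` and `run_repro` are invented here.

```python
from urllib.parse import urlencode
import urllib.request

def build_query_url(host: str, query: str, wait_end_of_query: bool) -> str:
    """Build a URL for the ClickHouse HTTP interface (default port 8123 assumed)."""
    params = {"query": query}
    if wait_end_of_query:
        # Tells the server to buffer the full result before responding;
        # this is the setting that triggers the behaviour in this issue.
        params["wait_end_of_query"] = 1
    return f"http://{host}:8123/?{urlencode(params)}"

def run_repro(iterations: int = 7000) -> None:
    """Fire the no-op query in a loop. Per the report above, around
    iteration 7000 the server starts answering with Code: 241
    (Memory limit (total) exceeded)."""
    url = build_query_url("localhost", "SELECT 1", wait_end_of_query=True)
    for _ in range(iterations):
        urllib.request.urlopen(url).read()
```

Running `run_repro()` against an idle server with default limits should reproduce the failure; dropping `wait_end_of_query=True` should not.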

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 23 (5 by maintainers)

Most upvoted comments

I found out that if I do 10K small inserts one after another, even without wait_end_of_query, I get the same issue.

Yes, this is kind of expected behaviour

Is there any way I can disable this memory limit check?

You can add the following into config.xml:

<max_server_memory_usage_to_ram_ratio>100</max_server_memory_usage_to_ram_ratio>
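As an untested sketch of the usual override convention: rather than editing config.xml in place, the setting can live in a small drop-in file (the path below is an example, e.g. /etc/clickhouse-server/config.d/memory.xml):

```xml
<clickhouse>
    <!-- Effectively disables the server-wide limit by allowing the
         accounting to grow to 100x physical RAM. -->
    <max_server_memory_usage_to_ram_ratio>100</max_server_memory_usage_to_ram_ratio>
</clickhouse>
```

Note that older releases (such as the 20.x era this issue dates from) use `<yandex>` instead of `<clickhouse>` as the root tag.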

To verify take a look into the logs:

Setting max_server_memory_usage ...

Hm, and this did not happen before?

And yes, it pops up only now, since max_server_memory_usage was added only in 20.4 (#10421, #10362)

P.S. The memory tracking accounting is reset every 90 seconds (AFAIR), but because these queries are too fast (since they are no-ops), this does not help here.