dask: Optimization is slow

Ref: https://github.com/dask-contrib/dask-awkward/issues/138

The referenced issue provides a way to build a dask HighLevelGraph (HLG) with 746 layers. Graphs of this size, and much larger, appear to be common with dask-awkward and high-energy-physics workloads, but are far from typical in ordinary array and dataframe usage.
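For reference, a minimal sketch (not the exact reproducer from the linked issue) of how a graph with hundreds of Blockwise layers can accumulate: each elementwise operation on a dask array adds one layer to the HighLevelGraph.

```python
import dask.array as da

x = da.ones((1000,), chunks=(100,))  # one layer to start, 10 chunks
for _ in range(745):
    x = x + 1                        # each elementwise op adds a Blockwise layer

print(len(x.__dask_graph__().layers))  # 746 layers
```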

When computing the result, optimization runs first, including dask.blockwise.optimize_blockwise to merge layers where possible. For this simple example that takes 2.1 s, by far dominating the compute time. The profile shows essentially all of the time in rewrite_blockwise, which calls subs() 7.5 million times. Given that this operates on layers, not tasks, I am surprised. The code of rewrite_blockwise appears to be N**2, and I cannot yet understand how it works. (In the example in the linked issue there is only one partition, so everything goes much, much faster by immediately falling back to a low-level graph.)
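A sketch of how the optimization cost can be isolated and profiled, assuming the many-layer collection from the snippet above is named `x`; `optimize_blockwise` is called directly on the graph rather than going through `compute()`:

```python
import cProfile
from dask.blockwise import optimize_blockwise

graph = x.__dask_graph__()   # HighLevelGraph with ~746 layers
keys = x.__dask_keys__()     # output keys (already flat for a 1-D array)

with cProfile.Profile() as prof:
    optimize_blockwise(graph, keys=keys)
prof.print_stats("cumulative")  # expect rewrite_blockwise / subs near the top
```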

Any thoughts?

  • Dask version: main

About this issue

  • State: open
  • Created a year ago
  • Comments: 29 (28 by maintainers)

Most upvoted comments

Yes! Just figured that out and have adjusted my claims above.

Still, it’s noticeably slow at 2k layers.
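A rough sketch (assumed setup, not taken from the issue) for checking how the optimization time grows with layer count, e.g. to see whether ~2000 layers is still noticeably slow:

```python
import time
import dask.array as da
from dask.blockwise import optimize_blockwise

for n_layers in (250, 500, 1000, 2000):
    x = da.ones((1000,), chunks=(100,))
    for _ in range(n_layers - 1):
        x = x + 1                      # one Blockwise layer per op
    graph, keys = x.__dask_graph__(), x.__dask_keys__()
    t0 = time.perf_counter()
    optimize_blockwise(graph, keys=keys)
    print(n_layers, "layers:", round(time.perf_counter() - t0, 2), "s")
```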