hpx: The worst-case time complexity of parallel::sort seems to be O(N^2).

Currently, the worst-case time complexity of parallel::sort seems to be O(N^2).

https://github.com/STEllAR-GROUP/hpx/blob/d55dc1b220b6ba78a715a38779180f995b632ae2/hpx/parallel/algorithms/sort.hpp#L80-L89
https://github.com/STEllAR-GROUP/hpx/blob/d55dc1b220b6ba78a715a38779180f995b632ae2/hpx/parallel/algorithms/sort.hpp#L95-L97

Because of the two code blocks above, hitting the worst case is admittedly very unlikely. Theoretically, though, the algorithm can still degrade to O(N^2). The usual way to guarantee O(N log N) is to switch to heap sort once the recursion depth exceeds a fixed threshold. If we want to ensure O(N log N), we would modify Line 80 to

if (std::size_t(N) <= sort_limit_per_task || depth > recursion_depth_limit) 
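For reference, the depth-limited fallback I have in mind is the standard introsort pattern: quicksort carries a depth budget, and when that budget runs out the remaining range is heap-sorted, which bounds the worst case at O(N log N). The sketch below is not HPX code; the function names, the median-of-the-middle pivot, and the 2*log2(N) depth limit are illustrative assumptions only.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Hypothetical sketch: quicksort with a recursion-depth budget that falls
// back to heap sort when the budget is exhausted (the introsort trick).
template <typename RandomIt>
void introsort_impl(RandomIt first, RandomIt last, std::size_t depth_limit)
{
    while (last - first > 16)   // leave small ranges for the final std::sort
    {
        if (depth_limit == 0)
        {
            // Depth budget exhausted: heap sort the remaining range,
            // guaranteeing O(N log N) even for adversarial inputs.
            std::make_heap(first, last);
            std::sort_heap(first, last);
            return;
        }
        --depth_limit;

        // Partition around the middle element (pivot choice is illustrative).
        auto pivot = *(first + (last - first) / 2);
        RandomIt mid = std::partition(first, last,
            [&](auto const& x) { return x < pivot; });

        // Recurse into the right half, loop on the left half.
        introsort_impl(mid, last, depth_limit);
        last = mid;
    }
    std::sort(first, last);
}

template <typename RandomIt>
void introsort(RandomIt first, RandomIt last)
{
    std::size_t n = static_cast<std::size_t>(last - first);
    std::size_t depth_limit =
        2 * static_cast<std::size_t>(std::log2(n > 1 ? n : 2));
    introsort_impl(first, last, depth_limit);
}
```

The same idea would apply to the parallel version: each spawned task inherits the parent's depth counter, and only the sequential fallback needs the heap-sort escape hatch.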

However, I'm concerned that these modifications could affect performance.

I'd like to hear your thoughts on this.

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 26 (13 by maintainers)

Most upvoted comments

I am a few days out of Madrid. Next week I will prepare and send you the corrections.