rambda: Many benchmarks are wrong and do not reflect reality

First of all, thank you for the great library! I've used it in many projects.

But rambda is not as fast as you might think. Many popular functions are much slower than their ramda and lodash counterparts. The misconception comes from flawed benchmarks.

Let's start with some theory. Most data-processing functions consist of two stages: initialization and processing. In the initialization stage the function checks data types and arity, does some pre-initialization, etc. In the processing stage it actually processes the data. The performance of both stages matters, and both should be as fast as possible.
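
As an illustration, here is a minimal sketch of such a two-stage function. This is schematic JavaScript, not rambda's actual source:

  // Schematic two-stage function; an illustration only, not rambda's real code.
  function filter(predicate, list) {
    // --- initialization stage: runs on every call, even with empty input ---
    if (list === undefined) {
      // curried form: return a function waiting for the data
      return _list => filter(predicate, _list)
    }

    // --- processing stage: its cost grows with the size of the input ---
    const result = []
    for (let i = 0; i < list.length; i++) {
      if (predicate(list[i])) result.push(list[i])
    }
    return result
  }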

If you pass a lot of data to a function, the processing stage dominates. But the initialization stage matters too, for the use case when you call the function very often. For example, filtering the small inner arrays of a large list:

map(filter(pred), [[1,2,3], [4,5,6], ... hundreds of thousands of inner arrays ...])

Here filter will be called many times, each time with a small data set.

So to test the performance of both stages we need two tests: in the first we pass a lot of data (testing the processing stage), and in the second we pass an empty data set, which executes all the initialization checks but skips processing (testing the init stage). Rambda's benchmarks use only a variant close to init testing (only several items are passed).
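
A sketch of the two variants for find. The harness below is an assumption (a benny-style setup, which the output format in this issue resembles); rambda's actual benchmark code may differ:

  // Benny-style harness assumed; rambda's actual benchmark setup may differ.
  const b = require('benny')
  const R = require('rambda')

  const empty = []
  const big = Array.from({ length: 10000 }, (_, i) => i)
  const alwaysFalse = () => false // trivially cheap user code

  // init stage: empty input executes all the checks but skips processing
  b.suite(
    'find: init stage',
    b.add('rambda', () => R.find(alwaysFalse, empty)),
    b.cycle(),
    b.complete()
  )

  // processing stage: large input, predicate never matches, full scan
  b.suite(
    'find: processing stage',
    b.add('rambda', () => R.find(alwaysFalse, big)),
    b.cycle(),
    b.complete()
  )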

Two more important things:

  1. All execution branches should be covered for the worst case (or at least for the most common one). For example, the condition passed to filter should always return true, to fully exercise the cost of adding items to the new array/object.
  2. The user code passed to the function must be as fast as possible, so that we measure the speed of the function itself, not the user code. Ideally something the VM can optimize down to a constant, like a noop function or a function that always returns true/false (see the snippet after this list).
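
A minimal sketch of such user code:

  // Constant user code keeps the measured cost inside the library itself.
  const alwaysTrue = () => true    // worst case for filter: every item is copied
  const alwaysFalse = () => false  // worst case for find: the whole array is scanned
  const noopReducer = acc => acc   // reduce runs all iterations, reducer is ~free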

Now I will show some evidence of the flawed benchmarks:

  1. find
  • empty array (init stage testing)
  Rambda:
    210 927 688 ops/s, ±0.77%   | fastest
  Ramda:
    3 244 120 ops/s, ±1.60%     | slowest, 98.46% slower
  Lodash:
    21 610 387 ops/s, ±1.27%    | 89.75% slower

Yahoo! Fastest. Because there is no init (only an arity check). Let's test with more data…

  • 10k items with const false predicate (processing stage testing)
  Rambda:
    12 524 ops/s, ±1.54%    | slowest, 93.88% slower
  Ramda:
    204 510 ops/s, ±1.18%   | fastest
  Lodash:
    144 455 ops/s, ±0.82%   | 29.37% slower

But wait…

[screenshot: 2020-05-30-221358_521x55_scrot]

  2. reduce
  • empty array
  Rambda:
    9 534 810 ops/s, ±2.80%     | 94.44% slower
  Ramda:
    5 777 626 ops/s, ±1.78%     | slowest, 96.63% slower
  Lodash:
    171 437 207 ops/s, ±0.96%   | fastest

OK, so init is much slower than lodash's and about the same as ramda's. What about processing performance?

  • 10k items with noop reducer
  Rambda:
    9 634 ops/s, ±3.09%     | slowest, 95.4% slower
  Ramda:
    209 617 ops/s, ±1.07%   | fastest
  Lodash:
    12 152 ops/s, ±1.12%    | 94.2% slower

And then:

[screenshot: 2020-05-30-222623_529x52_scrot] In reality ramda is fastest and rambda is 95% slower.

I can help make rambda the fastest in the west (I saw many places in the code where optimization is possible). But first we need to add several benchmarks for these functions.

As starting points: #472 #473 #474. No generated files, only source changes.

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 15 (14 by maintainers)

Most upvoted comments

It would also be nice to add built-in functions like arr.find() and arr.filter() to the comparison. It would be very illustrative.

find() in 10k items:

  Rambda:
    201 907 ops/s, ±0.85%   | fastest
  Native:
    30 921 ops/s, ±29.24%   | slowest, 84.69% slower
  Ramda:
    170 486 ops/s, ±1.01%   | 15.56% slower
  Lodash:
    131 708 ops/s, ±0.80%   | 34.77% slower
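
A sketch of how the native method could be added to such a suite (again assuming a benny-style setup; the names are illustrative):

  const b = require('benny')
  const R = require('rambda')

  const big = Array.from({ length: 10000 }, (_, i) => i)
  const miss = x => x === -1 // never matches, so the whole array is scanned

  b.suite(
    'find in 10k items',
    b.add('rambda', () => R.find(miss, big)),
    b.add('native', () => big.find(miss)),
    b.cycle(),
    b.complete()
  )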

@farwayer I just approved all of your PRs, but I think that before releasing a new version we'll have to improve the benchmarks, so it will be more visible that all of your changes bring speed improvements. Do you plan to open such a PR? If not, I'll need some example code which I can use throughout the other benchmarks.

Otherwise, a big thank you, as these changes are important.