undici: "bad" Pool scheduling performance under low pressure

In https://github.com/nodejs/undici/pull/466 I’ve made the benchmarks parametric. I also added the ability to run the server with a timeout, which in this case is set to 10ms.
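
For context, the delayed server is nothing fancy; here is a minimal sketch of that kind of setup, assuming a plain Node http server (the actual benchmark server in the PR may differ, and the TIMEOUT variable is only illustrative):

// Hypothetical sketch, not the actual benchmark server from the PR:
// a plain Node.js HTTP server that delays every response by TIMEOUT ms,
// so the benchmark measures client-side scheduling rather than server CPU.
const http = require('http')

const timeout = Number(process.env.TIMEOUT) || 10 // ms, matching the 10ms used here
const port = Number(process.env.PORT) || 3000

http.createServer((req, res) => {
  setTimeout(() => {
    res.end('hello world')
  }, timeout)
}).listen(port)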

The following cases are not saturating benchmarks, i.e. both the client and the server have plenty of spare CPU capacity.

$ PORT=3000 CONNECTIONS=5 PARALLEL=10 PIPELINING=10 node benchmarks/index.js
http - no agent  x 85.57 ops/sec ±0.96% (43 runs sampled)
http - keepalive x 86.13 ops/sec ±0.74% (43 runs sampled)
http - keepalive - multiple sockets x 373 ops/sec ±1.33% (61 runs sampled)
undici - pipeline x 176 ops/sec ±0.91% (45 runs sampled)
undici - request x 180 ops/sec ±0.84% (80 runs sampled)
undici - pool - request - multiple sockets x 180 ops/sec ±0.70% (33 runs sampled)
undici - stream x 182 ops/sec ±0.72% (33 runs sampled)
undici - dispatch x 182 ops/sec ±0.67% (81 runs sampled)
undici - noop x 186 ops/sec ±0.46% (82 runs sampled)

As you can see, the clients are not saturated, yet the throughput is essentially that of a single socket. This is a low-pressure scenario because PARALLEL < CONNECTIONS * PIPELINING.
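
To make the shape of the workload concrete, here is a rough sketch of the configuration being exercised (simplified, not the benchmark harness itself; it only uses undici's public Pool options and pool.request):

const { Pool } = require('undici')

// CONNECTIONS * PIPELINING = 5 * 10 = 50 in-flight slots, but only
// PARALLEL = 10 requests are outstanding at any time.
const pool = new Pool('http://localhost:3000', {
  connections: 5,
  pipelining: 10
})

async function run () {
  // Spreading the 10 requests across the 5 sockets would keep each socket
  // lightly loaded; stacking them all onto one pipelined socket makes the
  // pool behave like a single connection, which is roughly what the numbers
  // above show.
  await Promise.all(
    Array.from({ length: 10 }, async () => {
      const { body } = await pool.request({ path: '/', method: 'GET' })
      for await (const chunk of body) {} // drain the body so the slot is reused
    })
  )
}

run()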

If we remove pipelining, we can still see some improvement from using multiple sockets.

$ PORT=3000 CONNECTIONS=5 PARALLEL=10 PIPELINING=1 node benchmarks/index.js
http - no agent  x 83.88 ops/sec ±0.86% (42 runs sampled)
http - keepalive x 85.33 ops/sec ±0.70% (43 runs sampled)
http - keepalive - multiple sockets x 377 ops/sec ±1.43% (59 runs sampled)
undici - pipeline x 88.54 ops/sec ±1.16% (45 runs sampled)
undici - request x 89.27 ops/sec ±0.63% (45 runs sampled)
undici - pool - request - multiple sockets x 403 ops/sec ±0.98% (62 runs sampled)
undici - stream x 91.53 ops/sec ±0.57% (46 runs sampled)
undici - dispatch x 90.92 ops/sec ±0.39% (45 runs sampled)
undici - noop x 92.51 ops/sec ±0.56% (46 runs sampled)

If we increase the load so that PARALLEL = CONNECTIONS * PIPELINING:

$ PORT=3000 CONNECTIONS=5 PARALLEL=50 PIPELINING=10 node benchmarks/index.js
http - no agent  x 85.65 ops/sec ±1.60% (13 runs sampled)
http - keepalive x 85.78 ops/sec ±1.22% (13 runs sampled)
http - keepalive - multiple sockets x 411 ops/sec ±0.94% (42 runs sampled)
undici - pipeline x 480 ops/sec ±1.09% (48 runs sampled)
undici - request x 501 ops/sec ±0.64% (49 runs sampled)
undici - pool - request - multiple sockets x 831 ops/sec ±1.25% (74 runs sampled)
undici - stream x 506 ops/sec ±0.76% (50 runs sampled)
undici - dispatch x 517 ops/sec ±0.66% (51 runs sampled)
undici - noop x 525 ops/sec ±0.46% (52 runs sampled)

The last case is where PARALLEL > CONNECTIONS * PIPELINING:

$ PORT=3000 CONNECTIONS=5 PARALLEL=500 PIPELINING=10 node benchmarks/index.js
http - no agent  x 87.28 ops/sec ±1.28% (5 runs sampled)
http - keepalive x 88.06 ops/sec ±1.72% (5 runs sampled)
http - keepalive - multiple sockets x 434 ops/sec ±1.06% (9 runs sampled)
undici - pipeline x 829 ops/sec ±1.07% (13 runs sampled)
undici - request x 851 ops/sec ±0.54% (13 runs sampled)
undici - pool - request - multiple sockets x 2,998 ops/sec ±0.95% (32 runs sampled)
undici - stream x 853 ops/sec ±0.60% (13 runs sampled)
undici - dispatch x 858 ops/sec ±0.52% (13 runs sampled)
undici - noop x 859 ops/sec ±0.24% (13 runs sampled)

I think we should change the scheduling algorithm so that the case where PARALLEL < CONNECTIONS * PIPELINING is handled properly. Have you got any pointers @ronag?
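
One possible direction, as a hedged sketch only (pseudo-JavaScript; the field names are made up and are not undici's actual internals): dispatch each request to the least-busy connection instead of filling one connection's pipeline before moving on to the next.

// Hypothetical pseudo-code, not undici's real Pool implementation.
function pickClient (clients) {
  let best = null
  for (const client of clients) {
    // `inFlight` and `pipelining` are illustrative per-client counters;
    // undici's actual bookkeeping may use different fields.
    if (client.inFlight >= client.pipelining) continue // no free slot
    if (best === null || client.inFlight < best.inFlight) {
      best = client
    }
  }
  return best // null means every client is saturated, so queue the request
}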

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 17 (17 by maintainers)

Most upvoted comments

I suspect you are seeing an issue with the benchmark.