iter-tools: Async iteration is slow

One of my goals for this library is to create APIs suitable for working with streaming text, e.g. streams of characters read from disk on demand. The problem is that the async iteration API is relatively slow because it involves Promises, which only settle on the microtask queue. Thus there is at least one tick of deferral overhead per character. Ouch.

The problem is the AsyncIterator API itself. The spec says an async iterator has a next() method which returns Promise<{done, value}>. The spec also pins down the execution order for promises: their reactions run as queued jobs, so at least one queued callback must fire before the consumer can observe the value.
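To make the cost concrete, here is a small illustration (the generator is just a stand-in for a character stream, not anything from iter-tools): even when every character is already in memory, the async iterator protocol hands each one back wrapped in a Promise.

```javascript
// A stand-in for a character stream whose data is actually all in memory.
async function* asyncChars(str) {
  yield* str;
}

const it = asyncChars('ab')[Symbol.asyncIterator]();

// Every step is a Promise, even though 'a' is available synchronously,
// so the consumer must defer to the microtask queue once per character.
console.log(it.next() instanceof Promise); // true
```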

If the problem is at this level, then I think so too is the solution. I think the solution is a new kind of iterator: an “asyncish” iterator which returns EITHER {done, value} or Promise<{done, value}>. It should be possible to reflectively distinguish between these two kinds of return from next(). It would also be necessary to define a new symbol, perhaps Symbol.for('asyncishIterator') to return objects conforming to that interface.
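A minimal sketch of what such an iterator could look like, assuming a hypothetical Symbol.for('asyncishIterator') and a thenable check to distinguish the two return shapes (none of this is specified anywhere):

```javascript
// Hypothetical "asyncish" iterator; the symbol name and the thenable check
// are assumptions for illustration, not part of any spec or of iter-tools.
const asyncishIterator = Symbol.for('asyncishIterator');

function makeAsyncishIterator(chars) {
  let i = 0;
  return {
    [asyncishIterator]() {
      return this;
    },
    next() {
      if (i < chars.length) {
        // The value is already available: return a plain step object.
        return { done: false, value: chars[i++] };
      }
      // Simulate a step that is only available asynchronously.
      return Promise.resolve({ done: true, value: undefined });
    },
  };
}

// Consumers distinguish the two shapes reflectively:
function isPromiseStep(step) {
  return step != null && typeof step.then === 'function';
}
```

A duck-typed `then` check is used here rather than `instanceof Promise` so that foreign promise implementations are handled as well.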

From there, there’s lots that could be done. A good consumer of asyncish iterators would be able to fall back to async-only iterators if only Symbol.asyncIterator were defined. A good producer of asyncish iterators would also define Symbol.asyncIterator to return an async-only iterator in case the consumer could not support asyncish iteration – it would be relatively trivial to create an iterator wrapper which forced {done, value} into a Promise if it was not in one already.
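The fallback wrapper really is trivial. Here is one possible shape (the function name is made up), relying on the fact that Promise.resolve passes promises through unchanged and wraps plain step objects:

```javascript
// Adapt a hypothetical asyncish iterator to the standard async iterator
// protocol, so consumers that only know Symbol.asyncIterator still work.
function toAsyncIterator(asyncishIter) {
  return {
    [Symbol.asyncIterator]() {
      return this;
    },
    next() {
      // Promise.resolve is a no-op for promises and wraps plain {done, value}.
      return Promise.resolve(asyncishIter.next());
    },
  };
}
```

The result is directly usable with `for await...of`, since it conforms to the standard protocol.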

There would be considerations around error handling, but I think none insurmountable.

Then finally there is the question of how to consume such iterators. There I think coroutines would be useful. The Babel transform that converts for-await-of loops into generators would be the ideal place to start, as it already works in the desired way. That generator code could be modified to perform the same conversion when generating the async versions of our methods.
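As a rough sketch of what such consumption code might look like (the function name is made up, and this is not the Babel transform's actual output), the key move is to await only when a step actually is a promise, so runs of synchronously available characters are consumed in a tight loop:

```javascript
// Drive a hypothetical asyncish iterator, paying the microtask cost only
// for steps that really are Promises; plain steps loop without deferring.
async function reduceAsyncish(iter, fn, initial) {
  let acc = initial;
  while (true) {
    let step = iter.next();
    if (step != null && typeof step.then === 'function') {
      step = await step; // asynchronous step: defer once
    }
    if (step.done) return acc;
    acc = fn(acc, step.value);
  }
}
```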

About this issue

  • State: open
  • Created 4 years ago
  • Comments: 35 (33 by maintainers)

Most upvoted comments

IT WORKS

Actually though, even better: I can use the macrome-2 branch to finish integrating with the current version of macrome and fold in the perf fixes. Then I can do my special branch dance (which only works when each commit is a package) to have the two branches bootstrap each other until, by the time the dance is done, I have a solution that is ready for the world. OK, let’s do that.