solid: Duplicate edges in signal graph
If I understand correctly, when a signal value is accessed, there is no check for whether the edge already exists:
- Accessing the current() value: https://github.com/ryansolid/solid/blob/8fe0c11360387a325d64e040a756585a69f6da63/src/signals.ts#L189-L192
- Creating an edge: https://github.com/ryansolid/solid/blob/8fe0c11360387a325d64e040a756585a69f6da63/src/signals.ts#L405-L439
Without such a check, use cases that involve sorting/filtering in data tables will create insanely huge graphs. For example, a simple sort() on 5000 items performs ~110000 getter reads, each of which would create an edge:
```js
// Count how many times the getter is read during a sort.
let reads = 0;

class Item {
  _a = Math.random();
  get a() {
    reads++;
    return this._a;
  }
}

const items = [];
for (let i = 0; i < 5000; i++) {
  items.push(new Item());
}

items.sort((a, b) => a.a - b.a);
console.log(reads); // ~110000 getter reads
```
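To make the concern concrete, here is a minimal sketch (hypothetical names, not Solid's actual implementation) of a read path that unconditionally records an edge. If a computation's comparator read each signal this way, every one of those getter reads above would append a new, possibly duplicate, dependency:

```javascript
// Minimal sketch of unconditional edge creation on read.
// All names here are hypothetical, not Solid's actual API.
let currentComputed = null;

function createSignal(value) {
  const observers = []; // edges to dependent computations
  return {
    get() {
      // No duplicate check: every read inside a computation adds an edge.
      if (currentComputed !== null) observers.push(currentComputed);
      return value;
    },
    observers,
  };
}

function runComputed(fn) {
  const computed = { deps: [] };
  const prev = currentComputed;
  currentComputed = computed;
  try {
    return fn();
  } finally {
    currentComputed = prev;
  }
}

const s = createSignal(42);
runComputed(() => s.get() + s.get() + s.get());
console.log(s.observers.length); // 3 edges for 3 reads of the same signal
```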
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 1
- Comments: 32 (6 by maintainers)
Thanks, everyone, for this discussion. It actually ended up productive in the end. I did simple last edge detection here, and will look at more sophisticated methods in the future.
@luwes I think that it is possible to get rid of all duplicates from observables to computeds at the cost of one additional property on observables; it can also improve reset performance (in most cases it won't be necessary to remove edges from observables to computeds). Roughly:

1. When a computed starts executing, assign it an execution id via `nextId++` and store it in the execution context (on the stack) of this computed.
2. When an observable is read, stamp it with the current execution id in `lastAccessedBy` to perform a simple check for duplicates.
3. Save and restore `lastAccessedBy` around nested computeds to prevent edge cases when nested computeds could overwrite it.
4. Create an edge only when `lastAccessedBy !== computed execution id`.
5. When an observable is removed, assign `lastAccessedBy` to -1; if it still should remain in the list, assign -2.
6. Skip observables with -2 in `lastAccessedBy`.

Just an idea, maybe I've missed something, maybe it isn't worth it 😃
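A sketch of that scheme as I read it (all names hypothetical, nesting and removal steps omitted): each computed run gets a fresh id, each observable is stamped with the id of the run that last read it, and an edge is created only when the stamp differs from the current run's id:

```javascript
// Sketch of execution-id deduplication; names are hypothetical.
let nextId = 0;
let currentComputed = null;

function createSignal(value) {
  const signal = {
    lastAccessedBy: -1, // the one extra property per observable
    get() {
      if (currentComputed !== null &&
          signal.lastAccessedBy !== currentComputed.execId) {
        // First read in this run: stamp the signal and record the edge.
        signal.lastAccessedBy = currentComputed.execId;
        currentComputed.deps.push(signal);
      }
      return value;
    },
  };
  return signal;
}

function runComputed(fn) {
  const computed = { execId: nextId++, deps: [] };
  const prev = currentComputed;
  currentComputed = computed;
  try { fn(); } finally { currentComputed = prev; }
  return computed;
}

const s = createSignal(1);
const c = runComputed(() => s.get() + s.get() + s.get());
console.log(c.deps.length); // 1: all duplicates within one run are skipped
```

Unlike last-edge detection, this also catches non-consecutive duplicate reads, at the cost of one stamp write per first read and the extra property.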
EDIT: Also, it is possible to detect when there aren't any nested computeds by checking that `nextId` isn't greater than the current execution id + 1. When there aren't any nested computeds, we can skip step 3.

EDIT2: -1

I don't usually find it productive to respond to Boris, but for Ryan's benefit, and anyone else interested: I benchmarked this design decision pretty thoroughly back when I made it. De-duping edges during creation requires a hashtable or Set, and at the speeds we want, those are both slow, or at least they were when I did the benchmarking. At the time, a basic read ran about 300% slower. The relevant metric isn't the number of edges but the number of reads. It was considerably faster overall to create a duplicate edge than to detect whether it was a duplicate. That remained true even after accounting both for the time to traverse the extra edges during updates and for the extra GC time of creating and freeing the larger dependency arrays.
In fact, this, plus the fact that the S data structures were designed not to need try/finally statements in the update loop, was where S got most of its speed over other reactive libraries.
@leeoniya it was a while ago that I looked at cellx. At the time, the try/finally statements in its core loops gave S a considerable advantage. That was before TurboFan made try/finally optimizable, though, so the picture may have changed since then.