morphdom: Walking through the DOM is slow

You state that walking the DOM isn’t slow, but in reality it is. Accessing properties like firstChild or childNodes is vastly slower than reading a lightweight in-memory representation of the tree (i.e. a virtual DOM).
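To make that concrete, here is a rough sketch (my own illustration, not code from morphdom or any particular virtual DOM library) of what the two kinds of walks look like: the virtual tree is nothing more than plain JavaScript objects, while every step through the real DOM goes through the browser’s DOM bindings.

```js
// Walking the real DOM: every firstChild / nextSibling read crosses into the
// browser's DOM implementation, which is far more expensive than a plain
// JavaScript property access.
function countDomNodes(node) {
  let count = 1;
  for (let child = node.firstChild; child; child = child.nextSibling) {
    count += countDomNodes(child);
  }
  return count;
}

// Walking a lightweight virtual tree: just object property reads and array
// iteration, entirely inside the JavaScript heap.
// Assumed (hypothetical) vnode shape: { tag, props, children }
function countVNodes(vnode) {
  let count = 1;
  for (const child of vnode.children || []) {
    count += countVNodes(child);
  }
  return count;
}
```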

Furthermore, handling keyed nodes becomes problematic when working directly with DOM nodes. There simply isn’t a performant way of doing this when the DOM is used as the source of truth.
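As a rough illustration (hypothetical code, not morphdom’s actual implementation) of why keyed children are awkward when the DOM is the source of truth: with a virtual tree the keys are already plain data, while with live DOM children you first have to walk the real nodes and read the keys back out of attributes just to build the same index.

```js
// Virtual DOM: building a key -> vnode index is a cheap walk over plain objects.
function indexVNodesByKey(children) {
  const byKey = new Map();
  for (const child of children) {
    if (child.key != null) byKey.set(child.key, child);
  }
  return byKey;
}

// DOM as source of truth: building the same index means touching every live
// element and reading its key back out of an attribute ('data-key' is just an
// assumed convention here, not a morphdom API).
function indexDomChildrenByKey(parentEl) {
  const byKey = new Map();
  for (let el = parentEl.firstElementChild; el; el = el.nextElementSibling) {
    const key = el.getAttribute('data-key');
    if (key != null) byKey.set(key, el);
  }
  return byKey;
}
```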

Have you seen this too? http://vdom-benchmark.github.io/vdom-benchmark/

Furthermore, you shouldn’t really be comparing yourself to the virtual-dom library and presenting it as the de facto virtual DOM standard; it’s one of the slowest virtual DOM implementations out there in Safari (though it’s noticeably faster than morphdom in Chrome, according to your own benchmark).

About this issue

  • Original URL
  • State: closed
  • Created 9 years ago
  • Reactions: 1
  • Comments: 20 (5 by maintainers)

Most upvoted comments

For context, this library is at the core of Phoenix LiveView (comparable to Livewire for Laravel). The total diff is much smaller than in equivalent SPA apps, since users server-side render their page and sprinkle small reactive parts into it. That makes this library a perfect candidate, since a virtual DOM only wins once I/O is swamped (due to large changes); otherwise the real DOM is better than, or on par with, a virtual DOM.

https://github.com/phoenixframework/phoenix_live_view

Not much has changed. The real DOM’s slowness is inherent to its design, so it can’t really be made faster. trueadm now works on React, but React’s raw speed has improved only slightly in five years; it isn’t really the goal. The worst IE you need to support now is IE11, which is much better than IE8 or IE9, but still crap.

@patrick-steele-idem true, but lists of data (the most common use case for any useful templating engine, really) typically depend on the data model being sorted. If that data model changes regularly and its keys (or sorted items) change often, then you have a difficult problem to solve. Like I said in my email, this is a problem that is far bigger than a GitHub issue and it will take many competent minds to fix, which is why there’s a Skype group set up that I’d love you to join (and others!).

It might not be a big use case for eBay, but why limit our implementations simply to what our own companies desire? Why not make our implementations scale to what other companies might desire too?

@kesne this topic was primarily about the readme stating that “No, the DOM data structure is not slow.” I completely disagree with that statement. The DOM data structure is very slow, regardless of how and when you access it. I know this from my experience building Inferno, which, as I stated above, is a very unique twist on the whole “virtual DOM vs DOM” problem.

I can also give a use case where React wasn’t fast enough for us, and no other framework was either. I work in the world of financial trading applications, where a price being off by 250ms is a critical bug (it could cost banks millions). So it’s massively important for us to ensure our applications (on the web) update as fast as practically possible. When you have to update 250-500 elements on a page every 250ms, it becomes something of a burden for many of the implementations out there.

You might say this is an isolated problem for us, but it’s not at all. Almost every single trading application on the web has to either limit updates to 1s (which clients are unhappy about) or implement its entire trading-price layer in vanilla JS. Given that trillions of dollars pass through trading applications every day, this is very much a real-world problem that countless banks and tech companies have tried to solve.

If the real reason for morphdom is that it’s designed to support legacy applications still using jQuery (which is understandable), then that is completely different from the pitch in the readme, where it comes across as morphdom being best in class for performance compared to a virtual DOM.

You’re right in that (well, we both hope) browsers will get better at the DOM abstraction and improve performance there. In regards to the micro-benchmark talk, I don’t believe that’s entirely the case: the fact of the matter (which no one can really debate) is that creating and accessing plain object literals is the most efficient way to handle anything in JavaScript. React was a poor implementation of that idea because it introduced large prototype chains and never really considered real-world applications in terms of static vs dynamic content.
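To illustrate the “literal objects” point (my own sketch; neither React’s nor Inferno’s actual node shape): a vnode built as a plain, consistently shaped object literal gives the engine one predictable shape to optimise for, whereas class-based nodes carry constructor work and a prototype chain with every node.

```js
// Plain, consistently shaped object literal: every vnode has the same shape,
// so creating and reading them is about as cheap as JavaScript gets.
function h(tag, props, children) {
  return {
    tag,
    props: props || null,
    children: children || null,
    key: props && props.key != null ? props.key : null,
  };
}

// A class-based node carries per-instance constructor work, and properties
// defined on the prototype (like this getter) require a prototype-chain lookup.
// (A simplified stand-in, not any real library's implementation.)
class ElementNode {
  constructor(tag, props, children) {
    this.tag = tag;
    this.props = props || null;
    this.children = children || null;
  }
  get key() {
    return this.props && this.props.key != null ? this.props.key : null;
  }
}
```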

The problem, unfortunately, is that you’re replicating the work of a virtual DOM but using the DOM as the source of truth. That isn’t necessary, and it isn’t what real applications need. Take my experience and environment (I write applications for the financial trading market), where around 60% of the DOM is static; that figure is actually on the low side. Or take the dbmonster benchmark, where only 35% is static; even there, if you conceptualise the “application” rather than individual nodes in the DOM, you’ll notice that most of it breaks up into “fragments”. Take this concept (as my Inferno library does) and you’ll see vast performance benefits over anything else out there (by a huge margin too).
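A rough sketch of the “fragments” idea being described here (hypothetical code under my own assumptions, not Inferno’s actual API): the static structure of a fragment is created once, so an update only has to compare and patch the few dynamic values instead of re-walking the whole subtree.

```js
// A "fragment": static markup built once, with references kept only to the
// dynamic holes that can ever change.
function createPriceFragment(initial) {
  const root = document.createElement('div');
  root.innerHTML = '<span class="symbol"></span>: <span class="price"></span>'; // static, built once
  const symbolEl = root.children[0];
  const priceEl = root.children[1];
  symbolEl.textContent = initial.symbol; // static for the lifetime of the fragment
  priceEl.textContent = initial.price;

  let lastPrice = initial.price;
  return {
    el: root,
    // Only the dynamic value is diffed; the static structure is never revisited.
    update(next) {
      if (next.price !== lastPrice) {
        priceEl.textContent = next.price;
        lastPrice = next.price;
      }
    },
  };
}

// Hypothetical usage:
// const row = createPriceFragment({ symbol: 'EURUSD', price: '1.0842' });
// document.body.appendChild(row.el);
// row.update({ symbol: 'EURUSD', price: '1.0845' });
```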

I feel too many developers are jumping in with a single agenda, simply to attract a certain crowd. That is not only damaging our scene, it’s not even remotely productive. There’s value in mixing both the virtual DOM (ignore the coined phrase; there’s much more to it at a low level) and the actual DOM itself. Let’s not lose sight of our true objective, which is to make web applications faster.