hyper: Slow for big logs
Problem
I am using HyperTerm version 0.7.1 (0.7.1.36) together with SSH and tmux. When using `cat` or `docker logs <xyz>`, HyperTerm becomes extremely slow and then hangs completely. HyperTerm is then unusable.
As I can still attach to the existing tmux session with OS X's Terminal or iTerm2, I know for sure that the tmux session is fine. HyperTerm seems to have problems rendering large amounts of output.
Can anyone else reproduce this, or is it just me having this issue?
Some data:
- HyperTerm version: 0.7.1 (0.7.1.36)
- OS X version: 10.11.2 (15C50)
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 52
- Comments: 48 (16 by maintainers)
I did a little test and ran both Hyper and Terminal.app with long outputs, mostly to check whether it's the number of lines, rows, or total characters that slows Hyper down, as well as to test different characters (I thought it might be parsing the output for highlighting or similar).
Results seem to indicate that it's mostly a function of the total number of characters. But I'm pretty sure there's something fundamentally wrong, as I now have a Helper process using 15 GB of RAM.
The times are surprisingly close to a linear relationship. Digging through a profile with Instruments didn't point me to anything specific, but it appeared as if there was an awful lot of notification subscribing and calling going on. I bet a strategically placed debounce would improve performance dramatically.
Also: emojis count as 10 characters.
Tagging #94, #555, #881 and #1169 as probably identical issues.
Performance will be improved by replacing `hterm` with `xterm.js`. But maybe not as much as expected for this type of extreme benchmarking. I tried @MatthiasWinkelmann's example with a zero added (200M characters!) on our WIP `xterm` branch: `ruby -e 'print ("." * 2000 + "\n") * 100000'`. It took 48 s to show all lines versus 8 s for the native Terminal app. But it is robust: Hyper didn't hang like the current release does.
@Marahin I understand your concerns, but `cat`ing over 1M characters in a terminal is generally a user's mistake. In that case the terminal should be robust, not necessarily performant. IMO, a real-life example is `cat /etc/services` (almost 14k lines). On the `xterm` branch: 154 ms. Totally acceptable and promising 😍
Thank you for your patience ❤️ Be sure that we are taking this issue seriously.
This is the worst. If you work as a programmer, or in any role where you sometimes have to access logs, it's just unusable. `cat log/development.log` and you're done; you have to take another 2 minutes to get everything going as it was before. 👎

+1, no issue with version 1.2.1. Versions 1.3.0 and 1.3.1 keep freezing on Mac OS 10.12.3.
Has there been any progress on this? I’m encountering this issue frequently and it’s rather disruptive to my workflow. Looks like there hasn’t been any motion in the xterm branch in about 2 months.
I've investigated this a bit further. Doing a `cat bigfile.manylines.txt` creates a stream of about 1,000 actions per second. I'm pretty sure these should be buffered down to <= 60 FPS as early as possible, but definitely before they hit the renderer. The following is a CPU profile captured with the developer tools.
I tried to implement some sort of debouncing in https://github.com/zeit/hyper/blob/master/app/session.js#L56 but unfortunately couldn’t get it to work reliably.
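For illustration, here is a minimal sketch of the kind of buffering being discussed, NOT Hyper's actual code: high-frequency PTY data chunks are coalesced and flushed at most once per ~16 ms (roughly 60 FPS) before they reach the renderer. `makeWriteBuffer` and `onFlush` are hypothetical names standing in for the dispatch in app/session.js.

```javascript
// Sketch: coalesce rapid write chunks and flush them at most once per
// interval, so thousands of PTY events per second become ~60 renderer
// updates per second. Names are illustrative, not Hyper's API.
function makeWriteBuffer(onFlush, intervalMs = 16) {
  let pending = '';
  let timer = null;

  return function write(chunk) {
    pending += chunk;
    if (timer === null) {
      // First chunk in this window: schedule a single flush.
      timer = setTimeout(() => {
        timer = null;
        const data = pending;
        pending = '';
        onFlush(data); // one batched dispatch instead of many
      }, intervalMs);
    }
  };
}
```

Usage would look like `const write = makeWriteBuffer(data => term.write(data));`, after which any burst of `write(chunk)` calls within one interval produces a single flush.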
Another possibly easy performance win may be to swap the latter two terms in https://github.com/zeit/hyper/blob/1b6d925524f30148ead6c46326a0d47964d120b5/lib/hterm.js#L158: `runes(text)` takes 15 to 200 times as long as the regex in `containsNonLatinCodepoints(text)` in a short test I did (35,650 ms vs. 150 ms for one very long line, 45 ms vs. 3 ms for 10,000 iterations on a 20-char string). As can be seen in the CPU profile, it represents about half of the CPU time spent echoing text to the terminal.

In a CPU profile I created with the macOS Activity Monitor, I also saw a lot of activity related to memory allocation and garbage collection, which is possibly caused by `runes` as well, since it creates an array on each call. But I'm unsure whether that's already included in the Chrome profile.

Probable duplicates of this are #474, #1040, #1044, #571, #574, #1237, #1221, as well as the ones tagged two messages up, and possibly #1157. cc @dotcypress
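To illustrate why the ordering matters, here is a simplified sketch of the short-circuit idea (hypothetical names; the regex stands in for `containsNonLatinCodepoints`, and `Array.from` stands in for the `runes` package): with the cheap regex test on the left, the expensive per-character split on the right is never evaluated for plain-ASCII text.

```javascript
// Sketch of the reordering: cheap check first, expensive check second.
// For pure-ASCII input the regex fails fast and the costly split into
// code points (stand-in for runes()) is skipped entirely.
const NON_ASCII = /[^\u0000-\u007f]/;

function needsWideCharHandling(text) {
  // && evaluates left to right: Array.from only runs when the regex
  // has already found a non-ASCII character.
  return NON_ASCII.test(text) && Array.from(text).length !== text.length;
}
```

The `Array.from(text).length !== text.length` comparison is true exactly when the string contains astral code points (UTF-16 surrogate pairs), which is the case that needs special width handling.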
Thank you for the information! I wasn't up to speed on the `canary` branch. Just updated and ran a quick test, and (so far) the issue seems to be resolved. Appreciate your help.

For me the performance on Hyper 1.3.0 is worse than it was on 1.2.1.
It froze again completely on a large input. This made me step away from Hyper again because I cannot use it anymore. I hope this can be fixed.
I am using Ubuntu 16.10 and Hyper 1.3.0
Same thing happens without tmux, just by displaying a file with very long content via cat. I noticed that memory usage goes towards my machine’s maximum when the heavy slowdowns start (so that’s probably why).
Related issue https://github.com/zeit/hyperterm/issues/571
I do NOT blame anybody for anything. I really understand this whole issue with the current release, and this is why I wrote my comment. Current performance (due to Chromium's `hterm`) is so bad that we are currently moving to `xterm.js`, and this is really promising. I have performance in mind for the real-world use case and robustness for the edge case (what I clumsily called a user's mistake).

I'm very sorry to ask this @chabou, but are you out of your mind? A little over 4 (literally: FOUR) seconds compared to Terminal's 88 milliseconds? That is roughly 45x slower than the default terminal. Do you really blame the user for whining about that? 😮
The `xterm` branch seems way better and closer to what you would expect in 2017 on modern computers, but I believe the whole issue was about the [then] current version (which I suppose was NOT the `xterm` one).

Awesome work @MatthiasWinkelmann. I was just discussing with @nw that another performance win will be to bypass the Redux reducer (which creates a temporary `Write` object). Instead, we can emit as part of a side effect and then subscribe from the `Term` directly, pub/sub style.

You can find instructions here: https://zeit.co/blog/canary#hyper
@hharnisc as another iteration of this, it'd be really cool to merge the actions into one object, and merge the payload?
I tried buffering in a couple different points in the write pipeline. But this iteration has been the most successful.
It's super simple: if the function isn't rate-limited, call it immediately; otherwise, queue it up.
I tested this with `cat largefile.txt`, and it didn't lock up the UI for me (the current release did). It takes a bit for the text to stream out when buffering, so I'm not sure if this is good enough. There is still more that could be done in terms of optimization.

@MatthiasWinkelmann would you be up for trying this again with this fork/branch? https://github.com/hharnisc/hyper/tree/buffer-session-data
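The "call immediately if not rate-limited, otherwise queue" pattern described above might look like the following sketch (hypothetical names, not the fork's actual code; for brevity, queued calls are coalesced to the latest one, whereas a real terminal write buffer would concatenate the queued data instead of dropping it):

```javascript
// Sketch of a simple rate limiter: the first call runs at once; calls
// arriving inside the window are remembered and replayed when the
// window expires. A real write pipeline would accumulate chunks
// rather than keep only the latest arguments.
function rateLimit(fn, windowMs) {
  let limited = false;
  let queued = null;

  function invoke(args) {
    limited = true;
    fn(...args);
    setTimeout(() => {
      limited = false;
      if (queued !== null) {
        const next = queued;
        queued = null;
        invoke(next); // replay the queued call, reopening the window
      }
    }, windowMs);
  }

  return (...args) => {
    if (limited) {
      queued = args; // inside the window: queue instead of calling
    } else {
      invoke(args); // not rate-limited: call immediately
    }
  };
}
```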
Thanks a lot @MatthiasWinkelmann. This will certainly allow us to improve. CCing @dotcypress
For macOS, looking at Terminal and iTerm, there is the notion of a scrollback buffer, where you can limit the number of lines you can scroll back to view. Performance would likely increase if Hyper only had to render the most recent n lines rather than all of them.

@danielkalen I've just tested it with the current canary version, and the situation seems much improved.
The test above, with 10000x1000x🍷, which previously took 137 seconds is now done in 15 seconds, a 10-fold increase in performance.
iTerm does the same in 25 seconds, whereas Terminal.app is done almost instantly. So there’s obvious room for improvement, but it’s far less likely to actually interfere with work now.
iTerm does, however, better handle interruptions (control+c): it quits instantly, while Hyper finishes the output before handling the interruption. But that’s tracked in #555, #1121, and #1484
Hi guys, this issue is still present in 1.4.8.

@insanityfarm `xterm` has been merged into our `v2` branch, and this branch was renamed `canary`. You don't have to build Hyper from source anymore to use `xterm`; you only have to set `canary` as the update channel in your config.

@Marahin: In yesterday's intro to the live coding on Twitch, @rauchg said they were going to tackle this issue in the session and that he already knew what needed to be done to fix it. I didn't follow the whole session, so I'm unsure whether they actually managed to fix it.
Someone, please. This provides a terrible experience to anyone who tries out Hyper and finds it unable to `cat` a file because of a freeze. 😢 It's been here for months with no fix.