zinit: [bug]: Constant CPU usage

What happened?

Hello, I noticed that even with a plain zinit .zshrc, each shell uses around 0.7% CPU when idle; this increases with more plugins.

Steps to reproduce

  1. Create a .zshrc with the following contents:
ZINIT_HOME="${XDG_DATA_HOME:-${HOME}/.local/share}/zinit/zinit.git"
source "${ZINIT_HOME}/zinit.zsh"
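One quick way to observe the idle usage reported above is to sample the shell's CPU share with `ps` (a generic sketch, not from the original report; `$$` is the PID of the shell being measured):

```shell
# Sample the CPU share of the current shell process.
# "%cpu=" suppresses the column header, so the output is just the number.
ps -o %cpu= -p $$
```

Watching this value (or `top -p $$`) while the shell sits idle should show it staying noticeably above zero with zinit loaded.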

Relevant output

No response

Screenshots and recordings

Zinit with no plugins: swappy-20220819_070853

Zinit with plugins: swappy-20220819_081746

zsh -df: swappy-20220819_071332

Operating System & Version

OS: linux-gnu | Vendor: unknown | Machine: x86_64 | CPU: x86_64 | Processor: unknown | Hardware: x86_64

Zsh version

zsh 5.9 (x86_64-unknown-linux-gnu)

Terminal emulator

alacritty

If using WSL on Windows, which version of WSL

No response

Additional context

No response

Code of Conduct

  • I agree to follow this project’s Code of Conduct

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Comments: 19 (12 by maintainers)

Most upvoted comments

@reportaman Noted. Thanks for the information!

I’ve finally got some time to dig into this tonight and over Labor Day weekend.

To answer your questions:

I started a new Zsh instance with debugging via:

exec zsh --interactive --login --verbose --xtrace

and saw this output over and over when idle.

There is another tool I like for debugging, though it can be cumbersome or frustrating to use if you aren’t familiar with debuggers. That said, observability is useful even if you’re just debugging your personal Zsh configuration.

https://zshdb.readthedocs.io/en/latest/

I’ve hunted down the code that I will investigate: Screenshot 2022-08-19 at 18 31 25

It seems to be constantly running in the background.

I can reproduce this on Linux with the current main branch at 68a6b42caf224b2ca2c172d58daf9faf5c86beb9. The culprit is the chpwd hook set for @zinit-scheduler, which ends up adding a new sched chain every time the directory is changed (this was added in e28cab88c94232350d46bc1d6b52cd43830e24b6):

eric@xrb /tmp/test % sched
  1 Tue Aug 15 20:16:12 ZINIT[lro-data]="$_:$?:${options[printexitvalue]}"; @zinit-scheduler following "${ZINIT[lro-data]%:*:*}"

eric@xrb /tmp/test % cd test

eric@xrb /tmp/test/test % sched
  1 Tue Aug 15 20:16:16 ZINIT[lro-data]="$_:$?:${options[printexitvalue]}"; @zinit-scheduler following "${ZINIT[lro-data]%:*:*}"
  2 Tue Aug 15 20:16:16 ZINIT[lro-data]="$_:$?:${options[printexitvalue]}"; @zinit-scheduler following "${ZINIT[lro-data]%:*:*}"

eric@xrb /tmp/test/test % cd ..

eric@xrb /tmp/test % sched
  1 Tue Aug 15 20:16:19 ZINIT[lro-data]="$_:$?:${options[printexitvalue]}"; @zinit-scheduler following "${ZINIT[lro-data]%:*:*}"
  2 Tue Aug 15 20:16:19 ZINIT[lro-data]="$_:$?:${options[printexitvalue]}"; @zinit-scheduler following "${ZINIT[lro-data]%:*:*}"
  3 Tue Aug 15 20:16:19 ZINIT[lro-data]="$_:$?:${options[printexitvalue]}"; @zinit-scheduler following "${ZINIT[lro-data]%:*:*}"

https://github.com/zdharma-continuum/zinit/blob/68a6b42caf224b2ca2c172d58daf9faf5c86beb9/zinit.zsh#L2486-L2498

I think a general fix would be to only add a sched entry if there’s not already one present in $zsh_scheduled_events (both here and at the top of the function). I’m not sure how the bug referenced on line 2495 would interact with that, though.
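A sketch of that guard, to make the idea concrete (this is not zinit's actual code: in zsh the pending entries live in the read-only `$zsh_scheduled_events` array and the check would use a pattern subscript such as `${zsh_scheduled_events[(I)*@zinit-scheduler*]}`; here the dedup logic is modeled as a portable function with the pending list passed in explicitly, so it can be read and run on its own):

```shell
# Hypothetical model of the fix: only register a new sched entry for the
# scheduler if none is already pending. In zsh, $pending would be derived
# from $zsh_scheduled_events instead of being passed as an argument.
schedule_if_absent() {
  pending=$1   # newline-separated list of currently pending sched entries
  handler=$2   # handler to (maybe) schedule
  case $pending in
    *"$handler"*) echo "skip: $handler already scheduled" ;;
    *)            echo "sched +1 $handler" ;;
  esac
}

schedule_if_absent "" "@zinit-scheduler"
schedule_if_absent '@zinit-scheduler following "..."' "@zinit-scheduler"
```

With an empty pending list the first call schedules the handler; the second call sees an existing entry and skips, which is exactly what would stop the per-`cd` accumulation shown in the transcript above.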

As a side note, I used perf stat -I 1000 --interval-clear -p <zsh pid> to look at detailed CPU usage on Linux, and got similar results to @poetaman.