DearPyGui: run_async_function is prohibitively slow
DearPyGui Version: 0.6.42 Operating System: Windows 10
run_async_function() can be extremely slow depending on the work you ask it to do. This can make DPG unusable for computation-intensive applications: loading data through a DPG GUI forces you either to lock up the GUI completely (by not using run_async) or to wait far longer than a plain script would (when using run_async).
Here is a minimal example where a call to print grinds run_async_function to a halt:
from dearpygui.core import *
from dearpygui.simple import *
import time

def slow_callback(sender, data):
    c = 0
    for i in range(0, 1024):
        print(i)
        c += i

t1 = time.time()
slow_callback(None, None)
t2 = time.time()
print('main thread time:', t2 - t1)

set_threadpool_high_performance()
t1 = time.time()
run_async_function(slow_callback, None, return_handler=lambda s, d: print('dpg thread time:', time.time() - t1, flush=True))
start_dearpygui()
output:
main thread time: 0.007995367050170898
dpg thread time: 16.691800117492676
This also occurs for me when manipulating numpy arrays. I haven't tried lists and other Pythonic operations yet (related issues: #349, #370).
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 29 (20 by maintainers)
FYI
https://link.medium.com/kA0haB7Xgeb
@hoffstadt can I confirm that it's safe to call pretty much all DPG commands from within a Python thread? Thanks for your work on this great library.
Edit: it would also be nice if using threading were mentioned in the docs.
Went ahead and added the new threading system to 0.6.126, which is deploying now. Once it's deployed I will tag the release. Please read the changelog at that time.
FYI, I should be able to deploy a release tomorrow with the add_series and delete_series commands available with async support, to help you out until we get the new threading system in.
Yes, the high usage has been a known issue. If you have a function that doesn't involve DPG data or GUI calls, then running it via threading.Thread or multiprocessing.Process is a much better way to achieve parallelism.
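A minimal sketch of that suggestion (the function names and the queue-based hand-off are illustrative, not part of the DPG API): run the heavy computation on a plain threading.Thread and pass the result back through a queue.Queue, which a GUI callback could poll each frame.

```python
import threading
import queue

results = queue.Queue()

def heavy_work(n):
    # CPU-bound work that touches no DPG data or GUI calls
    total = sum(range(n))
    results.put(total)

# Launch the work on a plain Python thread instead of run_async_function
worker = threading.Thread(target=heavy_work, args=(1024,), daemon=True)
worker.start()

# In a real app a render/timer callback would poll the queue each frame;
# here we just wait for the result directly.
worker.join()
result = results.get()
print('result:', result)  # -> result: 523776
```

Because the worker never calls into DPG, there is no contention with the render thread; only the final queue read happens on the GUI side.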
@rjonnal Jah-on is correct, however I’m thinking it will be closer to 4-6 weeks. And we really appreciate the kind words.
The good news is, we are handling the Python threading issue first, and we regularly deploy to test PyPI for testing. As soon as we finish that part of 0.7, I will message you personally so you can start using it before 0.7 is officially released.
Are you on our discord channel by chance?
@hoffstadt this is great news! First, dearpygui is incredible; thank you for your effort. In scientific research we often need quick UIs for interacting with data, and even without Python threads this is usually fine. But for real-time instrumentation we need, at minimum, producer/consumer threads: the instrument as the producer and dearpygui as the consumer. I've tried packing the whole instrument loop into a single call to run_async, but ran into problems. I also tried someone's (yours, maybe) recommendation elsewhere to use setup_dearpygui(), a while loop containing render_dearpygui_frame(), and cleanup_dearpygui(), but if the instrumentation block in the loop takes more than 30 ms (which it often does), we fall below the recommended dearpygui frame rate.
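A hedged sketch of the producer/consumer split described above, with the dearpygui calls replaced by a stub (render_frame) so it runs anywhere: the instrument loop does its slow acquisition on its own thread and pushes samples into a queue, while the render loop only drains whatever is ready each frame, so a slow acquisition never stalls a frame.

```python
import threading
import queue
import time

samples = queue.Queue()
done = threading.Event()

def instrument_loop():
    # Producer: stands in for the slow (~30 ms per sample) instrumentation block
    for i in range(10):
        time.sleep(0.005)   # simulated acquisition time
        samples.put(i)
    done.set()

def render_frame(latest):
    # Stub for render_dearpygui_frame() plus any plot updates
    pass

threading.Thread(target=instrument_loop, daemon=True).start()

received = []
# Consumer: the GUI render loop; drains the queue without ever blocking on it
while not (done.is_set() and samples.empty()):
    try:
        while True:
            received.append(samples.get_nowait())
    except queue.Empty:
        pass
    render_frame(received[-1] if received else None)
    time.sleep(0.001)       # frame-pacing stand-in

print('received', len(received), 'samples')  # -> received 10 samples
```

The key design point is get_nowait(): the render loop's per-frame cost is independent of how long each acquisition takes, which keeps the GUI at full frame rate even when the instrument is slow.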
Any idea when 0.7 will be out?
Thank you, appreciate it.
@dkluis We can write a quick doc on replacing run_async in your code once we get to that release. I believe Python threading is actually easier to use.