cubeb: WASAPI backend doesn't obey latency parameter
I believe this is a limitation of WASAPI shared mode: no matter the requested buffer size, callback events still come in at the default device period.
For example, on my computer (Windows 10, Cirrus Logic CS4208 speakers set to 24-bit 48000 Hz), the default device period is 10 milliseconds / 480 frames. In test_audio, which requests a latency of 4096 frames, the first callback gets to fill the entire buffer (4096 frames), but each subsequent callback gets a number of frames equivalent to the rate-adjusted device period.
One strategy that seems to work is to ignore WASAPI refill events (i.e., don't call IAudioRenderClient::GetBuffer) until the desired number of frames can be written. Of course, if the buffer size equals the desired number of frames, this causes audio glitches because the audio engine runs out of data. The default buffer size seems to be twice the period plus some margin, so I tried a buffer that's a little more than twice the requested latency, and that seemed to fix the glitches.
About this issue
- State: closed
- Created 7 years ago
- Comments: 58 (9 by maintainers)
I have a release .exe and am using ALOGV() to log my values asynchronously. They aren't much different from the debug values I have been reporting. At low latency I'm seeing an average of 18.2 ms round-trip latency. I'll run it several times over the weekend and accumulate results in a table. I have minimized the logging to one number per test: elapsed time, i.e. latency.
The main thing is that now others can build this version of test_tone.exe and generate their own logs on different hardware. My code is here, only two files are changed: cubeb_wasapi.cpp and test_tone.cpp. It creates a log file named w_latency_log.txt in the same folder as test_tone.exe. The exe overwrites the log file each time it runs. The exe runs for just over 60 seconds to accumulate 60 latency values. If you like I can post an exe here or elsewhere.
This code always selects low latency if it is available. It still chooses the period based on the minimum period of the default output device. I have been using the same built-in hardware for input and output, as it is the only hardware currently supported by Windows for low latency.
Is there a way to stop the logging from including .cpp file line numbers? It makes it harder to parse the results for analysis.