LocalAI: Problem with TTS in 2.8
We are running LocalAI in Docker but have problems with all of the TTS models described in TTS in LocalAI.
When calling the following curl command:
curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{ "backend": "bark", "input":"Hello!" }' | aplay
we get the following error:
stderr OSError: /opt/conda/envs/transformers/lib/python3.11/site-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZN2at4_ops10zeros_like4callERKNS_6TensorEN3c108optionalINS5_10ScalarTypeEEENS6_INS5_6LayoutEEENS6_INS5_6DeviceEEENS6_IbEENS6_INS5_12MemoryFormatEEE
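For reference, an undefined-symbol error from libtorchaudio.so (the mangled name here resolves to at::_ops::zeros_like::call) usually points to a torch/torchaudio version or ABI mismatch inside the backend's Python environment. A quick way to check this from the host (the container name is a placeholder; the conda env path is taken from the error message above):

# Print the torch and torchaudio versions installed in the "transformers" conda env
# used by the Python TTS backends. Replace "local-ai" with your container's name.
docker exec -it local-ai /opt/conda/envs/transformers/bin/python -c "import torch, torchaudio; print(torch.__version__, torchaudio.__version__)"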
This error is thrown with bark, coqui, and Vall-E-X. Piper works.
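For comparison, a piper request of roughly the following shape works; the voice model name here is only an illustrative example and must match an .onnx voice actually available to your LocalAI instance:

curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{ "backend": "piper", "model": "en-us-kathleen-low.onnx", "input": "Hello!" }' | aplay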
LocalAI version: v2.8.0-cublas-cuda12-ffmpeg
Environment, CPU architecture, OS, and Version: Linux aifb-bis-mlpc 5.15.0-92-generic #102-Ubuntu SMP Wed Jan 10 09:33:48 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
To Reproduce: Run the v2.8.0-cublas-cuda12-ffmpeg LocalAI image on a server and execute the curl command above.
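A minimal reproduction sketch, assuming the quay.io image, the default port mapping, and a local models directory (the GPU passthrough flag and volume path are assumptions and may differ from our actual deployment):

# Run the affected image
docker run -p 8080:8080 --gpus all -v $PWD/models:/build/models quay.io/go-skynet/local-ai:v2.8.0-cublas-cuda12-ffmpeg

# Then issue the TTS request
curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{ "backend": "bark", "input":"Hello!" }' | aplay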
Expected behavior: LocalAI should return an audio file, not an error.
Logs: I attached a log file: _Shared_LocalAI_logs.txt
About this issue
- State: closed
- Created 5 months ago
- Comments: 17 (9 by maintainers)
Commits related to this issue
- fix(tts): fix regression when supplying backend from requests fixes #1707 Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com> — committed to mudler/LocalAI by mudler 5 months ago
- fix(tts): fix regression when supplying backend from requests (#1713) fixes #1707 Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com> — committed to mudler/LocalAI by mudler 5 months ago
We tested it with the master branch (master-cublas-cuda12-ffmpeg). Using the standard curl command as input, we got the following error:
curl:
Please open separate tickets for it with full logs and how to reproduce it, thanks!
lol, I read that line at least four times before writing it and it looked legit
ouch, good catch, this is a regression introduced in https://github.com/mudler/LocalAI/pull/1692.
Sorry that it took me forever to realize that the images weren’t pushed, then an equal amount of time to build a docker image from your branch.
I just ran a quick test for vLLM and the model loaded successfully, so I’d say it’s fixed but maybe it’s better to wait a bit more and confirm with the images from the master branch.
Sorry, the error seemed too suspiciously similar, and I thought it might be the same origin.
Was coming back to report the same as @Jasonthefirst.
And I opened #1710 for vLLM.
@golgeek I’ve only tried with TTS models (vall-e-x specifically), can you confirm that? Please open another issue for vLLM.