llama-gpt: Stuck on '[Host [llama-gpt-api-cuda-ggml:8000] not yet available...'
My system has an i5-8400 and a GTX 1660 Super, and I'm running under WSL2 on Windows 10. I've also run into this issue on an Intel Mac. I get the message in the title endlessly, whether I run with --with-cuda or not.
I thought it might have something to do with my Pi-hole instance handling DNS, but after switching back to my regular router I still get this error for what seems like forever.
I tried changing the port to 8001 and changing the hostname to localhost directly, but I get the same thing. I also verified that nothing else is running on port 8000 on either my PC or my Mac.
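For anyone who wants to run the same check, something like this (ss on Linux/WSL2, lsof on macOS) is enough to confirm the port is free:

```bash
# Linux / WSL2: list anything listening on TCP port 8000
ss -ltnp | grep ':8000' || echo "port 8000 is free"

# macOS: the lsof equivalent
lsof -nP -iTCP:8000 -sTCP:LISTEN || echo "port 8000 is free"
```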
To avoid permission issues I've also been running sudo ./run.sh --model 7b --with-cuda, so there are no errors about storing models or anything like that.
Thanks!
About this issue
- State: closed
- Created 10 months ago
- Reactions: 9
- Comments: 15
Same issue running on an Ubuntu VM with ./run.sh --model 7b.
I got the same issue, but with ./run.sh --model 70b. I'm running Windows 10 as well… but should that matter when it's containerized? I let it run all night and still get "not yet available".
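While the "not yet available" loop is printing, it's worth looking at what the API container itself is doing; the first model download alone can take a long time. The service name below is only a guess taken from the hostname in the error message, so check what `docker compose ps` actually reports (older setups may need `docker-compose` instead):

```bash
# List the services and confirm what the API service is actually called
docker compose ps

# Follow the API container's logs; the service name here is assumed from the error message
docker compose logs -f llama-gpt-api-cuda-ggml
```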
./run.sh --model code-13b is the only one that works for me so far.
With ./run.sh --model code-34b I get all the way to the website, but the form is missing and I can't chat.
I ran the same commands on two different computers; it worked fine on the first but failed on the second with:
I checked the files in the models/ folder, and on the second computer the file was much smaller; indeed, I had a network failure around that time. It could be useful to check the downloaded models against some hash.

Did any of you manage to figure this out? I'm still unable to resolve this issue and I'm getting the same thing.
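To follow up on the checksum idea a couple of comments up, here is a minimal sketch of what that check could look like. Both the expected hash and the filename are placeholders; the real SHA-256 would have to come from wherever the model is published, and macOS users would use `shasum -a 256` instead of `sha256sum`:

```bash
# Compare a downloaded model against a known-good SHA-256.
# Both the expected hash and the filename below are placeholders --
# substitute the published hash and the actual file in your models/ folder.
EXPECTED="<published-sha256-for-this-model>"
ACTUAL=$(sha256sum models/llama-2-7b-chat.bin | awk '{print $1}')

if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "model checksum OK"
else
  echo "model checksum MISMATCH - consider re-downloading"
fi
```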
So I did try what @BeachUnicorn tried, running ./run.sh --model code-13b, and I did get to localhost. However, I get an internal error when entering a prompt.

I noticed that when running the 7b model, my RAM continues to climb while that warning is there.
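If it's the model being loaded that's eating the memory, `docker stats` is a simple way to watch that climb per container rather than system-wide:

```bash
# Live per-container CPU/memory usage; handy for watching whether the API
# container is still loading the model while the "not yet available" loop prints.
docker stats
```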
Appears to work as expected if you wait 😅