LocalAI: Nothing or a 404 on port 8080 after a seemingly working start

LocalAI version: 2.11.0

Environment, CPU architecture, OS, and Version: Windows 11 (latest), Xeon® w5-3435X, 256 GB RAM, 2x RTX 4000 (20 GB each); NVIDIA-SMI 550.65, Driver Version 551.86, CUDA Version 12.4; Docker Desktop via WSL (latest)

Describe the bug

docker run -p 8088:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-12

netstat -a -n -o | findstr "8080"  (or "8088": neither port is in use before the start; afterwards 8080 is held by Docker & WSL)

Looks a lot like https://github.com/mudler/LocalAI/issues/720. After startup the log says http://127.0.0.1:8080, but requests to 8080, 8088, or 0.0.0.0:… all return a 404.
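One thing worth double-checking given the port mapping above (a minimal probe sketch; `/v1/models` is assumed here to be exposed by this LocalAI build as part of its OpenAI-compatible API):

```shell
# `-p 8088:8080` publishes the container's port 8080 on HOST port 8088,
# so requests from the host must target 8088, not 8080.
HOST_PORT=8088

# Probe the OpenAI-compatible model listing; a 404 or no response here
# reproduces the problem described above.
curl -s "http://localhost:${HOST_PORT}/v1/models" || echo "no response on port ${HOST_PORT}"
```

With this mapping, the `http://127.0.0.1:8080` printed in the startup log refers to the port inside the container, not the one reachable from Windows.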

To Reproduce: see the description above.

Expected behavior: the API answers on the published port instead of returning a 404.

Logs: will try `--debug` or `DEBUG=true` tomorrow; tried the binary on Ubuntu 22.04 today but didn't get very far.
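For the record, the debug variant could be wired into the same docker invocation from the report (a sketch; `DEBUG=true` is the environment-variable form mentioned above):

```shell
docker run -p 8088:8080 --gpus all --name local-ai -ti \
  -e DEBUG=true \
  localai/localai:latest-aio-gpu-nvidia-cuda-12
```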

Additional context

About this issue

  • Original URL
  • State: open
  • Created 3 months ago
  • Comments: 18 (1 by maintainers)

Most upvoted comments

And here is the Copilot-adapted PS variant:

If you’re trying to run the curl command in PowerShell, you should use the following syntax:

curl http://localhost:8080/embeddings -Method POST -ContentType "application/json" -Body '{ "input": "Your text string goes here", "model": "text-embedding-ada-002" }'

In Windows PowerShell, curl resolves to the Invoke-WebRequest cmdlet, so the Unix options change: -X becomes -Method, a -H "Content-Type: …" header becomes -ContentType (other headers go in -Headers), and -d becomes -Body.
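For comparison, the Unix-style curl call those options map back to (same endpoint and payload as the PowerShell command above; the model name is just the example string from the thread):

```shell
# JSON payload, identical to the PowerShell -Body argument
BODY='{ "input": "Your text string goes here", "model": "text-embedding-ada-002" }'

# Unix curl: -X sets the HTTP method, -H adds a header, -d sends the body
curl -s -X POST http://localhost:8080/embeddings \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "request failed"
```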

Made my day.