LocalAI: rpc error: code = Unknown desc = unimplemented

What went wrong? Is this a settings problem?

quay.io/go-skynet/local-ai:master-cublas-cuda11
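For context, a typical way to run this image looks like the following (the flags are reconstructed assumptions, not taken from this report; only the image tag is verbatim, and the models path and thread count match the log below):

docker run -d --name local-ai \
  -p 8080:8080 \
  -v /path/to/models:/llm-model-volume \
  -e MODELS_PATH=/llm-model-volume \
  -e THREADS=4 \
  -e DEBUG=true \
  quay.io/go-skynet/local-ai:master-cublas-cuda11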

request:

{
    "model": "llama-7b-hf",
    "messages": [
        {
            "role": "user",
            "content": "Hello! What is your name?"
        }
    ]
}

response:

{
    "error": {
        "code": 500,
        "message": "rpc error: code = Unknown desc = unimplemented",
        "type": ""
    }
}
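For reference, the same request as a curl call (host and port assumed from the Fiber banner in the log below):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-7b-hf",
    "messages": [{"role": "user", "content": "Hello! What is your name?"}]
  }'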

log:

@@@@@
Skipping rebuild
@@@@@
If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"
see the documentation at: https://localai.io/basics/build/index.html
Note: See also https://github.com/go-skynet/LocalAI/issues/288
@@@@@
CPU info:
model name      : Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
CPU:    AVX    found OK
CPU:    AVX2   found OK
CPU: no AVX512 found
@@@@@
5:38AM DBG no galleries to load
5:38AM INF Starting LocalAI using 4 threads, with models path: /llm-model-volume
5:38AM INF LocalAI version: 12fe093 (12fe0932c41246914e455c4175269a431fb8cf60)
5:38AM DBG Extracting backend assets files to /tmp/localai/backend_data

 ┌───────────────────────────────────────────────────┐ 
 │                   Fiber v2.48.0                   │ 
 │               http://127.0.0.1:8080               │ 
 │       (bound on host 0.0.0.0 and port 8080)       │ 
 │                                                   │ 
 │ Handlers ............ 32  Processes ........... 1 │ 
 │ Prefork ....... Disabled  PID ................ 14 │ 
 └───────────────────────────────────────────────────┘ 

6:21AM DBG Request received: 
6:21AM DBG Configuration read: &{PredictionOptions:{Model:llama-7b-hf Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:512 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0} Name: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:512 F16:false NUMA:false Threads:4 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false TensorSplit: MainGPU: ImageGenerationAssets: PromptCachePath: PromptCacheAll:false PromptCacheRO:false Grammar: PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} SystemPrompt:}
6:21AM DBG Parameters: &{PredictionOptions:{Model:llama-7b-hf Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:512 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0} Name: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:512 F16:false NUMA:false Threads:4 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false TensorSplit: MainGPU: ImageGenerationAssets: PromptCachePath: PromptCacheAll:false PromptCacheRO:false Grammar: PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} SystemPrompt:}
6:21AM DBG Prompt (before templating): Hello! What is your name?
6:21AM DBG Template failed loading: failed loading a template for llama-7b-hf
6:21AM DBG Prompt (after templating): Hello! What is your name?
6:21AM DBG Model already loaded in memory: llama-7b-hf
6:21AM DBG Model 'llama-7b-hf' already loaded
[172.27.128.150]:43283  500  -  POST     /v1/chat/completions
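(Side note: the "Template failed loading: failed loading a template for llama-7b-hf" line means LocalAI found no prompt template for this model. A minimal model config plus template, written into the models path, would look roughly like the sketch below; the file names, backend, and template body are guesses, and it is not confirmed that the missing template is what causes the 500.)

cat > /llm-model-volume/llama-7b-hf.yaml <<'EOF'
# hypothetical minimal config; field values are guesses, not from this report
name: llama-7b-hf
backend: llama
parameters:
  model: llama-7b-hf        # model file inside the models path
template:
  chat: llama-7b-chat       # refers to llama-7b-chat.tmpl in the same directory
EOF

cat > /llm-model-volume/llama-7b-chat.tmpl <<'EOF'
{{.Input}}
EOF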



Most upvoted comments

I have the same problem when running LocalAI in a Docker container. The logs contain numerous lines of the form:

rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:33427: connect: connection refused"

with varying port numbers.

Adding to this: I see the same issue both in local Docker and on EKS (AL2, amd64).

I can reach /v1/models fine, but anything that actually uses a model times out with various forms of:

rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp [::1]:33427: connect: connection refused"

In case it helps, here is my (very similar) error message:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

{"error":{"code":500,"message":"rpc error: code = Unknown desc = unimplemented","type":""}}

Could someone share a hardware/system configuration on which this actually builds and runs successfully?

I increased the memory limit to 64 GB and still get the same message. I am using the example from “getting started”.

When I uncommented REBUILD=true in the .env file, I got the following error:

curl: (56) Recv failure: Connection reset by peer

Anything else I can try?
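For reference, the rebuild settings suggested by the startup banner go into the .env file roughly like this (the CMAKE_ARGS value is copied from the banner earlier in this thread; it is not confirmed that it helps with this particular error):

# .env
REBUILD=true
CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"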

@rozek @nabbl @Mer0me I had precisely the same error message as you, so our problems may be the same. I inspected the hardware resources used by the Docker containers, and at least in my case it was a memory limit issue. Docker Desktop (on Ubuntu 22.04) ships with a default memory limit smaller than the size of the LLM (gpt4all in my case). So I set the memory limit to 10 GB, large enough to hold gpt4all, and then it worked.

It was difficult to figure out that it was a memory limit issue, because the error message does not say so directly. Also, I don’t know much about Docker or LLMs, so it took me some time to find the source of the problem on my machine. It would definitely help to add a note to the getting started page about increasing Docker’s memory limit enough to hold the LLM in memory: https://localai.io/basics/getting_started/index.html

Note that I also uncommented REBUILD=true in the .env file. Also, increasing Docker’s memory by passing --memory when running the container did not help either; at least on my machine I needed to raise the limit in the Docker Desktop application, which seems to be a common point of confusion (see https://stackoverflow.com/a/44533437). A few commands that helped me check this are below.
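To check whether the Docker VM actually has enough memory for the model, roughly the following can help (the model path is a placeholder; the dmesg check assumes a Linux host):

docker info --format '{{.MemTotal}}'     # memory available to the Docker daemon/VM, in bytes
ls -lh /path/to/models/ggml-gpt4all-j    # model file that has to fit in that memory
dmesg | grep -iE 'killed process|out of memory'   # was the backend process OOM-killed?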

Yes, same here. I used the latest version with a GPT4All model and it just gives errors, both on Kubernetes and locally.