LocalAI: local-ai failed loading ggml-gpt4all-j model
Hi, I’m running local-ai in Kubernetes and downloaded the ggml-gpt4all-j model in the same way as explained here, but got this error:
┌───────────────────────────────────────────────────┐
│                   Fiber v2.44.0                   │
│               http://127.0.0.1:8080               │
│       (bound on host 0.0.0.0 and port 8080)       │
│                                                   │
│ Handlers ............ 12  Processes ........... 1 │
│ Prefork ....... Disabled  PID .............. 2975 │
└───────────────────────────────────────────────────┘
llama.cpp: loading model from /models/ggml-gpt4all-j.bin
error loading model: unexpectedly reached end of file
llama_init_from_file: failed to load model
I tried both the latest and the v1.6.1 quay.io/go-skynet/local-ai images and got the same error.
Any ideas?
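Since "error loading model: unexpectedly reached end of file" usually means the file on disk is truncated or corrupted, a quick size and checksum check inside the container can rule that out. This is a generic sketch, not part of LocalAI; the path and the expected MD5 are placeholders to substitute:

import hashlib
import os

MODEL_PATH = "/models/ggml-gpt4all-j.bin"  # adjust to where the models volume is mounted
EXPECTED_MD5 = "<md5 from the model's download page>"  # placeholder, known out of band

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through MD5 so multi-GB models don't need to fit in RAM."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(f"size: {os.path.getsize(MODEL_PATH)} bytes")  # a truncated download is smaller than expected
print(f"md5:  {md5sum(MODEL_PATH)}")                 # compare against EXPECTED_MD5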
About this issue
- State: closed
- Created a year ago
- Reactions: 5
- Comments: 17 (3 by maintainers)
In my case the problem was just the name of the model. Removing the .bin extension solved it: ggml-gpt4all-j.bin -> ggml-gpt4all-j 😅
@fHachenberg I checked the file MD5 after the download and it was correct. So you mean that it can be corrupted there: https://gpt4all.io/models/ggml-gpt4all-j.bin ?
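As a side note on why the rename works: LocalAI resolves the model name in an incoming request against the file names under its models directory, so the two generally have to match. A minimal sketch, assuming the OpenAI-compatible completions route; the endpoint path and payload are assumptions, not taken from this thread:

import json
import urllib.request

# "ggml-gpt4all-j" must match the file name under /models
# (after the rename above, the file carries no .bin extension).
payload = {"model": "ggml-gpt4all-j", "prompt": "Hello"}
req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/completions",  # assumed OpenAI-compatible route
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))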
Sure, works well. Thank you for the curated collection of models 👍
Hi guys, could you try the models from the model gallery? https://github.com/go-skynet/model-gallery
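For reference, gallery entries can be installed through LocalAI's HTTP API rather than by downloading files by hand. The sketch below assumes the /models/apply endpoint described in the LocalAI README of that era; the exact route, payload, and response shape may differ by version, so treat it as an assumption to verify:

import json
import urllib.request

# Assumed endpoint and payload: LocalAI's model-gallery install API.
payload = {"url": "github:go-skynet/model-gallery/gpt4all-j.yaml"}
req = urllib.request.Request(
    "http://127.0.0.1:8080/models/apply",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # response shape may vary; typically a job id to poll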
Still no solution?