gpt4all: Issue: Unable to instantiate model error
Issue you’d like to raise.
I am trying to follow the basic python example. I have downloaded the model .bin
file as well from gpt4all.io: https://gpt4all.io/models/ggml-vicuna-13b-1.1-q4_2.bin
```python
import gpt4all

gptj = gpt4all.GPT4All(model_name='ggml-vicuna-13b-1.1-q4_2.bin',
                       allow_download=False, model_path='/models/')
```
However it fails:

```
Found model file at /models/ggml-vicuna-13b-1.1-q4_2.bin
Invalid model file
Traceback (most recent call last):
...
raise ValueError("Unable to instantiate model")
ValueError: Unable to instantiate model
```
Any help will be much appreciated.
Suggestion:
No response
About this issue
- State: closed
- Created a year ago
- Reactions: 16
- Comments: 77
Just for completeness, what system are you on, if I may ask? If it’s Linux, what distro and version?
I’m doing a few tests on Windows now with `gpt4all==0.3.0` from PyPI. All of these are “old” models from before the format change. Here’s what works:

So it looks like all LLaMA-based “old” models cannot be loaded with the PyPI gpt4all v0.3.0.
Thanks for the tip! I just downgraded to 0.2.3 and it works now.
It appears that the .bin file was designed for an earlier iteration of GPT4All and is not compatible with the more recent version. Solution:

```
pip show gpt4all
pip uninstall gpt4all
pip show gpt4all
pip install gpt4all==0.2.3
```

If you come across this error with other models, consider attempting to downgrade the module you are currently using.
I upgraded to `0.3.5`, and Nous Hermes now fully works!

I have just installed v0.3.4 and am attempting to load the `hermes` model. I see the same error as the others have reported in this post:

```
Unable to instantiate model (type=value_error)
```
I just tried to load `ggml-gpt4all-l13b-snoozy.bin` and the error led me here. So you can check that one as well.

I’m running `gpt4all 0.3.0` on Ubuntu 22.04 with Jupyter Notebook, using `ipykernel 6.23.1`, `ipython 8.11.0` and `python 3.10.6`. Downgrading to 0.2.3 solved this issue.

Establishing a virtual Python environment is a viable way to avoid downgrading packages system-wide and to operate within a self-contained setting.
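For example, once a venv is active you can confirm which interpreter and gpt4all version are actually in use (a minimal sketch using only the standard library):

```python
import sys
from importlib.metadata import version

# sys.prefix differs from sys.base_prefix while a virtual environment is active.
print("In venv:", sys.prefix != sys.base_prefix)
print("gpt4all version:", version("gpt4all"))
```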
@nsladen Clearing the pip cache apparently does not remove the compiled C++ bindings. These are stored in a system or environment location; I couldn’t find where. From what I gathered, pip also doesn’t update them. Perhaps that’s why there are recommendations around to build from source using cmake and a compiler. Earlier you mentioned building bindings. Did you follow this article to build gpt4all from source then? If not, it might be worth a shot; it seems straightforward on Linux. I don’t know what this entails:
I still suspect the cause of the error lies with the Python-to-C++ bindings. The FAQ on the documentation site, as well as the Readme under the gpt4all-backend folder in this repo, mentions there was a compatibility-breaking change in llama.cpp affecting gpt4all models. So, if you have an incompatible binding still cached somewhere, maybe you could try to get a different version of it.
You can also take your .bin file upstream with
That’s my best shot at this now. Don’t know more about bindings and build processes yet. Probably it’s time to learn Docker first.
Thanks for the input. I built a new venv and ensured all wheels were downloaded fresh. The result is still the same.

Yes, there is a C++ compiler installed and accessible through the PATH environment variable.
Using `ubuntu:latest` works for me. Just need to install `python3.11` (and also the corresponding `-venv` package) and then I can run through the test code.

If you also want to build `llama.cpp`, feel free to install `gcc-11`, `g++-11`, and so on, which are all available. This had me struggling for the whole day; everything is just fine within a container 🙃

I haven’t tested further, but at least the base model can be loaded properly.
`glibc` is the GNU C library and a fundamental part of the system. Unless you know what you’re doing, don’t mess with that. There are ways around it, but rebuilding yourself so that it sits on top of your own `glibc` is the obvious one. The error means that the binding was compiled for a `glibc` that’s newer than yours. Not surprising; it looks like the base system they use for compilation is Ubuntu 22.04.

I’ve seen someone else produce a segmentation fault somehow after building it on RedHat 8, see: https://github.com/nomic-ai/gpt4all/issues/971#issuecomment-1590661079. But I can’t really troubleshoot that at the moment; I don’t have that or a related system at hand right now.
If you don’t have another system and can’t resolve the segmentation fault somehow, your next best bet would probably be to try it in some kind of container.
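To see which C library the current interpreter is actually running against, the standard library can report it (a minimal sketch):

```python
import platform

# Returns a (library, version) pair, e.g. ('glibc', '2.35') on Ubuntu 22.04;
# a prebuilt binding fails if this is older than the glibc it was built against.
print(platform.libc_ver())
```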
I’m trying to run it on:

```
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: RedHatEnterprise
Description:    Red Hat Enterprise Linux release 8.7 (Ootpa)
Release:        8.7
Codename:       Ootpa
```
I created a new venv and built from source using these instructions: https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-bindings/python/README.md
But when I run it, I get a segmentation fault:
I tried it with:
I tried 0.3.4, 0.3.3, and 0.3.0; I get:

```
Invalid model file
```

When I try 0.2.3 I get:

Let me know if anyone knows how to get GLIBC_2.29. I’m not sure if I would need to be a superuser to install it, or what to install.
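For narrowing down segmentation faults like the one above, Python’s built-in `faulthandler` module can at least show where the crash happens (a minimal sketch; model name and path are placeholders):

```python
import faulthandler

import gpt4all

# Dump a traceback even when the crash happens inside native code,
# which helps locate where in the C++ backend the segfault occurs.
faulthandler.enable()

# Placeholders: use whichever file actually segfaults on your system.
model = gpt4all.GPT4All(model_name="ggml-gpt4all-l13b-snoozy.bin",
                        model_path="/models/", allow_download=False)
```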
Ok, I figured it out. The hermes file was only 27KB. I deleted the file and then reran the python code under gpt4all 0.3.4 and it worked. I then deleted the file and downgraded to 0.3.0 and I was able to recreate the 27KB file. I must have downloaded hermes before the 0.3.3 fix and then it kept the file and tried to use it when I upgraded to 0.3.4. Deleting the old file first seems to do the trick.
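A quick way to catch such a truncated download before loading it is to check the file size (a sketch; the path is a placeholder):

```python
import os

# Placeholder path: point it at the model file you downloaded.
model_file = "/models/ggml-v3-13b-hermes-q5_1.bin"
size = os.path.getsize(model_file)
# Quantized 13B models are several GiB; a few KiB indicates a broken or partial download.
print(f"{model_file}: {size / 1024**3:.2f} GiB")
```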
And you guys are sure the model checksums are correct and you have enough RAM to run them?
Because I tested `hermes` here on both Windows and Linux with v0.3.4 and it worked both times. In fact, the problem with `hermes` wasn’t even that it wouldn’t load (v0.3.3 fixed that), but that you couldn’t talk to it more than once (v0.3.4 fixed that).

You should really include details about your OS / distro / version, RAM, maybe even CPU, because these problems are not the same as the initial errors we had in this issue.
Edit: Oh and while we’re at it: also Python version, how you installed it and whether you use a virtual environment. Maybe there’s a hint somewhere if you can provide all that extra info.
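A quick way to collect most of that info in one go (a sketch; the RAM line relies on `os.sysconf` names that only exist on Linux/Unix):

```python
import os
import platform
import sys

print("OS:", platform.platform())
print("CPU:", platform.processor() or platform.machine())
print("Python:", sys.version)
# Total physical memory; Linux/Unix only.
print("RAM (GiB):", os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3)
```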
I’m familiar with that line by now:
That’s coming from Langchain. Try this:
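Presumably something along these lines, i.e. instantiating the model through the gpt4all bindings directly rather than through Langchain (a sketch; model name and path are placeholders):

```python
import gpt4all

# Placeholders: point these at the .bin file you actually have on disk.
model = gpt4all.GPT4All(model_name="ggml-gpt4all-l13b-snoozy.bin",
                        model_path="/models/", allow_download=False)
print("Model loaded OK")
```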
Only when that works install Langchain again.
It’s very odd. Like you, I’m still getting the error on my Mint test VM (based on Ubuntu 22.04) with `gpt4all==0.3.2`, but building it manually from the current `main` branch makes it work. Not sure what exactly went wrong with the PyPI package.

Downgrading to 0.2.3 solved my problem, but `ggml-v3-13b-hermes-q5_1.bin` still won’t load.
Same here. When using Groovy, everything’s fine. When trying Snoozy or Nous Hermes, I get this type of error:
```
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gpt4all/gpt4all.py", line 41, in __init__
    self.model.load_model(model_dest)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gpt4all/pyllmodel.py", line 152, in load_model
    raise ValueError("Unable to instantiate model")
ValueError: Unable to instantiate model
```
Please check if the checksum matches; garbled output like this has been the result of a defective hard drive for me before.
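For reference, the checksum can be computed in Python without loading the whole file into memory (a sketch; compare the digest against the one published for the model):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so multi-GiB models don't need to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder path: use the model file you want to verify.
print(file_md5("/models/ggml-v3-13b-hermes-q5_1.bin"))
```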