gpt4all: Unable to load Nous Hermes in Python

System Info

Python 3.9.6, macOS, gpt4all==0.2.3

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • backend
  • bindings
  • python-bindings
  • chat-ui
  • models
  • circleci
  • docker
  • api

Reproduction

  1. Using a model listed on GPT4All:

     import gpt4all
     model = gpt4all.GPT4All("ggml-v3-13b-hermes-q5_1.bin")

  2. Using a custom model (GGML file from Hugging Face):

     import gpt4all
     model = gpt4all.GPT4All("ggml-nous-hermes-13b.ggmlv3.q5_1.bin", model_type="llama")

Expected behavior

Successful model load. Instead, an error is thrown:

  1. Using a model listed on GPT4All (raised at line 319, in get_model_from_name):

     ValueError: No corresponding model for provided filename ggml-v3-13b-hermes-q5_1.bin. If this is a custom model, make sure to specify a valid model_type.

  2. Using a custom model:

     error loading model: unknown (magic, version) combination; is this really a GGML file?
     llama_init_from_file: failed to load model
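The second error comes from the loader rejecting the file's header. A quick way to pre-check a downloaded file is to read the four-byte magic at its start. This is a sketch, not part of gpt4all itself; the magic values are the ones documented for llama.cpp's GGML containers and may not cover newer formats:

```python
import struct

# Known GGML container magics (assumption: taken from llama.cpp's
# file-format notes; the set may be incomplete for newer formats).
GGML_MAGICS = {
    0x67676D6C,  # "ggml" - unversioned
    0x67676D66,  # "ggmf" - versioned
    0x67676A74,  # "ggjt" - versioned, mmap-able (ggmlv3 files use this)
}

def looks_like_ggml(path):
    """Return True if the file starts with a known GGML magic number."""
    with open(path, "rb") as f:
        head = f.read(4)
    if len(head) < 4:
        return False
    (magic,) = struct.unpack("<I", head)
    return magic in GGML_MAGICS
```

A file that fails this check (e.g. an HTML error page saved by a broken download) would produce exactly the "unknown (magic, version) combination" message above.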

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 1
  • Comments: 25 (4 by maintainers)

Most upvoted comments

New PyPI version 0.3.3 is out with the fix. Wanted to get this out before EOD and only had time to test on macOS and Ubuntu 😕. Let me know if there are issues.

Just released 0.3.4 with a fix for the Hermes BOS issue (the prompt-only-runs-once bug). Let's start a new issue/thread if you run into more problems related to that, as it is separate from the Unable to Load error.

Unable to instantiate Hermes even after upgrading to 0.3.2 on Ubuntu 22.04.2 LTS:

$ pip install gpt4all
...
Successfully installed gpt4all-0.3.2 tqdm-4.65.0

$ python3
...
>>> import gpt4all
>>> model = gpt4all.GPT4All("ggml-v3-13b-hermes-q5_1.bin")
...
Model downloaded at:  ~/.cache/gpt4all/ggml-v3-13b-hermes-q5_1.bin
Invalid model file
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../gpt4all/gpt4all.py", line 41, in __init__
    self.model.load_model(model_dest)
  File ".../gpt4all/pyllmodel.py", line 152, in load_model
    raise ValueError("Unable to instantiate model")
ValueError: Unable to instantiate model

A fix was found. We are testing and deploying new versions right now. Try pulling from main and rebuilding the C library @RobertRappe. There was an issue with linking libraries in llmodel.

I am running gpt4all==0.3.3 on macOS and have checked that the following models load fine with model = gpt4all.GPT4All(filename):

  • "ggml-gpt4all-j-v1.3-groovy.bin"
  • "ggml-mpt-7b-base.bin"
  • "ggml-mpt-7b-chat.bin"
  • "ggml-mpt-7b-instruct.bin"
  • "ggml-stable-vicuna-13B.q4_2.bin"
  • "ggml-wizard-13b-uncensored.bin"
  • "ggml-gpt4all-l13b-snoozy.bin"
  • "ggml-v3-13b-hermes-q5_1.bin"
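Before trying each file, it can also be worth confirming the downloads actually exist in the cache; a missing or partially downloaded file fails to load too. A minimal sketch, assuming the default cache location shown in the log output above (~/.cache/gpt4all):

```python
from pathlib import Path

def find_cached_models(cache_dir="~/.cache/gpt4all"):
    """Return {filename: size_in_bytes} for *.bin files in the cache dir.

    The default path is an assumption based on where gpt4all reported
    downloading the model in this issue's logs.
    """
    cache = Path(cache_dir).expanduser()
    if not cache.is_dir():
        return {}
    return {p.name: p.stat().st_size for p in sorted(cache.glob("*.bin"))}
```

Comparing the reported size against the size listed on the model's download page is a quick sanity check for a truncated download.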

Just reinstalled gpt4all==0.3.3. Works well now on macOS. Thank you so much! I will wait a day and then close this.

0.3.3 works on Windows btw. Appreciate the quick fix. I am on Win 10 w/ Python 3.11.3

Hi, I upgraded from 0.2.3 to 0.3.0, and (after downgrading) again from 0.2.3 to 0.3.2 in a virtualenv. I also installed 0.3.2 from scratch globally, and uninstalled and reinstalled it. I got the same "Unable to instantiate model" error every time on Ubuntu 22.04.

Uninstalling and reinstalling did not help.

I tested on Colab (https://colab.research.google.com/drive/1QRFHV5lj1Kb7_tGZZGZ-E6BfX6izpeMI with a modified model name to test this) and got the same error.

Same error after upgrading to 0.3.2 on MacOS

Please update to the latest version with pip install --upgrade gpt4all
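If the upgrade appears not to take effect (for example, an older copy in a virtualenv shadowing the new one, as reported above), you can verify the installed version programmatically before retrying the load. A small sketch with a naive version parser (assumes plain X.Y.Z version strings with no pre-release suffixes):

```python
from importlib.metadata import PackageNotFoundError, version

def parse_version(v):
    """Parse "X.Y.Z" into a comparable tuple of ints (naive: no suffixes)."""
    return tuple(int(part) for part in v.split("."))

def gpt4all_is_recent(minimum="0.3.3"):
    """Return True if the installed gpt4all meets the minimum version."""
    try:
        installed = version("gpt4all")
    except PackageNotFoundError:
        return False
    return parse_version(installed) >= parse_version(minimum)
```

The tuple comparison avoids the classic string-comparison pitfall where "0.10.0" sorts before "0.3.3".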