langchain: NameError: Could not load Llama model from path

Traceback (most recent call last):
  File "c:\Users\Siddhesh\Desktop\llama.cpp\langchain_test.py", line 10, in <module>
    llm = LlamaCpp(model_path="C:\Users\Siddhesh\Desktop\llama.cpp\models\ggml-model-q4_0.bin")
  File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic\main.py", line 1102, in pydantic.main.validate_model
  File "C:\Users\Siddhesh\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\llms\llamacpp.py", line 117, in validate_environment
    raise NameError(f"Could not load Llama model from path: {model_path}")
NameError: Could not load Llama model from path: C:\Users\Siddhesh\Desktop\llama.cpp\models\ggml-model-q4_0.bin

I have tried a raw string, doubled backslashes, and the Linux path format (/path/to/model); none of them worked.
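One thing worth noting: LangChain's validate_environment (visible in the traceback) re-raises whatever the loader throws as this generic NameError, so loading the model with llama-cpp-python directly usually surfaces the real failure (bad path vs. unsupported model format). A minimal sketch, assuming llama-cpp-python is installed; the prompt is just an illustration:

  # Sketch: bypass LangChain and load the model directly so the underlying
  # llama-cpp-python error is not swallowed by LangChain's generic NameError.
  from llama_cpp import Llama

  llm = Llama(model_path=r"C:\Users\Siddhesh\Desktop\llama.cpp\models\ggml-model-q4_0.bin")
  print(llm("Q: 2 + 2 = ", max_tokens=8))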

The path is correct and the model .bin file is in the latest ggml model format. The model format for llama.cpp was recently changed from ggml to ggjt, and model files had to be reconverted into the new format. Could this change be causing the issue?
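One way to check which format the file actually is, is to read its four-byte magic. The values below are the little-endian on-disk forms of the magics llama.cpp uses for its container formats; model_format is just an illustrative helper, not part of any library:

  # Illustrative helper (not from llama.cpp or langchain): identify the
  # container format from the file's 4-byte magic, as written on disk.
  MAGICS = {
      b"lmgg": "ggml (unversioned, oldest)",
      b"fmgg": "ggmf (versioned ggml)",
      b"tjgg": "ggjt (the recent mmap-capable format)",
      b"GGUF": "gguf (current llama.cpp format)",
  }

  def model_format(path):
      with open(path, "rb") as f:
          magic = f.read(4)
      return MAGICS.get(magic, f"unknown magic {magic!r}")

  print(model_format(r"C:\Users\Siddhesh\Desktop\llama.cpp\models\ggml-model-q4_0.bin"))

If the reported format does not match what the installed llama-cpp-python expects, that would explain the load failure.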

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 6
  • Comments: 17

Most upvoted comments

pip3 install llama-cpp-python==0.1.49

I’m not sure all of these issues are actually the same, but I hit the error shown above by @yahyaelganyni1 yesterday and noticed the following on the llama-cpp-python home page:

[screenshot: note from the llama-cpp-python home page about the breaking model-format change]

I wanted to try using the GPU, so I was following these useful instructions for GPU/CUDA support (along with these) and installed the latest llama-cpp-python==0.1.83. But after downgrading to the latest version BEFORE the critical/breaking one (i.e. llama-cpp-python==0.1.78), the error went away. I could never have concluded that from the error message alone… Hope this helps someone.
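A quick way to check which side of the break you are on, sketched under the compatibility rule described in this thread (llama-cpp-python >= 0.1.79 expects GGUF files, while the older GGML-family .bin files need <= 0.1.78); standard library only:

  # Sketch: map the installed llama-cpp-python version to the model-file
  # format it should accept, per the version numbers quoted above.
  from importlib.metadata import version

  v = tuple(int(x) for x in version("llama-cpp-python").split(".")[:3])
  expected = "GGUF (.gguf)" if v >= (0, 1, 79) else "GGML-family (.bin)"
  print(f"llama-cpp-python {version('llama-cpp-python')} expects {expected} models")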

Same here, I keep getting the error.