localGPT: pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain llm none is not an allowed value (type=type_error.none.not_allowed)

When I run run_localGPT.py I get the error below:

2023-09-27 14:49:29,036 - INFO - run_localGPT.py:221 - Running on: cuda
2023-09-27 14:49:29,036 - INFO - run_localGPT.py:222 - Display Source Documents set to: False
2023-09-27 14:49:29,036 - INFO - run_localGPT.py:223 - Use history set to: False
2023-09-27 14:49:29,316 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length  512
2023-09-27 14:49:32,007 - INFO - posthog.py:16 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2023-09-27 14:49:32,066 - INFO - run_localGPT.py:56 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cuda
2023-09-27 14:49:32,066 - INFO - run_localGPT.py:57 - This action can take a few minutes!
2023-09-27 14:49:32,066 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
Traceback (most recent call last):
  File "/home/admin1/Documents/chatgpt/localgpt_llama2/run_localGPT.py", line 258, in <module>
    main()
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/admin1/Documents/chatgpt/localgpt_llama2/run_localGPT.py", line 229, in main
    qa = retrieval_qa_pipline(device_type, use_history, promptTemplate_type="llama")
  File "/home/admin1/Documents/chatgpt/localgpt_llama2/run_localGPT.py", line 144, in retrieval_qa_pipline
    qa = RetrievalQA.from_chain_type(
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 100, in from_chain_type
    combine_documents_chain = load_qa_chain(
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 249, in load_qa_chain
    return loader_mapping[chain_type](
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 73, in _load_stuff_chain
    llm_chain = LLMChain(
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  none is not an allowed value (type=type_error.none.not_allowed)

Has anyone encountered this error? If so, is there any workaround?
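For context, the trace bottoms out in pydantic field validation: LLMChain declares llm as a required field, so constructing it with llm=None reproduces this exact message. A minimal sketch (assuming a langchain release that still bundles pydantic v1, as in the traceback; the prompt template is made up for illustration):

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(input_variables=["question"], template="{question}")
# llm=None is what a failed model load hands over; pydantic rejects it with
# "none is not an allowed value".
chain = LLMChain(llm=None, prompt=prompt)

In other words, the ValidationError is a symptom: the real failure is that the model loader returned None instead of an LLM.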

About this issue

  • State: open
  • Created 9 months ago
  • Reactions: 13
  • Comments: 21 (1 by maintainers)

Most upvoted comments

It's the same error as in this post:

https://github.com/PromtEngineer/localGPT/issues/501

Try:

set CMAKE_ARGS=-DLLAMA_CUBLAS=on

set FORCE_CMAKE=1

And now, if you are using a GGUF language model:

pip install llama-cpp-python==0.1.83

If you are using a GGML model:

pip install llama-cpp-python==0.1.76

In constants.py is the model you are using. The default is:

MODEL_ID = "TheBloke/Llama-2-7b-Chat-GGUF"
MODEL_BASENAME = "llama-2-7b-chat.Q4_K_M.gguf"
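Why this fixes the pydantic error: load_models.py most likely wraps the LlamaCpp construction in a try/except and returns None when llama-cpp-python is missing or mismatched, and that None is what eventually reaches LLMChain. A hypothetical sketch of that failure path (function name and arguments are illustrative, not localGPT's exact code):

from langchain.llms import LlamaCpp

def load_gguf_model(model_path):
    try:
        # Fails if llama-cpp-python is not installed, or its build only
        # understands the other format (GGML wheel vs. GGUF file).
        return LlamaCpp(model_path=model_path, n_ctx=4096)
    except Exception:
        # Swallowed here; surfaces later as the LLMChain ValidationError.
        return None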

After adding llama-cpp-python to requirements.txt it worked for me.

Nothing worked for me, even after installing llama-cpp-python.

Running 'pip install llama-cpp-python' solved the exact same errors for me. You could also just add 'llama-cpp-python' to your requirements.txt and run it as suggested above by @SciTechEnthusiast.

If you are using a Windows environment, use pip install llama-cpp-python==0.1.83 for GGUF models.
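A quick way to confirm the install actually took (a generic Python check, not localGPT-specific):

import importlib.metadata
import llama_cpp  # an ImportError here means the wheel never installed or built
print(importlib.metadata.version("llama-cpp-python"))  # expect 0.1.83 (GGUF) or 0.1.76 (GGML)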

Any help on this? Nothing from the above has worked for me.

Did you get an answer?

Got the same error just now