langchain: GPT4All chat error with async calls
Hi, I believe this issue is related to this one: #1372
I’m using the GPT4All integration and get the following error after running `ConversationalRetrievalChain` with `AsyncCallbackManager`:
ERROR:root:Async generation not implemented for this LLM.
Changing to `CallbackManager` does not fix anything.
The issue is model-agnostic: I have tried both ggml-gpt4all-j-v1.3-groovy.bin and ggml-mpt-7b-base.bin. The LangChain version I’m using is 0.0.179. Any ideas how this could be solved, or should we just wait for a new release that fixes it?
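For context, a toy illustration of why this error appears (class and method names here are assumptions modeled on older LangChain internals, not the actual library code): the base LLM class only implements the synchronous call path, and its default async path raises instead of falling back to the sync implementation.

```python
import asyncio

# Toy sketch (assumed names, not real LangChain code): the base class's
# async path raises by default, so any subclass that only implements the
# sync _call fails when invoked through an async chain.
class ToyBaseLLM:
    def _call(self, prompt: str) -> str:
        raise NotImplementedError

    async def _acall(self, prompt: str) -> str:
        # Default behavior in older releases: no fallback to _call.
        raise NotImplementedError("Async generation not implemented for this LLM.")

class ToyGPT4All(ToyBaseLLM):
    # Only the sync path is provided, mirroring the GPT4All wrapper at the time.
    def _call(self, prompt: str) -> str:
        return "some generated text"

async def main() -> None:
    llm = ToyGPT4All()
    try:
        await llm._acall("hello")
    except NotImplementedError as exc:
        print(f"ERROR:root:{exc}")

asyncio.run(main())
```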
Suggestion:
Release a fix, similar to the one in #1372.
About this issue
- Original URL
- State: open
- Created a year ago
- Reactions: 6
- Comments: 26
I am currently working through the requirements for my PR to be ready for review (format, linting, testing). I will finish them ASAP. This is my first contribution ever to an open source project 😃
What I did was implement `_acall` with async/await support. Should I add just one async test for this? Is that enough for the PR to be accepted? https://github.com/khaledadrani/langchain/blob/32a041b8a2a5a8a6db36592b501e4ce9d54c219b/tests/unit_tests/llms/fake_llm.py
Edit: I also need to add a test here: https://github.com/khaledadrani/langchain/blob/32a041b8a2a5a8a6db36592b501e4ce9d54c219b/tests/integration_tests/llms/test_gpt4all.py
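A hypothetical sketch of what such an async unit test might look like (the fake LLM class and test name below are my own assumptions, loosely modeled on the `fake_llm.py` pattern linked above, not the actual repository code):

```python
import asyncio

# Hypothetical fake LLM with an async path, for unit testing _acall.
class FakeAsyncLLM:
    """Returns a canned response from both the sync and async call paths."""

    def _call(self, prompt: str) -> str:
        return "foo"

    async def _acall(self, prompt: str) -> str:
        # Async path simply delegates to the sync implementation.
        return self._call(prompt)

def test_fake_llm_async_matches_sync() -> None:
    llm = FakeAsyncLLM()
    sync_out = llm._call("bar")
    async_out = asyncio.run(llm._acall("bar"))
    assert sync_out == async_out == "foo"

test_fake_llm_async_matches_sync()
```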
@auxon I use it the following way
For GPT4All you can use this class in your projects
@khaledadrani eagerly waiting for it.
@charlod is the current implementation of `ainvoke`, or `acall` (which is going to be deprecated), not working for you?
@baskaryan Could you please help @PiotrPmr with the issue they mentioned? They are still experiencing an error with async calls in the GPT4All chat integration and would appreciate your assistance. Thank you!
A workaround for using ConversationalRetrievalChain with LlamaCpp is to implement the `_acall` function. This has not been tested extensively.
Then change
To
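The original before/after code blocks for that change were not captured above. As a rough, self-contained sketch of the workaround pattern (the class names here are stand-ins so the snippet runs without a model file; only the `_acall`-wraps-`_call` idea is taken from the comment):

```python
import asyncio

# Stand-in for an LLM wrapper that only has a blocking, synchronous call
# (e.g. llama.cpp or GPT4All inference). Hypothetical class, for illustration.
class FakeSyncLLM:
    def _call(self, prompt: str, stop=None) -> str:
        # Placeholder for the blocking model inference.
        return f"echo: {prompt}"

class AsyncCapableLLM(FakeSyncLLM):
    async def _acall(self, prompt: str, stop=None) -> str:
        # Delegate to the sync implementation on a worker thread so the
        # event loop is not blocked while the model generates.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, self._call, prompt, stop)

result = asyncio.run(AsyncCapableLLM()._acall("hello"))
print(result)  # echo: hello
```

The key design point is that `_acall` does not reimplement generation; it offloads the existing blocking `_call` to an executor, which is enough for async chains to stop raising the "not implemented" error.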