langchain: Memory not supported with sources chain?

Memory doesn’t seem to be supported when using the ‘sources’ chains. It appears to have issues writing multiple output keys.

Is there a workaround for this?

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[13], line 1
----> 1 chain({ "question": "Do we have any agreements with INGRAM MICRO." }, return_only_outputs=True)

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/chains/base.py:118, in Chain.__call__(self, inputs, return_only_outputs)
    116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
--> 118 return self.prep_outputs(inputs, outputs, return_only_outputs)

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/chains/base.py:170, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
    168 self._validate_outputs(outputs)
    169 if self.memory is not None:
--> 170     self.memory.save_context(inputs, outputs)
    171 if return_only_outputs:
    172     return outputs

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/memory/summary_buffer.py:59, in ConversationSummaryBufferMemory.save_context(self, inputs, outputs)
     57 def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
     58     """Save context from this conversation to buffer."""
---> 59     super().save_context(inputs, outputs)
     60     # Prune buffer if it exceeds max token limit
     61     buffer = self.chat_memory.messages

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/memory/chat_memory.py:37, in BaseChatMemory.save_context(self, inputs, outputs)
...
---> 37         raise ValueError(f"One output key expected, got {outputs.keys()}")
     38     output_key = list(outputs.keys())[0]
     39 else:

ValueError: One output key expected, got dict_keys(['answer', 'sources'])

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 30
  • Comments: 28 (1 by maintainers)

Most upvoted comments

I found the solution by reading the source code:

memory = ConversationSummaryBufferMemory(llm=llm, input_key='question', output_key='answer')
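For example, a minimal sketch of wiring this into a sources chain (assuming an existing retriever; the model choice is only illustrative):

from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory

llm = ChatOpenAI(temperature=0)

# Pin the keys the memory reads and writes so it ignores the extra
# 'sources' key that the sources chain returns alongside 'answer'.
memory = ConversationSummaryBufferMemory(llm=llm, input_key="question", output_key="answer")

chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm,
    retriever=retriever,  # any existing vector-store retriever
    memory=memory,
)

chain({"question": "Do we have any agreements with INGRAM MICRO?"}, return_only_outputs=True)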

Same issue with ConversationalRetrievalChain.

Can confirm it is working for ConversationBufferMemory too.

memory = ConversationBufferMemory(memory_key="chat_history", input_key='question', output_key='answer', return_messages=True)

Thanks a bunch!

Do the following (a combined, runnable sketch follows the list):

  1. Create memory with input_key and output_key: memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, input_key="question", output_key="answer")
  2. Initialize ConversationalRetrievalChain with memory: qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(max_tokens=512, model="gpt-3.5-turbo"), retriever=retriever, return_source_documents=True, memory=memory)
  3. Make a query to the QA using the input_key: qa({"question": prompt})
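Put together, a minimal runnable version of those three steps (assuming an existing retriever; model and token settings are taken from the snippet above):

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# 1. Memory with explicit input_key / output_key so the extra
#    'source_documents' output key does not trip up save_context().
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    input_key="question",
    output_key="answer",
)

# 2. Chain with the memory attached.
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(max_tokens=512, model="gpt-3.5-turbo"),
    retriever=retriever,
    return_source_documents=True,
    memory=memory,
)

# 3. Query using the input key.
result = qa({"question": "Do we have any agreements with INGRAM MICRO?"})
print(result["answer"])
print(result["source_documents"])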

On my side, I was trying to keep the two arguments return_source_documents=True and return_generated_question=True. I've found a solution that works for me: in the BaseChatMemory source code, I deleted the two lines that raise the error.

if len(outputs) != 1:
    raise ValueError(f"One output key expected, got {outputs.keys()}")

This allows me to keep "source_documents" and "generated_question" in the output without breaking the code. Rather than editing the installed package, you can apply the same change at runtime by running the code below.

import langchain
from typing import Dict, Any, Tuple
from langchain.memory.utils import get_prompt_input_key

def _get_input_output(
    self, inputs: Dict[str, Any], outputs: Dict[str, str]
) -> Tuple[str, str]:
    # Identical to the original method, minus the "One output key expected"
    # check: when output_key is not set, fall back to the first output key
    # instead of raising if the chain returns several outputs.
    if self.input_key is None:
        prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
    else:
        prompt_input_key = self.input_key
    if self.output_key is None:
        output_key = list(outputs.keys())[0]
    else:
        output_key = self.output_key
    return inputs[prompt_input_key], outputs[output_key]

# Monkey-patch the method so every BaseChatMemory subclass uses it.
langchain.memory.chat_memory.BaseChatMemory._get_input_output = _get_input_output

Here is the original method: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/chat_memory.py#L11
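With the patch applied, a chain that keeps both extra outputs should work without pinning output_key. A sketch, assuming an existing retriever:

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# Assumes the monkey-patch above has already been applied and that
# `retriever` is an existing vector-store retriever.
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model="gpt-3.5-turbo"),
    retriever=retriever,
    memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True),
    return_source_documents=True,
    return_generated_question=True,
)

result = qa({"question": "Do we have any agreements with INGRAM MICRO?"})
print(result["source_documents"])
print(result["generated_question"])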

With RetrievalQA.from_chain_type() you can use memory. To avoid ValueError: One output key expected, got dict_keys(['answer', 'sources']), you need to specify the key values in the memory constructor, e.g. ConversationBufferMemory(memory_key="chat_history", return_messages=True, input_key='query', output_key='result'). It would be nice to add this to the official documentation, because it currently looks as if this isn't possible, or is only possible with ConversationalRetrievalChain.from_llm(). Issue can now be closed @hwchase17
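The same idea as a short sketch (assuming an existing retriever; note that RetrievalQA's input key is 'query' and its output key is 'result'):

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# Tell the memory which keys to read and write so the extra
# 'source_documents' output key is ignored.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    input_key="query",
    output_key="result",
)

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    retriever=retriever,  # assumed existing retriever
    return_source_documents=True,
    memory=memory,
)

qa({"query": "Do we have any agreements with INGRAM MICRO?"})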

I propose a solution.

"langchain/agents/agent.py" defines the class from which all the extension chains mentioned above are derived.

    @property
    @abstractmethod
    def input_keys(self) -> List[str]:
        """Return the input keys.

        :meta private:
        """
    @property
    def output_keys(self) -> List[str]:
        """Return the singular output key.

        :meta private:
        """
        if self.return_intermediate_steps:
            return self.agent.return_values + ["intermediate_steps"]
        else:
            return self.agent.return_values

All memory-related objects return their keys through the methods above, but when these keys are passed to the output parser, only the memory key is left out; depending on the purpose, the key values a given agent does not need must be excluded.

For example:

    @property
    def input_keys(self) -> List[str]:
        """Return the input keys.

        :meta private:
        """
        return list(set(self.llm_chain.input_keys) - {"agent_scratchpad"})

The source above is from the definition of Agent(BaseSingleActionAgent).

The key values to be excluded in the methods mentioned above are also accepted as arguments, so a clear unification of input_key and output_key is necessary to prevent branching problems in each chain. The same method is already implemented differently in many chains, which keeps producing errors in the related chains.

Anyone know how to get this to work with an Agent? I got it to work as a standalone chain, but I still get:

lib/python3.9/site-packages/langchain/chains/base.py", line 133, in _chain_type
    raise NotImplementedError("Saving not supported for this chain type.")
NotImplementedError: Saving not supported for this chain type.

Adding the output_key as above worked for me also.

Seems to be similar to https://github.com/hwchase17/langchain/issues/2068#issuecomment-1494537932

You probably have to define what your output_key actually is to get the chain to work.