langchain: Broken intermediate output / parsing is grossly unreliable

Traceback (most recent call last):
  File "/home/gptbot/cogs/search_service_cog.py", line 322, in on_message
    response, stdout_output = await capture_stdout(
  File "/home/gptbot/cogs/search_service_cog.py", line 79, in capture_stdout
    result = await func(*args, **kwargs)
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 213, in run
    return self(args[0])[self.output_keys[0]]
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 807, in _call
    output = self.agent.return_stopped_response(
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 515, in return_stopped_response
    full_output = self.llm_chain.predict(**full_inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 151, in predict
    return self(kwargs)[self.output_key]
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 57, in _call
    return self.apply([inputs])[0]
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 61, in generate
    prompts, stop = self.prep_prompts(input_list)
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/llm.py", line 79, in prep_prompts
    prompt = self.prompt.format_prompt(**selected_inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/prompts/chat.py", line 127, in format_prompt
    messages = self.format_messages(**kwargs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/prompts/chat.py", line 186, in format_messages
    message = message_template.format_messages(**rel_params)
  File "/usr/local/lib/python3.9/dist-packages/langchain/prompts/chat.py", line 43, in format_messages
    raise ValueError(
ValueError: variable agent_scratchpad should be a list of base messages, got {
    "action": "Search-Tool",
    "action_input": "Who is Harald Baldr?"
}

Most of the time the agent can’t parse its own tool usage.
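For context on the ValueError above: the chat prompt renders agent_scratchpad through a MessagesPlaceholder, which only accepts a list of message objects, not the raw action JSON string shown in the traceback. A minimal illustration (a sketch against langchain 0.0.x; the message contents here are made up):

from langchain.prompts.chat import MessagesPlaceholder
from langchain.schema import AIMessage, HumanMessage

# format_messages is happy with a list of BaseMessage objects...
valid_scratchpad = [
    AIMessage(content='{"action": "Search-Tool", "action_input": "Who is Harald Baldr?"}'),
    HumanMessage(content="Observation: <tool output goes here>"),
]
messages = MessagesPlaceholder(variable_name="agent_scratchpad").format_messages(
    agent_scratchpad=valid_scratchpad
)  # returns the list unchanged

# ...whereas handing it a plain string (what the agent produces here) raises
# exactly the "should be a list of base messages" ValueError above.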

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 2
  • Comments: 15 (3 by maintainers)

Most upvoted comments

I don’t think it has been fixed. I tried the latest version, 0.1.0, and still get the same error when using early_stopping_method="generate" with max_iterations set. Can anybody help with the patch?

I can confirm the issue still exists in version 0.0.302

@treppers

I have a pull request out for this.

The issue is that the agent doesn’t always respond with properly fenced markdown, which makes the JSON invalid, or it doesn’t put the JSON at the beginning of the markdown block.

PR #4539

You can hot-patch this by adding a more forgiving output parser and passing it to the agent as an argument. You’ll need to do something like the snippet below to apply the hot patch (FYI: I didn’t test this code, so YMMV):
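Roughly (a sketch against langchain 0.0.x, not the exact code from the PR; the LenientConvoOutputParser name and the regex-based JSON extraction are illustrative):

import json
import re
from typing import Union

from langchain.agents import AgentType, initialize_agent
from langchain.agents.conversational_chat.output_parser import ConvoOutputParser
from langchain.schema import AgentAction, AgentFinish


class LenientConvoOutputParser(ConvoOutputParser):
    """Tolerates missing or mangled markdown fences around the action JSON."""

    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        # Grab the first {...} blob, whether or not it is wrapped in a ```json fence.
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match is None:
            # No JSON at all: treat the whole response as the final answer.
            return AgentFinish({"output": text.strip()}, text)
        try:
            response = json.loads(match.group(0))
            action, action_input = response["action"], response["action_input"]
        except (json.JSONDecodeError, KeyError):
            return AgentFinish({"output": text.strip()}, text)
        if action == "Final Answer":
            return AgentFinish({"output": action_input}, text)
        return AgentAction(action, action_input, text)


# agent_kwargs is forwarded to ConversationalChatAgent.from_llm_and_tools,
# which accepts an output_parser override. `tools` and `llm` are whatever
# you already pass in.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    agent_kwargs={"output_parser": LenientConvoOutputParser()},
    verbose=True,
)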

I tried your patch with the new ConvoOutputParser class; however, I’m still getting the same error:

  File "/home/gene/endpoints/venv/lib/python3.10/site-packages/langchain/prompts/chat.py", line 43, in format_messages
    raise ValueError(
ValueError: variable agent_scratchpad should be a list of base messages, got {
    "action": "Price Lookup",
    "action_input": "Marble Madness"
}

Interestingly, on the latest langchain this error only occurs when the agent hits the early stop (i.e. max_iterations=1) with early_stopping_method="generate". When it hits the stop, the output is:

Thought:

I now need to return a final answer based on the previous steps:

However, using the default early_stopping_method="force" causes no error and just returns the default string informing that the agent hit the max iterations.
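That matches the traceback above: with early_stopping_method="generate", return_stopped_response builds the scratchpad as a plain string before making one final LLM call, and the chat prompt’s MessagesPlaceholder rejects it. A minimal repro sketch (llm and tools are placeholders for whatever chat model and tools you already use):

from langchain.agents import AgentType, initialize_agent

# Raises the agent_scratchpad ValueError as soon as the iteration cap is hit,
# because "generate" makes one more LLM call with a string scratchpad that the
# chat prompt cannot format into messages.
failing_agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    max_iterations=1,
    early_stopping_method="generate",
)

# No error: "force" skips that extra LLM call and just returns the stock
# "stopped due to iteration limit" message.
working_agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    max_iterations=1,
    early_stopping_method="force",
)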

I have the same error in the ConversationalChatAgent with most of my inputs.

Facing this exact same issue using the chat-conversational-react-description agent.

I’m facing exactly the same issue. If max_iterations is greater than 1, the agent raises a ValueError:

raise ValueError(
ValueError: variable agent_scratchpad should be a list of base messages, got {
    "action": "Conversation Knowledgebase",
    "action_input": "Can you please be more specific about what you need help with in the game?"
}

Having this same issue. This needs to be prioritized; conversational agents are absolutely broken for any sort of robust use case.

@hwchase17 any insights on how to fix the bug to make it compatible with the current version?