langchain: ValueError(f"Could not parse LLM output: `{llm_output}`")

File "C:\Program Files\Python\Python310\lib\site-packages\langchain\chains\base.py", line 268, in run return self(kwargs)[self.output_keys[0]] File "C:\Program Files\Python\Python310\lib\site-packages\langchain\chains\base.py", line 168, in __call__ raise e File "C:\Program Files\Python\Python310\lib\site-packages\langchain\chains\base.py", line 165, in __call__ outputs = self._call(inputs) File "C:\Program Files\Python\Python310\lib\site-packages\langchain\agents\agent.py", line 503, in _call next_step_output = self._take_next_step( File "C:\Program Files\Python\Python310\lib\site-packages\langchain\agents\agent.py", line 406, in _take_next_step output = self.agent.plan(intermediate_steps, **inputs) File "C:\Program Files\Python\Python310\lib\site-packages\langchain\agents\agent.py", line 102, in plan action = self._get_next_action(full_inputs) File "C:\Program Files\Python\Python310\lib\site-packages\langchain\agents\agent.py", line 64, in _get_next_action parsed_output = self._extract_tool_and_input(full_output) File "C:\Program Files\Python\Python310\lib\site-packages\langchain\agents\conversational\base.py", line 84, in _extract_tool_and_input raise ValueError(f"Could not parse LLM output:{llm_output}") ValueError: Could not parse LLM output: Thought: Do I need to use a tool? Yes Action: Use the requests library to write a Python code to do a post request Action Input:

import requests

url = 'https://example.com/api'
data = {'key': 'value'}

response = requests.post(url, data=data)

print(response.text)


About this issue

  • State: open
  • Created a year ago
  • Reactions: 16
  • Comments: 23 (3 by maintainers)

Most upvoted comments

I hope it’s not too silly an idea, and looking at the code I’m not sure where such an intervention would take place, but: How about, instead of raising the error

        raise ValueError(f"Could not parse LLM output: `{llm_output}`")

, sending the answer given by the LLM back to it, complaining that it does not have the right format and asking it to comply?
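
A minimal sketch of that retry idea, assuming a plain `llm(prompt) -> str` callable and a `parse(text)` function that raises ValueError on malformed output (both names are illustrative, not langchain APIs). Later langchain releases added similar behaviour through the AgentExecutor handle_parsing_errors option and OutputFixingParser.

def plan_with_retry(llm, parse, prompt, max_retries=2):
    # Ask once, then re-ask with a corrective message if parsing fails.
    text = llm(prompt)
    for _ in range(max_retries):
        try:
            return parse(text)
        except ValueError:
            # Feed the bad reply back, together with a reminder of the format.
            correction = (
                f"{prompt}\n\nYour previous reply was:\n{text}\n\n"
                "It did not follow the required format "
                "(Thought / Action / Action Input / Final Answer). "
                "Please answer again using exactly that format."
            )
            text = llm(correction)
    return parse(text)  # let the error propagate if it still fails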

Sometimes the LLM ignores the format instructions, which breaks the entire thing… for example, the prompt clearly says the last line should be "Final Answer: ", but if the reply comes back without that you are in trouble.

I hope the new chat messages format will eliminate the need to parse the reply text

It is possible that this is caused by the nature of the current implementation, which puts all of the prompts into the user role in ChatGPT. I ran into the same issue when using the SQL agent. To debug it, I took the text that is being sent to the LLM and manually broke it down into ChatGPT message style: to the system role I added the agent definition, followed by the user role with the "Begin!" and question part, followed by the assistant role with the "Thought" part. This produced proper output that could be parsed.

It is plausible to assume that if/when the code is updated to support the chat format with system/user/assistant roles in these agents, it will work again; until then, we can still use the (really good) davinci model.

Thanks team!

can you show the code for that?
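
For anyone wondering the same thing, here is a minimal sketch of the manual split described above, assuming the pre-1.0 `openai` Python package; the prompt strings are placeholders, not the actual agent prompt.

import openai

# Pieces of the original single-prompt text, split across chat roles.
agent_definition = "<agent and tool definitions from the original prompt>"
question_part = "Begin!\n\nQuestion: <the user's question>"
thought_part = "Thought:"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": agent_definition},
        {"role": "user", "content": question_part},
        {"role": "assistant", "content": thought_part},
    ],
)
print(response["choices"][0]["message"]["content"])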

Super hacky, but I appended " The response should include the prefix 'AI: <response>'." and that resolved some of my prompts that were breaking. 🤷

🤖

Based on the traceback you’ve provided, it seems like the error is originating from the _extract_tool_and_input method in the ConversationalAgent class. This method is responsible for parsing the output of the language model and extracting the tool and input for the next step in the chain.

The error message “Could not parse LLM output” suggests that the method was unable to parse the output of the language model. This could be due to the structure or format of the output not being what the method expects.

Here is the relevant code:

class ConversationalAgent(Agent):
    ...
    def _extract_tool_and_input(self, llm_output: str) -> Optional[Tuple[str, str]]:
        ...
        raise ValueError(f"Could not parse LLM output: `{llm_output}`")

In this method, if the parsing fails for any reason, it raises a ValueError with the message “Could not parse LLM output”.

To resolve this issue, you might need to check the output of your language model and ensure it’s in a format that the _extract_tool_and_input method can parse. If it’s not, you might need to modify the output of your language model or override the _extract_tool_and_input method in your own subclass of ConversationalAgent to handle your specific output format.
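
For example, here is a hedged sketch of such a subclass, assuming the old-style _extract_tool_and_input hook shown above (newer releases move this logic into separate output-parser classes): when no Action/Action Input pair can be found, it returns the raw text as the AI's answer instead of raising.

import re
from typing import Optional, Tuple

from langchain.agents.conversational.base import ConversationalAgent

class LenientConversationalAgent(ConversationalAgent):
    def _extract_tool_and_input(self, llm_output: str) -> Optional[Tuple[str, str]]:
        # Reply already in the "AI: ..." final-answer form.
        if f"{self.ai_prefix}:" in llm_output:
            return self.ai_prefix, llm_output.split(f"{self.ai_prefix}:")[-1].strip()
        # Reply in the "Action: ... / Action Input: ..." tool-call form.
        match = re.search(r"Action: (.*?)[\n]*Action Input: (.*)", llm_output)
        if match:
            return match.group(1).strip(), match.group(2).strip(" ").strip('"')
        # Neither form matched: fall back to treating the whole reply as the
        # final answer rather than raising ValueError.
        return self.ai_prefix, llm_output.strip()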

I hope this helps! If you have any more questions or need further clarification, feel free to ask.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

This can happen if the model does not follow the given instructions. In most of the cases I have seen, it would probably be better to simply return the output instead of throwing a ValueError when the output cannot be parsed according to the instructions. I will test this strategy further and maybe make a PR.
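
A quick way to apply that strategy from the outside, without patching langchain, is to catch the ValueError and recover the raw reply from the error message. This is only a sketch: the string surgery assumes the exact "Could not parse LLM output: `...`" message seen in the traceback above, and `agent` is any already-constructed agent executor.

def run_agent_leniently(agent, question: str) -> str:
    try:
        return agent.run(question)
    except ValueError as e:
        msg = str(e)
        prefix = "Could not parse LLM output: `"
        if not msg.startswith(prefix):
            raise
        # Return whatever the model said, stripped of the error wrapping.
        return msg[len(prefix):].rstrip("`")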


This format_instructions prompt helped a bit in getting the correct response format; open to suggestions for improvement:

format_instructions = """You must use the following format for all responses or your response will be considered incorrect:

Question: the input question you must answer
Thought: you should always think about what to do based on the tool capabilities available to you
Action: the action to take, should only be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat 5 times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
If you can't find the answer, say 'I am unable to find the answer.'"""
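
To plug this into an agent, something like the following should work on langchain versions from around the time of this issue, where initialize_agent forwards agent_kwargs to the agent's create_prompt (a sketch; the math tool and question are just for illustration, and parameter names may differ in newer releases).

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    # format_instructions is the string defined above; it is inserted into
    # the agent prompt in place of the default format block.
    agent_kwargs={"format_instructions": format_instructions},
    verbose=True,
)
agent.run("What is 7 raised to the power of 0.5?")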