langchain: llm-math raising an issue

I’m testing out the tutorial code for Agents:

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")

And so far it generates the result:

> Entering new AgentExecutor chain...
> I need to find the temperature first, then use the calculator to raise it to the .023 power.
> Action: Search
> Action Input: "High temperature in SF yesterday"
> Observation: High: 60.8ºf @3:10 PM Low: 48.2ºf @2:05 AM Approx.
> Thought: I need to convert the temperature to a number
> Action: Calculator
> Action Input: 60.8

But it then raises an error and never calculates 60.8^0.023:

raise ValueError(f"unknown format from LLM: {llm_output}")
ValueError: unknown format from LLM: This is not a math problem and cannot be solved using the numexpr library.

What’s the reason behind this error?

About this issue

  • State: closed
  • Created a year ago
  • Comments: 16

Most upvoted comments

Ran into this issue as well. Resolved by changing the temperature from 0 to 0.9.
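
For reference, a minimal sketch of that change against the tutorial snippet from the question (only the temperature argument differs):

llm = OpenAI(temperature=0.9)  # was temperature=0 in the original snippet
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)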

I gave up on the LangChain math tool because I think it is not easily tunable, which makes agents very difficult to ‘tame’. It would be better to write my own math tools that solve very specific problems and execute them myself. However, that would be dangerous if any of the answers were prompt-injected.

Too early for me to give up on it; I barely understand the details right now. Thanks though.

The problem is that the LLM has returned an expression that the numexpr package does not support. That is an example of an inherent limitation, usually called hallucination. Setting a higher temperature won’t do us any good, since we would just be hoping the model jumps around until it happens to write a correct expression.
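
As a minimal illustration of that limitation (a sketch assuming numexpr is installed; the example expressions are my own, not what the chain actually produced here), numexpr only accepts plain numeric expressions and rejects everything else:

import numexpr

# A plain arithmetic expression is fine.
print(numexpr.evaluate("60.8 ** 0.023"))  # ~1.1

# Unsupported functions (e.g. round()) or natural-language text make evaluate() raise.
for bad in ["round(60.8 ** 0.023, 2)", "This is not a math problem"]:
    try:
        numexpr.evaluate(bad)
    except Exception as exc:
        print(type(exc).__name__, ":", exc)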

Thanks for breaking it down, I get it now. Kind of.

I will try this implementation with a customised tool.

When dealing with math questions, you want to set the temperature close to 0 to prevent hallucination. Instead, clarify in the tool description that the model should only input math expressions:

from langchain.chains import LLMMathChain
from langchain.agents import Tool

llm_math_chain = LLMMathChain(llm=llm, verbose=True)
math_tool = Tool.from_function(
    func=llm_math_chain.run,
    name="Calculator",
    description="Useful for when you need to answer questions about math. This tool is only for math questions and nothing else. Only input math expressions.",
)
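
A hedged sketch of wiring that tool into the agent from the question (the serpapi tool comes from the original snippet; only the calculator is swapped out):

# Keep the SerpAPI search tool from the original snippet, but use the custom
# math_tool defined above instead of the built-in "llm-math" tool.
search_tools = load_tools(["serpapi"], llm=llm)
agent = initialize_agent(
    search_tools + [math_tool],
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("What was the high temperature in SF yesterday in Fahrenheit? "
          "What is that number raised to the .023 power?")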

Regarding the exception TypeError: 'VariableNode' object is not callable: the LLM-math chain uses numexpr to evaluate the expression, and numexpr doesn’t support all operations yet; unfortunately, round() is one among others: https://numexpr.readthedocs.io/en/latest/user_guide.html#supported-functions

Instead, I created a custom tool that evaluates the expression with eval:

from typing import Union

from langchain.tools import BaseTool


class CalculatorTool(BaseTool):
    name = "CalculatorTool"

    description = """
    Useful for when you need to answer questions about math.
    This tool is only for math questions and nothing else.
    Formulate the input as python code.
    """

    def _run(self, question: str):
        # eval() executes arbitrary Python, so only use this with trusted input.
        return eval(question)

    def _arun(self, value: Union[int, float]):
        raise NotImplementedError("This tool does not support async")
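
And a small usage sketch under the same assumptions (the tool instance works like any other LangChain tool):

calculator = CalculatorTool()

# Called directly, the tool simply eval()s the Python expression it is given.
print(calculator.run("round(60.8 ** 0.023, 2)"))  # -> 1.1

# It can then be passed to initialize_agent() in place of the built-in
# "llm-math" tool, e.g. initialize_agent(search_tools + [calculator], llm, ...).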

Hopefully; I will explore more on this. As of now, I was just experimenting (read: trying to mash up various things without knowing wth I’m doing most of the time).


> Ran into this issue as well. Resolved by changing the temperature from 0 to 0.9.

It works