langchain: `initialize_agent` does not work with `return_intermediate_steps=True`

E.g. running

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True, return_intermediate_steps=True)
agent.run("What is 2 raised to the 0.43 power?")

gives the error

    203 """Run the chain as text in, text out or multiple variables, text out."""
    204 if len(self.output_keys) != 1:
--> 205     raise ValueError(
    206         f"`run` not supported when there is not exactly "
    207         f"one output key. Got {self.output_keys}."
    208     )
    210 if args and not kwargs:
    211     if len(args) != 1:

ValueError: `run` not supported when there is not exactly one output key. Got ['output', 'intermediate_steps'].

Is this supposed to be called differently or how else can the intermediate outputs (“Observations”) be retrieved?
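For background, `Chain.run` is a convenience method that only works when a chain has exactly one output key; with `return_intermediate_steps=True` the executor exposes two keys, so `run` refuses to pick one. A toy sketch of that behavior (not the actual langchain source) — calling the object directly returns the full dict, while `run` raises:

```python
# Toy sketch (NOT the actual langchain source) of why `run` fails:
# `run` is a convenience for chains with exactly one output key, while
# calling the chain directly returns the full output dict.
class ToyAgentExecutor:
    output_keys = ["output", "intermediate_steps"]

    def __call__(self, inputs):
        # Direct call: returns every output key, including the steps.
        return {
            "output": "1.347",
            "intermediate_steps": [("Calculator: 2**0.43", "1.3470...")],
        }

    def run(self, text):
        if len(self.output_keys) != 1:
            raise ValueError(
                f"`run` not supported when there is not exactly "
                f"one output key. Got {self.output_keys}."
            )
        return self({"input": text})["output"]


agent = ToyAgentExecutor()
result = agent({"input": "What is 2 raised to the 0.43 power?"})
# result["intermediate_steps"] holds the (action, observation) pairs;
# agent.run(...) raises ValueError because there are two output keys.
```

So the answer to the question is to call the agent with `agent(...)` instead of `agent.run(...)`.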

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 6
  • Comments: 18 (1 by maintainers)

Most upvoted comments

Another evidence:

from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory



llm = OpenAI(
    temperature=0,
    model_name="text-davinci-002",
    openai_api_key="sk-...",  # key redacted
)
tools = []

agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True,
    return_intermediate_steps=True,
    memory=ConversationBufferMemory(memory_key="chat_history"),
)

response = agent(
    {
        "input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
    }
)

This raises the same error. The problem seems to be the memory.

Define the input and output keys in your memory when you initialize the agent like this:

agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True,
    return_intermediate_steps=True,
    memory=ConversationBufferMemory(memory_key="chat_history", input_key='input', output_key="output")
)

Call the agent directly on the input like this:

agent("What is 2 raised to the 0.43 power?")

This will return a dict with keys “input”, “output”, and “intermediate_steps”.
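To make the shape of that dict concrete, here is a small sketch of iterating over the result. In real langchain the entries of `intermediate_steps` are (AgentAction, observation) pairs; plain tuples and hard-coded values stand in for them here:

```python
# Sketch of reading the dict returned by calling the agent directly.
# The values below are illustrative stand-ins, not real agent output.
response = {
    "input": "What is 2 raised to the 0.43 power?",
    "output": "1.347",
    "intermediate_steps": [
        ("Calculator: 2**0.43", "Answer: 1.3470221827324502"),
    ],
}

for action, observation in response["intermediate_steps"]:
    print("Action:     ", action)
    print("Observation:", observation)
```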

I just tried running agent("test"), agent(input='test'), agent(dict(input='test')) and all of them raise errors.

agent("test") and agent(dict(input='test')) raise:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "~/Library/Caches/pypoetry/virtualenvs/langflow-zotWOIqD-py3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 118, in __call__
    return self.prep_outputs(inputs, outputs, return_only_outputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/Library/Caches/pypoetry/virtualenvs/langflow-zotWOIqD-py3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 170, in prep_outputs
    self.memory.save_context(inputs, outputs)
  File "~/Library/Caches/pypoetry/virtualenvs/langflow-zotWOIqD-py3.11/lib/python3.11/site-packages/langchain/memory/chat_memory.py", line 28, in save_context
    raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['output', 'intermediate_steps'])

And agent(input='test') raises:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
TypeError: Chain.__call__() got an unexpected keyword argument 'input'

I faced the same issue while using the `chat-conversational-react-description` agent. I tried overriding the ConversationBufferMemory as suggested in #3091.

Solution that worked for me: my langchain setup uses ConversationBufferMemory with memory_key="chat_history".

  • Initializing the agent: initialize_agent returns an AgentExecutor, so make sure return_intermediate_steps=True is set on the executor:
    agent = initialize_agent(
        agent='chat-conversational-react-description',
        tools=tools,
        llm=ChatOpenAI(temperature=0.0),
        memory=memory,
        return_intermediate_steps=True,  # make sure you set this to True
        verbose=True
    )
  • Memory setup: I found that setting input_key and output_key helps to get a dictionary output that includes intermediate_steps. Also make sure that return_messages=True:

memory = ConversationBufferMemory(memory_key="chat_history", input_key="input", output_key="output",return_messages=True)

  • Executing the agent: since we set input_key to "input" in the previous step, call the agent with a dict of that shape:
response = agent({"input": query})
print(response['intermediate_steps'])

Hope this helps!

I have the same issue with create_pandas_dataframe_agent (which unfortunately cannot be solved with your initialize_agent call, @wct432).

I tried replacing the AgentExecutor.from_agent_and_tools call in create_pandas_dataframe_agent with initialize_agent, but I don’t know how to pass the prompt into initialize_agent.

cc @hwchase17

Is there any way to make this work with SQLDatabaseChain.from_llm()? Setting return_intermediate_steps=True on the chain raises the error ERROR:root:'run' not supported when there is not exactly one output key. Got ['result', 'intermediate_steps']. The suggested workarounds in this thread still don’t work 😦

sql_db_chain = SQLDatabaseChain.from_llm(
    llm,
    db,
    prompt=few_shot_prompt,
    use_query_checker=False, 
    verbose=True,
    return_intermediate_steps=True,
)

sql_tool = Tool(
    name='SQL tool',
    func=sql_db_chain.run,
    description="..."
)

tools = load_tools(
    ["llm-math"],
    llm=llm
)
tools.append(sql_tool)

conversational_agent = initialize_agent(
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, 
    tools=tools, 
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory(memory_key="chat_history", input_key='input', output_key="output", return_messages=True),
)
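One possible workaround for the tool case (a sketch, not tested against SQLDatabaseChain itself): since Tool(func=sql_db_chain.run) hits the same single-output restriction, wrap the chain in a function that calls it directly, returns only the "result" string to the agent, and stashes the intermediate steps on the side. FakeSQLChain below is a stand-in; with the real chain you would pass sql_db_chain in its place:

```python
# Sketch of wrapping a multi-output chain for use as a Tool. The tool
# must return a single string, so the wrapper strips the extra key and
# records the intermediate steps for later inspection.
captured_steps = []

class FakeSQLChain:
    # Stand-in for SQLDatabaseChain: its answer key is "result", and
    # with return_intermediate_steps=True it also returns the steps.
    def __call__(self, inputs):
        return {
            "result": "There are 8 employees.",
            "intermediate_steps": ["SELECT COUNT(*) FROM employees;"],
        }

def make_tool_func(chain):
    # The returned function is suitable for Tool(func=...): it calls the
    # chain directly (avoiding .run) and returns only the final answer.
    def run_chain(query: str) -> str:
        out = chain({"query": query})
        captured_steps.append(out["intermediate_steps"])
        return out["result"]
    return run_chain

tool_func = make_tool_func(FakeSQLChain())
answer = tool_func("How many employees are there?")
```

With the real chain, `Tool(name='SQL tool', func=make_tool_func(sql_db_chain), ...)` would replace `func=sql_db_chain.run` in the snippet above.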

Hey @ogabrielluiz, you better change that API key 😉

I tested it too and it works. Thanks, @wct432!

Thanks @wct432! Is this in the docs? I could not find a section mentioning that approach.