langchainjs: DynamicTool is called but the input is not passed in - undefined

I created a Dynamic Tool, it is getting called as expected, but it isn’t receiving the input.

I tried to follow the example here, but it doesn’t show how to pass input: https://js.langchain.com/docs/modules/agents/tools/

Here is the full code to reproduce the issue:

import * as dotenv from 'dotenv';
dotenv.config();

import { ChatOpenAI } from 'langchain/chat_models/openai';
import { initializeAgentExecutorWithOptions } from 'langchain/agents';
import { DynamicTool } from 'langchain/tools';
import { inspect } from 'util';

let iteration = 0;

async function jobSearch(query) {
    console.log('jobSearch input:', query);
    return 'Search results: job1, job2, job3';
}

const callbacks = [
    {
        handleLLMStart: async (llm, prompts) => {
            iteration++;
            console.log(`${iteration} LLMStart, llm`, inspect(llm, { depth: null }));
            console.log(`${iteration} LLMStart, prompts`, inspect(prompts, { depth: null }));
        },
        handleLLMEnd: async (output) => {
            console.log(`${iteration} LLMEnd, output`, inspect(output, { depth: null }));
        },
        handleLLMError: async (err) => {
            console.error(`${iteration} LLMError, err`, inspect(err, { depth: null }));
        },
    },
];

export const run = async () => {
    const model = new ChatOpenAI({ temperature: 0, verbose: false, callbacks, modelName: 'gpt-4' });
    const tools = [
        new DynamicTool({
            name: 'jobSearch',
            description: `Useful for finding jobs. The input to this tool is a string containing location, jobTitle, salary and the output will be a list of jobs that match the query.`,
            func: async (query) => {
                return await jobSearch(query);
            },
        }),
    ];

    const executor = await initializeAgentExecutorWithOptions(tools, model, {
        agentType: 'chat-zero-shot-react-description',
    });

    const input = `Search me test engineer jobs in London`;

    console.log(`Executing with input "${input}"...`);

    const result = await executor.call({ input });

    console.log(`Final answer after ${iteration} iterations:\n`);
    console.log(result.output);
};

run();

Using langchain version 0.0.68 with node v18.16.0.

Example output:

Executing with input "Search me test engineer jobs in London"...
1 LLMStart, llm { name: 'openai' }
1 LLMStart, prompts [
  'System: Answer the following questions as best you can. You have access to the following tools:\n' +
    '\n' +
    'jobSearch: Useful for finding jobs. The input to this tool is a string containing location, jobTitle, salary and the the output will be a list of jobs that match the query.\n' +
    '\n' +
    'The way you use the tools is by specifying a json blob, denoted below by $JSON_BLOB\n' +
    'Specifically, this $JSON_BLOB should have a "action" key (with the name of the tool to use) and a "action_input" key (with the input to the tool going here). \n' +
    'The $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n' +
    '\n' +
    '```\n' +
    '{\n' +
    '  "action": "calculator",\n' +
    '  "action_input": "1 + 2"\n' +
    '}\n' +
    '```\n' +
    '\n' +
    'ALWAYS use the following format:\n' +
    '\n' +
    'Question: the input question you must answer\n' +
    'Thought: you should always think about what to do\n' +
    'Action: \n' +
    '```\n' +
    '$JSON_BLOB\n' +
    '```\n' +
    'Observation: the result of the action\n' +
    '... (this Thought/Action/Observation can repeat N times)\n' +
    'Thought: I now know the final answer\n' +
    'Final Answer: the final answer to the original input question\n' +
    '\n' +
    'Begin! Reminder to always use the exact characters `Final Answer` when responding.\n' +
    'Human: Search me test engineer jobs in London\n' +
    '\n'
]
1 LLMEnd, output {
  generations: [
    [
      {
        text: 'Thought: I need to use the jobSearch tool to find test engineer jobs in London.\n' +
          '\n' +
          'Action: \n' +
          '```\n' +
          '{\n' +
          '  "action": "jobSearch",\n' +
          '  "action_input": {\n' +
          '    "location": "London",\n' +
          '    "jobTitle": "test engineer",\n' +
          '    "salary": ""\n' +
          '  }\n' +
          '}\n' +
          '```\n',
        message: AIChatMessage {
          text: 'Thought: I need to use the jobSearch tool to find test engineer jobs in London.\n' +
            '\n' +
            'Action: \n' +
            '```\n' +
            '{\n' +
            '  "action": "jobSearch",\n' +
            '  "action_input": {\n' +
            '    "location": "London",\n' +
            '    "jobTitle": "test engineer",\n' +
            '    "salary": ""\n' +
            '  }\n' +
            '}\n' +
            '```\n',
          name: undefined
        }
      }
    ]
  ],
  llmOutput: {
    tokenUsage: { completionTokens: 64, promptTokens: 295, totalTokens: 359 }
  }
}
jobSearch input: undefined
2 LLMStart, llm { name: 'openai' }

Also see related issue: https://github.com/hwchase17/langchainjs/issues/1114

About this issue

  • State: closed
  • Created a year ago
  • Comments: 20

Most upvoted comments

Hi! I had the same problem. After many attempts I found a possible solution: when you initialize the agent with initializeAgentExecutorWithOptions, you must pass the returnIntermediateSteps attribute as true in the options.

The code would be like this:

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: 'chat-zero-shot-react-description',
  returnIntermediateSteps: true,
});

Hope it works.

Hi, @Qarj,

I’m helping the langchainjs team manage their backlog and am marking this issue as stale.

From what I understand, the “jobSearch input: undefined” issue is related to the Dynamic Tool not receiving input as expected. There have been multiple reports of similar issues with different tools, and the community has suggested workarounds such as changing the tool description or initializing the agent with specific options. The underlying cause appears to be the framework expecting a string input but receiving an object instead. The issue remains unresolved, and the community is discussing potential workarounds and solutions.

Could you please confirm if this issue is still relevant to the latest version of the langchainjs repository? If it is, please let the langchainjs team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you!

The workaround suggested by @Qarj worked for me

const tools = [
    new DynamicTool({
        name: "blood-pressure-readings",
        description: `Useful tool to get the last N number of blood pressure readings for a patient as well as their name and their care manager's name. The input for this tool are double underscore separated values containing "patientId" and "numEntries" in that order and the output will be a json string.`,
        func: async (toolInput) => {
            if (toolInput === undefined) {
                return "There is an input error. Please use the following schema: {patient_id: string, num_readings: number}";
            }
            toolInput = toolInput.split('__');
            const endpoint = `my endpoint with parameters provided in the toolInput array`;
            const response = await fetch(endpoint);
            const json = await response.json();
            return JSON.stringify(json);
        }
    })
];
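That delimiter-based parsing can be sketched as a standalone helper (this is only an illustration; the field names patientId and numEntries are taken from the description above, and the validation runs before the split, since calling split on undefined would throw):

```javascript
// Parse a double-underscore separated tool input such as "patient42__5".
// Returns null when the input is missing or malformed, so the tool can
// report a usable error back to the agent instead of crashing.
function parseToolInput(toolInput) {
    if (typeof toolInput !== 'string' || toolInput.length === 0) {
        return null;
    }
    const [patientId, numEntries] = toolInput.split('__');
    if (!patientId || !numEntries) {
        return null;
    }
    return { patientId, numEntries: Number(numEntries) };
}

console.log(parseToolInput('patient42__5')); // { patientId: 'patient42', numEntries: 5 }
console.log(parseToolInput(undefined));      // null
```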

A workaround for this problem is to change the tool description to something like:

     description: `Useful for finding jobs. The input to this tool are double underscore separated values containing location, jobTitle, salary in that order and the the output will be a list of jobs that match the query.`,

That way it won’t try to create a JSON object as the tool input, but the point of raising this issue is that the framework should be able to accept a JSON object as tool input. The workaround for that, at this stage, is to write your own parser.
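Until that is supported, such a parser can be sketched as a small normalizer inside the tool’s func. This is only a sketch under the assumption that the agent may hand the tool a JSON blob as a string, a free-text string, or nothing at all; it folds all three cases into a plain object:

```javascript
// Normalize a tool input that may arrive as a JSON string, a plain string,
// or be missing entirely (the undefined case reported in this issue).
function normalizeToolInput(input) {
    if (input && typeof input === 'object') {
        return input; // already an object, pass it through
    }
    if (typeof input === 'string') {
        try {
            const parsed = JSON.parse(input); // agent sent a JSON blob as a string
            return typeof parsed === 'object' && parsed !== null ? parsed : { query: input };
        } catch {
            return { query: input }; // fall back to treating it as free text
        }
    }
    return {}; // undefined/null input from the framework
}

console.log(normalizeToolInput('{"location":"London"}')); // { location: 'London' }
console.log(normalizeToolInput('test engineer'));         // { query: 'test engineer' }
```

A tool func would then call normalizeToolInput first and read named fields off the result, rather than relying on the framework's own handling of the action_input shape.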