langchainjs: The ChatOpenAI message parser does not handle OpenAI sending "content" along with any function calls
I’ve spent a bit of time debugging this: when OpenAI responds with both `content` and a function call, the parser (or the systems that consume the parser’s output) ignores the `content` field.
Example response where the `content` is ignored by LangChain, though the function call works as expected:
```json
[
  {
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "First interaction with the user! Let's make it count. Start with a warm greeting and an interesting question to engage them.",
      "function_call": {
        "name": "send_message",
        "arguments": "{\"message\":\"Hello! It's wonderful to meet you. How would you describe the world in three words?\"}"
      }
    },
    "finish_reason": "function_call"
  }
]
```
This is important for the implementation I’m working on as I use a function (send_message) for having the LLM talk to the user, and their normal response is a thought/workspace for them to plan. With the current behavior I have no way to get that thought from the LLM.
About this issue
- State: closed
- Created 8 months ago
- Comments: 21
@jacoblee93, it seems that we have a more systemic issue here that requires your expertise to figure out how all these components should work together and whether we can account for the original problem.
🤖
Thank you for bringing this to our attention. It seems like the parser is currently designed to handle either `content` or a `function_call`, but not both at the same time. To modify the parser to handle both, you would need to adjust the parsing logic in the `output_parser.ts` file.

Here’s a rough idea of how you could modify the `parse` method to handle both `content` and `function_call`: first call the `super.parse(output)` method to get the initial parsed output, then check whether the message includes a `function_call` or `content` and add them to the parsed output if they exist.

Please note that this is a simplified example and you might need to adjust it to fit your specific needs. Also, remember to update the `ParsedOutput` interface to include the `content` field. This change should allow the parser to handle cases where OpenAI responds with both `content` and a function call. Please let me know if you have any questions or need further clarification.
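The code blocks that originally accompanied this comment did not survive. As a rough, hypothetical sketch of the idea (the `ParsedOutput` shape, class names, and `parse` signature here are illustrative stand-ins, not the actual `output_parser.ts` code):

```typescript
// Illustrative shapes -- not the real langchainjs types.
interface FunctionCall {
  name: string;
  arguments: string;
}

interface ChatMessage {
  role: string;
  content?: string;
  function_call?: FunctionCall;
}

interface ParsedOutput {
  // `content` added so the model's "thought" text is preserved.
  content?: string;
  functionCall?: FunctionCall;
}

class BaseParser {
  parse(message: ChatMessage): ParsedOutput {
    // Stand-in for the existing parsing logic, which keeps only
    // the function call and drops any accompanying `content`.
    return message.function_call ? { functionCall: message.function_call } : {};
  }
}

class BothFieldsParser extends BaseParser {
  parse(message: ChatMessage): ParsedOutput {
    // Get the initial parsed output, then re-attach `content` if present.
    const parsed = super.parse(message);
    if (message.content) {
      parsed.content = message.content;
    }
    return parsed;
  }
}
```

With this shape, a response carrying both fields yields an output where `content` holds the model’s thought and `functionCall` holds the tool invocation, instead of the thought being dropped.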
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Thanks for the help @jacoblee93. I’ll update the thread with what I ended up doing, just for completeness, in case anyone else runs into a similar issue.
🤖
Based on your request, you can indeed implement a wrapper for the OpenAI model in the LangChain JS framework that intercepts responses and splits them into separate messages when both `content` and `function_call` are present in the same message. This can be achieved by modifying the `openAIResponseToChatMessage` function and the `_generate` function in the `ChatOpenAI` class. Here’s how you can do it:

1. Modify the `openAIResponseToChatMessage` function to return an array of messages instead of a single message when both `content` and `function_call` are present.
2. Modify the `_generate` function in the `ChatOpenAI` class to handle the array of messages returned by `openAIResponseToChatMessage`.

This way, when both `content` and `function_call` are present in the same message, they will be split into separate messages, which should allow the `parse` method and the `AgentExecutor` to handle each message correctly. You can find the relevant code in the `langchain/src/chat_models/openai.ts` file in the langchainjs repository.
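The code for the two steps above was not preserved. As a rough sketch of the first step (the function name, pluralized here, and the message shape are simplified stand-ins for the real langchainjs internals):

```typescript
interface FunctionCall {
  name: string;
  arguments: string;
}

interface RawChatMessage {
  role: string;
  content: string | null;
  function_call?: FunctionCall;
}

// Sketch: split a raw API message into separate chat messages when it
// carries both `content` and a `function_call`.
function openAIResponseToChatMessages(message: RawChatMessage): RawChatMessage[] {
  if (message.content && message.function_call) {
    return [
      // The "thought" text as its own assistant message...
      { role: message.role, content: message.content },
      // ...followed by the function call with empty content.
      { role: message.role, content: "", function_call: message.function_call },
    ];
  }
  return [message];
}
```

The `_generate` side of the change would then iterate over this array instead of assuming a single message per choice.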
Please note that this modification should be done carefully to avoid any unintended side effects. Make sure to thoroughly test your application after implementing these changes to ensure everything works as expected.
🤖
Based on the information you’ve provided and the context from the repository, it seems like you’re encountering an issue where the `content` field is being ignored when both `content` and a `function_call` are present in the OpenAI response.

A similar issue was previously encountered and resolved in the LangChain JS repository. The issue was titled "Langchain JS PromptLayerChatOpenAI doesnt recognize message with type == 'function'". The solution involved modifying the `messageTypeToOpenAIRole` and `openAIResponseToChatMessage` functions to handle the "function" message type. This change was implemented in pull request #1948.

However, you’ve mentioned that you’re looking for a solution that works with the agent executor without modifying the internals. The `AgentExecutor` class in the `executor.ts` file handles the output of the `parse` method through its `_call` method. If an `OutputParserException` is thrown during the execution of the agent’s plan, the `AgentExecutor` handles it based on the `handleParsingErrors` property. This could potentially be a place to handle the issue you’re facing.

Unfortunately, without modifying the internals of the LangChain JS framework, it might be challenging to achieve the desired behavior. The framework’s design seems to assume that a message will either contain `content` or a `function_call`, but not both.

If you have control over the OpenAI responses, one workaround could be to ensure that `content` and `function_call` are not included in the same message. Instead, you could split them into two separate messages, each with its own role. This way, the `parse` method and the `AgentExecutor` should be able to handle each message correctly.

I hope this helps! If you have any other questions or need further clarification, feel free to ask.
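The splitting workaround described in the last paragraph can be sketched as a small pre-processing step applied to the message list before it reaches the parser. This is a hypothetical helper, not part of langchainjs:

```typescript
interface FunctionCall {
  name: string;
  arguments: string;
}

interface Msg {
  role: string;
  content: string | null;
  function_call?: FunctionCall;
}

// Hypothetical helper: before messages reach the parser or the
// AgentExecutor, split any message that carries both fields into two
// messages so each downstream consumer sees only one kind at a time.
function splitMixedMessages(messages: Msg[]): Msg[] {
  return messages.flatMap((m) =>
    m.content && m.function_call
      ? [
          { role: m.role, content: m.content },
          { role: m.role, content: null, function_call: m.function_call },
        ]
      : [m]
  );
}
```

Because it runs outside the framework, this keeps the langchainjs internals untouched at the cost of an extra pass over every response.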