langchainjs: ConversationalRetrievalQAChain factory method only supports QA stuff chain, fails on long inputs
Here is a snippet of my code:
// Import paths follow the langchain v0.0.x layout current when this issue was filed
import { PDFLoader } from 'langchain/document_loaders/fs/pdf'
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter'
import { PineconeStore } from 'langchain/vectorstores/pinecone'
import { OpenAIEmbeddings } from 'langchain/embeddings/openai'
import { ChatOpenAI } from 'langchain/chat_models/openai'
import { ConversationalRetrievalQAChain } from 'langchain/chains'

// Load the PDF and split it into 1000-character chunks
const loader = new PDFLoader('./基于贝叶斯优化的无标签网络剪枝算法.pdf')
const pdf = await loader.load()
const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 0
})
const docs = await textSplitter.splitDocuments(pdf)

// `pineconeIndex` is assumed to be an already-initialized Pinecone index;
// its setup was not shown in the original snippet
const vectorStore = await PineconeStore.fromDocuments(
  docs,
  new OpenAIEmbeddings({
    openAIApiKey: process.env.OPEN_API_KEY
  }),
  {
    pineconeIndex
  }
)

// Alternatively, if the index is already populated, connect to it directly
// instead of re-embedding the documents:
// const vectorStore = await PineconeStore.fromExistingIndex(
//   new OpenAIEmbeddings({ openAIApiKey: process.env.OPEN_API_KEY }),
//   { pineconeIndex }
// )
/* Create the chain */
// `model` was not defined in the original snippet; assuming an OpenAI chat model here
const model = new ChatOpenAI({ openAIApiKey: process.env.OPEN_API_KEY })
const chain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever(),
  { verbose: true }
)
// "For a Gaussian process without any optimization, roughly what is the
// time complexity with n sample points?"
const question =
  '对于没有经过任何优化的高斯过程,n 个样本点时间复杂度大概是多少?'
try {
  const res = await chain.call({ question, chat_history: [] })
  console.log(res.text)
} catch (error) {
  if (error.response) {
    console.log(error.response.status)
    console.log(error.response.data)
  } else {
    // `error.data` is not guaranteed to exist; log the message directly
    console.log(error.message)
  }
}
About this issue
- State: closed
- Created a year ago
- Comments: 16
It’s okay, I’ve got it covered. I really appreciate your help!
Awesome! I’m going to leave this open though because this should be configurable.
The error message you’re seeing means that your prompt contains too many tokens, and it looks like that’s happening because the `ConversationalRetrievalQAChain.fromLLM` method only lets you use a specific type of chain, with no option to change it: https://github.com/hwchase17/langchainjs/blob/main/langchain/src/chains/conversational_retrieval_chain.ts#L156
For now, can you try creating the conversation chain along these lines? Basically, use the constructor manually with a map reduce chain instead of the default stuff chain, as in this part of the source: https://github.com/hwchase17/langchainjs/blob/main/langchain/src/chains/conversational_retrieval_chain.ts#L62
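Here’s a minimal sketch of that workaround. It assumes the constructor fields shown in the file linked above (`retriever`, `combineDocumentsChain`, `questionGeneratorChain`) plus the `loadQAMapReduceChain` helper from `langchain/chains`; exact names can shift between langchainjs versions, and the condensing prompt is an illustrative placeholder, so adjust to your setup:

import { ConversationalRetrievalQAChain, LLMChain, loadQAMapReduceChain } from 'langchain/chains'
import { PromptTemplate } from 'langchain/prompts'

// Placeholder prompt that condenses the chat history + follow-up into a standalone question
const questionGeneratorTemplate = `Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:`

const chain = new ConversationalRetrievalQAChain({
  retriever: vectorStore.asRetriever(),
  // Map reduce answers over each retrieved document separately, then combines
  // the partial answers, so no single prompt has to hold every chunk at once
  combineDocumentsChain: loadQAMapReduceChain(model),
  questionGeneratorChain: new LLMChain({
    llm: model,
    prompt: PromptTemplate.fromTemplate(questionGeneratorTemplate)
  })
})

// Called the same way as the chain built with fromLLM
const res = await chain.call({ question, chat_history: [] })
console.log(res.text)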