llama_index: [Bug]: KnowledgeGraphQueryEngine fails with AttributeError
Bug Description
Both the `docs/examples/query_engine/knowledge_graph_query_engine.ipynb` and `docs/examples/query_engine/knowledge_graph_rag_query_engine.ipynb` examples are failing with the following error on `KnowledgeGraphQueryEngine().query()`: `AttributeError: 'NoneType' object has no attribute 'kwargs'`.
Version
llama-index==0.10.7
Steps to Reproduce
First occurrence:
- Start running the notebook `docs/examples/query_engine/knowledge_graph_query_engine.ipynb`. You will experience
`AttributeError: 'NoneType' object has no attribute 'kwargs'` when running `KnowledgeGraphQueryEngine().query()` with `llama-index==0.10.7`.
Second occurrence:
- Start running the notebook `docs/examples/query_engine/knowledge_graph_rag_query_engine.ipynb`. You will experience
`WARNING:llama_index.core.indices.knowledge_graph.retrievers:Error in retrieving from nl2graphquery: 'NoneType' object has no attribute 'kwargs'` when running `query_engine_with_nl2graphquery.query()` with `llama-index==0.10.7`.
Relevant Logs/Tracebacks
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[28], line 1
----> 1 response = query_engine.query(
2 "Tell me about Peter Quill?",
3 )
4 display(Markdown(f"<b>{response}</b>"))
File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/base/base_query_engine.py:40, in BaseQueryEngine.query(self, str_or_query_bundle)
38 if isinstance(str_or_query_bundle, str):
39 str_or_query_bundle = QueryBundle(str_or_query_bundle)
---> 40 return self._query(str_or_query_bundle)
File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/query_engine/knowledge_graph_query_engine.py:199, in KnowledgeGraphQueryEngine._query(self, query_bundle)
195 """Query the graph store."""
196 with self.callback_manager.event(
197 CBEventType.QUERY, payload={EventPayload.QUERY_STR: query_bundle.query_str}
198 ) as query_event:
--> 199 nodes: List[NodeWithScore] = self._retrieve(query_bundle)
201 response = self._response_synthesizer.synthesize(
202 query=query_bundle,
203 nodes=nodes,
204 )
206 if self._verbose:
File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/query_engine/knowledge_graph_query_engine.py:154, in KnowledgeGraphQueryEngine._retrieve(self, query_bundle)
152 def _retrieve(self, query_bundle: QueryBundle) -> List[NodeWithScore]:
153 """Get nodes for response."""
--> 154 graph_store_query = self.generate_query(query_bundle.query_str)
155 if self._verbose:
156 print_text(f"Graph Store Query:\n{graph_store_query}\n", color="yellow")
File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/query_engine/knowledge_graph_query_engine.py:132, in KnowledgeGraphQueryEngine.generate_query(self, query_str)
129 """Generate a Graph Store Query from a query bundle."""
130 # Get the query engine query string
--> 132 graph_store_query: str = self._llm.predict(
133 self._graph_query_synthesis_prompt,
134 query_str=query_str,
135 schema=self._graph_schema,
136 )
138 return graph_store_query
File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/llms/llm.py:249, in LLM.predict(self, prompt, **prompt_args)
243 def predict(
244 self,
245 prompt: BasePromptTemplate,
246 **prompt_args: Any,
247 ) -> str:
248 """Predict."""
--> 249 self._log_template_data(prompt, **prompt_args)
251 if self.metadata.is_chat_model:
252 messages = self._get_messages(prompt, **prompt_args)
File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/llms/llm.py:170, in LLM._log_template_data(self, prompt, **prompt_args)
165 def _log_template_data(
166 self, prompt: BasePromptTemplate, **prompt_args: Any
167 ) -> None:
168 template_vars = {
169 k: v
--> 170 for k, v in ChainMap(prompt.kwargs, prompt_args).items()
171 if k in prompt.template_vars
172 }
173 with self.callback_manager.event(
174 CBEventType.TEMPLATING,
175 payload={
(...)
180 },
181 ):
182 pass
AttributeError: 'NoneType' object has no attribute 'kwargs'
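The failing line in the traceback can be reproduced with the standard library alone. This is a minimal sketch of the pattern inside `LLM._log_template_data`, assuming the engine ends up holding `None` where a `BasePromptTemplate` is expected (the `prompt_args` values are illustrative):

```python
from collections import ChainMap

# Stand-in for the state the engine reaches in llama-index 0.10.7:
# prompt should be a BasePromptTemplate, but is None.
prompt = None
prompt_args = {"query_str": "Tell me about Peter Quill?", "schema": "..."}

try:
    # Mirrors the dict comprehension in LLM._log_template_data,
    # which evaluates prompt.kwargs unconditionally.
    template_vars = dict(ChainMap(prompt.kwargs, prompt_args))
except AttributeError as exc:
    error_message = str(exc)

print(error_message)  # 'NoneType' object has no attribute 'kwargs'
```

This shows the `AttributeError` is raised as soon as `prompt.kwargs` is evaluated, before any LLM call is made, so the root cause is an unset prompt rather than anything in the model itself.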
About this issue
- Original URL
- State: open
- Created 4 months ago
- Reactions: 4
- Comments: 16
🤖
Hey @eercanayar, great to see you back! Hope you’ve been well.
Thank you for reporting this issue. It seems like the error originates from the `LLM.predict()` method, specifically when it tries to log the template data. The error message `AttributeError: 'NoneType' object has no attribute 'kwargs'` suggests that the `prompt` object passed to the `predict()` method is `None`.

The `predict()` method is part of the `LLMPredictor` class and is used to predict the answer to a query. It takes a `prompt` of type `BasePromptTemplate` and an optional `output_cls` of type `BaseModel`, along with other arguments. The method first logs the template data and then generates a response depending on whether the LLM is a chat model.

The `prompt` object is expected to have a `kwargs` attribute, which is used in the `_log_template_data()` method. If the `prompt` object is `None`, it has no `kwargs` attribute, leading to the `AttributeError` you're seeing.

The `KnowledgeGraphQueryEngine._query()` method, which is where `LLM.predict()` is called, queries the graph store: it retrieves nodes and synthesizes a response from them. The `prompt` object is likely generated in the `generate_query()` method of the `KnowledgeGraphQueryEngine` class. Given this, there are a few things that could be causing the issue:
- The `prompt` object is not being correctly generated in the `generate_query()` method.
- The `prompt` object is being overwritten or set to `None` somewhere before `predict()` is called.
- Something in the `LLM.predict()` call path is causing the `prompt` object to be `None`.

To further investigate, you could add debug logs in the `generate_query()` method and before `predict()` is called to check the value of the `prompt` object. This could help identify where the `prompt` is being set to `None`. I hope this helps in identifying the issue. If you have any further questions or need more assistance, feel free to ask.
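The suggested debug check can be sketched as a pre-flight inspection before calling `.query()`. The `_FakeEngine` stub below is hypothetical so the snippet runs without llama-index installed; the private attribute name `_graph_query_synthesis_prompt` is taken from the traceback above and may differ between versions:

```python
class _FakeEngine:
    """Hypothetical stand-in for KnowledgeGraphQueryEngine.

    In the failing notebooks this attribute ends up as None.
    """
    _graph_query_synthesis_prompt = None

query_engine = _FakeEngine()

# If the prompt is None, .query() will fail later inside
# LLM._log_template_data with the AttributeError shown above.
prompt = getattr(query_engine, "_graph_query_synthesis_prompt", None)
status = "prompt is None" if prompt is None else f"prompt is set: {type(prompt).__name__}"
print(status)
```

Running the same `getattr` check against the real engine instance from the notebook (instead of the stub) would confirm whether the prompt was ever resolved.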
About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
If you can't work with the legacy imports, you can try to define your own
`graph_query_synthesis_prompt`, based on the legacy prompts defined in the legacy knowledge graph package (`llama_index\legacy\query_engine\knowledge_graph_query_engine.py`): `DEFAULT_NEBULAGRAPH_NL2CYPHER_PROMPT` or `DEFAULT_NEO4J_NL2CYPHER_PROMPT`. If you're not using OpenAI, you'd definitely need to override it to suit your LLM. I'm trying to make it work with Ollama & Mistral right now.
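As a sketch of that workaround: the template text below is illustrative, modeled loosely on the legacy NL2Cypher prompts rather than copied from them, and the variable names `{schema}` and `{query_str}` match what `generate_query()` passes into `predict()` in the traceback. The commented-out wiring shows where it would plug in, under the assumption that `KnowledgeGraphQueryEngine` accepts a `graph_query_synthesis_prompt` argument as this comment suggests:

```python
# Illustrative NL-to-Cypher template; not the verbatim legacy prompt.
NL2CYPHER_TMPL = (
    "Task: generate a Cypher statement to query a graph database.\n"
    "Use only the node labels and relationship types in this schema:\n"
    "{schema}\n"
    "Question: {query_str}\n"
    "Cypher query:"
)

# In llama-index you would wrap the text in a PromptTemplate and pass it
# to the engine, roughly like this (names assumed from the notebooks):
#   from llama_index.core import PromptTemplate
#   query_engine = KnowledgeGraphQueryEngine(
#       storage_context=storage_context,
#       graph_query_synthesis_prompt=PromptTemplate(NL2CYPHER_TMPL),
#   )

# Plain str.format stands in here to show the variables resolve:
rendered = NL2CYPHER_TMPL.format(
    schema="(:Person)-[:KNOWS]->(:Person)",
    query_str="Who knows Alice?",
)
print(rendered)
```

Supplying a prompt this way means the engine never holds `None` for `graph_query_synthesis_prompt`, which is exactly the condition that triggers the `AttributeError` above.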