llama_index: [Bug]: KnowledgeGraphQueryEngine fails with AttributeError

Bug Description

Both the docs/examples/query_engine/knowledge_graph_query_engine.ipynb and docs/examples/query_engine/knowledge_graph_rag_query_engine.ipynb examples are failing with the following error on KnowledgeGraphQueryEngine().query(): AttributeError: 'NoneType' object has no attribute 'kwargs'.

Version

llama-index==0.10.7

Steps to Reproduce

First occurrence:

  1. Run the notebook docs/examples/query_engine/knowledge_graph_query_engine.ipynb.
  2. With llama-index==0.10.7, KnowledgeGraphQueryEngine().query() raises AttributeError: 'NoneType' object has no attribute 'kwargs'.

Second occurrence:

  1. Run the notebook docs/examples/query_engine/knowledge_graph_rag_query_engine.ipynb.
  2. With llama-index==0.10.7, query_engine_with_nl2graphquery.query() logs WARNING:llama_index.core.indices.knowledge_graph.retrievers:Error in retrieving from nl2graphquery: 'NoneType' object has no attribute 'kwargs'.
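
For reference, here is a minimal reproduction condensed from the first notebook. It assumes a running NebulaGraph instance with the llamaindex space from the example already created and populated, and an OpenAI key configured:

from llama_index.core import StorageContext
from llama_index.core.query_engine import KnowledgeGraphQueryEngine
from llama_index.graph_stores.nebula import NebulaGraphStore

# Graph store settings copied from the notebook.
graph_store = NebulaGraphStore(
    space_name="llamaindex",
    edge_types=["relationship"],
    rel_prop_names=["relationship"],
    tags=["entity"],
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)

# No graph_query_synthesis_prompt is passed, matching the notebook.
query_engine = KnowledgeGraphQueryEngine(
    storage_context=storage_context,
    verbose=True,
)

# Raises: AttributeError: 'NoneType' object has no attribute 'kwargs'
response = query_engine.query("Tell me about Peter Quill?")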

Relevant Logs/Tracebacks

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[28], line 1
----> 1 response = query_engine.query(
      2     "Tell me about Peter Quill?",
      3 )
      4 display(Markdown(f"<b>{response}</b>"))

File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/base/base_query_engine.py:40, in BaseQueryEngine.query(self, str_or_query_bundle)
     38 if isinstance(str_or_query_bundle, str):
     39     str_or_query_bundle = QueryBundle(str_or_query_bundle)
---> 40 return self._query(str_or_query_bundle)

File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/query_engine/knowledge_graph_query_engine.py:199, in KnowledgeGraphQueryEngine._query(self, query_bundle)
    195 """Query the graph store."""
    196 with self.callback_manager.event(
    197     CBEventType.QUERY, payload={EventPayload.QUERY_STR: query_bundle.query_str}
    198 ) as query_event:
--> 199     nodes: List[NodeWithScore] = self._retrieve(query_bundle)
    201     response = self._response_synthesizer.synthesize(
    202         query=query_bundle,
    203         nodes=nodes,
    204     )
    206     if self._verbose:

File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/query_engine/knowledge_graph_query_engine.py:154, in KnowledgeGraphQueryEngine._retrieve(self, query_bundle)
    152 def _retrieve(self, query_bundle: QueryBundle) -> List[NodeWithScore]:
    153     """Get nodes for response."""
--> 154     graph_store_query = self.generate_query(query_bundle.query_str)
    155     if self._verbose:
    156         print_text(f"Graph Store Query:\n{graph_store_query}\n", color="yellow")

File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/query_engine/knowledge_graph_query_engine.py:132, in KnowledgeGraphQueryEngine.generate_query(self, query_str)
    129 """Generate a Graph Store Query from a query bundle."""
    130 # Get the query engine query string
--> 132 graph_store_query: str = self._llm.predict(
    133     self._graph_query_synthesis_prompt,
    134     query_str=query_str,
    135     schema=self._graph_schema,
    136 )
    138 return graph_store_query

File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/llms/llm.py:249, in LLM.predict(self, prompt, **prompt_args)
    243 def predict(
    244     self,
    245     prompt: BasePromptTemplate,
    246     **prompt_args: Any,
    247 ) -> str:
    248     """Predict."""
--> 249     self._log_template_data(prompt, **prompt_args)
    251     if self.metadata.is_chat_model:
    252         messages = self._get_messages(prompt, **prompt_args)

File ~/Experimental/jupyter-ws/.venv/lib/python3.11/site-packages/llama_index/core/llms/llm.py:170, in LLM._log_template_data(self, prompt, **prompt_args)
    165 def _log_template_data(
    166     self, prompt: BasePromptTemplate, **prompt_args: Any
    167 ) -> None:
    168     template_vars = {
    169         k: v
--> 170         for k, v in ChainMap(prompt.kwargs, prompt_args).items()
    171         if k in prompt.template_vars
    172     }
    173     with self.callback_manager.event(
    174         CBEventType.TEMPLATING,
    175         payload={
   (...)
    180         },
    181     ):
    182         pass

AttributeError: 'NoneType' object has no attribute 'kwargs'


Most upvoted comments

🤖

Hey @eercanayar, great to see you back! Hope you’ve been well.

Thank you for reporting this issue. It seems like the error is originating from the LLM.predict() method, specifically when it tries to log the template data. The error message AttributeError: 'NoneType' object has no attribute 'kwargs' suggests that the prompt object passed to the predict() method is None.

The predict() method is part of the LLM class and is used to predict the answer to a query. It takes a prompt of type BasePromptTemplate along with keyword prompt arguments. The method first logs the template data and then generates a response, branching on whether the LLM is a chat model.

The prompt object is expected to have a kwargs attribute, which is used in the _log_template_data() method. If the prompt object is None, it would not have the kwargs attribute, leading to the AttributeError you’re seeing.

The KnowledgeGraphQueryEngine._query() method, from which LLM.predict() is ultimately called, queries the graph store: it retrieves nodes for the response and synthesizes an answer from them. The prompt passed to predict() is the engine's _graph_query_synthesis_prompt attribute, used in the generate_query() method.

Given this, there are a few things that could be causing the issue:

  1. The engine was constructed without an explicit graph_query_synthesis_prompt, and no default is applied, so the attribute stays None.
  2. The prompt object is being overwritten or set to None somewhere before the predict() method is called.
  3. A regression in the 0.10.x KnowledgeGraphQueryEngine dropped the default prompts that the legacy version provided.

To further investigate this issue, you could add some debug logs in the generate_query() method and before the predict() method is called to check the value of the prompt object. This could help identify where the prompt object is being set to None.
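
For example, a quick sanity check (the private attribute name comes from the traceback above; this only confirms the diagnosis, it is not a fix):

# With llama-index==0.10.7 this prints None; that None is then passed
# straight into LLM.predict() and triggers the AttributeError.
print(query_engine._graph_query_synthesis_prompt)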

I hope this helps in identifying the issue. If you have any further questions or need more assistance, feel free to ask.


If you can't work with the legacy imports, you can try defining your own graph_query_synthesis_prompt, based on the prompts in the legacy knowledge graph module (llama_index/legacy/query_engine/knowledge_graph_query_engine.py): DEFAULT_NEBULAGRAPH_NL2CYPHER_PROMPT or DEFAULT_NEO4J_NL2CYPHER_PROMPT.
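
A sketch of that workaround for NebulaGraph (the template body below is abbreviated; paste in the full text of DEFAULT_NEBULAGRAPH_NL2CYPHER_PROMPT from the legacy module, and keep the {schema} and {query_str} variables, which generate_query() fills in per the traceback above):

from llama_index.core import PromptTemplate
from llama_index.core.query_engine import KnowledgeGraphQueryEngine

# Abbreviated stand-in for the legacy NL2Cypher template; replace the
# "..." line with the full legacy prompt text.
graph_query_synthesis_prompt = PromptTemplate(
    "Generate a NebulaGraph query from the natural language question.\n"
    "...\n"
    "Schema:\n"
    "{schema}\n"
    "Question: {query_str}\n"
    "NebulaGraph query:\n"
)

query_engine = KnowledgeGraphQueryEngine(
    storage_context=storage_context,  # same storage context as in the notebook
    graph_query_synthesis_prompt=graph_query_synthesis_prompt,
    verbose=True,
)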

If you're not using OpenAI, you'd definitely need to override it to suit your LLM. I'm trying to make it work with Ollama & Mistral right now.
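
Untested sketch of that setup, assuming the llama-index-llms-ollama package is installed and an Ollama server is running with the mistral model pulled (graph_query_synthesis_prompt is the custom prompt from the previous comment, tuned for your model):

from llama_index.llms.ollama import Ollama

# Point the engine at a local Ollama-served Mistral instead of OpenAI.
llm = Ollama(model="mistral", request_timeout=120.0)

query_engine = KnowledgeGraphQueryEngine(
    storage_context=storage_context,
    llm=llm,
    graph_query_synthesis_prompt=graph_query_synthesis_prompt,
    verbose=True,
)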