llama_index: [Bug]: cannot import name 'LLM' from 'llama_index.core.llms'

Bug Description

When trying to import via the following, an ImportError is raised:

from llama_index.core.indices import VectorStoreIndex
from llama_index.core.readers import SimpleDirectoryReader
from llama_index.core.storage import StorageContext

Version

0.10.11

Steps to Reproduce

Install llama-index in a new venv with Poetry.

I deleted both .venv and poetry.lock — the error persists.

Relevant Logs/Tracebacks

ImportError                               Traceback (most recent call last)
Cell In[8], line 1
----> 1 from llama_index.core.indices import VectorStoreIndex
      2 from llama_index.core.readers import SimpleDirectoryReader
      3 from llama_index.core.storage import StorageContext

File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/indices/__init__.py:4
      1 """LlamaIndex data structures."""
      3 # indices
----> 4 from llama_index.core.indices.composability.graph import ComposableGraph
      5 from llama_index.core.indices.document_summary import (
      6     DocumentSummaryIndex,
      7     GPTDocumentSummaryIndex,
      8 )
      9 from llama_index.core.indices.document_summary.base import DocumentSummaryIndex

File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/indices/composability/__init__.py:4
      1 """This module contains all classes used for composing graphs over indices."""
----> 4 from llama_index.core.indices.composability.graph import ComposableGraph
      6 __all__ = ["ComposableGraph"]

File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/indices/composability/graph.py:7
      5 from llama_index.core.base.base_query_engine import BaseQueryEngine
      6 from llama_index.core.data_structs.data_structs import IndexStruct
----> 7 from llama_index.core.indices.base import BaseIndex
      8 from llama_index.core.schema import (
      9     IndexNode,
     10     NodeRelationship,
     11     ObjectType,
     12     RelatedNodeInfo,
     13 )
     14 from llama_index.core.service_context import ServiceContext

File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/indices/base.py:8
      5 from typing import Any, Dict, Generic, List, Optional, Sequence, Type, TypeVar
      7 from llama_index.core.base.base_query_engine import BaseQueryEngine
----> 8 from llama_index.core.base.base_retriever import BaseRetriever
      9 from llama_index.core.callbacks.base import CallbackManager
     10 from llama_index.core.chat_engine.types import BaseChatEngine, ChatMode

File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/base/base_retriever.py:30
     21 from llama_index.core.schema import (
     22     BaseNode,
     23     IndexNode,
   (...)
     27     TextNode,
     28 )
     29 from llama_index.core.service_context import ServiceContext
---> 30 from llama_index.core.settings import Settings
     31 from llama_index.core.utils import print_text
     34 class BaseRetriever(ChainableMixin, PromptMixin):

File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/settings.py:12
     10 from llama_index.core.embeddings.utils import EmbedType, resolve_embed_model
     11 from llama_index.core.indices.prompt_helper import PromptHelper
---> 12 from llama_index.core.llms import LLM
     13 from llama_index.core.llms.utils import LLMType, resolve_llm
     14 from llama_index.core.node_parser import NodeParser, SentenceSplitter

ImportError: cannot import name 'LLM' from 'llama_index.core.llms' (unknown location)
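The "(unknown location)" suffix in the final ImportError usually means Python resolved the package as an implicit namespace package, i.e. a directory with no __init__.py. A generic diagnostic sketch to check where a module resolves from — shown with the stdlib "json" module so it runs anywhere; in the affected venv you would pass "llama_index.core.llms" instead:

```python
# Diagnostic sketch for "(unknown location)" ImportErrors: report where a
# package actually resolves from. "json" is used only so the example runs
# anywhere; substitute the package you are debugging.
import importlib.util


def describe_module(name: str) -> str:
    """Report a module's origin, flagging namespace-package resolution."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return f"{name}: not found"
    if spec.origin is None:
        # No __init__.py found: Python treats the directory as an implicit
        # namespace package, the usual cause of "(unknown location)".
        return f"{name}: namespace package at {list(spec.submodule_search_locations)}"
    return f"{name}: {spec.origin}"


print(describe_module("json"))
```

If the output mentions a namespace package for a module that should be a regular package, the install is broken and a clean reinstall is the fix.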

About this issue

  • Original URL
  • State: closed
  • Created 4 months ago
  • Reactions: 1
  • Comments: 21 (3 by maintainers)

Most upvoted comments

🤖

Based on the information you’ve provided and the context from the LlamaIndex repository, it seems like the ImportError you’re experiencing might be due to a missing or incorrectly installed package.

The ‘LLM’ class is indeed present in the ‘llama_index.core.llms’ package in the LlamaIndex v0.10.11 codebase. However, it appears that this class is dependent on the ‘vllm’ package, which might not be installed in your environment. This is suggested by the following code block in the ‘llama_index/llms/vllm/base.py’ file:

try:
    from vllm import LLM as VLLModel
except ImportError:
    raise ImportError(
        "Could not import vllm python package. "
        "Please install it with `pip install vllm`."
    )

To resolve this issue, you should install the ‘vllm’ package in your environment. You can do this by running the following command in your terminal:

pip install vllm

If you’re using a virtual environment, make sure to activate it before running the command.

Additionally, based on a similar solved issue in the LlamaIndex repository, you might also need to update the necessary packages with the following command:

pip install -U llama-index-core llama-index-llms-openai llama-index-embeddings-openai

Remember to restart your runtime for the environment changes to take effect after the upgrade.

If the issue persists after these steps, it could be due to a version conflict or other dependency issues. In that case, creating a new virtual environment and installing the dependencies from scratch might be necessary.

I hope this helps! If you have any further issues, please don’t hesitate to ask.



Facing the same issue.

Premise: I do not have llama index installed globally.

❯ python --version
Python 3.11.4
❯ pip show llama-index
WARNING: Package(s) not found: llama-index

I am working on a project with a venv.

❯ python --version
Python 3.11.4
❯ poetry show llama-index
 name         : llama-index                          
 version      : 0.10.9                               
 description  : Interface between LLMs and your data 

dependencies
 - llama-index-agent-openai >=0.1.0,<0.2.0
 - llama-index-core >=0.10.8.post1,<0.11.0
 - llama-index-embeddings-openai >=0.1.0,<0.2.0
 - llama-index-indices-managed-llama-cloud >=0.1.0,<0.2.0
 - llama-index-legacy >=0.9.48,<0.10.0
 - llama-index-llms-openai >=0.1.0,<0.2.0
 - llama-index-multi-modal-llms-openai >=0.1.0,<0.2.0
 - llama-index-program-openai >=0.1.0,<0.2.0
 - llama-index-question-gen-openai >=0.1.0,<0.2.0
 - llama-index-readers-file >=0.1.0,<0.2.0
 - llama-index-readers-llama-parse >=0.1.0,<0.2.0

Additionally I have the llama-index qdrant dependency installed.

❯ poetry show llama-index-vector-stores-qdrant
 name         : llama-index-vector-stores-qdrant             
 version      : 0.1.3                                        
 description  : llama-index vector_stores qdrant integration 

dependencies
 - grpcio >=1.60.0,<2.0.0
 - llama-index-core >=0.10.1,<0.11.0
 - qdrant-client >=1.7.1,<2.0.0

This is the error:

    from llama_index.core.llms import LLM
ImportError: cannot import name 'LLM' from 'llama_index.core.llms' (unknown location)

Observations:

  1. In the file llama_index/core/settings.py, I changed the import from llama_index.core.llms import LLM to from llama_index.core.llms.llm import LLM.
  2. Upon further inspection of the package itself, I saw no __init__.py file, so there are no package-level exports. Adding one should fix the bug, I guess?
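That observation matches the error message exactly: a package directory missing its __init__.py becomes an implicit namespace package, and importing a name from it reproduces the "(unknown location)" wording. A self-contained sketch (regular_pkg and namespace_pkg are made-up names for illustration):

```python
# Demo: a directory with __init__.py is a regular package; one without
# becomes an implicit namespace package, and importing a name from it
# fails with the same "(unknown location)" message seen in this issue.
import importlib.util
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
(root / "regular_pkg").mkdir()
(root / "regular_pkg" / "__init__.py").write_text("LLM = 'stub'\n")
(root / "namespace_pkg").mkdir()  # deliberately no __init__.py

sys.path.insert(0, str(root))
reg = importlib.util.find_spec("regular_pkg")
ns = importlib.util.find_spec("namespace_pkg")
print(reg.origin)  # .../regular_pkg/__init__.py
print(ns.origin)   # None -> implicit namespace package

try:
    from namespace_pkg import LLM  # noqa: F401
except ImportError as exc:
    err_msg = str(exc)
    print(err_msg)  # e.g. cannot import name 'LLM' from 'namespace_pkg' (unknown location)
```

So a corrupted install that drops __init__.py files would produce exactly this traceback, even though the source tree on GitHub is fine.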

@jphme just to be extremely certain…

In a fresh terminal

pip uninstall llama-index  # remove any possible global install
python -m venv venv
source venv/bin/activate
pip install llama-index --upgrade --no-cache-dir --force-reinstall

@logan-markewich Using pip to force a re-install worked. The issue was not with the venv but with poetry’s global cache. Somehow the cache had a corrupted llama package. Thank you!
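For anyone who hits the same corrupted-cache state, a sketch of inspecting and clearing Poetry's cache. The cache name pypi is an assumption — check your own poetry cache list output — and the snippet is guarded so it is a no-op when Poetry is not on PATH:

```shell
# Inspect Poetry's caches, then clear the one that may hold a corrupted
# package. Guarded so the script succeeds even without Poetry installed.
if command -v poetry >/dev/null 2>&1; then
  poetry cache list                                      # cache names, e.g. "pypi"
  poetry cache clear pypi --all --no-interaction || true # assumed cache name
else
  echo "poetry not on PATH; skipping"
fi
```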

@jphme do you think you could try maybe wiping the installed packages first instead of recreating the venv?

source venv/bin/activate
pip freeze | xargs pip uninstall -y
pip install llama-index

This worked. But I probably got it wrong before; after re-checking, the import that failed after setting up the new venv (as I wrote above) was from llama_index.core.base.llms import LLM, not from llama_index.core.llms import LLM.
Huge apologies @logan-markewich, this was indeed my fault. (But it would probably make sense to add these uninstall/venv-reset instructions to the docs anyway.)

But there is an init…

Here is the init in llama_index.core.llms

https://github.com/run-llama/llama_index/blob/b285c6f8bf640266357befcc3654f394738376f7/llama-index-core/llama_index/core/llms/__init__.py#L13

And here is the class it’s importing https://github.com/run-llama/llama_index/blob/b285c6f8bf640266357befcc3654f394738376f7/llama-index-core/llama_index/core/llms/llm.py#L117

All paths are correct.

Installing in a fresh venv, using the commands I posted above, would likely resolve your issue @a-bhattacharya

I was not changing my conda env in PyCharm; now it works. Thanks @logan-markewich