llama_index: [Bug]: cannot import name 'LLM' from 'llama_index.core.llms'
Bug Description
When trying to import via:
from llama_index.core.indices import VectorStoreIndex
from llama_index.core.readers import SimpleDirectoryReader
from llama_index.core.storage import StorageContext
Version
0.10.11
Steps to Reproduce
Install llama-index in a new venv with Poetry.
I deleted both .venv and poetry.lock; the error persists.
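Roughly, the reproduction as a script (the project name is made up, and this assumes Poetry is configured with in-project virtualenvs):

```shell
# fresh project with an in-project venv
poetry new repro && cd repro
poetry config virtualenvs.in-project true --local
poetry add llama-index

# wipe the environment and lockfile, then reinstall
rm -rf .venv poetry.lock
poetry install

# trigger the failing import
poetry run python -c "from llama_index.core.indices import VectorStoreIndex"
```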
Relevant Logs/Tracebacks
ImportError Traceback (most recent call last)
Cell In[8], line 1
----> 1 from llama_index.core.indices import VectorStoreIndex
2 from llama_index.core.readers import SimpleDirectoryReader
3 from llama_index.core.storage import StorageContext
File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/indices/__init__.py:4
1 """LlamaIndex data structures."""
3 # indices
----> 4 from llama_index.core.indices.composability.graph import ComposableGraph
5 from llama_index.core.indices.document_summary import (
6 DocumentSummaryIndex,
7 GPTDocumentSummaryIndex,
8 )
9 from llama_index.core.indices.document_summary.base import DocumentSummaryIndex
File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/indices/composability/__init__.py:4
1 """This module contains all classes used for composing graphs over indices."""
----> 4 from llama_index.core.indices.composability.graph import ComposableGraph
6 __all__ = ["ComposableGraph"]
File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/indices/composability/graph.py:7
5 from llama_index.core.base.base_query_engine import BaseQueryEngine
6 from llama_index.core.data_structs.data_structs import IndexStruct
----> 7 from llama_index.core.indices.base import BaseIndex
8 from llama_index.core.schema import (
9 IndexNode,
10 NodeRelationship,
11 ObjectType,
12 RelatedNodeInfo,
13 )
14 from llama_index.core.service_context import ServiceContext
File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/indices/base.py:8
5 from typing import Any, Dict, Generic, List, Optional, Sequence, Type, TypeVar
7 from llama_index.core.base.base_query_engine import BaseQueryEngine
----> 8 from llama_index.core.base.base_retriever import BaseRetriever
9 from llama_index.core.callbacks.base import CallbackManager
10 from llama_index.core.chat_engine.types import BaseChatEngine, ChatMode
File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/base/base_retriever.py:30
21 from llama_index.core.schema import (
22 BaseNode,
23 IndexNode,
(...)
27 TextNode,
28 )
29 from llama_index.core.service_context import ServiceContext
---> 30 from llama_index.core.settings import Settings
31 from llama_index.core.utils import print_text
34 class BaseRetriever(ChainableMixin, PromptMixin):
File ~/dev2//.venv/lib/python3.10/site-packages/llama_index/core/settings.py:12
10 from llama_index.core.embeddings.utils import EmbedType, resolve_embed_model
11 from llama_index.core.indices.prompt_helper import PromptHelper
---> 12 from llama_index.core.llms import LLM
13 from llama_index.core.llms.utils import LLMType, resolve_llm
14 from llama_index.core.node_parser import NodeParser, SentenceSplitter
ImportError: cannot import name 'LLM' from 'llama_index.core.llms' (unknown location)
About this issue
- Original URL
- State: closed
- Created 4 months ago
- Reactions: 1
- Comments: 21 (3 by maintainers)
🤖
Based on the information you've provided and the context from the LlamaIndex repository, it seems like the ImportError you're experiencing might be due to a missing or incorrectly installed package.
The `LLM` class is indeed present in the `llama_index.core.llms` package in the LlamaIndex v0.10.11 codebase. However, it appears that this class is dependent on the `vllm` package, which might not be installed in your environment. This is suggested by the following code block in the `llama_index/llms/vllm/base.py` file:
To resolve this issue, you should install the `vllm` package in your environment. You can do this by running the following command in your terminal:
If you're using a virtual environment, make sure to activate it before running the command.
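For example, with pip inside the activated environment:

```shell
pip install vllm
```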
Additionally, based on a similar solved issue in the LlamaIndex repository, you might also need to update the necessary packages with the following command:
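A typical upgrade command would look like this (package names are an assumption; both are the split packages of llama-index 0.10):

```shell
pip install --upgrade llama-index llama-index-core
```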
Remember to restart your runtime for the environment changes to take effect after the upgrade.
If the issue persists after these steps, it could be due to a version conflict or other dependency issues. In that case, creating a new virtual environment and installing the dependencies from scratch might be necessary.
I hope this helps! If you have any further issues, please don't hesitate to ask.
Sources:
About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Facing the same issue.
Premise: I do not have llama index installed globally.
I am working on a project with a venv.
Additionally I have the llama-index qdrant dependency installed.
This is the error:
Observations:
In llama_index/core/settings.py, I changed the import from `from llama_index.core.llms import LLM` to `from llama_index.core.llms.llm import LLM`. There appears to be no `__init__.py` file, so there are no package-level exports. Adding that should fix the bug, I guess?
@jphme just to be extremely certain…
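The "(unknown location)" in the traceback is the telltale here: when a package directory has no `__init__.py` (e.g. because a corrupted install dropped it), Python treats it as a namespace package, which has no `__file__` and re-exports nothing at package level. A self-contained sketch (`demo_pkg` is a made-up stand-in, not llama_index):

```python
import pathlib
import sys
import tempfile

# Build a package whose subdirectory deliberately lacks __init__.py,
# mirroring a broken install of llama_index/core/llms.
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "demo_pkg" / "llms"
pkg.mkdir(parents=True)                       # note: no __init__.py anywhere
(pkg / "llm.py").write_text("class LLM:\n    pass\n")

sys.path.insert(0, str(root))

try:
    from demo_pkg.llms import LLM             # package-level import fails
except ImportError as exc:
    err = str(exc)
    print(err)  # cannot import name 'LLM' from 'demo_pkg.llms' (unknown location)

from demo_pkg.llms.llm import LLM             # importing the module directly works
print(LLM.__name__)
```

That is why the workaround of importing from `llama_index.core.llms.llm` directly bypassed the error.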
In a fresh terminal
@logan-markewich Using pip to force a re-install worked. The issue was not with the venv but with Poetry's global cache. Somehow the cache had a corrupted llama package. Thank you!
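For anyone hitting the same thing, a sketch of the two fixes mentioned here (assuming the default `pypi` cache name in Poetry):

```shell
# clear Poetry's global package cache, where the corrupted artifact lives
poetry cache clear pypi --all

# or force pip to re-fetch and reinstall inside the venv, bypassing any cache
pip install --force-reinstall --no-cache-dir llama-index
```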
This worked. But probably I got it wrong before; after re-checking, the import that failed after setting up the new venv (as I wrote above) was
`from llama_index.core.base.llms import LLM` and not `from llama_index.core.llms import LLM`. Huge sorry @logan-markewich, this was indeed my fault. (But it would probably make sense to add these uninstall/venv-reset instructions to the docs anyway.)
@jphme do you think you could try maybe wiping the installed packages first instead of recreating the venv?
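One way to wipe everything installed in the active venv without deleting the venv itself (careful: this uninstalls every package in the environment):

```shell
# uninstall every package currently installed in the active environment
pip freeze | xargs pip uninstall -y

# then reinstall from scratch
pip install llama-index
```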
But there is an init…
Here is the init in llama_index.core.llms
https://github.com/run-llama/llama_index/blob/b285c6f8bf640266357befcc3654f394738376f7/llama-index-core/llama_index/core/llms/__init__.py#L13
And here is the class it's importing: https://github.com/run-llama/llama_index/blob/b285c6f8bf640266357befcc3654f394738376f7/llama-index-core/llama_index/core/llms/llm.py#L117
All paths are correct.
Installing from a fresh venv would likely resolve your issues, using the commands I posted above @a-bhattacharya
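A typical fresh-venv setup along those lines (a sketch, assuming pip on a Unix shell):

```shell
python -m venv .venv
source .venv/bin/activate
pip install -U llama-index

# verify the import that was failing
python -c "from llama_index.core.llms import LLM; print('ok')"
```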
I was not changing my conda env in PyCharm; now it works. Thanks @logan-markewich