vllm: ImportError: /ramyapra/vllm/vllm/_C.cpython-310-x86_64-linux-gnu.so: undefined symbol:

I’m trying to run vllm and lm-eval-harness, with vllm 0.2.5. After installing both, importing vllm fails with the following error:

    File "/ramyapra/lm-evaluation-harness/lm_eval/models/__init__.py", line 7, in <module>
        from . import vllm_causallms
    File "/ramyapra/lm-evaluation-harness/lm_eval/models/vllm_causallms.py", line 16, in <module>
        from vllm import LLM, SamplingParams
    File "/ramyapra/vllm/vllm/__init__.py", line 3, in <module>
        from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
    File "/ramyapra/vllm/vllm/engine/arg_utils.py", line 6, in <module>
        from vllm.config import (CacheConfig, ModelConfig, ParallelConfig,
    File "/ramyapra/vllm/vllm/config.py", line 9, in <module>
        from vllm.utils import get_cpu_memory, is_hip
    File "/ramyapra/vllm/vllm/utils.py", line 8, in <module>
        from vllm._C import cuda_utils
    ImportError: /ramyapra/vllm/vllm/_C.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN2at4_ops19empty_memory_format4callEN3c108ArrayRefINS2_6SymIntEEESt8optionalINS2_10ScalarTypeEES6_INS2_6LayoutEES6_INS2_6DeviceEES6_IbES6_INS2_12MemoryFormatEE

I’m using the NGC Docker container 23.10-py3.
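The mangled name in the error is a PyTorch C++ operator symbol, which usually points to an ABI mismatch between the torch that vllm’s _C extension was compiled against and the torch installed at runtime. A quick diagnostic sketch, assuming binutils’ c++filt is available inside the container:

    # Demangle the missing symbol to see which library it should come from
    echo '_ZN2at4_ops19empty_memory_format4callEN3c108ArrayRefINS2_6SymIntEEESt8optionalINS2_10ScalarTypeEES6_INS2_6LayoutEES6_INS2_6DeviceEES6_IbES6_INS2_12MemoryFormatEE' | c++filt
    # -> at::_ops::empty_memory_format::call(c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, ...),
    #    i.e. a libtorch symbol, so the extension and the installed torch disagree
    # Check which torch is actually picked up at runtime
    python -c "import torch; print(torch.__version__, torch.version.cuda)"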

About this issue

  • State: closed
  • Created 5 months ago
  • Reactions: 3
  • Comments: 38

Most upvoted comments

Same issue. Anyone able to fix this?

CUDA: 12.0.1, torch: 2.2.1, transformers: 4.38.2, vllm: 0.3.2, accelerate: 0.22.0

UPDATE: solved by downgrading torch to 2.1.2
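That fits the ABI-mismatch explanation: the vllm 0.3.2 wheel was apparently built against torch 2.1.2 (that is what the successful downgrade suggests), so upgrading torch to 2.2.1 underneath it breaks the compiled _C extension. A minimal sketch of the fix, with version pins taken from the comment above:

    # Pin torch back to the version the vllm wheel was built against
    pip install torch==2.1.2
    # Reinstall vllm so pip re-checks that the dependency set is consistent
    pip install --force-reinstall vllm==0.3.2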

When building from source with PyTorch 2.2 and CUDA 12.1:

    from vllm._C import ops
    ImportError: /workspace/vllm/vllm/_C.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN2at4_ops15to_dtype_layout4callERKNS_6TensorEN3c108optionalINS5_10ScalarTypeEEENS6_INS5_6LayoutEEENS6_INS5_6DeviceEEENS6_IbEEbbNS6_INS5_12MemoryFormatEEE

The missing reference is: at::_ops::to_dtype_layout::call(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, bool, c10::optional<c10::MemoryFormat>). So where is this op supposed to come from?
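The op does live in libtorch, but note this symbol is mangled with c10::optional while the one in the original report uses std::optional, so a binary built against one torch release will not resolve against another. For a source build, one workaround sketch is to recompile the extension against exactly the torch that is installed (paths assume the /workspace/vllm checkout from the traceback):

    cd /workspace/vllm
    pip uninstall -y vllm
    # Remove objects compiled against the previous torch
    rm -rf build vllm/*.so
    # Build against the installed torch instead of an isolated build-time copy
    pip install -e . --no-build-isolation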

I found a solution!!! I just followed https://docs.vllm.ai/en/latest/getting_started/installation.html

pip install vllm worked right out of the box!!
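That works because the published wheel pins a torch build that matches its compiled extension. A minimal sketch of the clean-environment route from the linked install docs (the environment name is illustrative):

    # Fresh environment so no previously installed torch can shadow the pinned one
    conda create -n vllm-env python=3.10 -y
    conda activate vllm-env
    pip install vllm  # pulls in the torch version the wheel was built against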