mlc-llm: [Bug] Could not convert TVM object of type runtime.Closure to a string.

πŸ› Bug

To Reproduce

Steps to reproduce the behavior:

Calling chat.reload(self.modelLib, modelPath: modelPath, appConfigJson: "")

produces the following crash:

Check failed: (IsObjectRef<tvm::runtime::String>()) is false: Could not convert TVM object of type runtime.Closure to a string.
Stack trace:
0x000000010009ecc4 tvm::runtime::detail::LogFatal::Entry::Finalize() + 68
0x000000010009ec80 tvm::runtime::detail::LogFatal::Entry::Finalize() + 0
0x000000010009dcf4 __clang_call_terminate + 0
0x00000001000acbfc tvm::runtime::TVMArgValue::operator std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>() const + 716
0x00000001000ac318 tvm::runtime::PackedFuncValueConverter<tvm::runtime::String>::From(tvm::runtime::TVMArgValue const&) + 104
0x00000001000b5928 mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::'lambda'(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const + 724
0x00000001000b5648 tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::'lambda'(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>>::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) + 40
tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::runtime::Module&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&>(tvm::runtime::Module&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&) const + 260
-[ChatModule reload:modelPath:appConfigJson:] + 408
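The trace shows the reload handler returned by LLMChatModule::GetFunction converting one of its packed-function arguments to tvm::runtime::String via PackedFuncValueConverter<String>::From, while the value it receives holds a runtime.Closure object instead. That pattern suggests the app binary and the TVM runtime it links against disagree about the reload signature. Below is a self-contained C++ sketch of the failing check; it is a model of the behavior, not the actual TVM source (Closure and AsString are illustrative stand-ins):

#include <iostream>
#include <stdexcept>
#include <string>
#include <variant>

// Stand-in for a TVM object whose type key is "runtime.Closure".
struct Closure {};

// A packed-function argument is a tagged value; we model only the
// two cases relevant to this report.
using ArgValue = std::variant<std::string, Closure>;

// Models the conversion the trace dies in: the argument must actually
// hold a string, otherwise the runtime aborts with the message above.
std::string AsString(const ArgValue& arg) {
  if (!std::holds_alternative<std::string>(arg)) {
    // Mirrors: Check failed: (IsObjectRef<tvm::runtime::String>()) is false
    throw std::runtime_error(
        "Could not convert TVM object of type runtime.Closure to a string.");
  }
  return std::get<std::string>(arg);
}

int main() {
  std::cout << AsString(ArgValue{std::string{"modelPath"}}) << '\n';  // fine
  AsString(ArgValue{Closure{}});  // terminates, like the reload call above
}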

Environment

  • Platform (e.g. WebGPU/Vulkan/iOS/Android/CUDA): iOS
  • Operating system (e.g. Ubuntu/Windows/macOS/…): macOS
  • Device (e.g. iPhone 12 Pro, PC+RTX 3090, …): iPhone 14 Pro
  • How you installed MLC-LLM (conda, source): source
  • How you installed TVM-Unity (pip, source): pip
  • Python version (e.g. 3.10): 3.8
  • GPU driver version (if applicable):
  • CUDA/cuDNN version (if applicable):
  • TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):
  • Any other relevant information:

Additional context

About this issue

  • State: closed
  • Created 10 months ago
  • Comments: 17 (4 by maintainers)

Most upvoted comments

Hi @junrushao,

Thanks for the confirmation. Is there any way we can get the previous versions of the wheel files? The latest files on mlc.ai/wheels are not working anymore, as mentioned by @scottorly.

These two files are the ones we used that were working:

Downloading https://github.com/mlc-ai/package/releases/download/v0.9.dev0/mlc_ai_nightly_cu121-0.12.dev1569-cp311-cp311-manylinux_2_28_x86_64.whl (98.4 MB)
Downloading https://github.com/mlc-ai/package/releases/download/v0.9.dev0/mlc_chat_nightly_cu121-0.1.dev413-cp311-cp311-manylinux_2_28_x86_64.whl (21.1 MB)
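For pinning those exact builds, pip can also install a wheel straight from its release URL rather than resolving through an index, e.g. (assuming the URLs above are still live):

pip install --force-reinstall https://github.com/mlc-ai/package/releases/download/v0.9.dev0/mlc_ai_nightly_cu121-0.12.dev1569-cp311-cp311-manylinux_2_28_x86_64.whl
pip install --force-reinstall https://github.com/mlc-ai/package/releases/download/v0.9.dev0/mlc_chat_nightly_cu121-0.1.dev413-cp311-cp311-manylinux_2_28_x86_64.whl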

Yeah, with the commands below it works for me (see the two notes after the commands):

pip install --pre --force-reinstall mlc-ai-nightly-cu118 mlc-chat-nightly-cu118 -f https://mlc.ai/wheels
pip uninstall mlc-chat-nightly-rocm
pip install --pre mlc-chat-nightly-rocm -f https://github.com/mlc-ai/package/releases/download/v0.9.dev0/mlc_chat_nightly_rocm-0.1.dev421-cp310-cp310-manylinux_2_28_x86_64.whl
  • Then I had to update my prebuilt model weights to fix issue #913.
  • Then I still hit issue #727, but that should be unrelated and expected for my GPU; I won't be able to use the prebuilt weights.
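To confirm which builds actually ended up installed after the reinstall shuffle, something like the following should work (tvm.support.libinfo() carries the commit hash in recent builds):

pip list | grep mlc
python -c "import tvm; print(tvm.support.libinfo()['GIT_COMMIT_HASH'])"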

Hi @scottorly @changun @leavelet, we just updated the wheels with the latest fix. Could you please update the pip packages using the same pip install command from https://mlc.ai/package/ and retry?
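That is, re-run the platform-appropriate variant of the command quoted earlier in this thread, for example:

pip install --pre --force-reinstall mlc-ai-nightly-cu118 mlc-chat-nightly-cu118 -f https://mlc.ai/wheels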

Encountered the exact same problem on both the ROCm and CUDA platforms.