gpt4all: [Commit 3acbef14b7c2436fe033cae9036e695d77461a16] LLModel ERROR: Could not find CPU LLaMA implementation

System Info

GPT4All 2.5.5 (commit 7aa0f779def71b41c3b909f0f232ee051f5603ce) Windows 10 Qt_6_6_0_MSVC2019_64bit-Release

Information

  • The official example notebooks/scripts
  • My own modified scripts

Reproduction

  1. Open Qt Creator
  2. Build
  3. Start GPT4All from within Qt Creator


23:03:04: Starting C:\Prog\Development\GPT4All_Thilo_selfcompiled\gpt4all\build-gpt4all-chat-Desktop_Qt_6_6_0_MSVC2019_64bit-Release\bin\chat.exe...
LLModel ERROR: Could not find CPU LLaMA implementation
[Debug] (Sat Dec 16 23:03:10 2023): deserializing chat "C:/Prog/Other/GPT4All-LanguageModels//gpt4all-d62e37fe-fa7f-44be-9c99-aa61440af58c.chat"
[Debug] (Sat Dec 16 23:03:10 2023): deserializing chat "C:/Prog/Other/GPT4All-LanguageModels//gpt4all-a889ea46-3abb-4320-a444-d3913033810f.chat"
[Debug] (Sat Dec 16 23:03:10 2023): deserializing chat "C:/Prog/Other/GPT4All-LanguageModels//gpt4all-65cc4e4f-decc-45f0-887f-08d6f1afec83.chat"
[Debug] (Sat Dec 16 23:03:10 2023): deserializing chat "C:/Prog/Other/GPT4All-LanguageModels//gpt4all-28249627-42e2-43af-a089-c20069caaaa4.chat"
LLModel ERROR: Could not find any implementations for build variant: default
[Debug] (Sat Dec 16 23:03:10 2023): deserializing chat "C:/Prog/Other/GPT4All-LanguageModels//gpt4all-4eb5e41d-1e3a-4460-b32e-904467930127.chat"
[Debug] (Sat Dec 16 23:03:10 2023): deserializing chat "C:/Prog/Other/GPT4All-LanguageModels//gpt4all-627f724e-d0c1-4887-afb8-3d6ba2b25ba2.chat"
[Debug] (Sat Dec 16 23:03:10 2023): deserializing chats took: 13 ms
[Warning] (Sat Dec 16 23:03:13 2023): ERROR: Could not load model due to invalid format for mistral-7b-instruct-v0.1.Q4_0.gguf id "1a89f1af-929c-4907-87da-308dfefac00a"
[Debug] (Sat Dec 16 23:03:38 2023): serializing chat "gpt4all-d62e37fe-fa7f-44be-9c99-aa61440af58c.chat"
[Debug] (Sat Dec 16 23:03:38 2023): serializing chat "gpt4all-a889ea46-3abb-4320-a444-d3913033810f.chat"
[Debug] (Sat Dec 16 23:03:38 2023): serializing chat "gpt4all-65cc4e4f-decc-45f0-887f-08d6f1afec83.chat"
[Debug] (Sat Dec 16 23:03:38 2023): serializing chat "gpt4all-28249627-42e2-43af-a089-c20069caaaa4.chat"
[Debug] (Sat Dec 16 23:03:38 2023): serializing chat "gpt4all-4eb5e41d-1e3a-4460-b32e-904467930127.chat"
[Debug] (Sat Dec 16 23:03:38 2023): serializing chat "gpt4all-627f724e-d0c1-4887-afb8-3d6ba2b25ba2.chat"
[Debug] (Sat Dec 16 23:03:38 2023): serializing chats took: 26 ms
23:03:38: C:\Prog\Development\GPT4All_Thilo_selfcompiled\gpt4all\build-gpt4all-chat-Desktop_Qt_6_6_0_MSVC2019_64bit-Release\bin\chat.exe exited with code 0

Expected behavior

  1. No error is shown.
  2. The model loads.

About this issue

  • State: closed
  • Created 6 months ago
  • Comments: 20 (4 by maintainers)

Most upvoted comments

I also encountered the same issue. After some debugging and a look at the Microsoft docs for LoadLibraryExA (https://learn.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-loadlibraryexa), it turns out that when LOAD_LIBRARY_SEARCH_DLL_LOAD_DIR is used, the lpFileName parameter must specify a fully qualified path, and the path must use backslashes (\), not forward slashes (/). I solved my issue by modifying https://github.com/nomic-ai/gpt4all/blob/c72c73a94fcf653ecf0c8969a88068dd0e0d416f/gpt4all-backend/llmodel.cpp#L107 and https://github.com/nomic-ai/gpt4all/blob/c72c73a94fcf653ecf0c8969a88068dd0e0d416f/gpt4all-backend/dlhandle.h#L78 to

Dlhandle dl(std::filesystem::absolute(p).string());

and

// Convert forward slashes to backslashes before loading (needs <algorithm>).
std::string path = fpath;
std::replace(path.begin(), path.end(), '/', '\\');
chandle = LoadLibraryExA(path.c_str(), NULL, LOAD_LIBRARY_SEARCH_DEFAULT_DIRS | LOAD_LIBRARY_SEARCH_DLL_LOAD_DIR);

Hope this helps!