gpt4all: Cannot get gpt4all Python Bindings to install or run properly on Windows 11, Python 3.9.
I’m a complete beginner, so apologies if I’m missing something obvious. I’m trying to follow the README here https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-bindings/python/README.md
I installed gpt4all with pip perfectly fine. Then I installed the Cygwin64 Terminal and ran the lines in the tutorial. Everything goes well until “cmake --build . --parallel”. This is what I get:
$ cmake --build . --parallel
MSBuild version 17.5.1+f6fdcf537 for .NET Framework
Checking Build System
ggml.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llama.cpp\ggml.dir\Debug\ggml.lib
Auto build dll exports
llama.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\llama.dll
common.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llama.cpp\examples\common.dir\Debug\common.lib
Building Custom Rule C:/cygwin64/home/USER/gpt4all/gpt4all-backend/CMakeLists.txt
quantize-stats.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\quantize-stats.exe
main.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\main.exe
Microsoft (R) C/C++ Optimizing Compiler Version 19.35.32217.1 for x64
gptj.cpp
Copyright (C) Microsoft Corporation. All rights reserved.
cl /c /I"C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build" /I"C:\cygwin64\home\USER\gpt4all\gpt4all-backend\llama.cpp." /Zi /W1 /WX- /diagnostics:column /Od /Ob0 /D _WINDLL /D _MBCS /D WIN32 /D _WINDOWS /D "CMAKE_INTDIR="Debug"" /D llmodel_EXPORTS /Gm- /EHsc /RTC1 /MDd /GS /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /GR /Fo"llmodel.dir\Debug" /Fd"llmodel.dir\Debug\vc143.pdb" /external:W1 /Gd /TP /errorReport:queue "C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp" "C:\cygwin64\home\USER\gpt4all\gpt4all-backend\mpt.cpp"
save-load-state.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\save-load-state.exe
vdot.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\vdot.exe
embedding.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\embedding.exe
q8dot.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\q8dot.exe
perplexity.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\perplexity.exe
quantize.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\quantize.exe
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(280,13): error C7555: use of designated initializers requires at least '/std:c++20' [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): warning C4477: 'fprintf' : format string '%lu' requires an argument of type 'unsigned long', but variadic argument 3 has type 'int64_t' [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): message : consider using '%llu' in the format string [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): message : consider using '%Iu' in the format string [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): message : consider using '%I64u' in the format string [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
From what I can see, some kind of error is happening, especially since “libllmodel.*” does not exist in “gpt4all-backend/build”.
If I continue the tutorial anyway and try to run the Python code, “pyllmodel.py” opens in Visual Studio and I get the following error:
Exception has occurred: FileNotFoundError
Could not find module 'c:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\llmodel_DO_NOT_MODIFY\build\libllama.dll' (or one of its dependencies). Try using the full path with constructor syntax.
  File "C:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 49, in load_llmodel_library
    llama_lib = ctypes.CDLL(llama_dir, mode=ctypes.RTLD_GLOBAL)
  File "C:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 55, in <module>
    llmodel, llama = load_llmodel_library()
  File "C:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\__init__.py", line 1, in <module>
    from .pyllmodel import LLModel # noqa
  File "C:\Users\USER\Desktop\bigmantest.py", line 1, in <module>
    from gpt4all import GPT4All
FileNotFoundError: Could not find module 'c:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\llmodel_DO_NOT_MODIFY\build\libllama.dll' (or one of its dependencies). Try using the full path with constructor syntax.
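For illustration, the exception above is just `ctypes.CDLL` failing to resolve either the DLL itself or one of its dependencies. A minimal sketch of that failure mode, with a made-up library name (not the actual gpt4all code):

```python
import ctypes

def load_or_explain(name):
    """Try to load a shared library; return None if the loader can't find it."""
    try:
        return ctypes.CDLL(name)
    except OSError as exc:
        # FileNotFoundError, as seen in the traceback, is a subclass of
        # OSError; on non-Windows platforms you get a plain OSError instead.
        print(f"could not load {name!r}: {exc}")
        return None

# A deliberately bogus name to show the failure mode:
lib = load_or_explain("no-such-library-anywhere.dll")
print(lib is None)  # True -> the load failed
```

The same `OSError` is raised whether the DLL itself is missing or one of the DLLs it depends on cannot be found, which is why the error message mentions "or one of its dependencies".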
Not sure if it’s a bug or I’m missing something, but any help would be appreciated. Reminder that I’m a beginner, so I’m hoping for not too much technical jargon that might be difficult for me to understand. Thanks!
Information
- The official example notebooks/scripts
- My own modified scripts
Related Components
- backend
- bindings
- python-bindings
- chat-ui
- models
- circleci
- docker
- api
Reproduction
Simply following the README on Windows 11, Python 3.9. Nothing special.
Expected behavior
For the example Python script to successfully output a response to “Name 3 colors” after downloading “ggml-gpt4all-j-v1.3-groovy”.
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 56
I followed the discussion and installed gpt4all, but when running the snippet below I get the error below. Any idea how to make it work with the GPU?
Merge is likely as soon as tomorrow. Not sure about the release yet.
I see, interesting, I understand now. I appreciate the help once again!
@gavtography that’s great to hear you got it to work after all. (pinging in hopes you’ll see it even after closing, sorry for that)
I do still wonder how it got into a state where it’d produce these kinds of errors. Pretty unlucky – you kind of got thrown into the deep end of the pool on that.
On the bright side, you now know quite a bit more about the Python <-> native interaction on Windows. You should from now on technically be able to interface with any native DLL that exposes a C interface, once you look into ctypes a bit more.
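As a tiny taste of that, here is a minimal ctypes session against the platform's C runtime instead of a gpt4all DLL (a sketch; `msvcrt` is assumed as the fallback name on Windows):

```python
import ctypes
import ctypes.util

# Locate the C runtime: libc on POSIX systems, msvcrt.dll on Windows.
libc_name = ctypes.util.find_library("c") or "msvcrt"
libc = ctypes.CDLL(libc_name)

# Declare the C signature so ctypes converts arguments correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # prints 42
```

The same pattern (load with `CDLL`, declare `argtypes`/`restype`, call) applies to any DLL exposing a C interface, including the llmodel/llama ones from this thread.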
Getting things to work was the first priority; here are a few more thoughts:
clean-up:
If you copied any DLLs into places while trying to figure things out and they are not required anymore for it to work (esp. into System32), remove them again. You never know when you might run into subtle versioning problems at some later point.
The same goes for Python’s packages. If the `gpt4all` package or one of its dependencies is now installed where you don’t need it (I mean a different interpreter), remove it again. Myself, I’ve made it a habit to pretty much always work with virtual environments for my own stuff. It’s definitely worth looking into that (if you’re not familiar already, of course).
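For anyone reading along, creating a virtual environment is a one-liner with the standard library (`demo-venv` is just an example name):

```python
import pathlib
import sys
import venv

# Create a throwaway virtual environment in the current directory.
venv.create("demo-venv", with_pip=False)

# Windows-installed Python lays out a Scripts/ folder;
# POSIX-style interpreters (Linux, MSYS2, ...) use bin/ instead.
layout = "Scripts" if sys.platform == "win32" else "bin"
print(pathlib.Path("demo-venv", layout).is_dir())  # prints True
```

In day-to-day use you would run `python -m venv venv` in a shell and activate it with `venv\Scripts\activate` (Windows) or `source venv/bin/activate` (POSIX-style shells).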
tools:
For troubleshooting these kinds of things, some of the GNU tools are neat (such as the mentioned `ldd`), but there are also some Windows tools I can recommend, especially Windows Sysinternals and Dependency Walker – both are free to use.
I’m assuming that in the future, the Windows parts will rely solely on the MSVC compiler. And you already had Cygwin, so MSYS2 is kind of obsolete now (but keep it if you like it). With that, I’d also recommend then using the “normal” Python installation on Windows – at least once the devs publish a working bindings package to PyPI (there already is one now, but it doesn’t have the DLLs). It’s just the path more well-treaded.
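As a small aside, you can do a rough version of such a dependency check from Python itself. This only asks the loader to resolve base names, so it is no substitute for `ldd` or Dependency Walker, but it is a quick first sanity check:

```python
import ctypes.util

# Ask the loader whether it can resolve a library by base name at all --
# a rough Python-side analogue of what ldd / Dependency Walker report.
for name in ("c", "m"):  # the standard C and math libraries
    print(name, "->", ctypes.util.find_library(name))

# An unresolvable name simply comes back as None:
print(ctypes.util.find_library("definitely-not-a-real-library-xyz"))
```

If `find_library` returns None for a library you expect to be present, `ctypes.CDLL` will fail for it too unless you pass a full path.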
You’re welcome. I was thinking about maybe compressing it into some kind of guide once it’s resolved, but it looks like that soon won’t be needed anymore, anyway. However, you’re likely not the only one to run into (at least some of) these problems, so, as detailed as this exchange was, it probably helps some others, too. (just have to trust the search engines 😉)
Funnily enough, the solution was simply uninstalling gpt4all using pip and reinstalling it. Of course, this only worked after I followed all your instructions to get this working properly. Sweet!
Definitely a large string of issues led to this point, though. A ton of troubleshooting, but I can confirm it technically is working with my setup, so I suppose that’s good enough to close the thread, because the title is no longer true.
I appreciate the time and effort that went into getting to the bottom of this.
Alright, so, remember when I said:
I’ve tested it now with the new `llama.dll` and `llmodel.dll` and got it to work. In one way it’s even simpler (at least on my system), but it requires editing the Python bindings. I’m not sure if you want to attempt that, but here are some instructions anyway. These instructions rely on existing DLLs from the chat application, not self-compiled ones. You might want to do a `git pull` and clean your repository, but that’s optional, the Python binding code hasn’t changed (yet).
Now to avoid messing with existing things a lot, here’s a (maybe new) concept: Python virtual environments. First of all, you would decide which Python interpreter you’re going to use. There’s a minor difference between the Windows installed Python and the MSYS2 Python: Windows installed Python typically creates a `Scripts/` folder in the virtual environment, otherwise you typically have a `bin/` folder for the native programs and libraries. To keep it simple, the following instructions are again for when you’re doing everything in a MinGW console (and not using the Windows installed Python).
Now copy the `llama.dll` and `llmodel.dll` from your chat application into the `venv/bin` folder, so they’re right next to your virtual environment’s python.exe. Here is where it might be simpler with these DLLs: your system should already know where the dependencies are. To check, run `ldd` on each of the two DLLs. If you don’t see a `=> not found` in the output of these two commands, you’re good to go.
Now in order to make the Python bindings use these and not the ones you’d otherwise compile, you need to edit the `pyllmodel.py` file of the Python bindings; it’s in `<path-to-repository>/gpt4all-bindings/python/gpt4all/pyllmodel.py`. There are two things to do: stop using `pkg_resources` to find the libraries (they’re already next to the Python interpreter), and put in the right DLL names where they’re loaded with `ctypes.CDLL(...)`. So comment out the `pkg_resources` lines and make those changes. Also comment out the next two `llama_dir`/`llmodel_dir = ...` lines, because they’re referencing themselves but aren’t set. It then looks like this:
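The referenced snippet didn't survive into this copy of the thread; a hypothetical sketch of what the edited `load_llmodel_library()` might look like (the exact `pkg_resources` lines in the real file differ, this only illustrates the shape of the change):

```python
import ctypes

def load_llmodel_library():
    # pkg_resources lookup commented out -- the DLLs now sit right next to
    # the interpreter (venv/bin), so the default search path finds them:
    # llmodel_dir = pkg_resources.resource_filename(...)  # not needed
    # llama_dir = pkg_resources.resource_filename(...)    # not needed
    llama_lib = ctypes.CDLL("llama.dll", mode=ctypes.RTLD_GLOBAL)
    llmodel_lib = ctypes.CDLL("llmodel.dll")
    return llmodel_lib, llama_lib
```

Loading by plain name works here only because the DLLs were copied next to python.exe, which is always on the Windows DLL search path.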
Now with all that done – and hopefully I didn’t make any mistakes when writing the instructions – you should be able to run an example from within that console.
To target a virtual environment from within Visual Studio, point it to the virtual environment’s python.exe. It should know how to handle those.