gpt4all: Cannot get gpt4all Python Bindings to install or run properly on Windows 11, Python 3.9.

I’m a complete beginner, so apologies if I’m missing something obvious. I’m trying to follow the README here https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-bindings/python/README.md

I installed gpt4all with pip perfectly fine. Then I installed the Cygwin64 Terminal and ran the commands in the tutorial. Everything went well until “cmake --build . --parallel”. This is what I get:

$ cmake --build . --parallel
MSBuild version 17.5.1+f6fdcf537 for .NET Framework

Checking Build System
ggml.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llama.cpp\ggml.dir\Debug\ggml.lib
Auto build dll exports
llama.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\llama.dll
common.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llama.cpp\examples\common.dir\Debug\common.lib
Building Custom Rule C:/cygwin64/home/USER/gpt4all/gpt4all-backend/CMakeLists.txt
quantize-stats.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\quantize-stats.exe
main.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\main.exe
Microsoft (R) C/C++ Optimizer Version 19.35.32217.1 for x64
gptj.cpp
Copyright (C) Microsoft Corporation. All rights reserved.
cl /c /I"C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build" /I"C:\cygwin64\home\USER\gpt4all\gpt4all-backend\llama.cpp." /Zi /W1 /WX- /diagnostics:column /Od /Ob0 /D _WINDLL /D _MBCS /D WIN32 /D _WINDOWS /D "CMAKE_INTDIR="Debug"" /D llmodel_EXPORTS /Gm- /EHsc /RTC1 /MDd /GS /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /GR /Fo"llmodel.dir\Debug" /Fd"llmodel.dir\Debug\vc143.pdb" /external:W1 /Gd /TP /errorReport:queue "C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp" "C:\cygwin64\home\USER\gpt4all\gpt4all-backend\mpt.cpp"
save-load-state.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\save-load-state.exe
vdot.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\vdot.exe
embedding.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\embedding.exe
q8dot.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\q8dot.exe
perplexity.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\perplexity.exe
quantize.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\quantize.exe
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(280,13): error C7555: use of designated initializers requires at least '/std:c++20' [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): warning C4477: 'fprintf' : format string '%lu' requires an argument of type 'unsigned long', but variadic argument 3 has type 'int64_t' [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): message : consider using '%llu' in the format string [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): message : consider using '%Iu' in the format string [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): message : consider using '%I64u' in the format string [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]

From what I can see, some kind of error is happening, especially since “libllmodel.*” does not exist in “gpt4all-backend/build”.

If I continue the tutorial anyway and try to run the Python code, “pyllmodel.py” opens in Visual Studio and I get the following error:

Exception has occurred: FileNotFoundError
Could not find module 'c:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\llmodel_DO_NOT_MODIFY\build\libllama.dll' (or one of its dependencies). Try using the full path with constructor syntax.
  File "C:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 49, in load_llmodel_library
    llama_lib = ctypes.CDLL(llama_dir, mode=ctypes.RTLD_GLOBAL)
  File "C:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 55, in <module>
    llmodel, llama = load_llmodel_library()
  File "C:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\__init__.py", line 1, in <module>
    from .pyllmodel import LLModel # noqa
  File "C:\Users\USER\Desktop\bigmantest.py", line 1, in <module>
    from gpt4all import GPT4All
FileNotFoundError: Could not find module 'c:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\llmodel_DO_NOT_MODIFY\build\libllama.dll' (or one of its dependencies). Try using the full path with constructor syntax.

Not sure if it’s a bug or I’m missing something, but any help would be appreciated. As a reminder, I’m a beginner, so I’m hoping to avoid technical jargon that might be difficult for me to understand. Thanks!

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • backend
  • bindings
  • python-bindings
  • chat-ui
  • models
  • circleci
  • docker
  • api

Reproduction

Simply following the README on Windows 11, Python 3.9. Nothing special.

Expected behavior

For the example Python script to successfully output a response to “Name 3 colors” after downloading “ggml-gpt4all-j-v1.3-groovy”.

About this issue

  • State: closed
  • Created a year ago
  • Comments: 56

Most upvoted comments

I followed the discussion and installed gpt4all, but when running the snippet below:

model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu')

I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\gpt4all\gpt4all-bindings\python\gpt4all\gpt4all.py", line 97, in __init__
    self.model.init_gpu(model_path=self.config["path"], device=device)
  File "C:\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 237, in init_gpu
    available_gpus = [device.name.decode('utf-8') for device in self.list_gpu(model_path)]
  File "C:\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 216, in list_gpu
    raise ValueError("Unable to retrieve available GPU devices")
ValueError: Unable to retrieve available GPU devices

Any idea how to make it work with the GPU?

Would it be a better idea for me to wait out the merge? Is that how soon we’re talking?

Merge is likely as soon as tomorrow. Not sure about the release yet.

I see, interesting, I understand now. I appreciate the help once again!

@gavtography that’s great to hear you got it to work after all. (pinging in hopes you’ll see it even after closing, sorry for that)

I do still wonder how it got into a state where it’d produce these kinds of errors. Pretty unlucky – you kind of got thrown into the deep end of the pool on that.

On the bright side, you now know quite a bit more about the Python <-> native interaction on Windows. From now on, you should technically be able to interface with any native DLL that exposes a C interface, once you look into ctypes a bit more.
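
To make that concrete, here is a minimal ctypes sketch. It uses msvcrt (the Microsoft C runtime, which ships with Windows) purely as a stand-in, so it runs without any custom DLL:

import ctypes

# Load a DLL by name; Windows searches the standard DLL locations.
libc = ctypes.CDLL("msvcrt")

# Declare the C signature: size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))  # prints 5

The gpt4all bindings use exactly this mechanism, just against llmodel’s C API instead of strlen.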

Getting things to work was the first priority; here are a few more thoughts:

  • clean-up:

    • If you copied any DLLs into places while trying to figure things out and they are not required anymore for it to work (esp. into System32), remove them again. You never know when you might run into subtle versioning problems at some later point.

    • The same goes for Python’s packages. If the gpt4all package or one of its dependencies is now installed where you don’t need it (meaning under a different interpreter), remove it again.

    • Myself, I’ve made it a habit to pretty much always work with virtual environments for my own stuff. It’s definitely worth looking into that (if you’re not familiar already, of course).

  • tools:

    • for troubleshooting these kinds of things, some of the GNU tools are neat (such as the mentioned ldd), but there are also some Windows tools I can recommend, especially Windows Sysinternals and Dependency Walker – both are free to use.

    • I’m assuming that in the future, the Windows parts will rely solely on the MSVC compiler. And you already had Cygwin, so MSYS2 is kind of obsolete now (but keep it if you like it). With that, I’d then also recommend using the “normal” Python installation on Windows – at least once the devs publish a working bindings package to PyPI (there already is one now, but it doesn’t have the DLLs). It’s just the more well-trodden path.

I appreciate the time and effort you put into getting to the bottom of this.

You’re welcome. I was thinking about maybe compressing it into some kind of guide once it’s resolved, but looks like that soon won’t be needed anymore, anyway. However, you’re likely not the only one to run into (at least some of) these problems, so as detailed as this exchange was, it probably helps some others, too. (just have to trust the search engines 😉)

Funnily enough, the solution was simply uninstalling gpt4all using pip and reinstalling it. Of course, this only worked after I followed all your instructions to get this working properly. Sweet!

Definitely a long string of issues led to this point, though. A ton of troubleshooting, but I can confirm it technically is working with my setup, so I suppose that’s good enough to close the thread, since the title is no longer true.

I appreciate the time and effort you put into getting to the bottom of this.

Are the files possibly renamed? I noticed a lib folder that has llama.dll and llmodel.dll, as opposed to “libllama.dll” and “libllmodel.dll”.

Edit: well I see those files in your screenshot too, so perhaps they’re something different. Pretty stuck then.

Alright, so, remember when I said:

The Python bindings themselves are using ctypes, which means they dynamically interact with the native DLLs, i.e. they’re not compiled against them. So you could use any for that.

I’ve tested it now with the new llama.dll and llmodel.dll and got it to work. In one way it’s even simpler (at least on my system), but it requires editing the Python bindings. I’m not sure if you want to attempt that, but here are some instructions anyway. These instructions rely on existing DLLs from the chat application, not self-compiled ones:

You might want to do a git pull and clean your repository, but that’s optional, the Python binding code hasn’t changed (yet). Now to avoid messing with existing things a lot, here’s a (maybe new) concept: Python virtual environments.

  • virtual environments give you a separate interpreter and its own packages in a directory of choice
  • one primary use case of virtual environments is to avoid dependency conflicts between Python packages themselves
  • Python nowadays has built-in support for virtual environments in form of the venv module (although there are other ways). That module is what will be used in these instructions.

First of all, decide which Python interpreter you’re going to use. There’s a minor difference between the Windows-installed Python and the MSYS2 Python: the Windows-installed Python typically creates a Scripts/ folder in the virtual environment, whereas otherwise you typically get a bin/ folder for the native programs and libraries. To keep it simple, the following instructions are again for when you’re doing everything in a MinGW console (and not using the Windows-installed Python):

python -m venv venv  # invoke the venv module, create a venv/ folder which contains the virtual environment
source venv/bin/activate  # this console session now uses the virtual environment
# to check:
type -p python pip  # should print: '<full-path-to>/venv/bin/python' and '<full-path-to>/venv/bin/pip'
pip install -e <path-to-gpt4all-repository>/gpt4all-bindings/python  # installs the 'gpt4all' bindings package in the virtual environment
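
If you’d rather verify from inside the interpreter, here is a quick Python-side equivalent of the type -p check above:

# run inside the activated console; both paths should point into the venv/ folder
import sys
print(sys.executable)  # the interpreter that is actually running
print(sys.prefix)      # the environment it belongs to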

Now copy the llama.dll and llmodel.dll from your chat application into the venv/bin folder, so they’re right next to your virtual environment’s python.exe. This is where it might be simpler with these DLLs: your system should already know where their dependencies are. To check, run:

ldd venv/bin/llama.dll
ldd venv/bin/llmodel.dll

If you don’t see a => not found in the output of these two commands, you’re good to go.
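
You can also sanity-check this from Python itself. A small sketch, assuming the active interpreter is the venv’s one (Windows searches the directory of python.exe first, which is why loading by bare name works here):

import ctypes

# Mirrors what the edited bindings will do below; if either call raises
# FileNotFoundError, a dependency is still missing.
ctypes.CDLL("llama.dll", mode=ctypes.RTLD_GLOBAL)
ctypes.CDLL("llmodel.dll")
print("both DLLs loaded fine")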

Now in order to make the Python bindings use these and not the ones you’d otherwise compile, you need to edit the pyllmodel.py file of the Python bindings, it’s in <path-to-repository>/gpt4all-bindings/python/gpt4all/pyllmodel.py. There are two things to do:

  • don’t rely on pkg_resources to find the libraries (they’re already next to the Python interpreter)
  • use the proper names

So comment out the pkg_resources lines and simply put in the right DLL names where they’re loaded with ctypes.CDLL(...). Also comment out the two llama_dir/llmodel_dir = ... replacement lines, because after the change they’d reference variables that are never set:

diff --git a/gpt4all-bindings/python/gpt4all/pyllmodel.py b/gpt4all-bindings/python/gpt4all/pyllmodel.py
index 6117c9f..930ec38 100644
--- a/gpt4all-bindings/python/gpt4all/pyllmodel.py
+++ b/gpt4all-bindings/python/gpt4all/pyllmodel.py
@@ -39,15 +39,17 @@ def load_llmodel_library():

     llmodel_file = "libllmodel" + '.' + c_lib_ext
     llama_file = "libllama" + '.' + c_lib_ext
-    llama_dir = str(pkg_resources.resource_filename('gpt4all', os.path.join(LLMODEL_PATH, llama_file)))
-    llmodel_dir = str(pkg_resources.resource_filename('gpt4all', os.path.join(LLMODEL_PATH, llmodel_file)))
+    #llama_dir = str(pkg_resources.resource_filename('gpt4all', os.path.join(LLMODEL_PATH, llama_file)))
+    #llmodel_dir = str(pkg_resources.resource_filename('gpt4all', os.path.join(LLMODEL_PATH, llmodel_file)))

     # For windows
-    llama_dir = llama_dir.replace("\\", "\\\\")
-    llmodel_dir = llmodel_dir.replace("\\", "\\\\")
+    #llama_dir = llama_dir.replace("\\", "\\\\")
+    #llmodel_dir = llmodel_dir.replace("\\", "\\\\")

-    llama_lib = ctypes.CDLL(llama_dir, mode=ctypes.RTLD_GLOBAL)
-    llmodel_lib = ctypes.CDLL(llmodel_dir)
+    #llama_lib = ctypes.CDLL(llama_dir, mode=ctypes.RTLD_GLOBAL)
+    #llmodel_lib = ctypes.CDLL(llmodel_dir)
+    llama_lib = ctypes.CDLL('llama.dll', mode=ctypes.RTLD_GLOBAL)
+    llmodel_lib = ctypes.CDLL('llmodel.dll')

     return llmodel_lib, llama_lib

It then looks like this:

    llmodel_file = "libllmodel" + '.' + c_lib_ext
    llama_file = "libllama" + '.' + c_lib_ext
    #llama_dir = str(pkg_resources.resource_filename('gpt4all', os.path.join(LLMODEL_PATH, llama_file)))
    #llmodel_dir = str(pkg_resources.resource_filename('gpt4all', os.path.join(LLMODEL_PATH, llmodel_file)))

    # For windows
    #llama_dir = llama_dir.replace("\\", "\\\\")
    #llmodel_dir = llmodel_dir.replace("\\", "\\\\")

    #llama_lib = ctypes.CDLL(llama_dir, mode=ctypes.RTLD_GLOBAL)
    #llmodel_lib = ctypes.CDLL(llmodel_dir)
    llama_lib = ctypes.CDLL('llama.dll', mode=ctypes.RTLD_GLOBAL)
    llmodel_lib = ctypes.CDLL('llmodel.dll')

    return llmodel_lib, llama_lib

Now with all that done – and hopefully I didn’t make any mistakes when writing the instructions – you should be able to run an example from within that console.
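
For reference, a minimal test along the lines of the README example of that time; chat_completion matches the bindings version this thread is about, while newer releases use model.generate(...) instead:

from gpt4all import GPT4All

# Downloads the model on first use if it isn't already present.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")

# The prompt from the original post.
response = model.chat_completion([{"role": "user", "content": "Name 3 colors"}])
print(response)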

To target a virtual environment from within Visual Studio, point it to the virtual environment’s python.exe. It should know how to handle those.