gpt4all: Unable to retrieve list of all GPU devices
System Info
System: Google Colab
GPU: NVIDIA T4 16 GB
OS: Ubuntu
gpt4all version: latest
Information
- The official example notebooks/scripts
- My own modified scripts
 
Related Components
- backend
- bindings
- python-bindings
- chat-ui
- models
- circleci
- docker
- api
 
Reproduction
from gpt4all import GPT4All
model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu')
Expected behavior
Expected behaviour: it should run just as it does on CPU, but instead I am getting this error:
----> 1 model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu')
1 frames
[/usr/local/lib/python3.10/dist-packages/gpt4all/pyllmodel.py](https://localhost:8080/#) in init_gpu(self, model_path, device)
    231             all_devices_ptr = self.llmodel_lib.llmodel_available_gpu_devices(self.model, 0, ctypes.byref(num_devices))
    232             if not all_devices_ptr:
--> 233                 raise ValueError("Unable to retrieve list of all GPU devices")
    234             all_gpus = [all_devices_ptr[i].name.decode('utf-8') for i in range(num_devices.value)]
    235 
ValueError: Unable to retrieve list of all GPU devices
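Until GPU enumeration works on the machine, a common workaround is to attempt GPU initialization and fall back to CPU when it raises. A minimal sketch of that pattern (`load_with_fallback` is a hypothetical helper, not part of the gpt4all API; the commented lines mirror the reproduction above):

```python
def load_with_fallback(load, devices=("gpu", "cpu")):
    """Try each device in order; return (model, device) for the first that loads."""
    last_err = None
    for dev in devices:
        try:
            return load(dev), dev
        except ValueError as err:  # gpt4all raises ValueError when GPU enumeration fails
            last_err = err
    raise last_err

# With the real bindings this would be used as:
#   from gpt4all import GPT4All
#   model, dev = load_with_fallback(
#       lambda d: GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device=d))
```

This keeps the script usable on Colab instances where the GPU is not visible to the bindings, at the cost of silently running on CPU.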
About this issue
- Original URL
- State: closed
- Created 10 months ago
- Comments: 19 (4 by maintainers)
 
You’re right. I finally got it working with a Quadro RTX 4000; I tested with the mistral-7b-instruct-v0.1.Q4_0.gguf model.
I configured an Ubuntu 22.04 server with the following:
and to verify the configuration:
Packages in a Python virtual environment (venv), just to run the minimal demo:
LLVMpipe is only a software emulation of a graphics card. So that’s not going to be useful even if it works. Is that the only GPU device it shows on that machine? If so, there is no dedicated GPU.
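One way to check for this programmatically is to scan the reported Vulkan device names for known software renderers. A small illustrative helper (the sample names are made up; on a real machine you would feed it the device names from `vulkaninfo` or from the bindings' device list):

```python
# Common software rasterizers that show up as "GPUs" but run on the CPU.
SOFTWARE_RENDERERS = ("llvmpipe", "swiftshader", "softpipe")

def has_real_gpu(device_names):
    """True if at least one device name is not a known software rasterizer."""
    return any(
        not any(sw in name.lower() for sw in SOFTWARE_RENDERERS)
        for name in device_names
    )

print(has_real_gpu(["llvmpipe (LLVM 15.0.7, 256 bits)"]))            # False
print(has_real_gpu(["NVIDIA T4", "llvmpipe (LLVM 15.0.7, 256 bits)"]))  # True
```

If this returns False, the machine has no usable dedicated GPU, and `device='gpu'` cannot work regardless of driver configuration.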
See https://gist.github.com/apage43/0cf7198af35cf5b10f844970ec2d01c0 for how to run it on Colab - you need to apt install a couple of extra packages.