gpt4all: Unable to retrieve list of all GPU devices

System Info

System: Google Colab
GPU: NVIDIA T4 16 GB
OS: Ubuntu
gpt4all version: latest

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • backend
  • bindings
  • python-bindings
  • chat-ui
  • models
  • circleci
  • docker
  • api

Reproduction

from gpt4all import GPT4All

model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu')

Expected behavior

Expected behaviour: the model should load on the GPU and run just as it does on CPU.

Instead I am getting this error:

----> 1 model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu')

1 frames

[/usr/local/lib/python3.10/dist-packages/gpt4all/pyllmodel.py](https://localhost:8080/#) in init_gpu(self, model_path, device)
    231             all_devices_ptr = self.llmodel_lib.llmodel_available_gpu_devices(self.model, 0, ctypes.byref(num_devices))
    232             if not all_devices_ptr:
--> 233                 raise ValueError("Unable to retrieve list of all GPU devices")
    234             all_gpus = [all_devices_ptr[i].name.decode('utf-8') for i in range(num_devices.value)]
    235 

ValueError: Unable to retrieve list of all GPU devices
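The check that raises here lives in pyllmodel.py: the native `llmodel_available_gpu_devices` call returns a pointer to an array of device structs, and a NULL pointer becomes this ValueError. A minimal, self-contained sketch of that ctypes pattern (the `GPUDevice` field layout below is a hypothetical stand-in, not the real `LLModelGPUDevice` struct):

```python
import ctypes

# Hypothetical struct mirroring the device records the binding iterates;
# the field names here are assumptions, not the real layout.
class GPUDevice(ctypes.Structure):
    _fields_ = [("name", ctypes.c_char_p),
                ("heapSize", ctypes.c_size_t)]

def decode_devices(devices_ptr, num_devices):
    """Replicate the check from pyllmodel.py: a NULL pointer means
    the backend could not enumerate any GPU devices."""
    if not devices_ptr:
        raise ValueError("Unable to retrieve list of all GPU devices")
    return [devices_ptr[i].name.decode("utf-8") for i in range(num_devices)]

# Build a fake two-element device array to exercise the helper.
devices = (GPUDevice * 2)(GPUDevice(b"NVIDIA T4", 16 << 30),
                          GPUDevice(b"llvmpipe", 0))
ptr = ctypes.cast(devices, ctypes.POINTER(GPUDevice))
print(decode_devices(ptr, 2))  # ['NVIDIA T4', 'llvmpipe']
```

So the error means the Vulkan backend returned no device list at all, not that a listed device was rejected.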

About this issue

  • Original URL
  • State: closed
  • Created 10 months ago
  • Comments: 19 (4 by maintainers)

Most upvoted comments

Those are all relatively old GPUs, they may not be supported. I know my Tesla P40 from 2016 is not.

You’re right. I finally got it working with a Quadro RTX 4000; I tested with the mistral-7b-instruct-v0.1.Q4_0.gguf model.

I configured an Ubuntu 22.04 server with the following:

  sudo apt install libnvidia-gl-535-server
  sudo apt install nvidia-driver-535-server
  sudo apt install libvulkan1

and to verify the configuration:

  sudo apt install nvidia-utils-535-server  # use nvidia-smi -L to check that the driver recognizes the card
  sudo apt install vulkan-tools  # run vulkaninfo to check that Vulkan recognizes the card

Packages installed in a Python virtual environment (venv), just enough to run the minimal demo:

  pip install langchain
  pip install gpt4all
  pip install sentence-transformers

import gpt4all

gptj = gpt4all.GPT4All(
    model_name="mistral-7b-instruct-v0.1.Q4_0.gguf",
    allow_download=True,
    device="nvidia",
)

while True:
    prompt = input('\nQuestion: ')
    if prompt == 'q':
        break
    res = gptj.generate(prompt=prompt, max_tokens=1500)
    print(res)
vulkaninfo reports:

  deviceName = llvmpipe (LLVM 15.0.7, 256 bits)
  driverID = DRIVER_ID_MESA_LLVMPIPE
  driverName = llvmpipe

LLVMpipe is only a software emulation of a graphics card. So that’s not going to be useful even if it works. Is that the only GPU device it shows on that machine? If so, there is no dedicated GPU.
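To check this programmatically, you can parse the `deviceName` lines from vulkaninfo output and filter out llvmpipe. A sketch (assumes the vulkan-tools package; `--summary` may not exist on older vulkaninfo builds, and the subprocess call is guarded in case the tool is not installed):

```python
import subprocess

def real_gpus(vulkaninfo_text):
    """Return deviceName values that are not the llvmpipe software rasterizer."""
    names = [line.split("=", 1)[1].strip()
             for line in vulkaninfo_text.splitlines()
             if line.strip().startswith("deviceName")]
    return [n for n in names if "llvmpipe" not in n.lower()]

try:
    out = subprocess.run(["vulkaninfo", "--summary"],
                         capture_output=True, text=True).stdout
except FileNotFoundError:  # vulkan-tools not installed
    out = ""

gpus = real_gpus(out)
print(gpus if gpus else "no dedicated GPU: Vulkan sees only a software device")
```

If the only device name returned contains "llvmpipe", the machine has no usable hardware Vulkan device, which matches the situation above.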

See https://gist.github.com/apage43/0cf7198af35cf5b10f844970ec2d01c0 for how to run it on Colab; you need to apt install a couple of extra packages.
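As a defensive pattern on machines where GPU support is uncertain, you can try the GPU first and fall back to CPU when the binding raises the ValueError from this issue. This is a hedged sketch, not part of the gpt4all API; `load_with_fallback` is a hypothetical helper:

```python
def load_with_fallback(factory, model_file, **kwargs):
    """Try to construct the model on the GPU; fall back to CPU when
    GPU device enumeration fails with ValueError."""
    try:
        return factory(model_file, device="gpu", **kwargs)
    except ValueError:
        return factory(model_file, device="cpu", **kwargs)

# Usage with the real bindings (assumes gpt4all is installed):
#   from gpt4all import GPT4All
#   model = load_with_fallback(GPT4All, "orca-mini-3b.ggmlv3.q4_0.bin")
```

Inference will be slower on CPU, but the script keeps working on hosts with no usable Vulkan device.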