bitsandbytes: bitsandbytes just never works.
I’ve tried several repos for fine-tuning LLMs. Whenever I see that bitsandbytes is required, I already know it will be a pain in the ass. And this time was no different.
I used the Llama-Factory repo to fine-tune a model, and it needed bitsandbytes.
It reports a bug, and following the instructions I run `python -m bitsandbytes` to see what is wrong. But it just prints the same bug report again:
```
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll False
C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\cuda_setup\main.py:156: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('C:/Users/zaesa/anaconda3/envs/llama_factory/bin')}
  warn(msg)
C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\cuda_setup\main.py:156: UserWarning: C:\Users\zaesa\anaconda3\envs\llama_factory did not contain ['cudart64_110.dll', 'cudart64_120.dll', 'cudart64_12.dll'] as expected! Searching further paths...
  warn(msg)
CUDA SETUP: CUDA runtime path found: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\cudart64_110.dll
CUDA SETUP: Highest compute capability among GPUs detected: 8.9
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll...
Could not find module 'C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll' (or one of its dependencies). Try using the full path with constructor syntax.
CUDA SETUP: Something unexpected happened. Please compile from source:
git clone git@github.com:TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=118 make cuda11x
python setup.py install
Traceback (most recent call last):
  File "C:\Users\zaesa\anaconda3\envs\llama_factory\lib\runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\Users\zaesa\anaconda3\envs\llama_factory\lib\runpy.py", line 146, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "C:\Users\zaesa\anaconda3\envs\llama_factory\lib\runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "C:\Users\zaesa\anaconda3\envs\llama_factory\lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information:

python -m bitsandbytes

Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
```
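For context, the `did not contain ['cudart64_110.dll', ...] as expected! Searching further paths...` warnings come from a library search step: the setup walks the directories on a search path looking for a CUDA runtime library by name. A simplified sketch of that idea (this is my own illustration, not the actual `cuda_setup/main.py` code; `find_cuda_runtime` is a hypothetical name):

```python
# Simplified sketch of a CUDA-runtime lookup: scan each directory on a
# search path for the first matching candidate library name.
# NOTE: illustrative only -- not the real bitsandbytes implementation.
import os
from pathlib import Path

def find_cuda_runtime(search_path, candidates):
    """Return the first existing candidate library on search_path, else None."""
    for directory in search_path.split(os.pathsep):
        if not directory:
            continue  # skip empty entries from e.g. a trailing separator
        for name in candidates:
            lib = Path(directory) / name
            if lib.is_file():
                return lib
    # Nothing found in these directories -> the setup would warn and
    # continue "Searching further paths..."
    return None
```

If the lookup fails everywhere, the setup falls back to the system-wide CUDA install, which is why the log above then finds `cudart64_110.dll` under `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin`.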
I would really appreciate help solving this. I can’t imagine how many people are running into this while trying to fine-tune and use LLMs, because no matter which path I take, whenever bitsandbytes is involved it breaks things.
Appreciate any suggestion.
About this issue
- Original URL
- State: closed
- Created 7 months ago
- Reactions: 4
- Comments: 17
Can I bump this 1000 times?
I am using Google Colab (CUDA 12.2 by default). I installed CUDA 11.8 (because I use torch==2.0.1+cu118), added the location of CUDA 11.8 to the path, and bitsandbytes worked.
CODE:
```python
!apt-get update
!apt-get install cuda-toolkit-11-8

import os
os.environ["LD_LIBRARY_PATH"] += ":" + "/usr/local/cuda-11/lib64"
os.environ["LD_LIBRARY_PATH"] += ":" + "/usr/local/cuda-11.8/lib64"
```
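One caveat with the snippet above: `os.environ["LD_LIBRARY_PATH"] += ...` raises a `KeyError` when the variable isn’t already set (Colab sets it by default, but a plain shell may not). A slightly more defensive version of the same idea (my own sketch; `append_lib_dir` is a hypothetical helper name):

```python
# Append a directory to LD_LIBRARY_PATH without assuming the variable
# already exists, and without adding duplicate entries.
import os

def append_lib_dir(path, env=os.environ):
    current = env.get("LD_LIBRARY_PATH", "")      # "" if unset, no KeyError
    parts = [p for p in current.split(os.pathsep) if p]
    if path not in parts:
        parts.append(path)
    env["LD_LIBRARY_PATH"] = os.pathsep.join(parts)

append_lib_dir("/usr/local/cuda-11/lib64")
append_lib_dir("/usr/local/cuda-11.8/lib64")
```

Note that this only affects the current Python process and subprocesses it launches; already-loaded libraries are not re-resolved.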