bitsandbytes: Missing Windows support

Currently, the library uses precompiled Linux binaries. I am unsure how compatible these are with standard PyTorch installs on Windows. It might be that the binaries need to be compiled against mingw32/64 to create functional binaries for Windows.

The most helpful thing would be for someone to compile the library from source on Windows and confirm it works. This will require altering the Makefile. If that works, we can add instructions for compiling on Windows as a first step before doing a full-scale Windows deployment of binaries on pip.

Since I do not have a Windows machine, any help with this is welcome!

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 7
  • Comments: 57

Most upvoted comments

After some really tedious debugging and tackling various hidden problems, I managed to compile the whole module. This is the end result: https://github.com/DeXtmL/bitsandbytes-win-prebuilt

The binaries are compiled against CUDA Toolkit 11.6 and Visual Studio 2022. I am able to make inferences nearly identical to the “normal” fp16 version, so consider this a confirmation that it works. No rigorous testing was conducted, though. @TimDettmers Finally, the “cuda_setup” part of the source code is entirely incompatible with Windows; there are loads of hardcoded routines, so I used a quick makeshift patch instead of making it proper. That’s also why I’m not posting my changes or making a PR for now. If you are eager to test:

in cuda_setup/main.py:
make evaluate_cuda_setup() always return "libbitsandbytes_cuda116.dll"
in ./cextension.py:
change ct.cdll.LoadLibrary(binary_path) to ct.cdll.LoadLibrary(str(binary_path))

That should do the trick.
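Concretely, the makeshift patch amounts to something like this, a minimal sketch of the hardcoded function (the 5-tuple return shape is taken from the replacement lines quoted later in this thread):

# cuda_setup/main.py -- short-circuit the Linux-specific detection
def evaluate_cuda_setup():
    # Always hand back the prebuilt Windows DLL instead of probing the
    # CUDA install the way the original Linux-only code does.
    return "libbitsandbytes_cuda116.dll", None, None, None, None

The str(binary_path) change is needed because ctypes’ LoadLibrary on Windows expects a plain string rather than a pathlib.Path on the Python versions of that era.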

Hopefully this can help someone in Windows territory; let’s hope official Windows support comes soon.

If anyone else is still searching for a Windows solution and doesn’t want to lose a few hours to the same issue, just use this repo: https://github.com/jllllll/bitsandbytes-windows-webui

The README even includes a pip install command, and (as of 0.41.1) it installs the newest version of bitsandbytes. The only difference is that it’s compatible with Windows: it includes .dll files instead of .so, and cuda_setup\main.py works as-is for us.

To use this with facebook-research/LLaMA-7b within text-generation-webui on Windows 11:

  1. git pull oobabooga/text-generation-webui
  2. follow the installation instructions for conda
  3. download HuggingFace-converted model weights for LLaMA, or convert them yourself from the original weights. Both have leaked, on torrents and even on the official Facebook LLaMA repo as an unapproved PR.
  4. copy the llama-7b folder (or whatever size you want to run) into text-generation-webui\models. The folder should contain config.json, generation_config.json, pytorch_model.bin.index.json, special_tokens_map.json, tokenizer.model, and tokenizer_config.json, as well as all 33 pytorch_model-000xx-of-00033.bin files
  5. put libbitsandbytes_cuda116.dll in C:\Users\xxx\miniconda3\envs\textgen\lib\site-packages\bitsandbytes\
  6. edit \bitsandbytes\cuda_setup\main.py as follows (or use the helper script sketched below):

search for:
if not torch.cuda.is_available(): return 'libsbitsandbytes_cpu.so', None, None, None, None
replace with:
if torch.cuda.is_available(): return 'libbitsandbytes_cuda116.dll', None, None, None, None

(The 'libsbitsandbytes' spelling is a typo that exists in the source file, so search for it verbatim.)

search for this, twice:
self.lib = ct.cdll.LoadLibrary(binary_path)
replace with:
self.lib = ct.cdll.LoadLibrary(str(binary_path))
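If you’d rather script those two edits than make them by hand, a throwaway helper along these lines would work (a hypothetical convenience script, not from this thread; the path is the example one from step 5 and must be adjusted to your environment):

from pathlib import Path

# Hypothetical location -- adjust to your own conda env (see step 5).
main_py = Path(r"C:\Users\xxx\miniconda3\envs\textgen\Lib\site-packages"
               r"\bitsandbytes\cuda_setup\main.py")

text = main_py.read_text()
# The search strings must match your installed version exactly.
text = text.replace(
    "if not torch.cuda.is_available(): return 'libsbitsandbytes_cpu.so', None, None, None, None",
    "if torch.cuda.is_available(): return 'libbitsandbytes_cuda116.dll', None, None, None, None",
)
text = text.replace(
    "self.lib = ct.cdll.LoadLibrary(binary_path)",
    "self.lib = ct.cdll.LoadLibrary(str(binary_path))",
)
main_py.write_text(text)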

Why can’t we still use this on Windows?

This is 2023.


Where do you put the pre-built files to activate Adam?

You put them in site-packages\bitsandbytes.
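If you’re not sure where that directory is for your environment, a generic one-liner (not from this thread) prints it:

import sysconfig

# Path of the active environment's site-packages directory, where the
# bitsandbytes folder (and thus the prebuilt DLLs) should live.
print(sysconfig.get_paths()["purelib"])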


Quick final follow-up: it built fine in Debug with the above, and I actually don’t quite know how. For Release mode I did have to build the pthread library (just another mkdir build, cmake .., open the solution, build all in Release mode), then slightly modify the CMake file. (I probably could have just done cmake .. ; cmake --build . -j4 --config Release ; to build pthread.)

I don’t know why it worked in Debug mode at all, because I had link_libraries wrong (the entries shouldn’t have the -l prefix in CMake); for Release mode I had to fix that and include pthreadVC3.lib. Here’s the final version of my CMakeLists.txt for the csrc folder.

EDIT: I got it to load up, but it says it compiled without GPU support, so I’m still working on it.

EDIT 2: Still working on it. I added add_compile_definitions(BUILD_CUDA), then checked the resulting VC files and it does enable the BUILD_CUDA define. In pythonInterface.cpp Visual Studio says BUILD_CUDA is defined, and I can see where cadam32bit_g32 is generated via that macro, but I’m not quite sure why lib.cadam32bit_g32 throws an AttributeError when the .dll is loaded.

EDIT 3: I finally got it to work. It took a couple of hours (long compile times), but I finally got a build that exports all symbols. The trick was putting it in a different CMake file. The final two files are attached: root/CMakeLists.txt and root/csrc/CMakeLists.txt.

mkdir build, cd build, cmake .., cmake --build ./ -j4 --config Release; the resulting .dll is placed in build/csrc/Release/bitsandbytes.dll
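To check whether a freshly built DLL actually exports its symbols (the AttributeError mentioned in EDIT 2 above), a quick ctypes probe works; this is a generic snippet, not from the thread, using the path and symbol named above:

import ctypes

# Load the DLL produced by the Release build described above.
lib = ctypes.cdll.LoadLibrary(r"build\csrc\Release\bitsandbytes.dll")

# ctypes resolves exported functions lazily on attribute access, so a
# missing export surfaces as an AttributeError (hasattr -> False).
print(hasattr(lib, "cadam32bit_g32"))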

Just for fun, here it is running GPT-J-6B on an RTX 3080 on Windows 11 with CUDA 11.3:

import torch
import transformers
from transformers.models.gptj import GPTJForCausalLM

access_token = "hf_"  # your HuggingFace access token

device = 'cuda' if torch.cuda.is_available() else 'cpu'

tokenizer = transformers.AutoTokenizer.from_pretrained(
    "EleutherAI/gpt-j-6B", use_auth_token=access_token)

# device_map='auto' already places the 8-bit weights on the GPU, so the
# model needs no .to(device) call (8-bit models cannot be moved anyway).
gpt = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", use_auth_token=access_token,
    device_map='auto', load_in_8bit=True, low_cpu_mem_usage=True)

prompt = tokenizer("A cat sat on a mat", return_tensors='pt')
prompt = {key: value.to(device) for key, value in prompt.items()}
out = gpt.generate(**prompt, min_length=128, max_length=128, do_sample=True)
>>> tokenizer.decode(out[0])
"A cat sat on a mat, staring at me, his back legs tucked under him, tail swerving in quick little circles.\n\nI squatted next to him and leaned against the cold wooden wall. I'd come down here to feed the cat, but I'd been too tired and cold and my stomach still ached and my hands and feet were numb from spending the night in a tree. Besides, this was not my house, not my town, not my time. The cat stared at me, his green eyes the only proof he knew I was the intruder he was protecting.\n\nI wished I'd brought a blanket"


“Compile it yourself” can indeed be a blocker, especially since this is a library for Python and any compilation errors you hit will come from C++. I could find time to make a PR with whatever has been done so far and polish it where necessary, though I’m not sure whether someone is already doing that.

bitsandbytes did not support Windows before, but my method can make it work. (yuhuang)

  1. Open the folder J:\StableDiffusion\sdwebui, click the folder’s address bar and type CMD (or use WIN+R, then CMD), press Enter, then run: cd /d J:\StableDiffusion\sdwebui
  2. J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes
  3. J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes-windows
  4. J:\StableDiffusion\sdwebui\py310\python.exe -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl

Replace J:\StableDiffusion\sdwebui\py310 here with your own SD venv directory (the folder containing python.exe).
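As a quick sanity check after installing that wheel (a generic snippet, not from the thread), confirm the Windows build is the one installed and that it imports cleanly:

from importlib.metadata import version

print(version("bitsandbytes"))  # expect 0.41.1 from the wheel above

import bitsandbytes  # should import without complaining about Linux .so files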

@FurkanGozukara we can use it on Windows, as many people do (me included: https://github.com/stoperro/bitsandbytes_windows), thanks to this community effort. It’s just less convenient, as the support is not yet merged into the official branch and you need to compile it yourself.

Note that even Microsoft doesn’t care about Windows support for their own AI tools (microsoft/DeepSpeed#2427), so having some support here, when the authors don’t necessarily have a Windows machine, is heartening.

I know that branch very well, but it’s based on a much older commit.

I am making tutorials for regular people. “Compile it yourself” is not an option for them.

I hope Windows support gets added.

@acpopescu It works well now. Thanks a lot!

@km19809 - Ok, it’s my CMakeLists.txt. My initial revision had a bug where it was using only the initially set architecture (52), which is why it was working for you; my latest release was missing the additional architectures that the Makefile has. https://github.com/acpopescu/bitsandbytes/releases/tag/v0.37.2-win.1 should now cover your architecture when using nocublast.

I’ve got this compiling under CUDA 11.7 with CMake, if y’all are interested. I DID NOT RUN ANY TESTS yet; it is too late in the day.

It’s a prototype CMake file and is missing some functionality of the Makefile: it is usable for targeting a single config, and does not bring in /dependencies/cub.

https://github.com/acpopescu/bitsandbytes/tree/cmake_windows Still WIP.

To deploy, copy build/Release/*.* to ./bitsandbytes/

For reference and diff - #229

Did you look at https://github.com/TimDettmers/bitsandbytes/pull/127 by any chance?


The solution above is still valid, and the linked binaries work with CUDA 11.7 as well (at least for 8-bit Adam, on Windows 10). But the location of the ct.cdll.LoadLibrary call has changed: it is now in ./cuda_setup/main.py rather than in ./cextension.py.

Just replace both occurrences of self.lib = ct.cdll.LoadLibrary(binary_path) with self.lib = ct.cdll.LoadLibrary(str(binary_path)).

Is it possible to run bitsandbytes with an RTX 2060 6GB on Windows 10?

I don’t see why it wouldn’t run on a 2060; just be aware it doesn’t eliminate VRAM requirements, it only reduces them. You still couldn’t run ChatGPT, for example, with its 800GB+ of 32-bit-precision VRAM requirements (if you had access to that). But any model that takes <24 GB of VRAM in 32-bit, or <12 GB in 16-bit mode, should be able to fit in 6 GB at 8-bit; see the arithmetic sketch below.
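To make that rule of thumb concrete, here is a rough weight-only estimate (activations and runtime overhead not counted; the 6B parameter count is just an example size):

# Weight memory scales with bytes per parameter: params * bits / 8.
def weight_gib(params_billion, bits):
    return params_billion * 1e9 * (bits / 8) / 2**30

for bits in (32, 16, 8):
    print(f"6B params @ {bits}-bit: ~{weight_gib(6, bits):.1f} GiB")
# -> ~22.4, ~11.2, and ~5.6 GiB, consistent with the <24/<12/6 GB figures above.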

From my not-so-great memory, it’s something like this:

To build the bitsandbytes project for Windows, you will need two programs: cmake and nvcc. You can use a build environment such as Visual Studio plus Miniconda.

Open the command-line interface (CLI) for your build environment (Start menu → Visual Studio → one of those consoles). Activate your chosen environment (Miniconda) and install the necessary packages (cuda-nvcc, IIRC; probably a CUDA environment like https://pytorch.org/get-started/locally/). Place the CMake files in the right location. Build pthreads (if necessary) using CMake, with the same commands as below.

EDIT 1: download https://github.com/GerHobbelt/pthread-win32 and extract the whole thing into the project so it sits at /project_root/dependencies/pthread-win32-main/pthread.h. END EDIT 1

Run the following commands. (Note: -j4 means use 4 cores to build; if you don’t have 4 cores, or you have a lot more, change that number.)

(Assuming the C:\ drive; if on another drive, change the letter on the first and second lines appropriately.)

C:
cd C:\PROJECT_ROOT\  ( or cd C:\PROJECT_ROOT\dependencies\pthread-win32-main )
mkdir build
cd build
cmake ..
cmake --build ./ -j4 --config Release

The resulting DLL will be in build/csrc/Release/bitsandbytes.dll.

EDIT 2: When the build errors about unistd.h or getopt.h, open that file and comment out the #include. A more proper way would be to detect _MSC_VER and skip including unistd.h when it is defined. (I wouldn’t test against WIN32, because that can be true in MinGW, WSL, etc. environments where unistd.h is still required, whereas _MSC_VER indicates the Microsoft Visual Studio compiler.) E.g.:

#ifndef _MSC_VER
#include <unistd.h>
#endif

END EDIT 2