fast-stable-diffusion: WARNING[XFORMERS]: xFormers can't load

```
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.1.0+cu118 with CUDA 1106 (you have 2.0.1+cu118)
    Python  3.9.16 (you have 3.10.12)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
Traceback (most recent call last):
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 803, in <module>
    main()
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 519, in main
    unet.enable_xformers_memory_efficient_attention()
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 251, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 219, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 212, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 219, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 212, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 151, in set_use_memory_efficient_attention_xformers
    raise e
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 145, in set_use_memory_efficient_attention_xformers
    _ = xformers.ops.memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 321, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 337, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 120, in _dispatch_fw
    return _run_priority_list(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 63, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
     query : shape=(1, 2, 1, 40) (torch.float32)
     key : shape=(1, 2, 1, 40) (torch.float32)
     value : shape=(1, 2, 1, 40) (torch.float32)
     attn_bias : <class 'NoneType'>
     p : 0.0
decoderF is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see python -m xformers.info for more info
flshattF@0.0.0 is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see python -m xformers.info for more info
    triton is not available
    requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
    Only work on pre-MLIR triton for now
cutlassF is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
    unsupported embed per head: 40
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--image_captions_filename', '--train_only_unet', '--save_starting_step=2500', '--save_n_steps=0', '--Session_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/ohwxCaraRVB', '--pretrained_model_name_or_path=/content/stable-diffusion-custom', '--instance_data_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/ohwxCaraRVB/instance_images', '--output_dir=/content/models/ohwxCaraRVB', '--captions_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/ohwxCaraRVB/captions', '--instance_prompt=', '--seed=994242', '--resolution=512', '--mixed_precision=fp16', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--use_8bit_adam', '--learning_rate=2e-06', '--lr_scheduler=linear', '--lr_warmup_steps=0', '--max_train_steps=4500']' returned non-zero exit status 1.
Something went wrong
```
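The root of the failure is the version pin in the warning: the installed xformers wheel was compiled against PyTorch 2.1.0+cu118 and Python 3.9, while the runtime has PyTorch 2.0.1+cu118 and Python 3.10, so the compiled C++/CUDA extensions refuse to load and every attention operator reports "operator wasn't built". A minimal stdlib-only sketch of that kind of exact-match check (the version strings come from the warning above; the function is purely illustrative, not xFormers' actual code):

```python
# Illustrative only: a binary wheel like xformers needs the exact torch build
# it was compiled against; a mismatch means its compiled extensions won't load.
def build_matches(built_for: str, installed: str) -> bool:
    """True when the installed torch build string exactly matches
    the one the wheel was compiled for."""
    return built_for == installed

# Values taken from the warning above:
print(build_matches("2.1.0+cu118", "2.0.1+cu118"))  # False -> extensions won't load
print(build_matches("2.0.1+cu118", "2.0.1+cu118"))  # True  -> a matching wheel loads
```

Fixing it therefore means installing an xformers build that matches the torch and Python actually present in the runtime, which is what the suggestions in the comments do.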

About this issue

  • Original URL
  • State: closed
  • Created 6 months ago
  • Comments: 16

Most upvoted comments

Disclaimer… I am not a coder and I have no idea what I am doing, but this worked for me for DreamBooth:

```
!pip install lmdb
!pip install -q torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1+cu118 torchtext==0.15.1 torchdata==0.6.0 --extra-index-url https://download.pytorch.org/whl/cu118 -U
!pip install xformers==0.0.19 triton==2.0.0 -U
!pip install -U bitsandbytes
```
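After reinstalling, it's worth confirming the pins actually landed before re-running training. A stdlib-only check (the package names are taken from the commands above):

```python
# Print what's actually installed; "not installed" flags a pin that failed.
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str):
    """Return the installed distribution version, or None if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

for pkg in ("torch", "torchvision", "xformers", "triton", "bitsandbytes"):
    print(pkg, installed_version(pkg) or "not installed")
```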

@mertayd0 Try executing this command on Colab before running the last cell: `!pip install --pre -U xformers`

@TheLastBen hey man, did you get chance to look at this?

@mertayd0 So on the main page of fast_stable_diffusion_AUTOMATIC1111.ipynb (where you run the cells) you have 7 cells:

  • Connect Google Drive
  • Install/Update Automatic 1111
  • Requirements
  • Model Download/Load
  • Download Lora
  • ControlNet
  • Start Stable Diffusion

Scroll down to the cell called ControlNet (written in bold) and highlight it by clicking the cell once. Then hover the mouse just under the cell and 2 options will appear in small boxes: +code and +text. Click +code; this will add a cell in the right place.

I'm an artist, not a coder, so I get that this can all seem very overwhelming. I too like nice, clear, easy-to-follow instructions 😃

@mertayd0 - Let me tell you how I did it:

1. Insert a new cell under ControlNet.
2. Paste (Ctrl+V) `!pip install --pre -U xformers` into the cell.
3. Run all.
4. After the new cell has run, it will stop and ask you to restart (it also takes a few minutes to run this new cell).
5. Restart the session and run all.

This should work on Google Colab 😃
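One cheap way to confirm, after the restart, that the new xformers is at least visible to the notebook's interpreter (this only checks importability, not CUDA support; the `python -m xformers.info` command mentioned in the error output gives the full per-operator report):

```python
# Sanity check after the session restart: is an xformers package
# visible to this interpreter at all? (Does not verify CUDA support.)
import importlib.util

spec = importlib.util.find_spec("xformers")
print("xformers importable:", spec is not None)
```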

@elBlacksmith - Thanks man, worked like a charm.

@mertayd0 - Hi, did you follow the on-screen instructions and restart and run after the first pass of `!pip install --pre -U xformers`?