fast-stable-diffusion: No such operator xformers::efficient_attention_forward_cutlass

Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 49, in txt2img
    processed = process_images(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 430, in process_images
    res = process_images_inner(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 531, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 664, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 507, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 422, in launch_sampling
    return func()
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 507, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 315, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_checkpoint.py", line 4, in BasicTransformerBlock_forward
    return checkpoint(self._forward, x, context)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 227, in xformers_attention_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
  File "/usr/local/lib/python3.8/dist-packages/xformers/ops/memory_efficient_attention.py", line 967, in memory_efficient_attention
    return op.forward_no_grad(
  File "/usr/local/lib/python3.8/dist-packages/xformers/ops/memory_efficient_attention.py", line 343, in forward_no_grad
    return cls.FORWARD_OPERATOR(
  File "/usr/local/lib/python3.8/dist-packages/xformers/ops/common.py", line 11, in no_such_operator
    raise RuntimeError(
RuntimeError: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with python setup.py develop?

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 13
  • Comments: 55 (7 by maintainers)

Most upvoted comments

I’ve put the xformers wheels compiled by facebookresearch here:

https://github.com/brian6091/xformers-wheels/releases

This works on Google Colab for Tesla T4 (free) and A100 (premium).

Drop this in whatever cell you’re running the xformers install:

!pip install https://github.com/brian6091/xformers-wheels/releases/download/0.0.15.dev0%2B4c06c79/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl
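One common reason a prebuilt wheel like this one fails is that its compatibility tags don't match the environment installing it. As a rough sketch (the helper names `wheel_tags` and `wheel_matches_env` are hypothetical, and the platform-tag comparison is simplified to the Linux case relevant to Colab), you can check the PEP 427 tags in the wheel's filename against the running interpreter before installing:

```python
import platform
import sys

def wheel_tags(wheel_filename):
    """Extract the (python, abi, platform) tags from a PEP 427 wheel name.

    Wheel filenames end in: {python tag}-{abi tag}-{platform tag}.whl
    """
    stem = wheel_filename[:-len(".whl")]
    py_tag, abi_tag, plat_tag = stem.split("-")[-3:]
    return py_tag, abi_tag, plat_tag

def wheel_matches_env(wheel_filename):
    """Rough check that a wheel's tags fit this interpreter (Linux only)."""
    py_tag, _abi_tag, plat_tag = wheel_tags(wheel_filename)
    my_py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
    my_plat_tag = f"{sys.platform}_{platform.machine()}"
    return py_tag == my_py_tag and plat_tag == my_plat_tag

# The wheel above is tagged cp38-cp38-linux_x86_64, i.e. CPython 3.8 on
# x86-64 Linux -- which is what Colab ran at the time this was posted.
print(wheel_tags(
    "xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl"))
```

If the tags don't match (for example, after Colab upgrades its Python), pip will refuse the wheel outright or, worse, a force-installed build will fail at import time.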

Fixed (for the T4 at least); re-run the requirements cell.

After using

!python setup.py build develop

I still get the same error below.

RuntimeError: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with python setup.py develop?

I noticed this early in the start process:

/usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
WARNING:xformers:WARNING: /usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
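An undefined-symbol warning like the one above usually means the compiled extension (_C.so) was built against a different PyTorch ABI: the module still imports, but the custom operator never gets registered, and the lookup only fails later, mid-sampling. A minimal sketch of a startup probe (the helper name `cutlass_op_available` is hypothetical, not part of xformers or the webui) that surfaces the problem before generation starts:

```python
def cutlass_op_available():
    """Probe whether xformers' compiled CUTLASS forward op is registered.

    With a broken build (e.g. _C.so compiled against a mismatched torch,
    as the undefined-symbol warning suggests), the import may succeed but
    the operator lookup raises, which is exactly the failure seen here.
    """
    try:
        import torch
        # Looking up an unregistered custom op raises at attribute access.
        torch.ops.xformers.efficient_attention_forward_cutlass
        return True
    except (ImportError, RuntimeError, AttributeError):
        return False

# Decide up front whether to enable the xformers attention path,
# rather than letting txt2img crash deep inside the sampler.
print(cutlass_op_available())
```

Running this right after the install cell makes it obvious whether the wheel actually matches the installed torch, instead of discovering it on the first generation.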

Great, I'll add the libs to the dependencies files.

Yes, no problems.

That's not my notebook and not my repo; this is the repo: https://github.com/TheLastBen/fast-stable-diffusion

Click on the thumbnail in the README to get to the latest Colabs.