fast-stable-diffusion: NansException when trying to generate larger images.
Here’s the full error message:
0% 0/20 [00:03<?, ?it/s]
Error completing request
Arguments: ('task(681nzcxdv4qmpin)', '', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 1800, 1200, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.external_code.ControlNetUnit object at 0x7f9625db67f0>, <scripts.external_code.ControlNetUnit object at 0x7f9625db6520>, <scripts.external_code.ControlNetUnit object at 0x7f9625db6880>, <scripts.external_code.ControlNetUnit object at 0x7f9625db60d0>, <scripts.external_code.ControlNetUnit object at 0x7f9625db6910>, <scripts.external_code.ControlNetUnit object at 0x7f9625db61f0>, <scripts.external_code.ControlNetUnit object at 0x7f9625db6550>, <scripts.external_code.ControlNetUnit object at 0x7f9625db6b20>, <scripts.external_code.ControlNetUnit object at 0x7f9625dbb9a0>, <scripts.external_code.ControlNetUnit object at 0x7f9625dbb910>, False, False, 'Horizontal', '1,1', '0.2', False, False, False, 'Attention', False, False, 3, 0, False, False, 0, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, None, False, None, False, None, False, None, False, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/processing.py", line 503, in process_images
res = process_images_inner(p)
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/processing.py", line 653, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/processing.py", line 869, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 234, in launch_sampling
return func()
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/usr/local/lib/python3.9/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 152, in forward
devices.test_for_nans(x_out, "unet")
File "/content/gdrive/.shortcut-targets-by-id/1S33CUH0-cqG4EkR8r0vGFnH9aECswn4r/sd/stable-diffusion-webui/modules/devices.py", line 152, in test_for_nans
raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
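For context, the check that raises this lives in modules/devices.py (per the traceback) and is essentially an all-NaN test on the UNet output. Below is a minimal sketch of that kind of check, with names mirroring the traceback; the real implementation and message text differ:

```python
import torch

class NansException(Exception):
    pass

def test_for_nans(x: torch.Tensor, where: str) -> None:
    # Sketch: if every element of the UNet/sampler output is NaN, the image
    # is unrecoverable, so fail loudly instead of decoding a black image.
    if torch.isnan(x).all():
        raise NansException(f"A tensor with all NaNs was produced in {where}.")
```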
This all started a few days ago. There are no issues when I generate smaller images: 1280x1280 is fine, but if I go to 1200x1800 or 1536x1536, that error is thrown. Basically, I can’t use hires fix on moderately sized images anymore. I’m using the exact same model, LoRAs, settings, prompts, and everything else as I always have.
I’ve tried:
- running it without connecting my gdrive, ensuring it uses a freshly downloaded SD with nothing extra
- using the Use_Latest_Working_Commit checkbox
- reverting to even older versions
- switching to different Google accounts

Nothing has worked so far.
Maybe something changed on Colab’s side? Any help would be appreciated.
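For reference, the two workarounds the error message itself suggests would look something like this in a launch cell (the launch line is illustrative; the notebook builds its own, much longer command):

```python
# Flags are the ones named in the error message above; everything else is assumed.
# Run the UNet in full float32 instead of half precision (slower, uses more VRAM):
!python launch.py --no-half

# Or keep fp16 but skip the check (may produce black images instead of errors):
!python launch.py --disable-nan-check
```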
@buckwheaton Your fix has been implemented
OK, the following fixed this, at least for my version of the problem: remove --xformers from the commandline arguments and add --opt-sdp-attention. I did this at the bottom of the Start Stable Diffusion section of the Colab notebook (show code, then scroll all the way down). I made the change in each of the three versions of the command line, although I don’t really know what I’m doing and probably didn’t have to. 😃 The downside is that xformers is more memory-efficient, so I can’t upscale quite as large in hires fix now, but at least it’s sort of working. (A sketch of the edit is below.)
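A minimal sketch of that edit, assuming a typical launch line; only the two attention flags come from the comment above, the rest is a placeholder:

```python
# Before: xformers memory-efficient attention (NaNs out at larger sizes here)
#   !python launch.py --share --xformers
# After: PyTorch's scaled-dot-product attention instead
!python launch.py --share --opt-sdp-attention
```

--opt-sdp-attention switches A1111 to torch.nn.functional.scaled_dot_product_attention (PyTorch 2.x), which is less memory-efficient than xformers but sidesteps the NaNs in this case.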
If A1111 fixes it upstream, the fix will be reflected in the notebook.