fast-stable-diffusion: "Error loading script: lora_script.py" while starting SD

Using the latest notebook

Error loading script: lora_script.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 256, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/scripts/lora_script.py", line 4, in <module>
    import lora
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 238, in <module>
    def lora_apply_weights(self: torch.nn.Conv2d | torch.nn.Linear | torch.nn.MultiheadAttention):
TypeError: unsupported operand type(s) for |: 'type' and 'type'

Startup then completes, but LoRA is not listed under extra networks and cannot be used in generations. Entering a LoRA prompt returns:

Skipping unknown extra network: lora
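
For context: the traceback comes from the PEP 604 union annotation torch.nn.Conv2d | torch.nn.Linear | torch.nn.MultiheadAttention. Python evaluates that annotation when the def statement runs, and | between classes only works on Python 3.10+, so on an older runtime lora.py fails to import and the whole Lora extension is skipped. A minimal sketch with stand-in classes (not the webui code) reproduces the same error on Python 3.9 or earlier:

# Stand-in classes for illustration only; run on Python 3.9 or earlier.
class Conv2d: ...
class Linear: ...

# The annotation "Conv2d | Linear" is evaluated as soon as this def runs and raises:
#   TypeError: unsupported operand type(s) for |: 'type' and 'type'
def lora_apply_weights(self: Conv2d | Linear):
    pass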

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 5
  • Comments: 24 (2 by maintainers)

Most upvoted comments

@netrunner-exe's workaround temporarily fixed it for me.

In the Colab interface, go to /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora and open lora.py.

Add the following on a new line at the very top of the file (it must come before any other code):

from __future__ import annotations

Then restart SD and you should be able to use LoRA again. Note that if you reinstall SD, lora.py will be reset and you'll need to add the line again.
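
This works because from __future__ import annotations turns on deferred annotation evaluation (PEP 563): the annotation is stored as a string and never evaluated, so the | union is never computed on the old Python. A rough sketch of the top of lora.py after the edit (the surrounding lines are illustrative; the only actual change is the added import):

from __future__ import annotations  # has to come first, before any other code

import torch

# ...rest of lora.py unchanged; this annotation is now kept as a string and
# no longer raises on Python < 3.10...
def lora_apply_weights(self: torch.nn.Conv2d | torch.nn.Linear | torch.nn.MultiheadAttention):
    ...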

!git checkout 64da5c4

Where exactly do I add it? Before the git clone line?
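
If you want to try pinning to that commit, a rough sketch of one way to do it in a Colab cell, assuming the webui has already been cloned to the path shown in the traceback (the notebook's own cells may differ):

%cd /content/gdrive/MyDrive/sd/stable-diffusion-webui
!git checkout 64da5c4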

Still not working. I am getting the same issue as @f4cyw.

I commented out the type annotations after self in lora.py and that got it working again for me:

line 238: def lora_apply_weights(self):  # : torch.nn.Conv2d | torch.nn.Linear | torch.nn.MultiheadAttention):
line 298: def lora_reset_cached_weight(self):  # : torch.nn.Conv2d | torch.nn.Linear):