diffusers: Saving error when fine-tuning a Stable Diffusion LoRA

Describe the bug

After the Stable Diffusion LoRA fine-tuning run completes, the following error occurs:

Traceback (most recent call last):
  File "train_text_to_image_lora.py", line 872, in <module>
    main()
  File "train_text_to_image_lora.py", line 825, in main
    unet.save_attn_procs(args.output_dir)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/loaders.py", line 273, in save_attn_procs
    weights_no_suffix = weights_name.replace(".bin", "")
AttributeError: 'NoneType' object has no attribute 'replace'

Because of this error, the trained LoRA weights are never saved.
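
From the traceback, save_attn_procs reaches the failing line with weights_name still at None (apparently its default in this diffusers version, since the script calls it with only the output directory), so the crash happens before any file is written. A minimal sketch of the failure mode, as a hypothetical simplification rather than the actual library code:

def save_attn_procs(save_directory, weights_name=None):
    # When the caller omits weights_name it stays None, so calling a string
    # method on it raises the AttributeError seen above, before anything
    # is written to save_directory.
    weights_no_suffix = weights_name.replace(".bin", "")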

Reproduction

I ran the following command in Google Colab:

accelerate launch train_text_to_image_lora.py \
  --mixed_precision="fp16" \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --dataset_name="Norod78/microsoft-fluentui-emoji-512-whitebg" \
  --caption_column="text" \
  --resolution=512 \
  --train_batch_size=2 \
  --num_train_epochs=1 \
  --output_dir="./sd-model-finetuned-lora" \
  --max_train_steps=1000 \
  --checkpointing_steps=500 \
  --learning_rate=1e-04 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --seed=42

Logs

Same traceback as in the description above.

System Info

Google Colab running the latest version of diffusers.

Is this a bug, or is there something I have to do on my end to fix this?

About this issue

  • State: closed
  • Created a year ago
  • Comments: 17 (5 by maintainers)

Most upvoted comments

I have the same issue… How can I fix it, please?

You can bypass this bug by modifying the training script to pass an explicit weights_name. The diff below patches train_dreambooth_lora.py; the same two-line change applies to train_text_to_image_lora.py from the reproduction above, which makes the same save call (see the sketch after the diff).

diff --git a/examples/dreambooth/train_dreambooth_lora.py b/examples/dreambooth/train_dreambooth_lora.py
index c9321982..db26a1bf 100644
--- a/examples/dreambooth/train_dreambooth_lora.py
+++ b/examples/dreambooth/train_dreambooth_lora.py
@@ -987,7 +987,7 @@ def main(args):
     accelerator.wait_for_everyone()
     if accelerator.is_main_process:
         unet = unet.to(torch.float32)
-        unet.save_attn_procs(args.output_dir)
+        unet.save_attn_procs(args.output_dir, weights_name='xyz.bin')

         # Final inference
         # Load previous pipeline
@@ -998,7 +998,7 @@ def main(args):
         pipeline = pipeline.to(accelerator.device)

         # load attention processors
-        pipeline.unet.load_attn_procs(args.output_dir)
+        pipeline.unet.load_attn_procs(args.output_dir, weights_name='xyz.bin')

         # run inference
         if args.validation_prompt and args.num_validation_images > 0:

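A sketch of the equivalent change in train_text_to_image_lora.py, the script from the reproduction command. The save hunk matches the call shown in the traceback; the load hunk assumes the script mirrors the DreamBooth one, and hunk headers and exact line positions are omitted since they vary by version:

--- a/examples/text_to_image/train_text_to_image_lora.py
+++ b/examples/text_to_image/train_text_to_image_lora.py
-        unet.save_attn_procs(args.output_dir)
+        unet.save_attn_procs(args.output_dir, weights_name='xyz.bin')
-        pipeline.unet.load_attn_procs(args.output_dir)
+        pipeline.unet.load_attn_procs(args.output_dir, weights_name='xyz.bin')
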
@sayakpaul I tried the workaround, but now the error occurs at load time. @patrickvonplaten I will send the LoRA weights as soon as I can.
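
If the workaround still fails at load time, one thing to check is whether the weights_name passed to load_attn_procs matches the one used when saving, since the two calls no longer share an implicit default. A minimal inference sketch with matching names, assuming the output directory from the reproduction command and the arbitrary 'xyz.bin' filename from the diff (the prompt is illustrative):

import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was fine-tuned from.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# weights_name here must match the name passed to save_attn_procs.
pipe.unet.load_attn_procs("./sd-model-finetuned-lora", weights_name="xyz.bin")
pipe = pipe.to("cuda")

image = pipe("a fluent-ui style emoji of a rocket on a white background").images[0]
image.save("lora_sample.png")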