diffusers: Error while loading Lora

Describe the bug

Following the guidance, I fine-tuned Stable Diffusion with LoRA and then tried to use the LoRA weights, but the following error appears.

Traceback (most recent call last):
  File "AutoPipelineForTex.py", line 17, in <module>
    pipe.unet.load_attn_procs("/code/pytorch_lora_weights.safetensors")
  File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/loaders/unet.py", line 264, in load_attn_procs
    rank = value_dict["lora.down.weight"].shape[0]
KeyError: 'lora.down.weight'

When I run the following code, the above issue appears.

from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("/sddata/finetune/lora/pokemon/", weight_name="pytorch_lora_weights.safetensors")
image = pipeline("Green pokemon with menacing face").images[0]
image.save("green_pokemon.png")

The safetensors file was created by running the following command:

python train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME \
  --dataloader_num_workers=8 \
  --resolution=512 \
  --center_crop \
  --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=15000 \
  --learning_rate=1e-04 \
  --max_grad_norm=1 \
  --lr_scheduler="cosine" \
  --lr_warmup_steps=0 \
  --output_dir=${OUTPUT_DIR} \
  --checkpointing_steps=500 \
  --validation_prompt="A pokemon with blue eyes." \
  --seed=1337
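
A quick way to see why the loader raises this KeyError is to list the keys stored in the safetensors file. This is only a diagnostic sketch (the path is the one from the traceback above); if the keys follow the PEFT naming scheme (e.g. lora_A/lora_B) rather than the legacy lora.down/lora.up attention-processor format, the legacy load_attn_procs path cannot parse them:

from safetensors import safe_open

# Path taken from the traceback above; adjust to your own file.
path = "/code/pytorch_lora_weights.safetensors"
with safe_open(path, framework="pt", device="cpu") as f:
    keys = list(f.keys())

print(len(keys), "tensors")
for key in keys[:10]:
    print(key)
# Keys like "...lora_A.weight" / "...lora_B.weight" indicate a PEFT-style
# checkpoint; "...lora.down.weight" / "...lora.up.weight" is the older format
# that load_attn_procs expects.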

Reproduction

from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("/sddata/finetune/lora/pokemon/", weight_name="pytorch_lora_weights.safetensors")
image = pipeline("Green pokemon with menacing face").images[0]
image.save("green_pokemon.png")

Logs

No response

System Info

Python: 3.8.10
diffusers: 0.25.0.dev0

Who can help?

No response

About this issue

  • State: closed
  • Created 7 months ago
  • Comments: 19 (7 by maintainers)

Most upvoted comments

The error can be solved by installing peft. See here: https://colab.research.google.com/gist/sayakpaul/b84e1d6b9484b2467c7fb399d1177cd7/scratchpad.ipynb.

However, without peft being installed, it fails. See here: https://colab.research.google.com/gist/sayakpaul/fa368e2867d06276af6c9ddbda7190be/scratchpad.ipynb. Cc: @younesbelkada @pacman100
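
A quick way to confirm whether diffusers can see the peft backend before loading anything (is_peft_available is exposed in recent diffusers versions; if it prints False, run pip install -U peft first):

# Minimal check that the peft backend is visible to diffusers.
from diffusers.utils import is_peft_available

print("peft available:", is_peft_available())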

I found the reason. My environment already had the peft library installed, but the transformers version was 4.31.0. After upgrading to 4.34.0, the issue was resolved. Thank you for the inspiration you provided.
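
For anyone hitting the same thing, a small sanity check along those lines; the version numbers are just the ones that worked for this commenter:

# Print the versions that matter for PEFT-based LoRA loading.
import diffusers, peft, transformers

print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
print("peft:", peft.__version__)
# transformers 4.31.0 failed here; upgrading resolved it:
#   pip install -U "transformers>=4.34.0"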

We don't have access to /sddata/finetune/lora/pokemon/, so we cannot debug further without it.

I applied this patch https://github.com/huggingface/diffusers/pull/6119 to train_text_to_image_lora_sdxl.py, and training now runs fine. However, I hit an error during inference after training. Could you please help?

Traceback (most recent call last):
  File "/data1/cgzhang6/diffusers/examples/text_to_image/train_text_to_image_lora_sdxl.py", line 1284, in <module>
    main(args)
  File "/data1/cgzhang6/diffusers/examples/text_to_image/train_text_to_image_lora_sdxl.py", line 1237, in main
    pipeline.load_lora_weights(args.output_dir)
  File "/data1/cgzhang6/diffusers/src/diffusers/loaders/lora.py", line 1305, in load_lora_weights
    self.load_lora_into_unet(
  File "/data1/cgzhang6/diffusers/src/diffusers/loaders/lora.py", line 468, in load_lora_into_unet
    unet.load_attn_procs(
  File "/usr/local/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/data1/cgzhang6/diffusers/src/diffusers/loaders/unet.py", line 264, in load_attn_procs
    rank = value_dict["lora.down.weight"].shape[0]
KeyError: 'lora.down.weight'
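
For reference, a minimal sketch of how a trained SDXL LoRA would normally be loaded for standalone inference once peft and a recent transformers are installed; the base model ID and lora_dir below are assumptions, not taken from this report:

# Minimal sketch, assuming peft and a recent transformers are installed.
import torch
from diffusers import DiffusionPipeline

lora_dir = "./sdxl-pokemon-lora"  # hypothetical training output_dir
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(lora_dir, weight_name="pytorch_lora_weights.safetensors")
image = pipe("A pokemon with blue eyes.").images[0]
image.save("pokemon.png")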

It would be very helpful if you could at least upload the trained checkpoint so that we can look into the issue. This will help us look into it faster. I hope you understand.

pytorch_lora_weights.zip

Attached are the weights I trained using the reference from diffusers/examples/text_to_image/README_sdxl.md. You can simply unzip the file to obtain the weight files.

Of course, these weights were trained after merging your patch; otherwise, training throws an error.