diffusers: Error converting LoRA to safetensors/ckpt

Generally speaking, I am not clear on what to do with the output of these LoRA Python scripts. I don’t think the output can be used natively by the WebUIs; other LoRAs I’ve seen online are usually safetensors files. Here is what I did…

I used the Python script provided in examples to train a LoRA:

#!/bin/sh

accelerate launch train_dreambooth_lora.py \
  --pretrained_model_name_or_path=$1  \
  --instance_data_dir=$2 \
  --output_dir=$3 \
  --instance_prompt="a photo of laskajavids" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --checkpointing_steps=200 \
  --learning_rate=1e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=1000 \
  --validation_prompt="A photo of laskajavids by the pool" \
  --validation_epochs=50 \
  --seed="0" \
  --mixed_precision="fp16" \
  --use_8bit_adam

I successfully (or not?) created a LoRA, and it output the following to /output:

checkpoint-1000
checkpoint-200
checkpoint-400
checkpoint-600
checkpoint-800
pytorch_lora_weights.bin

Then, I tried to run scripts/convert_diffusers_to_original_stable_diffusion.py, like so:

 python /diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path /output --checkpoint_path /test.ckpt --use_safetensors

I received the following error:

Traceback (most recent call last):
  File "/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py", line 290, in <module>
    unet_state_dict = torch.load(unet_path, map_location="cpu")
  File "/home/ubuntu/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 771, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/ubuntu/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 270, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/ubuntu/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 251, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/output/unet/diffusion_pytorch_model.bin'

Did I miss something when creating the LoRA?

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 7
  • Comments: 43 (16 by maintainers)

Most upvoted comments

I adapted the script by @ignacfetser by adding CPU support and a simple argparse: https://github.com/harrywang/finetune-sd/blob/main/convert-to-safetensors.py Thanks again @ignacfetser @jndietz @haofanwang for your guidance and help. Now I can train the models using diffusers and use them in the WebUI, cheers!
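For anyone who can’t open the link, the conversion essentially boils down to loading pytorch_lora_weights.bin and re-saving the tensors under the key names the WebUI’s LoRA loader expects. Below is only a rough sketch of that idea, assuming the attention-processor key layout that train_dreambooth_lora.py produced around diffusers v0.13 (to_q_lora / to_k_lora / … nested under a .processor prefix); the paths are hypothetical and the renaming in the linked script may differ:

import torch
from safetensors.torch import save_file

def diffusers_lora_to_webui(bin_path, out_path):
    # load the LoRA-only state dict written by train_dreambooth_lora.py
    state = torch.load(bin_path, map_location="cpu")
    converted = {}
    for key, tensor in state.items():
        # e.g. "down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor.to_q_lora.down.weight"
        if "_lora." not in key:
            continue
        key = key.replace(".processor", "")
        key = key.replace("to_out_lora", "to_out_0_lora")   # the WebUI convention names the output projection to_out_0
        module_path, updown = key.split("_lora.")            # -> "...attn1.to_q", "down.weight"
        module_path = module_path.replace(".", "_")
        converted[f"lora_unet_{module_path}.lora_{updown}"] = tensor.contiguous()
    # no ".alpha" entries are written here; many loaders treat a missing alpha as alpha == rank
    save_file(converted, out_path)

# hypothetical paths, for illustration only
diffusers_lora_to_webui("/output/pytorch_lora_weights.bin", "/output/laskajavids_lora.safetensors")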

@sayakpaul @williamberman I have made a PR for this; if it looks good to you, it should be fine to merge.

By “other WebUIs”, was Automatic1111 mostly what was meant?

We should really think about whether we can somehow integrate diffusers into AUTO1111

@jndietz I downloaded an Automatic1111-“compatible” LoRA model from civitai and cross-referenced the keys. The Automatic1111 UI also logs the mismatched keys when it loads the model, which made this easier.
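A quick way to do that kind of cross-referencing is to dump the key names from both files side by side; the paths here are just placeholders:

import torch
from safetensors.torch import load_file

# keys produced by the diffusers training script
diffusers_keys = sorted(torch.load("/output/pytorch_lora_weights.bin", map_location="cpu").keys())

# keys from a LoRA downloaded from civitai that A1111 loads happily
webui_keys = sorted(load_file("downloaded_lora.safetensors").keys())

for k in diffusers_keys[:5]:
    print("diffusers:", k)
for k in webui_keys[:5]:
    print("webui:   ", k)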

@harrywang I’ve just realized that I used a different version of the LoRA training script, which explains the missing file: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py I’ll check out the one you’ve linked to see if I can make the converter work on it as well.

@haofanwang Another question: convert_lora_safetensor_to_diffusers.py converts safetensors to diffusers format. After I trained a LoRA model, I have the following in the output folder and the checkpoint subfolder: [screenshots of the output folder and checkpoint subfolder contents]

How do I convert them into safetensors like the ones I downloaded from civitai or huggingface, so that I can use them via Automatic1111?

Thanks a lot!!

@harrywang Don’t worry about the file size, you are on the right track; it will work just like other weights on civitai. The file is small because the default dimension (rank) of the LoRA layers is quite small. You can find more info at the end of this tutorial; give us a star if it is helpful.
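To put the size difference in numbers: a LoRA of rank r on a weight of shape (out, in) only stores r * (in + out) extra parameters, so a low default rank keeps the file tiny. A quick back-of-the-envelope check (the layer size below is just illustrative):

def lora_param_count(in_features, out_features, rank):
    # LoRA stores a down matrix (rank x in) and an up matrix (out x rank)
    return rank * in_features + out_features * rank

# e.g. a single 320x320 attention projection
print(lora_param_count(320, 320, rank=4))    # 2,560 parameters
print(lora_param_count(320, 320, rank=128))  # 81,920 parameters -> a much larger file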

I think the second one is already covered by ./scripts/convert_original_stable_diffusion_to_diffusers.py; it can convert civitai weights (in safetensors format, but without LoRA) into diffusers format.

The third one would be civitai LoRA weights (in safetensors format) to diffusers. I’m actually working on it by diving into stable-diffusion-webui, which supports loading LoRA weights in safetensors format; I will provide my script for your reference once I finish.
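For reference while that script is being written, the basic idea (and roughly what later shipped in diffusers as scripts/convert_lora_safetensor_to_diffusers.py) is to merge each LoRA up/down pair directly into the matching Linear layer of the pipeline. This is only a simplified sketch, assuming kohya-style keys (lora_unet_* / lora_te_*), Linear-only layers, and ignoring the per-layer alpha scaling:

import torch
from safetensors.torch import load_file

def merge_webui_lora(pipe, lora_path, multiplier=0.75):
    """Merge a kohya/A1111-style LoRA (safetensors) into a diffusers pipeline in place."""
    state_dict = load_file(lora_path)
    visited = set()
    for key in state_dict:
        if ".alpha" in key or key in visited:
            continue
        # pick the sub-model this key targets
        if key.startswith("lora_te_"):
            root, prefix = pipe.text_encoder, "lora_te_"
        else:
            root, prefix = pipe.unet, "lora_unet_"
        # module names themselves contain underscores (e.g. "down_blocks"),
        # so grow the attribute name until getattr succeeds
        parts = key.split(".")[0][len(prefix):].split("_")
        curr, name = root, parts.pop(0)
        while True:
            try:
                curr = getattr(curr, name)
                if not parts:
                    break
                name = parts.pop(0)
            except AttributeError:
                name += "_" + parts.pop(0)
        # fetch the up/down pair and add multiplier * up @ down to the base weight
        down_key = key.replace("lora_up", "lora_down")
        up_key = key.replace("lora_down", "lora_up")
        down = state_dict[down_key].float()
        up = state_dict[up_key].float()
        curr.weight.data += multiplier * torch.mm(up, down).to(
            curr.weight.device, curr.weight.dtype
        )
        visited.update({down_key, up_key})
    return pipe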

@sayakpaul

W.r.t. https://github.com/huggingface/diffusers/issues/2363, I think there are a couple of different conversion pathways we’re talking about. For completeness:

  • Diffusers LoRA weights to safetensors
  • civitai weights to diffusers
  • civitai LoRA weights to diffusers

@patrickvonplaten am I missing out on something?

@harrywang so I have just replaced ‘custom_checkpoint_0.pkl’ with ‘pytorch_model.bin’ in the converter script, and the result works just fine in Automatic1111.

@harrywang could you open a new issue here, since this thread has turned into a general Q&A? I will take a look at it soon.

No problem. I have created an issue https://github.com/haofanwang/Easy-Lora-Handbook/issues/1 Thanks!

@jndietz Yes. Once the PR is merged into diffusers, you can just run the conversion script!

You can use LoRA-trained weights easily with diffusers, see: https://huggingface.co/docs/diffusers/v0.12.0/en/training/lora#lora-support-in-diffusers but I think it’s not so easy to convert them for A1111.
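For completeness, the diffusers-side usage from that doc page looks roughly like this; the base model ID and the /output path are placeholders for whatever was used in training:

import torch
from diffusers import StableDiffusionPipeline

# base model the LoRA was trained against (placeholder ID)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# directory containing pytorch_lora_weights.bin from train_dreambooth_lora.py
pipe.unet.load_attn_procs("/output")

image = pipe(
    "A photo of laskajavids by the pool",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength
).images[0]
image.save("lora_test.png")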

Following