fast-stable-diffusion: Can no longer load SD1.5 models in tester
When trying to load either of my two stable diffusion 1.5 training sessions in the “Test the trained model” step, I get the following error. I have tried deleting the sd folder from my gdrive to get a fresh install of the UI, but the error persists.
```
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading: 100% 961k/961k [00:00<00:00, 1.88MB/s]
Downloading: 100% 4.52k/4.52k [00:00<00:00, 2.93MB/s]
Failed to create model quickly; will retry using slow method.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
loading stable diffusion model: TypeError
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_models.py", line 338, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/encoders/modules.py", line 99, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1777, in from_pretrained
    return cls._from_pretrained(
  File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1932, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/clip/tokenization_clip.py", line 328, in __init__
    with open(merges_file, encoding="utf-8") as merges_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 74, in initialize
    modules.sd_models.load_model()
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_models.py", line 341, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/encoders/modules.py", line 99, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1777, in from_pretrained
    return cls._from_pretrained(
  File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1932, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/clip/tokenization_clip.py", line 328, in __init__
    with open(merges_file, encoding="utf-8") as merges_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
Stable diffusion model failed to load, exiting
```
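For context on the `TypeError` at the bottom of the traceback: `CLIPTokenizer.__init__` calls `open()` on its `merges_file` path, and when the tokenizer files for `openai/clip-vit-large-patch14` cannot be resolved, that path is `None`. A minimal sketch of that failure mode, independent of the `transformers` package:

```python
# Minimal sketch of the failure in tokenization_clip.py: when the
# tokenizer files cannot be resolved, merges_file is None, and
# open(None, ...) raises the TypeError seen in the log above.
merges_file = None  # what CLIPTokenizer receives when resolution fails

try:
    with open(merges_file, encoding="utf-8") as merges_handle:
        pass
except TypeError as exc:
    print(exc)  # expected str, bytes or os.PathLike object, not NoneType
```

This is why the error points at the tokenizer files being missing or undiscoverable rather than at the trained model checkpoint itself.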
Testing my SD v2 models seems to work fine, however.
About this issue
- State: open
- Created a year ago
- Comments: 15 (5 by maintainers)
Just tested the update you pushed and it worked perfectly, cheers.
working on a fix
Same here; I deleted the sd folder and used the latest version of the Dreambooth notebook, and the error still appears.
There is definitely an issue here. I've tried multiple times with the latest notebook and by removing the sd folder (which usually does the trick).
I trained a new 1.5 model just to be sure, and it still errors out. When loading that same recently trained 1.5 model through the fast-stablediffusion notebook instead of the fast-dreambooth notebook, the model works perfectly.
Also, I've seen two different errors. One occurred right after training, when trying to test:

`TypeError: expected str, bytes or os.PathLike object, not NoneType`

and the other happens when trying to load an old 1.5 model:

`Can't load tokenizer for 'openai/clip-vit-large-patch14'.`

- All v2 models work on fast-dreambooth
- No 1.5 models work on fast-dreambooth
- All models work on fast-stablediffusion-auto

I might still be doing something wrong, but I've tried everything I can think of.
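The second error (`Can't load tokenizer for 'openai/clip-vit-large-patch14'`) typically means the tokenizer files are absent from the local Hugging Face cache. A hedged diagnostic sketch, assuming the default cache locations (these paths vary by transformers version and are not confirmed for the Colab environment):

```python
import glob
import os

# Hedged diagnostic: look for cached CLIP tokenizer files under the
# default Hugging Face cache locations. The roots below are assumptions
# that vary by transformers version; zero hits suggests the tokenizer
# was never downloaded, matching the "Can't load tokenizer" error.
candidate_roots = [
    os.path.expanduser("~/.cache/huggingface/hub"),
    os.path.expanduser("~/.cache/huggingface/transformers"),
]
for root in candidate_roots:
    hits = glob.glob(os.path.join(root, "**", "merges.txt"), recursive=True)
    print(f"{root}: {len(hits)} merges.txt file(s) found")
```

If both locations come back empty, re-downloading the tokenizer (or restoring network access to huggingface.co) is the likely fix rather than retraining the model.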
I have the same problem