diffusers: TypeError: StableDiffusionXLReferencePipeline.__call__.<locals>.hacked_DownBlock2D_forward() got an unexpected keyword argument 'scale'
Describe the bug
```
TypeError                                 Traceback (most recent call last)
Cell In[1], line 18
     16 pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
     17 seed = torch.manual_seed(10240)
---> 18 result_img = pipe(ref_image=style_image,
     19                   prompt="1girl",
     20                   generator=seed,
     21                   num_inference_steps=20,
     22                   reference_attn=True,
     23                   reference_adain=True).images[0]
     24 result_img

File /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File ~/.cache/huggingface/modules/diffusers_modules/git/stable_diffusion_xl_reference.py:738, in StableDiffusionXLReferencePipeline.__call__(self, prompt, prompt_2, ref_image, height, width, num_inference_steps, denoising_end, guidance_scale, negative_prompt, negative_prompt_2, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, output_type, return_dict, callback, callback_steps, cross_attention_kwargs, guidance_rescale, original_size, crops_coords_top_left, target_size, attention_auto_machine_weight, gn_auto_machine_weight, style_fidelity, reference_attn, reference_adain)
    734 ref_xt = self.scheduler.scale_model_input(ref_xt, t)
    736 MODE = "write"
--> 738 self.unet(
    739     ref_xt,
    740     t,
    741     encoder_hidden_states=prompt_embeds,
    742     cross_attention_kwargs=cross_attention_kwargs,
    743     added_cond_kwargs=added_cond_kwargs,
    744     return_dict=False,
    745 )
    747 # predict the noise residual
    748 MODE = "read"

File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_condition.py:966, in UNet2DConditionModel.forward(self, sample, timestep, encoder_hidden_states, class_labels, timestep_cond, attention_mask, cross_attention_kwargs, added_cond_kwargs, down_block_additional_residuals, mid_block_additional_residual, encoder_attention_mask, return_dict)
    956 sample, res_samples = downsample_block(
    957     hidden_states=sample,
    958     temb=emb,
        (...)
    963     **additional_residuals,
    964 )
    965 else:
--> 966     sample, res_samples = downsample_block(hidden_states=sample, temb=emb, scale=lora_scale)
    968 if is_adapter and len(down_block_additional_residuals) > 0:
    969     sample += down_block_additional_residuals.pop(0)

File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

TypeError: StableDiffusionXLReferencePipeline.__call__.<locals>.hacked_DownBlock2D_forward() got an unexpected keyword argument 'scale'
```
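For context: the traceback shows that `UNet2DConditionModel.forward` in diffusers 0.21.0 forwards a `scale` keyword (the LoRA scale, see `unet_2d_condition.py:966`) to every down block, while the community pipeline's monkey-patched block forwards were written against the older signature without it. A minimal, self-contained sketch of the clash, using a hypothetical stand-in for the real closure (which lives in `stable_diffusion_xl_reference.py` and carries the reference-attention logic):

```python
def hacked_DownBlock2D_forward(self, hidden_states, temb=None):
    # Written against the pre-0.21.0 block signature: no `scale` parameter.
    return hidden_states

# diffusers 0.21.0 now calls every down block with an extra LoRA-scale kwarg:
#   downsample_block(hidden_states=sample, temb=emb, scale=lora_scale)
try:
    hacked_DownBlock2D_forward(None, "sample", temb="emb", scale=1.0)
except TypeError as err:
    print(err)  # ... got an unexpected keyword argument 'scale'
```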
Reproduction
```python
import torch
from PIL import Image
from diffusers.utils import load_image
from diffusers import DiffusionPipeline, AutoencoderTiny
from diffusers.schedulers import UniPCMultistepScheduler

style_image = load_image("imgs/沙滩动漫.png").convert("RGB")

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="stable_diffusion_xl_reference",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    safety_checker=None,
    local_files_only=True,
).to("cuda:0")

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
seed = torch.manual_seed(10240)
result_img = pipe(
    ref_image=style_image,
    prompt="1girl",
    generator=seed,
    num_inference_steps=20,
    reference_attn=True,
    reference_adain=True,
).images[0]
result_img
```
Logs
No response
System Info
- `diffusers` version: 0.21.0
- Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Huggingface_hub version: 0.16.4
- Transformers version: 4.33.1
- Accelerate version: 0.21.0
- xFormers version: 0.0.20
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Who can help?
No response
About this issue
- Original URL
- State: closed
- Created 10 months ago
- Reactions: 2
- Comments: 15 (3 by maintainers)
Found a workaround: we can add `scale=None` as a keyword argument to both `hacked_DownBlock2D_forward()` and `hacked_UpBlock2D_forward()` in `StableDiffusionXLReferencePipeline`.
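A minimal sketch of the patched signatures, assuming the original closure bodies and the stock pre-0.21.0 parameter lists (`hidden_states`/`temb` for the down block, plus `res_hidden_states_tuple`/`upsample_size` for the up block) stay unchanged:

```python
# Widen the monkey-patched signatures inside the pipeline's __call__ so the
# `scale` kwarg forwarded by the diffusers 0.21.0 UNet is accepted and ignored.

def hacked_DownBlock2D_forward(self, hidden_states, temb=None, scale=None):
    ...  # original closure body unchanged; `scale` is intentionally unused

def hacked_UpBlock2D_forward(
    self,
    hidden_states,
    res_hidden_states_tuple,
    temb=None,
    upsample_size=None,
    scale=None,
):
    ...  # original closure body unchanged; `scale` is intentionally unused
```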