stablediffusion: RuntimeError: Input type (c10::Half) and bias type (float) should be the same

I am using an M1 Pro MacBook and I am trying to run Stable Diffusion using MPS.

I changed the CUDA-specific parts to MPS and, in ddim.py, cast tensors to float32 because MPS does not support float64.

def register_buffer(self, name, attr):
    if type(attr) == torch.Tensor:
        if attr.device != torch.device("mps"):
            attr = attr.to(torch.float32).to(torch.device("mps"))
    setattr(self, name, attr)

def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
    self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize,
                                              num_ddim_timesteps=ddim_num_steps,
                                              num_ddpm_timesteps=self.ddpm_num_timesteps,
                                              verbose=verbose)
    alphas_cumprod = self.model.alphas_cumprod
    assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
    to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
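After patches like this it can help to check whether any parameters or buffers were left in a different precision than the rest of the model. The helper below, report_dtypes, is a hypothetical debugging aid, not part of the Stable Diffusion repo:

```python
import torch
import torch.nn as nn

def report_dtypes(module: nn.Module) -> dict:
    """Count parameter and buffer dtypes to spot half/float mixes (hypothetical helper)."""
    counts = {}
    for _, tensor in list(module.named_parameters()) + list(module.named_buffers()):
        counts[tensor.dtype] = counts.get(tensor.dtype, 0) + 1
    return counts

# Example: a model accidentally left in mixed precision.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Conv2d(8, 8, 3))
model[0].half()  # only the first layer was converted to fp16
print(report_dtypes(model))
```

If the result contains more than one dtype, some layer or buffer escaped the conversion and will trigger a mismatch at its first forward pass.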

Since then, the error has been raised from conv.py:

def _conv_forward(self, input: Tensor, weight: Tensor, bias: Optional[Tensor]):
    if self.padding_mode != 'zeros':
        return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
                        weight, bias, self.stride,
                        _pair(0), self.dilation, self.groups)
    return F.conv2d(input, weight, bias, self.stride,
                    self.padding, self.dilation, self.groups)

def forward(self, input: Tensor) -> Tensor:
    return self._conv_forward(input, self.weight, self.bias)
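For reference, this mismatch can be reproduced with any Conv2d whose weights stayed in float32 while its input became float16, and casting the input back to the layer's dtype resolves it. This is a minimal CPU sketch, not code from the repo, and the exact error text varies by backend:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)    # weight and bias are float32 by default
x = torch.randn(1, 3, 16, 16).half()     # input ended up as float16 (c10::Half)

# Calling conv(x) directly fails with a dtype-mismatch RuntimeError.
# One fix: cast the input to the layer's dtype before the forward pass.
y = conv(x.to(conv.weight.dtype))
print(y.dtype)  # torch.float32
```

The same principle applies inside the sampler: every tensor reaching F.conv2d must share one dtype with the layer's weight and bias.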

Help me please.

Most upvoted comments

When using a diffusers pipeline, I ran into this too.

My solution is:

pipe.to("cuda", torch.float16), and for input tensors try variable.to(device, torch_type); after that it runs with no problem.
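The idea behind that fix is simply that the model and every input tensor must share one device and dtype. Here is a minimal sketch with a plain module standing in for the diffusers pipeline (CPU and float32 are assumptions so the example runs anywhere; substitute "cuda" or "mps" and torch.float16 as in the comment):

```python
import torch
import torch.nn as nn

# Assumption: a plain Conv2d stands in for the diffusers pipeline.
device, dtype = "cpu", torch.float32   # e.g. "cuda"/"mps" and torch.float16 in practice

model = nn.Conv2d(3, 8, 3)
model.to(device, dtype)                          # like pipe.to("cuda", torch.float16)
x = torch.randn(1, 3, 16, 16).to(device, dtype)  # like variable.to(device, torch_type)

out = model(x)
print(out.dtype)  # torch.float32
```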

I would like to see all the logs and what you have run. Can you show me?

Can I connect with you on Discord? I already sent a request… 😃

Sure! But I might as well talk about it here in case anyone encounters a similar error in the future!

Same error here, just the other way around: RuntimeError: Input type (float) and bias type (c10::Half) should be the same.

It happens only when I use Hires fix, with any model and any upscaler.

It's not a VRAM leak, because it happens every time I try, even after restarting the computer.

My command line : COMMANDLINE_ARGS=--upcast-sampling --medvram --xformers --no-half-vae
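The reversed message means the weights ended up in half precision while the input stayed float32. Outside of webui flags, the generic fix is to cast the module back to float32. A minimal sketch (not webui code):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, 3).half()   # weight/bias in fp16, e.g. loaded from a half checkpoint
x = torch.randn(1, 3, 16, 16)      # input stayed float32

# conv(x) would fail with the reversed mismatch;
# casting the module back to float32 makes the dtypes agree.
y = conv.float()(x)
print(y.dtype)  # torch.float32
```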

I was getting these errors too, and eventually deduced what was going on. They appeared during the render process where progress updates would normally show: instead of updates, I got these errors. After some research I realized I had moved the default models folder and was using my own, and my folder was missing various files the program was looking for. It turned out I needed to merge the default files back in to get it working.

Here, for example, is what I typed in:

cd stable-diffusion-webui
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git SD-hold
cp -r SD-hold/models/* models/
rm -rf SD-hold

That restores the files to the models folder tree, and solved this for me.

I think that error is caused by xformers, so I'd like you to try uninstalling it.

I think the reason this happens is that you are using fp16:

return F.conv2d(input, weight, bias, self.stride,

RuntimeError: Input type (c10::Half) and bias type (float) should be the same

I also got the problem above, @yyahav. I am using Ubuntu, not macOS.

I’d appreciate it if you did that.

@Tps-F It's faster now that I've reduced the batch size. Thank you. Are you interested in object detection models like SSD (Single Shot MultiBox Detector) or YOLO? I want to try SSD on my M1 Mac, but that model uses CUDA. How do I convert CUDA code to MPS?
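There is no automatic converter, but most CUDA-only scripts port by replacing hard-coded "cuda" with a device chosen at runtime. A common pattern, sketched here (not SSD-specific code):

```python
import torch

# Pick the best available backend instead of hard-coding "cuda".
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Then move the model and every input tensor with .to(device) as usual.
x = torch.ones(2, 2, device=device)
print(device.type)
```

Note that some ops are still unimplemented on MPS; setting the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 lets those fall back to the CPU.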