diffusers: Generation using StableDiffusionPipeline with torch_dtype=torch.float16 and mps crashes the kernel on Mac M1

Describe the bug

If I add `torch_dtype=torch.float16` to any model, the Python kernel stops/crashes when trying to generate images; it works fine when I don't add that setting. I have a Mac M1, so I run with `.to("mps")`.

Reproduction

Recreate bug code

```python
from diffusers import StableDiffusionPipeline
import torch

MODEL_VERSION = "runwayml/stable-diffusion-v1-5"

pytorch_pipe = StableDiffusionPipeline.from_pretrained(
    MODEL_VERSION,
    torch_dtype=torch.float16  # Remove this line and it works
).to("mps")

image = pytorch_pipe(
    prompt="photo of a man standing next to a wall",
    width=512,
    height=512,
    num_inference_steps=50,
    num_images_per_prompt=1,
    guidance_scale=7
)
```
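Since the crash only happens with `torch_dtype=torch.float16`, a minimal workaround (my own sketch, not part of the original report; the helper name `pick_dtype_name` is hypothetical) is to select the dtype per device and only use half precision on CUDA:

```python
def pick_dtype_name(device: str) -> str:
    # Hypothetical helper (not part of diffusers): choose a dtype name per device.
    # float16 crashed on "mps" in this report, so fall back to float32 there;
    # keep float16 on CUDA, where it is well supported.
    return "float16" if device == "cuda" else "float32"

# Usage sketch with the pipeline above, e.g.:
#   dtype = getattr(torch, pick_dtype_name("mps"))
#   pipe = StableDiffusionPipeline.from_pretrained(MODEL_VERSION, torch_dtype=dtype).to("mps")
print(pick_dtype_name("mps"))
```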

Logs

No response

System Info

  • diffusers version: 0.20.0.dev0 (installed diffusers latest dev version to see if it was working there, but issue is also present in latest official release)

  • Mac M1 Pro with 16 GB memory

  • Platform: macOS-13.5-arm64-arm-64bit

  • Python version: 3.11.4

  • PyTorch version (GPU?): 2.0.1 (False)

  • Huggingface_hub version: 0.16.4

  • Transformers version: 4.31.0

  • Accelerate version: 0.21.0

  • xFormers version: not installed

  • Using GPU in script?: using `.to("mps")`

  • Using distributed or parallel set-up in script?: <fill in>

Who can help?

No response

About this issue

  • Original URL
  • State: open
  • Created a year ago
  • Reactions: 2
  • Comments: 17 (7 by maintainers)

Most upvoted comments

I believe this may be addressed for SDXL pipelines in #7447. Though I never directly ran into the issue during inference, I suppose it's possible; for me, it impacted training.