InvokeAI: Slow generation on a fresh M1 install

Describe your environment

  • GPU: mps
  • RAM: 16 GB
  • CPU arch: arm
  • OS: macOS
  • Python: 3.9.13, miniconda 4.12.0
  • Branch: development (saw an issue where the main branch was slower)
  • Commit: 4104ac62709ab3ea2b5b2b876799a2b77592485f

Description

Running python scripts/invoke.py takes ~120 seconds to generate an image (50 steps). I have read of multiple people with the same machine as mine (a MacBook Pro M1 with 16 GB) achieving 30 seconds per image (4x faster)!

This is a fresh installation; how can I debug it?
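
One generic place to start (a sketch of my own, not InvokeAI-specific, and assuming a PyTorch build with MPS support, i.e. 1.12 or newer) is to confirm that the MPS backend is actually available and doing the work; if generation quietly falls back to the CPU, that alone could account for a slowdown of this size:

```python
# Sanity-check sketch (not part of InvokeAI): verify the MPS backend is
# available and roughly how fast a simple GPU workload runs on this machine.
import time
import torch

print("PyTorch:", torch.__version__)
print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

if torch.backends.mps.is_available():
    device = torch.device("mps")
    x = torch.randn(2048, 2048, device=device)
    _ = (x @ x).cpu()  # warm-up; copying back to CPU waits for the GPU work

    start = time.time()
    y = x @ x
    for _ in range(19):
        y = y @ x
    _ = y.cpu()        # force the queued MPS work to finish before timing
    print(f"20 matmuls (2048x2048): {time.time() - start:.2f}s")
```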

Thank you very much for this awesome repo and all your work 🙌

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 23

Most upvoted comments

My M1 Pro with 32 GB unified memory (and, IIRC, a 16-core GPU) takes 60 to 70 seconds to do 50 steps at 512x512 on k_lms. For M1 devices, the unified memory spec seems to be the biggest factor in generation times.

I don’t think your results are too far out. It’s possible other users were using lower step counts to get those numbers.

As an aside, the step count doesn't need to be as high as 50 for good results. Have a play with different samplers and step counts; even single-digit steps can look good with some samplers.
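
For anyone who wants to compare samplers and step counts methodically, a rough timing harness might look like the sketch below. generate_image is a hypothetical stand-in for whatever entry point you use (the interactive CLI, a script, or a Python API); the sampler names are just ones commonly exposed by Stable Diffusion front-ends of that era (k_lms, k_euler_a, ddim).

```python
# Hypothetical harness: generate_image() is a placeholder for your real
# generation call; only the timing bookkeeping matters here.
import itertools
import time

def generate_image(prompt: str, sampler: str, steps: int) -> None:
    """Placeholder: replace the body with your actual generation call."""
    pass

samplers = ["k_lms", "k_euler_a", "ddim"]  # common Stable Diffusion samplers
step_counts = [10, 20, 50]

for sampler, steps in itertools.product(samplers, step_counts):
    start = time.time()
    generate_image("a red bicycle leaning on a wall", sampler=sampler, steps=steps)
    elapsed = time.time() - start
    print(f"{sampler:>10} @ {steps:>2} steps: {elapsed:6.1f}s "
          f"({elapsed / steps:.2f}s/step)")
```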