InvokeAI: [bug]: On linux, defaulting to CPU, can't figure out how to make it use GPU
Is there an existing issue for this?
- I have searched the existing issues
OS
Linux
GPU
cuda
VRAM
8GB
What happened?
Similar to #1763, but on Linux, not Windows.
I’m using InvokeAI on Ubuntu 20.04, installed according to these instructions. It’s going pretty well, but it’s slow because it’s only running on the CPU (`>> Using device_type cpu`). I should be able to run it on my GPU – a GTX 1070.
I’ve tried starting the script with `invoke.py --precision=float32`, as suggested in the README for 10xx-series GPUs, but it’s still not working.
Does anybody know what I should do to get it running on the GPU instead?
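As a first diagnostic (my own sketch, not something from the InvokeAI docs), a few lines of Python will tell you whether the torch build in your environment can reach the GPU at all; `cuda_status` is a hypothetical helper name, and the import is guarded so the snippet also runs where torch isn’t installed:

```python
def cuda_status():
    """Report whether the installed torch build can use the GPU."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        # CUDA-enabled build that can actually see a device.
        return f"cuda ({torch.cuda.get_device_name(0)})"
    # Either a CPU-only build, or a CUDA build with no usable device/driver.
    return "cpu"

print(cuda_status())
```

If this prints `cpu` while `nvidia-smi` shows a working driver, the torch wheel itself is the likely culprit.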
Screenshots
```
>> GFPGAN Initialized
CodeFormer Initialized
ESRGAN Initialized
Using device_type cpu
Loading stable-diffusion-1.4 from models/ldm/stable-diffusion-v1/model.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using more accurate float32 precision
Model loaded in 24.44s
Setting Sampler to k_lms
```
Additional context
No response
Contact Details
7485697@gmail.com
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Reactions: 1
- Comments: 21 (6 by maintainers)
Well, I’ve tried all kinds of ways to install outside of conda, and it’s just refusing to work. See #2094
Within conda…
That had an interesting result; the last line of the traceback in particular was interesting as hell:
`AssertionError: Torch not compiled with CUDA enabled`
So while within the conda environment, I tried:
`pip uninstall torch` and then `pip install torch`
And something very cool happened:
Hey, it actually WORKED!
And then I tried running stable-diffusion, and guess what I saw?
`>> Using device_type cuda`
So there’s my solution for this problem at least! I needed to uninstall and then reinstall torch while in the conda environment, and now I’m running on the GPU!
(And now I’m running out of GPU memory. Sigh. Seems like 8GB isn’t enough to drive 6 screens and run stable-diffusion. But maybe restarting the PC will fix that. Or maybe I’ll manage to use one of those memory optimization hacks out there as a workaround.)
TL;DR: For anyone experiencing this problem, you should try:
`pip uninstall torch` and then `pip install torch`
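A hedged guess at why the reinstall helped: the conda environment presumably shipped a CPU-only torch wheel, while a plain `pip install torch` on Linux pulls a CUDA-enabled build. You can tell the two apart by inspecting `torch.version.cuda`, which is `None` on a CPU-only wheel; `torch_build_info` below is a hypothetical helper name, with the import guarded so it runs anywhere:

```python
def torch_build_info():
    """Describe the installed torch wheel, or None if torch is absent."""
    try:
        import torch
    except ImportError:
        return None
    # torch.version.cuda is None on a CPU-only wheel and a version
    # string such as "11.7" on a CUDA-enabled wheel.
    return {"torch": torch.__version__, "cuda": torch.version.cuda}

print(torch_build_info())
```

If `"cuda"` comes back as `None` after reinstalling, you’re still on a CPU-only build and may need to install from PyTorch’s CUDA wheel index instead.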
You guys can close this issue now if that’s enough of a workaround for you. You might want to at least mention it in the README, though.