InvokeAI: [bug]: Fresh install always uses cuda for a ROCm compatible AMD GPU

Is there an existing issue for this?

  • I have searched the existing issues

OS

Linux

GPU

AMD

VRAM

16GB

What version did you experience this issue on?

v3.0.2rc1

What happened?

I used the install script from the latest release and selected AMD GPU (with ROCm). The script installs perfectly fine, but when I launch InvokeAI with the graphical web client, I get this output:

[2023-08-09 17:25:18,419]::[uvicorn.error]::INFO --> Started server process [16866]
[2023-08-09 17:25:18,419]::[uvicorn.error]::INFO --> Waiting for application startup.
[2023-08-09 17:25:18,420]::[InvokeAI]::INFO --> InvokeAI version 3.0.1
[2023-08-09 17:25:18,420]::[InvokeAI]::INFO --> Root directory = /<myrootpath>/AI/invokeAI
[2023-08-09 17:25:18,421]::[InvokeAI]::INFO --> GPU device = cuda AMD Radeon RX 6800 XT

As you can see, it comes up with cuda AMD Radeon RX 6800 XT. This card works just fine with A1111 and ROCm. I’ve also edited the invokeai.yaml file, since I saw that xformers was enabled (it isn’t available for AMD cards). Here’s my current config:

InvokeAI:
  Web Server:
    host: 127.0.0.1
    port: 9090
    allow_origins: []
    allow_credentials: true
    allow_methods:
    - '*'
    allow_headers:
    - '*'
  Features:
    esrgan: true
    internet_available: true
    log_tokenization: false
    patchmatch: true
  Memory/Performance:
    always_use_cpu: false
    free_gpu_mem: true
    max_cache_size: 10.0
    max_vram_cache_size: 2.75
    precision: float32
    sequential_guidance: false
    xformers_enabled: false
    tiled_decode: false
  Paths:
    autoimport_dir: autoimport
    lora_dir: null
    embedding_dir: null
    controlnet_dir: null
    conf_path: configs/models.yaml
    models_dir: models
    legacy_conf_dir: configs/stable-diffusion
    db_dir: databases
    outdir: /<myrootpath>/AI/invokeAI/outputs
    use_memory_db: false
  Logging:
    log_handlers:
    - console
    log_format: color
    log_level: info

Of course, CUDA doesn’t work with my card, and I get all-black output images.
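
For anyone debugging the same symptom, here is a quick way to confirm which backend the installed torch wheel was actually built for (a minimal sketch using standard torch attributes; note that ROCm builds of PyTorch expose their devices through the torch.cuda API, which is why the log above can say cuda even on a working ROCm setup):

    import torch

    print(torch.__version__)   # wheel tag, e.g. "2.0.1+rocm5.4.2" vs "2.0.1+cu118"
    print(torch.version.hip)   # HIP/ROCm version string on a ROCm build, else None
    print(torch.version.cuda)  # CUDA version string on a CUDA build, else None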

Screenshots

No response

Additional context

This also happens with the manual install, which likewise runs with cuda.

Contact Details

No response

About this issue

  • State: open
  • Created a year ago
  • Reactions: 1
  • Comments: 17 (3 by maintainers)

Most upvoted comments

I’m having a similar problem with a 7900 XTX, even after installing Invoke with:

    pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/nightly/rocm5.6

Invoke still installs only the Nvidia stack and then launches in CPU-only mode.

Exporting this before running ./invoke.sh appears to have fixed the issue for me on my 6750 XT:

    export HSA_OVERRIDE_GFX_VERSION=10.3.0

RX 7000-series AMD cards may need this instead:

    export HSA_OVERRIDE_GFX_VERSION=11.0.0
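
For reference, the override has to be in the environment before the HIP runtime initializes, which in practice means setting it before torch is imported. A minimal sketch of the same check from Python, assuming a ROCm build of torch is installed (the 10.3.0 value matches the RDNA 2 suggestion above; swap in 11.0.0 for RX 7000-series cards):

    import os

    # Must be set before torch (and with it the HIP runtime) loads.
    os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"  # assumption: RDNA 2 card

    import torch

    print(torch.cuda.is_available())      # True once ROCm can see the GPU
    print(torch.cuda.get_device_name(0))  # e.g. "AMD Radeon RX 6750 XT"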

Also of note, tweaking the version numbers in the command suggested above runs without errors:

    pip install "torch==2.1.2+rocm5.6" "torchvision==0.16.2+rocm5.6" "fsspec==2023.10.0.0" "requests~=2.28.2" --force-reinstall --extra-index-url https://download.pytorch.org/whl/rocm5.6

I don’t know enough to say whether those versions are a good idea, but they don’t error for me on Invoke 3.6.0rc6.

disclaimer: I don’t have any idea what I’m doing

I have tried manually modifying create_install.sh and the associated files to remove CUDA, and I have tried to force a ROCm install, but I still haven’t figured it out. I wish they still had the old requirements.txt file. At this point InvokeAI is unusable for AMD GPUs.

I tried to update installer.py with the following:

    # device can be one of: "cuda", "rocm", "cpu", "idk"
    device = graphical_accelerator()
    device = "rocm"

    url = None
    optional_modules = "[onnx]"
    if OS == "Linux":
        if device == "rocm":
            url = "https://download.pytorch.org/whl/nightly/rocm5.7"
        elif device == "cpu":
            url = "https://download.pytorch.org/whl/cpu"

But it still tries to install CUDA dependencies.
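
One workaround that sidesteps the installer logic entirely is to let the install finish and then force-reinstall the ROCm wheels inside the virtual environment it created. The CUDA dependencies most likely come from PyPI’s default Linux torch wheel, which InvokeAI’s own dependency resolution pulls in and which bundles the nvidia-* packages. A hedged sketch, run with the venv’s own Python (--index-url is PyTorch’s documented way to select ROCm wheels, but version compatibility with a given InvokeAI release is not guaranteed):

    # A sketch, not a verified fix: run from inside the venv the installer
    # created, so that sys.executable points at that environment's Python.
    import subprocess
    import sys

    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "torch", "torchvision",
        "--force-reinstall",
        # --index-url (not --extra-index-url) keeps pip off PyPI's default
        # Linux wheels, which are the ones that drag in the CUDA packages.
        "--index-url", "https://download.pytorch.org/whl/rocm5.6",
    ])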