llama.cpp: AMD ROCm problem: GPU is constantly running at 100%
I have the following problem:
When I run “ollama run Mistral”, the GPU constantly runs at 100% and consumes 100 watts, but the chat itself works fine, without any problems.
The GPU behaves strangely. GPU utilization, power draw, VRAM usage, and clock speeds at each stage:

| Stage | GPU utilization | Power | VRAM | GPU clock | Memory clock |
| --- | --- | --- | --- | --- | --- |
| Before “ollama run Mistral” | 0% | 0 W | 0 MB | 50 MHz | 90 MHz |
| After “ollama run Mistral” | 100% | 100 W | 5,000 MB | 3000 MHz | 90 MHz |
| While a chat prompt is running | 100% | 300 W | 5,000 MB | 3000 MHz | 1200 MHz |
| After closing the ollama chat | 100% | 100 W | 5,000 MB | 3000 MHz | 90 MHz |
| After closing ollama serve | 0% | 0 W | 0 MB | 50 MHz | 90 MHz |
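These readings can be reproduced with the ROCm SMI tool (a minimal sketch, assuming `rocm-smi` from the ROCm installation is on the PATH; the flags below query utilization, power, VRAM, and clocks):

```sh
# Poll GPU utilization, power draw, VRAM usage, and clock speeds once per second
watch -n 1 "rocm-smi --showuse --showpower --showmeminfo vram --showclocks"
```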
ollama version: 0.1.22, ROCm version: 6.0, GPU: 7900 XTX, System: Ubuntu 22.04, CPU: 7950X, RAM: 64 GB
When I start ollama serve:
ollama serve
2024/02/02 05:11:24 images.go:857: INFO total blobs: 7
2024/02/02 05:11:24 images.go:864: INFO total unused blobs removed: 0
2024/02/02 05:11:24 routes.go:950: INFO Listening on 127.0.0.1:11434 (version 0.1.22)
2024/02/02 05:11:24 payload_common.go:106: INFO Extracting dynamic libraries...
2024/02/02 05:11:25 payload_common.go:145: INFO Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v5 rocm_v6 cpu]
2024/02/02 05:11:25 gpu.go:94: INFO Detecting GPU type
2024/02/02 05:11:25 gpu.go:236: INFO Searching for GPU management library libnvidia-ml.so
2024/02/02 05:11:25 gpu.go:282: INFO Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.525.147.05]
2024/02/02 05:11:25 gpu.go:294: INFO Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.525.147.05: nvml vram init failure: 9
2024/02/02 05:11:25 gpu.go:236: INFO Searching for GPU management library librocm_smi64.so
2024/02/02 05:11:25 gpu.go:282: INFO Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.6.0.60000 /opt/rocm-6.0.0/lib/librocm_smi64.so.6.0.60000]
2024/02/02 05:11:25 gpu.go:109: INFO Radeon GPU detected
ollama run Mistral
[GIN] 2024/02/02 - 07:36:56 | 200 | 32.421µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/02/02 - 07:36:56 | 200 | 723.312µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/02/02 - 07:36:56 | 200 | 284.482µs | 127.0.0.1 | POST "/api/show"
2024/02/02 07:36:56 cpu_common.go:11: INFO CPU has AVX2
loading library /tmp/ollama726758615/rocm_v6/libext_server.so
2024/02/02 07:36:56 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama726758615/rocm_v6/libext_server.so
2024/02/02 07:36:56 dyn_ext_server.go:145: INFO Initializing llama server
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /home/user/.ollama/models/blobs/sha256:e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name = mistralai
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: ROCm0 buffer size = 3847.55 MiB
llm_load_tensors: CPU buffer size = 70.31 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: ROCm_Host input buffer size = 12.01 MiB
llama_new_context_with_model: ROCm0 compute buffer size = 156.00 MiB
llama_new_context_with_model: ROCm_Host compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 3
2024/02/02 07:37:15 dyn_ext_server.go:156: INFO Starting llama main loop
[GIN] 2024/02/02 - 07:37:15 | 200 | 18.899618958s | 127.0.0.1 | POST "/api/chat"
The same behavior occurs when I run the llama2 model.
When I run Mistral in oobabooga/text-generation-webui using its Transformers loader, everything works fine: the GPU is only at 100% while I am chatting, and at 0% otherwise. But when I use its llama.cpp loader, the GPU behaves the same as in ollama.
It seems to be a llama.cpp problem.
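To confirm that this is a llama.cpp issue rather than an ollama one, it can be reproduced with a plain llama.cpp build (a sketch only; the model path is a placeholder, and `LLAMA_HIPBLAS=1` is the Makefile flag used by early-2024 llama.cpp releases for ROCm builds):

```sh
# Build llama.cpp with ROCm/HIP support
make LLAMA_HIPBLAS=1

# Load the model fully onto the GPU and leave the server idle
./server -m ./models/mistral-7b-q4_0.gguf -ngl 33

# In a second terminal: with ROCm 6.0 the GPU clock stays pinned near 3000 MHz
# and the card keeps drawing ~100 W even though no request is being processed
watch -n 1 rocm-smi --showclocks --showpower
```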
About this issue
- State: open
- Created 5 months ago
- Comments: 19 (5 by maintainers)
llama.cpp can set this internally, but it wouldn’t make sense for them to do so. It’s a workaround for a bug that happens on specific configurations. The env var doesn’t mean “ROCm is broken if you don’t do this”; it means “limit the hardware queues to 1”, so it has all kinds of implications that llama.cpp has no business in.
It’s 100% a bug with ROCm 6.0, and the most llama.cpp should do is add a short warning in the documentation for ROCm 6.0 or add it as a comment to a troubleshooting section.
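For reference, a minimal sketch of the workaround being discussed, assuming the env var in question is HIP’s `GPU_MAX_HW_QUEUES` (which limits the number of hardware queues); it is set in the environment before starting the server rather than inside llama.cpp:

```sh
# Workaround for the ROCm 6.0 idle-at-100% behavior: limit HIP to a single hardware queue
# (GPU_MAX_HW_QUEUES is assumed to be the env var referred to above)
export GPU_MAX_HW_QUEUES=1
ollama serve   # or the llama.cpp server binary
```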
This is the behavior if you don’t chat and only start the llama.cpp app.