llama.cpp: [METAL] GPU Inference fails due to buffer error (buffer "data" size is larger than buffer maximum)

I own a MacBook Pro M2 with 32 GB of memory and am trying to run inference with a 33B model. Without Metal (i.e., without the -ngl 1 flag) this works fine, and 13B models also work fine both with and without Metal. There is sufficient free memory available.

Inference always fails with the error: ggml_metal_add_buffer: buffer 'data' size 18300780544 is larger than buffer maximum of 17179869184

Expected Behavior

On a MacBook Pro M2 with 32 GB of memory, the 33B model should load and run with Metal enabled, just as it does without Metal, and just as 13B models do with or without Metal. There is sufficient free memory available.

Current Behavior

> llama.cpp git:(master) ./main -m ~/dev2/text-generation-webui/models/guanaco-33B.ggmlv3.q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512 -ngl 1

main: build = 661 (fa84c4b)
main: seed  = 1686556467
llama.cpp: loading model from /Users/jp/dev2/models/guanaco-33B.ggmlv3.q4_0.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 6656
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 52
llama_model_load_internal: n_layer    = 60
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 17920
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size =    0,13 MB
llama_model_load_internal: mem required  = 19756,66 MB (+ 3124,00 MB per state)
.
llama_init_from_file: kv self size  =  780,00 MB
ggml_metal_init: allocating
ggml_metal_init: using MPS
ggml_metal_init: loading '/Users/jp/dev2/llama.cpp/ggml-metal.metal'
ggml_metal_init: loaded kernel_add                            0x139809af0
ggml_metal_init: loaded kernel_mul                            0x13980a210
ggml_metal_init: loaded kernel_mul_row                        0x138f05fa0
ggml_metal_init: loaded kernel_scale                          0x138f06430
ggml_metal_init: loaded kernel_silu                           0x13980a610
ggml_metal_init: loaded kernel_relu                           0x13980ac50
ggml_metal_init: loaded kernel_gelu                           0x138f06830
ggml_metal_init: loaded kernel_soft_max                       0x126204210
ggml_metal_init: loaded kernel_diag_mask_inf                  0x126204c70
ggml_metal_init: loaded kernel_get_rows_f16                   0x138f07030
ggml_metal_init: loaded kernel_get_rows_q4_0                  0x138f07830
ggml_metal_init: loaded kernel_get_rows_q4_1                  0x13980b4b0
ggml_metal_init: loaded kernel_get_rows_q2_k                  0x13980bcb0
ggml_metal_init: loaded kernel_get_rows_q4_k                  0x138f07ed0
ggml_metal_init: loaded kernel_get_rows_q6_k                  0x138f08690
ggml_metal_init: loaded kernel_rms_norm                       0x138f08dd0
ggml_metal_init: loaded kernel_mul_mat_f16_f32                0x138f09870
ggml_metal_init: loaded kernel_mul_mat_q4_0_f32               0x13980c750
ggml_metal_init: loaded kernel_mul_mat_q4_1_f32               0x13980d050
ggml_metal_init: loaded kernel_mul_mat_q2_k_f32               0x138f09fd0
ggml_metal_init: loaded kernel_mul_mat_q4_k_f32               0x138f0a8d0
ggml_metal_init: loaded kernel_mul_mat_q6_k_f32               0x13980da40
ggml_metal_init: loaded kernel_rope                           0x126205440
ggml_metal_init: loaded kernel_cpy_f32_f16                    0x13980e6b0
ggml_metal_init: loaded kernel_cpy_f32_f32                    0x13980f130
ggml_metal_add_buffer: buffer 'data' size 18300780544 is larger than buffer maximum of 17179869184
llama_init_from_file: failed to add buffer
llama_init_from_gpt_params: error: failed to load model '/Users/jp/dev2/models/guanaco-33B.ggmlv3.q4_0.bin'
main: error: unable to load model

Is this known/expected, and are there any workarounds? The mentioned “buffer maximum” of 17179869184 stays the same regardless of how much memory is free.
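For context: 17179869184 bytes is exactly 16 GiB, and judging by the check in ggml-metal.m it is the per-buffer cap the device reports via MTLDevice.maxBufferLength, not a function of free memory; the 33B q4_0 tensor data, at 18300780544 bytes ≈ 17.04 GiB, simply exceeds it. A minimal sketch to print the cap on your own machine, assuming an Objective-C file compiled against the Metal framework (the file name is arbitrary):

    // Sketch (not part of llama.cpp): print the per-buffer cap Metal reports.
    // Build with: clang -framework Metal -framework Foundation max_buffer.m
    #import <Metal/Metal.h>
    #import <stdio.h>

    int main(void) {
        id<MTLDevice> device = MTLCreateSystemDefaultDevice();
        // On the 32 GB M2 above this prints 17179869184 (16 GiB).
        printf("maxBufferLength = %llu bytes (%.2f GiB)\n",
               (unsigned long long) device.maxBufferLength,
               (double) device.maxBufferLength / (double) (1ULL << 30));
        return 0;
    }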

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 4
  • Comments: 20 (14 by maintainers)

Most upvoted comments

That’s crazy! So is the whole concept of using a large amount of unified memory for GPU/Metal flawed, or is there something we can do? Maybe people should also report this to Apple to see if they can update something in the Metal framework. Or am I missing a key piece of understanding here?

@CyborgArmy83 - very likely

I just created #1825 to capture the code I’ve written so far. As I note there, buffer splitting seems to be working for smaller models, but there’s still something I’m missing that is causing issues when I try to split up (e.g.) a 65B model.

I’ll keep poking at this after hours as I can, but if anyone spots anything, let me know!

The fix to ggml_metal_add_buffer: buffer 'data' size 18435457024 is larger than buffer maximum of 17179869184 is discussed here: https://github.com/ggerganov/llama.cpp/pull/1696#issuecomment-1585667275

I think the proposed solution there has to work, but I’m not 100% sure. I’ll push a fix soon - it’s just low prio atm, plus I want to see if someone figures it out first.
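For anyone following along: as I understand the discussion in that PR comment, the idea is to stop requiring the whole model to fit in one MTLBuffer and instead map it through several overlapping zero-copy views, each under maxBufferLength, with the overlap at least the size of the largest tensor so that no tensor straddles a view boundary. A rough sketch of that loop, as a reconstruction rather than the actual patch; add_split_buffers, max_tensor_size, and the views array are hypothetical names:

    #import <Metal/Metal.h>
    #include <unistd.h>

    // Rough sketch, not the actual patch: wrap one page-aligned mmap'd region
    // in several overlapping zero-copy views, each <= device.maxBufferLength,
    // so any tensor of size <= max_tensor_size lies fully inside some view.
    static void add_split_buffers(id<MTLDevice> device, void *data, size_t data_size,
                                  size_t max_tensor_size,
                                  NSMutableArray<id<MTLBuffer>> *views) {
        const size_t page    = (size_t) getpagesize();
        const size_t max_buf = (device.maxBufferLength / page) * page;       // page multiple
        const size_t overlap = ((max_tensor_size + page - 1) / page) * page; // page multiple
        size_t offs = 0; // stays page-aligned, as no-copy buffers require
        while (offs < data_size) {
            // mmap'd regions span whole pages, so rounding up stays in bounds
            const size_t len = MIN(max_buf, ((data_size - offs + page - 1) / page) * page);
            [views addObject:[device newBufferWithBytesNoCopy:(char *) data + offs
                                                       length:len
                                                      options:MTLResourceStorageModeShared
                                                  deallocator:nil]];
            if (offs + len >= data_size) break;
            offs += len - overlap; // step back so consecutive views overlap
        }
    }

Tensor lookups would then have to return a (view, offset) pair, picking whichever view fully contains the tensor, which is presumably where the remaining bookkeeping issues in #1825 hide.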

Does #1817 fix your issue?

@ikawrakow Not working in my case. It still shows a similar error message:

ggml_metal_add_buffer: buffer 'data' size 18435457024 is larger than buffer maximum of 17179869184
llama_init_from_file: failed to add buffer
llama_init_from_gpt_params: error: failed to load model 'zh-alpaca-models/33B/ggml-model-q4_0.bin'
main: error: unable to load model

Forgive my inexperience here. Is this the reason why my M2 Max 64GB system is not able to load models larger than 30B and use them with GPU acceleration?

@kiltyj If you’ve made any progress, please upload it or make a branch for us to continue work on.

@TheBloke Then your error is probably unrelated; the 7B should fit into GPU memory, but I don’t know anything at all about non-Apple GPUs and Metal. Thanks for your great work btw 😃.

You’re right, I should raise a separate issue!

Looking at the code, it seems the model is being passed to Metal as a single, no-copy buffer. My guess is that the change required to split the model into 2 or more buffers to circumvent the maxBufferLength restriction is significant, but perhaps @ggerganov can chime in.
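For reference, the failing call in ggml_metal_add_buffer boils down to something of this shape (paraphrased and simplified from ggml-metal.m; variable names shortened):

    // The whole mmap'd model is handed to Metal as a single shared zero-copy
    // buffer. Nothing is duplicated in memory, but the one allocation must
    // fit under device.maxBufferLength, hence the hard failure at 16 GiB.
    id<MTLBuffer> buf = [device newBufferWithBytesNoCopy:data   // page-aligned weights
                                                  length:size   // ~18.3 GB for 33B q4_0
                                                 options:MTLResourceStorageModeShared
                                             deallocator:nil];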

I just tried the 33B model on my laptop and it works fine:

./bin/main -m q4k_30B.bin -p "I believe the meaning of life is" -c 2048 -n 512 --ignore-eos -n 256 -s 1234 -t 8 -ngl 1
main: build = 677 (a6812a1)
main: seed  = 1234
llama.cpp: loading model from q4k_30B.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 6656
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 52
llama_model_load_internal: n_layer    = 60
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 15 (mostly Q4_K - Medium)
llama_model_load_internal: n_ff       = 17920
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size =    0.13 MB
llama_model_load_internal: mem required  = 21015.59 MB (+ 3124.00 MB per state)
.
llama_init_from_file: kv self size  = 3120.00 MB
ggml_metal_init: allocating
ggml_metal_init: using MPS
XXXXXXXXXXXXXXXXXXXX Device max buffer length is 36 GiB
ggml_metal_init: loading '/Users/iwan/other/llama.cpp/build/bin/ggml-metal.metal'
ggml_metal_init: loaded kernel_add                            0x124b05e20
ggml_metal_init: loaded kernel_mul                            0x124b06560
ggml_metal_init: loaded kernel_mul_row                        0x124b06b10
ggml_metal_init: loaded kernel_scale                          0x124b06fb0
ggml_metal_init: loaded kernel_silu                           0x124b07450
ggml_metal_init: loaded kernel_relu                           0x124b078f0
ggml_metal_init: loaded kernel_gelu                           0x124b07d90
ggml_metal_init: loaded kernel_soft_max                       0x124b083c0
ggml_metal_init: loaded kernel_diag_mask_inf                  0x124b089a0
ggml_metal_init: loaded kernel_get_rows_f16                   0x124b08fa0
ggml_metal_init: loaded kernel_get_rows_q4_0                  0x124b095a0
ggml_metal_init: loaded kernel_get_rows_q4_1                  0x124b09d10
ggml_metal_init: loaded kernel_get_rows_q2_k                  0x124b0a310
ggml_metal_init: loaded kernel_get_rows_q3_k                  0x124b0a910
ggml_metal_init: loaded kernel_get_rows_q4_k                  0x124b0af10
ggml_metal_init: loaded kernel_get_rows_q5_k                  0x124b0b510
ggml_metal_init: loaded kernel_get_rows_q6_k                  0x124b0bb10
ggml_metal_init: loaded kernel_rms_norm                       0x124b0c140
ggml_metal_init: loaded kernel_mul_mat_f16_f32                0x124b0c920
ggml_metal_init: loaded kernel_mul_mat_q4_0_f32               0x124b0d0f0
ggml_metal_init: loaded kernel_mul_mat_q4_1_f32               0x124b0d750
ggml_metal_init: loaded kernel_mul_mat_q2_k_f32               0x124b0ddb0
ggml_metal_init: loaded kernel_mul_mat_q3_k_f32               0x124b0e430
ggml_metal_init: loaded kernel_mul_mat_q4_k_f32               0x124b0ec10
ggml_metal_init: loaded kernel_mul_mat_q5_k_f32               0x124b0f270
ggml_metal_init: loaded kernel_mul_mat_q6_k_f32               0x124b0f8d0
ggml_metal_init: loaded kernel_rope                           0x124b10140
ggml_metal_init: loaded kernel_cpy_f32_f16                    0x124b10b50
ggml_metal_init: loaded kernel_cpy_f32_f32                    0x124b11360
ggml_metal_add_buffer: allocated 'data            ' buffer, size = 18711.91 MB
ggml_metal_add_buffer: allocated 'eval            ' buffer, size =  1280.00 MB
ggml_metal_add_buffer: allocated 'kv              ' buffer, size =  3122.00 MB
ggml_metal_add_buffer: allocated 'scr0            ' buffer, size =   512.00 MB
ggml_metal_add_buffer: allocated 'scr1            ' buffer, size =   512.00 MB

system_info: n_threads = 8 / 12 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | 
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 2048, n_batch = 512, n_predict = 256, n_keep = 0


 I believe the meaning of life is to learn how to be happy. Happiness is a choice. It’s not something that you will fall into or that you get by accident. So, I choose to be happy!
“It took me four years to paint like Raphael, but a lifetime to paint like a child.” Pablo Picasso
I am so very excited about my newest book; it comes out August 2014! It is called “The Happy Book” and it’s all about how happiness is our birthright. As the title implies, there are no sad or scary stories in this book (which is unusual for me!)
This book begins with a little girl who is not happy because she doesn’t like rainbows, sunshine, butterflies… and other things that are supposed to make you happy. But then her grandmother tells her about the happiness seed that lives inside her heart—and the story blossoms into an inspiring and enlightening tale of how to “grow” your own happiness.
I had a wonderful time creating this book. I got so excited when one page was finished, I couldn’t wait to begin the next! It will be available online and at fine books
llama_print_timings:        load time =  1796.08 ms
llama_print_timings:      sample time =   169.78 ms /   256 runs   (    0.66 ms per token)
llama_print_timings: prompt eval time =  1274.83 ms /     8 tokens (  159.35 ms per token)
llama_print_timings:        eval time = 26968.65 ms /   255 runs   (  105.76 ms per token)
llama_print_timings:       total time = 28962.71 ms

Update: Q2_K seems to be the only one that works with -ngl 1 for 33B models. Tested under M1 Max @ 32GB RAM for Chinese Alpaca 33B models.

Model     Size       -t 8          -t 8 -ngl 1
q4_0      18.4 GB    170 ms/tok    failed
q3_K_M    15.7 GB    216 ms/tok    not implemented
q2_K      13.7 GB    178 ms/tok    128 ms/tok

Note: Reported numbers are based on eval time, i.e., llama_print_timings’ eval time divided by the number of runs (as in the log above, where 26968.65 ms / 255 runs ≈ 105.76 ms per token).

Does #1817 fix your issue?

No, it stays the same.
