vllm: Recent vLLMs ask for too much memory: ValueError: No available memory for the cache blocks. Try increasing `gpu_memory_utilization` when initializing the engine.
Since vLLM 0.2.5, we can’t even run llama-2 70B 4-bit AWQ on 4*A10G anymore and have to use an old vLLM. Similar problems occur even when trying to run two 7B models on an 80GB A100.
For small models, like a 7B with 4k tokens, vLLM fails with the “cache blocks” error even though a lot more memory is left.
E.g. building a docker image with cuda 11.8 and vllm 0.2.5 or 0.2.6 and running like:
port=5001
tokens=8192
docker run -d \
--runtime=nvidia \
--gpus '"device=1"' \
--shm-size=10.24gb \
-p $port:$port \
--entrypoint /h2ogpt_conda/vllm_env/bin/python3.10 \
-e NCCL_IGNORE_DISABLED_P2P=1 \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-u `id -u`:`id -g` \
-v "${HOME}"/.cache:/workspace/.cache \
--network host \
gcr.io/vorvan/h2oai/h2ogpt-runtime:0.1.0 -m vllm.entrypoints.openai.api_server \
--port=$port \
--host=0.0.0.0 \
--model=defog/sqlcoder2 \
--seed 1234 \
--trust-remote-code \
--max-num-batched-tokens $tokens \
--max-model-len=$tokens \
--gpu-memory-utilization 0.4 \
--download-dir=/workspace/.cache/huggingface/hub &>> logs.vllm_server.sqlcoder2.txt
port=5002
tokens=4096
docker run -d \
--runtime=nvidia \
--gpus '"device=1"' \
--shm-size=10.24gb \
-p $port:$port \
--entrypoint /h2ogpt_conda/vllm_env/bin/python3.10 \
-e NCCL_IGNORE_DISABLED_P2P=1 \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-u `id -u`:`id -g` \
-v "${HOME}"/.cache:/workspace/.cache \
--network host \
gcr.io/vorvan/h2oai/h2ogpt-runtime:0.1.0 -m vllm.entrypoints.openai.api_server \
--port=$port \
--host=0.0.0.0 \
--model=NumbersStation/nsql-llama-2-7B \
--seed 1234 \
--trust-remote-code \
--max-num-batched-tokens $tokens \
--gpu-memory-utilization 0.6 \
--max-model-len=$tokens \
--download-dir=/workspace/.cache/huggingface/hub &>> logs.vllm_server.nsql7b.txt
This works. However, if the 2nd model were also given 0.4, one gets:
Traceback (most recent call last):
File "/h2ogpt_conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/h2ogpt_conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/h2ogpt_conda/vllm_env/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 729, in <module>
engine = AsyncLLMEngine.from_engine_args(engine_args)
File "/h2ogpt_conda/vllm_env/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 496, in from_engine_args
engine = cls(parallel_config.worker_use_ray,
File "/h2ogpt_conda/vllm_env/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 269, in __init__
self.engine = self._init_engine(*args, **kwargs)
File "/h2ogpt_conda/vllm_env/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 314, in _init_engine
return engine_class(*args, **kwargs)
File "/h2ogpt_conda/vllm_env/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 113, in __init__
self._init_cache()
File "/h2ogpt_conda/vllm_env/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 227, in _init_cache
raise ValueError("No available memory for the cache blocks. "
ValueError: No available memory for the cache blocks. Try increasing `gpu_memory_utilization` when initializing the engine.
However, with the 0.6 util setting from before, here is what the GPU looks like:
Sun Dec 24 02:45:53 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A100 80GB PCIe Off | 00000000:00:06.0 Off | 0 |
| N/A 43C P0 72W / 300W | 70917MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA A100 80GB PCIe Off | 00000000:00:07.0 Off | 0 |
| N/A 45C P0 66W / 300W | 49136MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 6232 C /h2ogpt_conda/vllm_env/bin/python3.10 70892MiB |
| 1 N/A N/A 6966 C /h2ogpt_conda/vllm_env/bin/python3.10 32430MiB |
| 1 N/A N/A 7685 C /h2ogpt_conda/vllm_env/bin/python3.10 16670MiB |
Ignore GPU=0.
So the 2nd model at 0.6 util only takes about 17GB; why would 0.4 util out of 80GB be a problem?
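For context, here is a rough sketch (illustrative names, not vLLM's actual code) of the kind of accounting that would produce this behavior, assuming the KV-cache budget is taken from total GPU memory and the profiling measurement counts everything resident on the GPU, including other processes:

import torch

def estimate_num_gpu_blocks(gpu_memory_utilization: float,
                            cache_block_size_bytes: int) -> int:
    # ... a profiling forward pass would run here ...
    torch.cuda.synchronize()
    free, total = torch.cuda.mem_get_info()  # whole-GPU numbers, in bytes
    peak = total - free                      # includes other processes' memory,
                                             # this model's weights and activations
    budget = total * gpu_memory_utilization - peak
    num_gpu_blocks = int(budget // cache_block_size_bytes)
    if num_gpu_blocks <= 0:
        raise ValueError("No available memory for the cache blocks. "
                         "Try increasing `gpu_memory_utilization` "
                         "when initializing the engine.")
    return num_gpu_blocks

Under that assumption the numbers above add up: on GPU 1 the first server already holds ~32GB, so the second server's budget of 0.4 * 80GB = 32GB minus a measured peak that already exceeds 32GB goes negative, while 0.6 * 80GB = 48GB still leaves a few GB for cache blocks.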
Commits related to this issue
- Revert https://github.com/vllm-project/vllm/pull/2031/files for https://github.com/vllm-project/vllm/issues/2248 — committed to h2oai/vllm by pseudotensor 5 months ago
- use h2ogpt version of vllm with attempt to fix https://github.com/vllm-project/vllm/issues/2248 using https://github.com/h2oai/vllm — committed to Pandinosaurus/h2ogpt by pseudotensor 5 months ago
- Add how to run 4*A10G on AWS using LLaMa-2 70B AWQ after vllm changes, for Issue https://github.com/vllm-project/vllm/issues/2248 — committed to h2oai/h2ogpt by pseudotensor 5 months ago
Yet another version of this problem is that 01-ai/Yi-34B-Chat used to work perfectly fine on 4*H100 80GB when run like:
But now it doesn’t work since 0.2.5+, including 0.2.7. Instead we get:
When can we expect a fix? It seems like a pretty serious bug.
BTW, curiously, I ran the same exact command a second time (both times with nothing on the GPUs) and the second time it didn’t hit the error. So maybe there is a race in the memory-size detection in vLLM.
We are having the exact same issue on our end: cache usage grows and consumes more than the allocated gpu_memory_utilization, even when using enforce-eager. We had the same problem before with 0.2.1.
I dived in a bit and here are some findings.
My temporary solution is as follows: add torch.cuda.empty_cache() in worker.py before the line free_gpu_memory, total_gpu_memory = torch.cuda.mem_get_info(). This removes the impact of fragmentation. empty_cache() also removes the impact of intermediate tensors from running the forward pass. As a result, tuning --gpu-memory-utilization becomes more important, as we have to use it to cover the forward intermediate tensors. Here are my testing results with different util values:
Having the same issue on cuda 11.8 and vllm 0.2.5 and 0.2.6.
FYI @pseudotensor
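A minimal sketch of where that empty_cache() workaround would go, assuming worker.py measures free memory with torch.cuda.mem_get_info() as quoted in the comment above (the surrounding function is abbreviated and illustrative, not vLLM's actual code):

import torch

def measure_peak_after_profiling() -> int:
    # ... the profiling forward pass runs first ...
    torch.cuda.synchronize()
    torch.cuda.empty_cache()  # proposed addition: drop cached allocator blocks
                              # and profiling intermediates before measuring, so
                              # fragmentation is not counted against the budget
    free_gpu_memory, total_gpu_memory = torch.cuda.mem_get_info()
    return total_gpu_memory - free_gpu_memory  # peak used to size the KV cache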
I’ve tested the memory footprint of 0.2.4 and 0.2.7 and this is my finding:
- 0.2.4 and 0.2.7 consume exactly the same amount of memory for a model (measured both the old and the new way).
- The nccl version doesn’t change memory consumption significantly (~10MB).
- With --enforce-eager the memory consumption is a little bit lower.
- PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True helps and also makes execution with and without --enforce-eager identical. I’m not sure how stable it is, as it’s marked experimental.
- By adjusting gpu_memory_utilization we can get the original behavior, as I don’t see an increase in memory consumption.
Reverting #2031 avoided the error message from the title, but it then went GPU OOM, unlike 0.2.4 with the same long-context query. FYI @sh1ng
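For anyone who wants to try the expandable_segments setting mentioned above: it is an environment variable read by PyTorch's caching allocator and must be set before the allocator initializes, e.g. before importing torch/vllm, or passed to the dockerized server via an extra -e flag in the commands earlier. A minimal sketch using the offline vLLM entry point (the option is marked experimental by PyTorch, and the model and settings here are just placeholders taken from the commands above):

import os
# Must be set before PyTorch initializes its CUDA allocator,
# i.e. before importing torch or vllm.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

from vllm import LLM  # imported after the env var is set

# Placeholder model and settings from the docker commands above.
llm = LLM(model="NumbersStation/nsql-llama-2-7B",
          max_model_len=4096,
          gpu_memory_utilization=0.4)
print(llm.generate(["SELECT"])[0].outputs[0].text)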
@Snowdar @hanzhi713 et al. I want to be clear again: the primary issue is that even a single model sharded across GPUs no longer works. Forget about multiple models per GPU for now.
That is, on AWS 4*A10G, vLLM 0.2.4 and lower work perfectly fine and leave plenty of room without any failure.
However, on 0.2.5+, no matter the gpu utilization settings etc., the llama 70B AWQ model never fits on the 4 A10Gs, while before it fit perfectly fine (even under heavy use for long periods).
same here