LLaVA: [Usage] Unable to load LLaVA v1.6 models
Describe the issue
Issue:
When trying to load liuhaotian/llava-v1.6-mistral-7b or liuhaotian/llava-v1.6-34b into my container:
```python
MODEL_PATH = "liuhaotian/llava-v1.6-mistral-7b"
USE_8BIT = False
USE_4BIT = False
DEVICE = "cuda"

def download_llava_model():
    from llava.model.builder import load_pretrained_model
    from llava.mm_utils import get_model_name_from_path

    model_name = get_model_name_from_path(MODEL_PATH)
    load_pretrained_model(
        MODEL_PATH, None, model_name, USE_8BIT, USE_4BIT, device=DEVICE
    )
```
Seeing this error:
```
  File "/scripts/llava.py", line 23, in download_llava_model
    load_pretrained_model(
  File "/root/llava/llava/model/builder.py", line 151, in load_pretrained_model
    vision_tower.to(device=device, dtype=torch.float16)
  File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 4 more times]
  File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Cannot copy out of meta tensor; no data!
```
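This error generally means some parameters were left on PyTorch's meta device (shape and dtype metadata only, no storage), so `.to(device, dtype)` has nothing to copy. A minimal reproduction of the failure mode, independent of LLaVA itself:

```python
import torch
import torch.nn as nn

# Build a module whose parameters live on the meta device: they carry
# shape/dtype metadata but no actual storage, like weights that were
# never materialized during loading.
with torch.device("meta"):
    layer = nn.Linear(4, 4)

assert layer.weight.is_meta

# .to() fails because there is no data to copy -- the same
# NotImplementedError as in the traceback above.
try:
    layer.to("cpu", dtype=torch.float16)
except NotImplementedError as e:
    print(type(e).__name__)  # NotImplementedError

# to_empty() instead allocates real, uninitialized storage on the target
# device; the actual weights must then be filled in via load_state_dict().
layer = layer.to_empty(device="cpu")
assert not layer.weight.is_meta
```

In a loading pipeline this state typically arises when a loader defers weight materialization (e.g. via `low_cpu_mem_usage=True` or a `device_map`) and the deferred weights are never dispatched before the `.to()` call.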
About this issue
- State: open
- Created 5 months ago
- Comments: 15 (3 by maintainers)
Bumping VRAM to 80 GB resolved the issue for me. Possibly an OOM error in disguise?
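If it is an OOM in disguise, a back-of-envelope check before loading can save a debugging round trip. A rough sketch (the helpers `fp16_weight_gib` and `free_vram_gib` are mine, not part of LLaVA or torch):

```python
import torch

def fp16_weight_gib(n_params: float) -> float:
    """Back-of-envelope weight footprint in GiB: 2 bytes per fp16 parameter."""
    return n_params * 2 / 2**30

def free_vram_gib(device_index: int = 0) -> float:
    """Free GPU memory in GiB, as reported by the CUDA driver."""
    free_bytes, _total = torch.cuda.mem_get_info(device_index)
    return free_bytes / 2**30

# Weights alone (no activations, no KV cache): a 34B model in fp16 needs
# roughly 63 GiB, so it cannot fit on a single 40/48 GB card without
# quantization -- consistent with 80 GB fixing it.
print(f"34B fp16 weights: {fp16_weight_gib(34e9):.0f} GiB")
print(f" 7B fp16 weights: {fp16_weight_gib(7e9):.0f} GiB")

if torch.cuda.is_available():
    print(f"free VRAM: {free_vram_gib():.1f} GiB")
```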