peft: Fine-tuning whisper RuntimeError
My code:
import evaluate
from accelerate import dispatch_model
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
from transformers import (
    WhisperFeatureExtractor,
    WhisperForConditionalGeneration,
    WhisperProcessor,
    WhisperTokenizer,
)

preprocessing_only = False
do_lower_case = False
do_remove_punctuation = False
max_input_length = 30.0

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-large-v2")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-large-v2", language="zh", task="transcribe")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2", language="zh", task="transcribe")

# Load the base model in 8-bit, sharded across the available GPUs.
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2", load_in_8bit=True, device_map="auto")

# Keep the decoder embeddings and the output projection on the main execution device.
device_map = model.hf_device_map.copy()
device_map["model.decoder.embed_tokens"] = model._hf_hook.execution_device
device_map["model.decoder.embed_positions"] = model._hf_hook.execution_device
device_map["proj_out"] = model._hf_hook.execution_device
dispatch_model(model, device_map=device_map)
print(model.hf_device_map)

model.config.suppress_tokens = []
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="zh", task="transcribe")

model = prepare_model_for_int8_training(model, output_embedding_layer_name="proj_out")
metric = evaluate.load("cer")

# Apply LoRA to the q/v projections of the decoder self- and cross-attention.
config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=".*decoder.*(self_attn|encoder_attn).*(q_proj|v_proj)$",  # or ["q_proj", "v_proj"]
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
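A quick sanity check along these lines (a sketch assuming the `model` object created above, not part of the original script) confirms which modules the target_modules regex matched and which devices hold the trainable LoRA parameters:

# Sketch: verify LoRA injection and the placement of trainable parameters.
lora_modules = [name for name, _ in model.named_modules() if "lora_A" in name or "lora_B" in name]
print(f"{len(lora_modules)} LoRA modules injected, e.g. {lora_modules[:2]}")

trainable_devices = {p.device for p in model.parameters() if p.requires_grad}
print("devices holding trainable params:", trainable_devices)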
Error output:
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /home/ybZhang/miniconda3/envs/whister did not contain libcudart.so as expected! Searching further paths...
warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.0
CUDA SETUP: Detected CUDA version 114
/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
warn(msg)
CUDA SETUP: Loading binary /home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda114_nocublaslt.so...
Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning.
trainable params: 10485760 || all params: 1553790720 || trainable%: 0.6748502140622902
{'model.encoder': 0, 'model.decoder.embed_tokens': 0, 'proj_out': 0, 'model.decoder.embed_positions': 0, 'model.decoder.layers.0': 0, 'model.decoder.layers.1': 0, 'model.decoder.layers.2': 0, 'model.decoder.layers.3': 0, 'model.decoder.layers.4': 0, 'model.decoder.layers.5.self_attn': 0, 'model.decoder.layers.5.activation_fn': 0, 'model.decoder.layers.5.self_attn_layer_norm': 0, 'model.decoder.layers.5.encoder_attn.k_proj': 0, 'model.decoder.layers.5.encoder_attn.v_proj': 0, 'model.decoder.layers.5.encoder_attn.q_proj': 0, 'model.decoder.layers.5.encoder_attn.out_proj': 1, 'model.decoder.layers.5.encoder_attn_layer_norm': 1, 'model.decoder.layers.5.fc1': 1, 'model.decoder.layers.5.fc2': 1, 'model.decoder.layers.5.final_layer_norm': 1, 'model.decoder.layers.6': 1, 'model.decoder.layers.7': 1, 'model.decoder.layers.8': 1, 'model.decoder.layers.9': 1, 'model.decoder.layers.10': 1, 'model.decoder.layers.11': 1, 'model.decoder.layers.12': 1, 'model.decoder.layers.13': 1, 'model.decoder.layers.14': 1, 'model.decoder.layers.15': 1, 'model.decoder.layers.16': 1, 'model.decoder.layers.17': 1, 'model.decoder.layers.18': 1, 'model.decoder.layers.19': 1, 'model.decoder.layers.20': 1, 'model.decoder.layers.21': 1, 'model.decoder.layers.22': 1, 'model.decoder.layers.23': 1, 'model.decoder.layers.24': 1, 'model.decoder.layers.25': 1, 'model.decoder.layers.26': 1, 'model.decoder.layers.27': 1, 'model.decoder.layers.28': 1, 'model.decoder.layers.29': 1, 'model.decoder.layers.30': 1, 'model.decoder.layers.31': 1, 'model.decoder.layer_norm': 1}
<datasets.iterable_dataset.IterableDataset object at 0x7f9ed6d4a0a0>
/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
0%| | 0/1500 [00:00<?, ?it/s]Traceback (most recent call last):
File "finetune.py", line 176, in <module>
whisper_finetune(traindir,devdir,outdir)
File "finetune.py", line 171, in whisper_finetune
trainer.train()
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer.py", line 1633, in train
return inner_training_loop(
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer.py", line 1902, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer.py", line 2645, in training_step
loss = self.compute_loss(model, inputs)
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer.py", line 2677, in compute_loss
outputs = model(**inputs)
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 157, in forward
raise RuntimeError("module must have its parameters and buffers "
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1
0%| |
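For context: the traceback shows the Trainer wrapping the model in torch.nn.DataParallel because it sees two visible GPUs, while dispatch_model has already spread the layers across cuda:0 and cuda:1, hence the device mismatch. Below is a minimal sketch of the two workarounds usually suggested for this setup (presumably the attributes referred to in the comment further down; not confirmed by this thread):

# Sketch of two common workarounds, assuming the model/Trainer setup above.

# Option 1: mark the model as already model-parallel so the Trainer skips the
# torch.nn.DataParallel wrapper (set these before constructing the Trainer).
model.is_parallelizable = True
model.model_parallel = True

# Option 2: expose only one GPU to the process so the Trainer sees n_gpu == 1
# (set in the environment before the script starts, e.g. CUDA_VISIBLE_DEVICES=0).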
@pacman100 As I mentioned in the ticket (P.S. section), I already use the latest version of transformers, and I also tried to set these attributes. That results in: RuntimeError: expected scalar type Half but found Float.
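One workaround sometimes applied for this kind of dtype mismatch (fp32 LoRA adapters on top of fp16 int8 base weights) is to run the loss computation under fp16 autocast. A minimal sketch, assuming a Seq2SeqTrainer-based setup; the subclass name is illustrative and not from this thread:

import torch
from transformers import Seq2SeqTrainer

class AutocastTrainer(Seq2SeqTrainer):
    # Illustrative subclass: compute the loss under fp16 autocast so fp32 LoRA
    # activations and fp16 base-model activations can be mixed in the forward pass.
    def compute_loss(self, model, inputs, return_outputs=False):
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            return super().compute_loss(model, inputs, return_outputs)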