transformers: RuntimeError: Caught RuntimeError in replica 0 on device 0
System Info
transformers version: 4.33
Python version: 3.10.6
I am trying to fine-tune this Hugging Face model: NousResearch/Llama-2-70b-chat-hf with this Hugging Face dataset: mlabonne/guanaco-llama2-1k
None of these previous answers helped me:
https://github.com/huggingface/transformers/issues/23754 -> I didn't understand the error
https://github.com/huggingface/transformers/issues/6855 -> I reduced the batch size to 1 and used 4 A100 GPUs, with no result
Who can help?
Text models: @ArthurZucker and @younesbelkada
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, …)
- My own task or dataset (give details below)
Reproduction
1. Deploy a RunPod server with 4 A100 GPUs ($7.96 per hour) using the PyTorch image "RunPod Pytorch 2.0.1"
- Install these libraries:
!pip install transformers[sentencepiece]
!pip install yolk3k
!yolk -V trl
!pip install -q accelerate==0.21.0 peft==0.4.0 bitsandbytes==0.40.2 transformers==4.31.0 trl==0.7.1
!pip install scipy tensorboardX
!pip install sentencepiece
- Run this code:
import os
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    HfArgumentParser,
    TrainingArguments,
    pipeline,
    logging,
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
model_name = "NousResearch/Llama-2-70b-chat-hf"
dataset_name = "mlabonne/guanaco-llama2-1k"
new_model = "Llama-2-70b-chat-hf-miniguanaco"
# LoRA parameters
lora_r = 64
lora_alpha = 16
lora_dropout = 0.1

# bitsandbytes 4-bit quantization parameters
use_4bit = True
bnb_4bit_compute_dtype = "float16"
bnb_4bit_quant_type = "nf4"
use_nested_quant = False

# TrainingArguments parameters
output_dir = "./results"
num_train_epochs = 1
fp16 = False
bf16 = True
per_device_train_batch_size = 1
per_device_eval_batch_size = 2
gradient_accumulation_steps = 1
gradient_checkpointing = True
max_grad_norm = 0.3
learning_rate = 2e-4
weight_decay = 0.001
optim = "paged_adamw_32bit"
lr_scheduler_type = "constant"
max_steps = -1
warmup_ratio = 0.03
group_by_length = True
save_steps = 25
logging_steps = 25

# SFTTrainer parameters
max_seq_length = None
packing = False

# Load the entire model on GPU 0
device_map = {"": 0}
dataset = load_dataset(dataset_name, split="train")
compute_dtype = getattr(torch, bnb_4bit_compute_dtype)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit,
    bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=use_nested_quant,
)
if compute_dtype == torch.float16 and use_4bit:
    major, _ = torch.cuda.get_device_capability()
    if major >= 8:
        print("=" * 80)
        print("Your GPU supports bfloat16: accelerate training with bf16=True")
        print("=" * 80)
# Load base model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map=device_map,  # Pass in the device map
)
model.config.use_cache = False
model.config.pretraining_tp = 1
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right" # Fix weird overflow issue with fp16 training
peft_config = LoraConfig(
    lora_alpha=lora_alpha,
    lora_dropout=lora_dropout,
    r=lora_r,
    bias="none",
    task_type="CAUSAL_LM",
)
training_arguments = TrainingArguments(
    output_dir=output_dir,
    num_train_epochs=num_train_epochs,
    per_device_train_batch_size=per_device_train_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    optim=optim,
    save_steps=save_steps,
    logging_steps=logging_steps,
    learning_rate=learning_rate,
    weight_decay=weight_decay,
    fp16=fp16,
    bf16=bf16,
    max_grad_norm=max_grad_norm,
    max_steps=max_steps,
    warmup_ratio=warmup_ratio,
    group_by_length=group_by_length,
    lr_scheduler_type=lr_scheduler_type,
    report_to="tensorboard",
)
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    args=training_arguments,
    packing=packing,
)
trainer.train()
trainer.model.save_pretrained(new_model)
Expected behavior
To get a fine-tuned model. This code worked with the 7B version of the model.
About this issue
- Original URL
- State: closed
- Created 9 months ago
- Comments: 17 (6 by maintainers)
Ok my bad, I just replaced device_map=device_map with device_map="auto" and it worked!
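For anyone hitting the same replica error, a minimal sketch of that change (same variable names as the reproduction script above; "auto" lets Accelerate build the device map itself):

# device_map = {"": 0} loads every layer onto GPU 0, which overflows on a 70B model.
# Letting Accelerate build the map spreads the quantized layers across the 4 A100s:
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)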
Yeah, it was just an error on my side: the "there is no .safetensors" message was only a warning, and the actual error came from a bad setting in my docker run command (unrelated to the current issue).
Oh, forget about it, I was finally able to deploy the model without the safetensors 😉
Yeah, it's 100% solved! I have my own model on the Hugging Face Hub; I will now try to expose this model as an API with text-generation-inference.
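As a follow-up, a minimal sketch of querying a text-generation-inference /generate endpoint once the server is up; the host, port, prompt, and generation parameters below are placeholder assumptions, not values from this thread:

import requests

# Hypothetical local TGI endpoint; adjust the URL to wherever the server is deployed.
TGI_URL = "http://127.0.0.1:8080/generate"

payload = {
    # Llama-2 chat prompt format
    "inputs": "<s>[INST] What is QLoRA fine-tuning? [/INST]",
    "parameters": {"max_new_tokens": 200, "temperature": 0.7},
}

response = requests.post(TGI_URL, json=payload, timeout=120)
print(response.json()["generated_text"])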
Thanks again @younesbelkada