stanford_alpaca: Problem with finetuning bloom
What is the fsdp_transformer_layer_cls_to_wrap for bloom?
When I tried to fine-tune with bloomz-7b1, the training got stuck at 0%. As you said in the README, it's most likely because I didn't set the right fsdp_transformer_layer_cls_to_wrap, but I can't find it in the BLOOM config.
Kindly need help on this. Thank you.
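For anyone with the same question: the block class name is not spelled out in BLOOM's config.json, but it can be read off the model itself. A minimal sketch, assuming the transformers library; bigscience/bloom-560m is used here only because it shares the BLOOM architecture and is quick to instantiate:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Build the model skeleton from its config (random weights, no 7B download)
# and list every module class name that appears in it.
config = AutoConfig.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_config(config)

print(sorted({type(m).__name__ for m in model.modules()}))
# The repeated decoder layer shows up as "BloomBlock", which is the value
# expected by --fsdp_transformer_layer_cls_to_wrap.
```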
About this issue
- Original URL
- State: open
- Created a year ago
- Comments: 19
Any help on this?
I used this command to run the original training script:
torchrun --nproc_per_node=3 --master_port=5001 train.py \
    --model_name_or_path bigscience/bloomz-7b1 \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir ./model_trained \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap ‘BloomBlock‘ \
    --tf32 True

and got this error:
Exception: Could not find the transformer layer class to wrap in the model.
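The Trainer raises this exception when the string passed to --fsdp_transformer_layer_cls_to_wrap matches no module class in the model, so one way to narrow it down is to check the match outside of training. A hedged sketch (again using the small bloom-560m checkpoint as a stand-in for bloomz-7b1's architecture), which also checks the curly-quoted variant that appears in the command above:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Instantiate the architecture without downloading the 7B weights.
config = AutoConfig.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_config(config)

cls_names = {type(m).__name__ for m in model.modules()}

# The plain ASCII name should match; the curly-quoted string from the command
# line above (the shell passes the ‘ characters through literally) will not,
# which may explain the "Could not find the transformer layer class" error.
for candidate in ["BloomBlock", "\u2018BloomBlock\u2018"]:
    print(repr(candidate), "->", candidate in cls_names)
```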