LLaVA: [Usage] tokenization mismatch when finetuning v1.5-7b
Describe the issue
Issue: I have found some threads reporting the tokenization mismatch problem, but I am still confused. I downloaded the v1.5-7b weights from https://huggingface.co/liuhaotian/llava-v1.5-7b/tree/main and fine-tune on the datasets from the paper. I adapted the command line so it runs on V100 GPUs (fp16 instead of bf16, tf32 off). tokenizers.__version__ == '0.14.1'
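Environment check, just for reference (only the tokenizers version is the one I noted above):

import transformers, tokenizers
print(transformers.__version__)
print(tokenizers.__version__)   # 0.14.1 here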
Command:
WANDB_MODE=disabled deepspeed llava/train/train.py \
--deepspeed ./scripts/zero3.json \
--model_name_or_path /path/to/llm_weights/llava-v1.5-7b \
--version v1 \
--data_path ./playground/data/llava_v1_5_mix665k.json \
--image_folder ./playground/data \
--vision_tower /path/to/llm_weights/clip-vit-large-patch14-336 \
--pretrain_mm_mlp_adapter /path/to/llm_weights/llava-v1.5-7b/mm_projector.bin \
--mm_projector_type mlp2x_gelu \
--mm_vision_select_layer -2 \
--mm_use_im_start_end False \
--mm_use_im_patch_token False \
--image_aspect_ratio pad \
--group_by_modality_length True \
--bf16 False \
--fp16 True \
--output_dir ./checkpoints/llava-v1.5-7b \
--num_train_epochs 1 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 50000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 False \
--model_max_length 2048 \
--gradient_checkpointing True \
--dataloader_num_workers 4 \
--lazy_preprocess True
Screenshots:
@haotian-liu In my experiment, loading the tokenizer with use_fast=True works, with transformers==4.34.1 and tokenizers==0.14.1. But I don't know why there is a mismatch when use_fast=False.
The root cause is that "USER" is tokenized as [11889] when it appears in the middle of the prompt, but as [1, 3148, 1001] when it appears at the head of a round (with an automatically added BOS token).
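A minimal sketch that shows this difference (the checkpoint path is a placeholder, and the exact ids can vary with your tokenizer version):

from transformers import AutoTokenizer

# Placeholder path; point it at your local llava-v1.5-7b checkpoint.
tok = AutoTokenizer.from_pretrained("/path/to/llm_weights/llava-v1.5-7b", use_fast=False)

# Shortened vicuna-v1 style conversation: the second "USER" sits in the middle
# of the string, right after the "</s>" separator.
conv = "USER: Hi ASSISTANT: Hello.</s>USER: Thanks ASSISTANT: You are welcome.</s>"
print(tok(conv).input_ids)

# The same round tokenized on its own, which is roughly what the per-round
# length computation in train.py does after splitting the conversation:
# a BOS token is prepended and "USER" is split into different ids.
print(tok("USER: Thanks ASSISTANT: You are welcome.").input_ids)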
I tried to fix this WARNING as well; setting use_fast=True works for my case.
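Concretely, the only change is where the tokenizer is created; the arguments below are illustrative, not necessarily the exact ones in llava/train/train.py:

from transformers import AutoTokenizer

# Sketch of the workaround: load the fast (Rust) tokenizer instead of the slow
# SentencePiece one. Path and arguments are placeholders.
tokenizer = AutoTokenizer.from_pretrained(
    "/path/to/llm_weights/llava-v1.5-7b",
    model_max_length=2048,
    padding_side="right",
    use_fast=True,  # was use_fast=False; the mismatch warning went away for me
)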