peft: ImportError: cannot import name 'prepare_model_for_training' from 'peft'
Hey, I got this error after running the code below, which is strange since it worked perfectly last night.
My code:
```python
# Select CUDA device index
import os

import torch

os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-xxl"
model_name = "google/flan-t5-large"

model = AutoModelForSeq2SeqLM.from_pretrained(
    model_name, torch_dtype=torch.float16, load_in_8bit=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

from peft import prepare_model_for_training

model = prepare_model_for_training(model)
```
ERROR:
```
ImportError                               Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 from peft import prepare_model_for_training
      3 model = prepare_model_for_training(model)

ImportError: cannot import name 'prepare_model_for_training' from 'peft' (/usr/local/lib/python3.9/dist-packages/peft/__init__.py)
```
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 18 (2 by maintainers)
Can confirm installing from `transformers` main works now. The `output_embedding_layer_name` argument shouldn't be necessary anymore. Thanks everyone for your detailed guidance!
It is on the `main` branch now; you can directly use the `transformers` main branch and the issue should be solved. Feel free to close the issue if you think that it has been solved! 😉

Oh perfect, thank you so much!
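For anyone following along, installing both libraries from their `main` branches (as suggested in this thread) can be done with pip's git support; the commands below use the standard Hugging Face repository URLs:

```shell
# Install transformers and peft directly from their main branches on GitHub
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/huggingface/peft.git
```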
It fails at `get_peft_model()`:

So add this after:
Will get a stacktrace soon (having too much fun finetuning it 😃)
Awesome, thank you so much!
Hi @jordancole21, thanks for the issue. You should use `prepare_model_for_int8_training` instead; the examples have been updated accordingly. Also make sure to use the `main` branch of `peft`. Thanks!
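If you need code that runs against both old and new `peft` releases, one option is a small compatibility shim that looks up whichever function name the installed version exposes. The two names come from this thread; the helper `get_prepare_fn` itself is just an illustration, not part of the `peft` API:

```python
import importlib


def get_prepare_fn():
    """Return peft's model-preparation function under either of its names.

    Newer peft releases expose `prepare_model_for_int8_training`;
    the old name `prepare_model_for_training` was removed.
    """
    peft = importlib.import_module("peft")
    for name in ("prepare_model_for_int8_training", "prepare_model_for_training"):
        fn = getattr(peft, name, None)
        if fn is not None:
            return fn
    raise ImportError("no known model-preparation function found in peft")


# Usage (assuming `model` is already loaded in 8-bit):
#     model = get_prepare_fn()(model)
```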