peft: ImportError: cannot import name 'prepare_model_for_training' from 'peft'

Hey, I got this error after running the code below, which is strange since it worked perfectly last night.

My code:

```python
# Select CUDA device index
import os
import torch

os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-xxl"
model_name = "google/flan-t5-large"  # overrides the line above

model = AutoModelForSeq2SeqLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

from peft import prepare_model_for_training

model = prepare_model_for_training(model)
```

ERROR:

```
ImportError                               Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 from peft import prepare_model_for_training
      3 model = prepare_model_for_training(model)

ImportError: cannot import name 'prepare_model_for_training' from 'peft' (/usr/local/lib/python3.9/dist-packages/peft/__init__.py)
```
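For anyone hitting the same thing, a quick way to check what the installed peft build actually exports at the top level (a generic Python check, not specific to this issue):

```python
import peft

print(peft.__version__)
# List the "prepare" helpers this peft build exposes at the top level
print([name for name in dir(peft) if "prepare" in name])
```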


Most upvoted comments

Can confirm installing from transformers main works now.

The `output_embedding_layer_name` argument shouldn't be necessary anymore.

Thanks everyone for your detailed guidance!

It is on the main branch now; you can use the transformers main branch directly and the issue should be solved. Feel free to close the issue if you think it has been resolved! 😉

Thanks a lot for narrowing down the issue, this should be fixed in huggingface/transformers#21688! You can use it directly with `pip install git+https://github.com/younesbelkada/transformers.git@fix-int8-conversion`

Oh perfect, thank you so much!

It fails at `get_peft_model()`, so add this after:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q", "v"],  # T5 attention query/value projections
    lora_dropout=0.05,
    bias="none",
    task_type="SEQ_2_SEQ_LM",
)

model = get_peft_model(model, lora_config)
```

Will get a stack trace soon (having too much fun fine-tuning it 😃)
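As a quick sanity check after wrapping the model (a minimal sketch; `print_trainable_parameters()` is a method recent peft versions expose on the wrapped `PeftModel`):

```python
# Reports trainable vs. total parameter counts, confirming that
# only the LoRA adapter weights will be updated during fine-tuning.
model.print_trainable_parameters()
```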

Hi @jordancole21, thanks for the issue! You should use `prepare_model_for_int8_training` instead; the examples have been updated accordingly. Also make sure to use the main branch of peft. Thanks!
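For reference, a minimal sketch of the renamed call, assuming a peft build that exports `prepare_model_for_int8_training` (later releases rename it again to `prepare_model_for_kbit_training`):

```python
from peft import prepare_model_for_int8_training

# Drop-in replacement for the removed prepare_model_for_training:
# freezes the base weights and prepares the 8-bit model for training.
model = prepare_model_for_int8_training(model)
```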

Awesome, thank you so much!
