peft: TypeError: dispatch_model() got an unexpected keyword argument 'offload_index'
================================
Running the following code in a Kaggle notebook gives me this error:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
import torch.nn as nn
import bitsandbytes as bnb
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-7b1",
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")
```
======================================
The full traceback:
======================================
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_40247/570955428.py in <module>
      9     "bigscience/bloom-7b1",
     10     load_in_8bit=True,
---> 11     device_map='auto',
     12 )
     13

/opt/conda/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    470             model_class = _get_model_class(config, cls._model_mapping)
    471             return model_class.from_pretrained(
--> 472                 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    473             )
    474         raise ValueError(

/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
   2695         # Dispatch model with hooks on all devices if necessary
   2696         if device_map is not None:
-> 2697             dispatch_model(model, device_map=device_map, offload_dir=offload_folder, offload_index=offload_index)
   2698
   2699         if output_loading_info:

TypeError: dispatch_model() got an unexpected keyword argument 'offload_index'
```
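The traceback shows `transformers` passing an `offload_index` keyword to `accelerate.dispatch_model`, which the installed (older) `accelerate` does not accept. As a side note, a small helper like the following can check whether a function accepts a given keyword before calling it; this is a sketch of my own (`supports_kwarg` is not part of either library), shown against a stand-in for the old signature:

```python
import inspect

def supports_kwarg(func, name):
    """Return True if `func` accepts a keyword argument called `name`."""
    params = inspect.signature(func).parameters
    if name in params:
        return True
    # A function taking **kwargs accepts any keyword argument.
    return any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())

# Stand-in for an outdated dispatch_model signature (no offload_index):
def old_dispatch_model(model, device_map=None, offload_dir=None):
    pass

print(supports_kwarg(old_dispatch_model, "offload_dir"))    # True
print(supports_kwarg(old_dispatch_model, "offload_index"))  # False
```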
About this issue
- Original URL
- State: closed
- Created a year ago
- Reactions: 4
- Comments: 24
Commits related to this issue
- fix for >3 GPUs, due to artidoro/qlora #186 — committed to Titus-von-Koeller/peft by Titus-von-Koeller 7 months ago
- Bnb integration test tweaks (#1242) * allow bitsandbytes integration test selection * fix typo: mutli -> multi * enable tests to run on >2 GPUs * fix for >3 GPUs, due to artidoro/qlora #186 ... — committed to huggingface/peft by Titus-von-Koeller 7 months ago
- Bnb integration test tweaks (#1242) * allow bitsandbytes integration test selection * fix typo: mutli -> multi * enable tests to run on >2 GPUs * fix for >3 GPUs, due to artidoro/qlora #186 ... — committed to TaoSunVoyage/peft by Titus-von-Koeller 7 months ago
@imrankh46 – the issue is that your `accelerate` version is out of date. As @younesbelkada suggested, try something like:

It's not a package problem – the Kaggle notebook is the culprit. Just restart the kernel and run the package-update lines first. It should then work fine.
That works for me, thank you!
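For anyone hitting the same stale-kernel trap: after upgrading `accelerate` with pip, the running kernel keeps the old module in memory until it is restarted. A quick sanity check (my own sketch, Python 3.8+, not part of any of these libraries) to see which version the current kernel would actually import:

```python
# Report the version of a package as the *current* Python process sees it.
# On Kaggle, a pip-installed upgrade does not take effect until the kernel
# restarts, so this can lag behind what `pip show` reports.
from importlib.metadata import PackageNotFoundError, version

def installed_version(package):
    try:
        return version(package)
    except PackageNotFoundError:
        return None

print(installed_version("accelerate"))  # version string, or None if absent
```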