transformers: TypeError: cannot pickle '_LazyModule' object
@stas00 edit: please see https://github.com/huggingface/transformers/issues/12549#issuecomment-875287701 for the short reproduction script.
Environment info
- transformers version: 4.9.0.dev0
- Platform: Linux with Nvidia P40
- Python version: 3.8.0
- PyTorch version (GPU?): 1.8.0
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
Who can help
@stas00, @patrickvonplaten, @LysandreJik
Information
Model I am using (Bert, XLNet …): GPT2
The problem arises when using:
- the official example scripts: (give details below)
- [√] my own modified scripts: (give details below)
The tasks I am working on are:
- an official GLUE/SQUaD task: (give the name)
- [√] my own task or dataset: (give details below)
To reproduce
I am running the minimal command:
python run_clm.py \
--model_name_or_path /mycheckpoin/ \
--train_file train.txt \
--validation_file eval.txt \
--do_train \
--do_eval \
--output_dir ./models/ \
--no_cuda False \
--fp16 \
--sharded_ddp simple \
--num_train_epochs 3.0 \
--disable_tqdm False \
--save_steps 100 \
--preprocessing_num_workers 32 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4
I modified the following parts of the script 'run_clm.py', and the parameter rank is passed in as training_args.local_rank:
# additions to run_clm.py: imports needed by the manual launcher below
# (os and torch may already be imported by the script)
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def init_process(rank, size, fn, backend='gloo'):
    """ Initialize the distributed environment. """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

if __name__ == "__main__":
    # main()
    # size = int(os.environ['WORLD_SIZE'])
    size = int(torch.cuda.device_count())  # one process per visible GPU
    print(size)
    processes = []
    mp.set_start_method("spawn")
    for rank in range(size):
        # init_process takes (rank, size, fn), so size must be passed along with main
        p = mp.Process(target=init_process, args=(rank, size, main))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
The traceback is:
Process Process-2:
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 511, in init_process
fn(rank, size)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 367, in main
tokenized_datasets = raw_datasets.map(
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 471, in map
{
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 472, in <dictcomp>
k: dataset.map(
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in map
transformed_shards = [r.get() for r in results]
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 498, in dump
StockPickler.dump(self, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 487, in dump
self.save(obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 1493, in save_function
pickler.save_reduce(_create_function, (obj.__code__,
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 692, in save_reduce
save(args)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle '_LazyModule' object
When I run the following command based on the original script, it works well. The reason I don't use this command is that our cluster doesn't support passing parameters this way ("-m torch.distributed.launch --nproc_per_node=4"):
python -m torch.distributed.launch --nproc_per_node=4 run_clm.py \
--model_name_or_path /mycheckpoin/ \
--train_file train.txt \
--validation_file eval.txt \
--do_train \
--do_eval \
--output_dir ./models/ \
--no_cuda False \
--fp16 \
--sharded_ddp simple \
--num_train_epochs 3.0 \
--disable_tqdm False \
--save_steps 100 \
--preprocessing_num_workers 32 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4
Expected behavior
About this issue
- State: closed
- Created 3 years ago
- Comments: 15 (9 by maintainers)
Note that we can easily make _LazyModule picklable. I can open a PR if needed to implement a __reduce__ method for _LazyModule. It's the only object that prevents transformers from being picklable.

EDIT: here it is: https://github.com/huggingface/transformers/pull/12552
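The idea behind making a module subclass picklable via __reduce__ can be sketched on a toy stand-in. This is illustrative only; the class, attribute names, and constructor arguments below are assumptions, not necessarily what the linked PR does:

import pickle
from types import ModuleType

class ToyLazyModule(ModuleType):
    """Toy stand-in for a lazily-populated module object."""

    def __init__(self, name, module_file, import_structure):
        super().__init__(name)
        self._name = name
        self._file = module_file
        self._import_structure = import_structure

    def __reduce__(self):
        # Tell pickle to rebuild the object from its constructor arguments
        # instead of trying to serialize its internal (unpicklable) state.
        return (self.__class__, (self._name, self._file, self._import_structure))

m = ToyLazyModule("demo", "demo.py", {"models": ["AutoModel"]})
m2 = pickle.loads(pickle.dumps(m))
print(m2._name)  # -> demo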
This is just a way to easily fix this issue, but I think we should definitely keep trying to figure out why it tried to pickle transformers in the first place. This might come from dill, which pickles the globals of some environments when pickling any object.

OK, here is the minimal reproducible script. It seems totally unrelated to transformers, except that with the import of transformers it still fails with the same error.
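The linked script boils down to something like the following sketch (the dataset contents and the mapped function here are made up; only the shape of the repro matters):

import transformers  # never used below, but importing it puts a _LazyModule in this module's globals
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["hello", "world", "foo", "bar"]})

def identity(example):
    return example  # transformers is not touched at all

# With num_proc > 1, datasets pickles `identity` (and its globals) with dill to
# send it to the worker processes; before the fix this raised
# TypeError: cannot pickle '_LazyModule' object
dataset.map(identity, num_proc=2)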
But if you either:
- remove import transformers, or
- use num_proc=1 in datasets' .map() (instead of n>1),

all is good.

@lhoestq, @albertvillanova - does this ring any bells? Clearly transformers loads some module lazily and trips up datasets even though transformers isn't really used here directly. Thank you.

Should be closed by #12567, please let us know if the problem persists.
Linking to the new PR: https://github.com/huggingface/transformers/pull/12567
Hi @albertvillanova, I removed the import of transformers as in the following code, but it still doesn't work:

def _no_cache_fields(obj):
    try:
        if (
            "PreTrainedTokenizerBase" in [base_class.__name__ for base_class in type(obj).__mro__]
            and hasattr(obj, "cache")
            and isinstance(obj.cache, dict)
        )

Hi @stas00, thanks for pinging.
I'm having a look, and after a first search I think you are right: the problem comes from the fact that transformers does a lazy import when it is imported. I guess this affects datasets here: https://github.com/huggingface/datasets/blob/master/src/datasets/utils/py_utils.py#L319 (PR: https://github.com/huggingface/datasets/pull/502), which is used by dumps to pickle objects in a multiprocessing setup.

cc: @lhoestq
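To illustrate the suspected mechanism (a rough sketch, not the actual datasets code): when dill serializes a function defined in __main__, it can also serialize that function's globals dict, and once transformers has been imported that dict contains a _LazyModule instance:

import dill
import transformers  # adds a _LazyModule instance to this module's globals

def identity(x):
    return x

# For functions defined in __main__, dill serializes them by value and, depending
# on dill's version and settings, drags their globals along; hitting the
# _LazyModule object is what used to raise
# TypeError: cannot pickle '_LazyModule' object
dill.dumps(identity)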