transformers: Can't pickle local object when using the finetuning example.

I was testing out the finetuning example from the repo:

python run_lm_finetuning.py --train_data_file="finetune-output/KantText.txt" --output_dir="finetune-output/hugkant" --model_type=gpt2 --model_name_or_path=gpt2 --do_train --block_size=128

While saving the checkpoint, the script fails with the following error:

Traceback (most recent call last):
  File "run_lm_finetuning.py", line 790, in <module>
    main()
  File "run_lm_finetuning.py", line 740, in main
    global_step, tr_loss = train(args, train_dataset, model, tokenizer)
  File "run_lm_finetuning.py", line 398, in train
    torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
  File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 209, in save
    return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
  File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 134, in _with_file_like
    return body(f)
  File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 209, in <lambda>
    return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
  File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 282, in _save
    pickler.dump(obj)
AttributeError: Can't pickle local object 'get_linear_schedule_with_warmup.<locals>.lr_lambda'
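
For context, the failure reduces to a limitation of Python's pickle module: it cannot serialize a function defined inside another function, and get_linear_schedule_with_warmup builds its lr_lambda as exactly such a closure, which the scheduler's state_dict apparently still carries on this PyTorch version when torch.save tries to pickle it. A minimal sketch, independent of transformers, that reproduces the same error:

import pickle

def get_schedule():
    # local closure, analogous to lr_lambda inside get_linear_schedule_with_warmup
    def lr_lambda(step):
        return 1.0
    return lr_lambda

fn = get_schedule()
pickle.dumps(fn)  # AttributeError: Can't pickle local object 'get_schedule.<locals>.lr_lambda'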

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 12
  • Comments: 16 (1 by maintainers)

Most upvoted comments

Same error with all the newest versions too.

For the example scripts, passing --no_multi_process solved it for me.

I haven’t looked into the Hugging Face code yet, but I could imagine that this is the bug here. I think it only shows up when spawn rather than fork is used to create new processes (spawn is the default on Windows), which is why the developers might have missed it.
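
That difference is easy to demonstrate in isolation: with the spawn start method every object handed to a worker process has to be pickled, so a local closure fails immediately, whereas fork inherits it from the parent without pickling. A minimal sketch (not transformers code, just the general behaviour):

import multiprocessing as mp

def worker(fn):
    print(fn(0))

def make_lambda():
    def lr_lambda(step):  # local closure, like the one built by get_linear_schedule_with_warmup
        return 1.0
    return lr_lambda

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # switching to "fork" on Linux lets the child start fine
    p = ctx.Process(target=worker, args=(make_lambda(),))
    p.start()  # with "spawn": AttributeError: Can't pickle local object 'make_lambda.<locals>.lr_lambda'
    p.join()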

Hi,

I was able to get rid of this error by upgrading the torch version.
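
Upgrading helps because newer PyTorch releases (around 1.4, as far as I can tell) override LambdaLR.state_dict() so that plain functions and lambdas are left out of the saved state. If you are stuck on an older version, a possible workaround is to drop that entry yourself before saving; this is only a sketch, assuming the 'lr_lambdas' key that torch.optim.lr_scheduler.LambdaLR stores internally, with a dummy model and optimizer standing in for the real ones:

import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(4, 4)                                # stand-in for the GPT-2 model
optimizer = AdamW(model.parameters(), lr=5e-5)
scheduler = LambdaLR(optimizer, lr_lambda=lambda step: 1.0)  # stands in for get_linear_schedule_with_warmup

# Keep the picklable entries (base_lrs, last_epoch, ...) and skip the closure.
state = {k: v for k, v in scheduler.state_dict().items() if k != "lr_lambdas"}
torch.save(state, "scheduler.pt")

The lambda itself does not need to be checkpointed anyway, since the training script rebuilds the scheduler by calling get_linear_schedule_with_warmup again before loading the saved state.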