pytorch-lightning: Trainer property `log_dir` cannot be accessed before model is bound to trainer
🐛 Bug
Attempting to access a `Trainer` instance's `log_dir` property before calling `fit` results in an error:

TypeError: expected str, bytes or os.PathLike object, not NoneType

Accessing `log_dir` only works after calling `fit` (or from within the training loop itself, once the setup required by `fit` has been executed).
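For context, this `TypeError` is the one Python raises when a path-building call receives `None`. Below is a minimal stdlib sketch of that failure mode; the `derive_log_dir` helper is hypothetical (not Lightning's actual code) and assumes the log directory is joined from a save directory that is still unset before `fit`:

```python
import os

def derive_log_dir(save_dir, name="lightning_logs"):
    # Hypothetical helper: join a save directory with a run name.
    return os.path.join(save_dir, name)

# If the save dir is still None (nothing bound to the trainer yet),
# os.path.join raises exactly the TypeError reported above.
try:
    derive_log_dir(None)
    message = None
except TypeError as err:
    message = str(err)

print(message)  # expected str, bytes or os.PathLike object, not NoneType
```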
To Reproduce
See the bug reproduced with the BoringModel here.
Expected behavior
The `Trainer`'s `log_dir` property should be accessible as soon as the `Trainer` is instantiated.
If this isn’t possible because of some implementation detail, the documentation should at least be updated to clearly state this limitation.
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 22 (22 by maintainers)
@rohitgr7 nice PR, makes much more sense this way 😃

@nathanpainchaud `max_steps=1` is another option 😃
@awaelchli Thanks for your recommendation! I discovered the `limit_{train|val|test}_batches` options yesterday. Setting all of them to 1, along with `max_epochs=1`, seems to achieve the in-depth test of my whole pipeline that I wanted, although with a slightly less convenient config. I can understand the design decision behind `fast_dev_run`, although I don't think I would be the only user confused by the different logging behavior between `fast_dev_run` and "normal mode". Anyway, thanks to @rohitgr7 for his rapid intervention, and to both of you for your help and feedback 😃
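The smoke-test setup described in this comment could be sketched as the following `Trainer` configuration (a config fragment only; the `limit_*_batches` and `max_epochs` options come from the discussion above, while the surrounding model and data setup are assumed):

```python
from pytorch_lightning import Trainer

# Run exactly one batch of each stage for a single epoch,
# keeping the regular (non-fast_dev_run) logging behavior.
trainer = Trainer(
    limit_train_batches=1,
    limit_val_batches=1,
    limit_test_batches=1,
    max_epochs=1,
)
# trainer.fit(model)  # model/datamodule assumed to exist
```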
@awaelchli actually the checkpoint callback is not required, since it only uses the logger to decide `dirpath` when it isn't set, and does nothing otherwise. Either way, it's not required at all. I created a PR (WIP) linked to this issue for a fix. Let's see how it goes.
well, when using `fast_dev_run`, logging is disabled, and with `logger=True` the logger will be initialized with `DummyLogger` (which basically means no logging). With `fast_dev_run=0/False`, it will be initialized with `TensorBoardLogger`.

will check 😃
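The selection behavior described here can be illustrated with a small stand-alone sketch. The classes below are simplified stand-ins for illustration only, not Lightning's actual `DummyLogger`/`TensorBoardLogger` implementations:

```python
class DummyLogger:
    # Stand-in for the no-op logger used when fast_dev_run disables logging.
    @property
    def log_dir(self):
        return None  # nothing is ever written, so there is no directory

class TensorBoardLogger:
    # Simplified stand-in; assumes log_dir is just the configured save dir.
    def __init__(self, save_dir="lightning_logs"):
        self.save_dir = save_dir

    @property
    def log_dir(self):
        return self.save_dir

def configure_logger(fast_dev_run):
    # With fast_dev_run truthy, logging is replaced by the dummy logger;
    # otherwise a TensorBoard-style logger is set up.
    return DummyLogger() if fast_dev_run else TensorBoardLogger()

print(type(configure_logger(True)).__name__)   # DummyLogger
print(type(configure_logger(False)).__name__)  # TensorBoardLogger
```

This also shows why `log_dir` can end up as `None` under `fast_dev_run`: the dummy logger has no directory to report.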