pytorch-lightning: Lightning Module Test Returns
🐛 Bug
I am attempting to test my trained model on my test dataset. After running Trainer.test(), an empty dict is returned instead of the dict I am logging. I was previously returning a dict from test_epoch_end(), since using self.log() never got anything back from Trainer.test() for me. This worked on PyTorch Lightning v1.2.1, but after updating to v1.2.8 I can no longer get anything returned from Trainer.test(). When I print the dict in test_epoch_end(), everything looks fine, yet nothing is returned. I have included my test loop below, with the return statement I used to use commented out. Thanks for the help!
Expected behavior
Expect to be able to return dicts from Trainer.test() for further analysis.
Environment
- PyTorch Version (e.g., 1.0): 1.8.1
- OS (e.g., Linux): Linux
- How you installed PyTorch (conda, pip, source): Conda
- Python version: 3.8.8
- PyTorch Lightning Version: 1.2.8
Additional context
def test_step(self, batch, batch_nb):
    data, target = batch
    output = self.forward(data)
    loss = torch.nn.functional.cross_entropy(output, target)
    # Convert logits to class probabilities
    output = torch.nn.functional.softmax(output, dim=1)
    return {"test_loss": loss, "test_out": output, "test_true": target}

def test_epoch_end(self, outputs):
    # Aggregate the per-batch results collected from test_step
    loss = torch.mean(torch.tensor([x["test_loss"] for x in outputs]))
    test_true = torch.cat([x["test_true"] for x in outputs])
    test_out = torch.cat([x["test_out"] for x in outputs])
    out_dict = {"test_loss": loss, "test_out": test_out.cpu().numpy(), "test_true": test_true.cpu().numpy()}
    print(out_dict)
    self.log_dict(out_dict, logger=False)
    # return out_dict
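For context: as far as I understand, in PyTorch Lightning 1.2+ Trainer.test() only returns metrics recorded via self.log / self.log_dict, which are meant for scalar values, so full prediction arrays like the ones above are unlikely to come back that way. One common workaround is to stash the full dict on the module (e.g. self.test_results = out_dict inside test_epoch_end) and read it from the model after trainer.test(model) returns. The aggregation step itself can be sketched in plain Python (aggregate_test_outputs is a hypothetical stand-in, with lists in place of tensors):

```python
def aggregate_test_outputs(outputs):
    """Collapse a list of per-batch dicts into one epoch-level dict.

    Mirrors the test_epoch_end logic above: average the per-batch
    losses and concatenate the per-batch predictions and targets.
    """
    loss = sum(o["test_loss"] for o in outputs) / len(outputs)
    test_out = [p for o in outputs for p in o["test_out"]]
    test_true = [t for o in outputs for t in o["test_true"]]
    return {"test_loss": loss, "test_out": test_out, "test_true": test_true}

# Example with two fake batches:
batches = [
    {"test_loss": 0.4, "test_out": [0.9, 0.1], "test_true": [1, 0]},
    {"test_loss": 0.6, "test_out": [0.2, 0.8], "test_true": [0, 1]},
]
epoch = aggregate_test_outputs(batches)
# epoch["test_loss"] is the mean loss; test_out/test_true hold all 4 entries
```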
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 18 (16 by maintainers)
Amazing! Hit our Slack channel if you have random questions or want to hear about announcements ❤️ Or the discussion forum here on GitHub if more help is needed 😃
Ah, that will do it! I pulled 1.3.0rc1 and it worked. Thanks for all the help! You guys are amazing 🎉