pytorch-forecasting: tft unable to set target to a list of strings (multiple targets)

  • PyTorch-Forecasting version: 0.8.5
  • PyTorch version: 1.8.1
  • Python version: 3.8.10
  • Operating System: linux 3.10.0-1160.25.1.el7.x86_64

According to the docs (https://pytorch-forecasting.readthedocs.io/en/latest/api/pytorch_forecasting.data.timeseries.TimeSeriesDataSet.html), the target parameter can be set to a list of strings indicating multiple variables for prediction. However, at run time the code raises the following error:

TypeError: new() received an invalid combination of arguments - got (list, int), but expected one of:
 * (*, torch.device device)
      didn't match because some of the arguments have invalid types: (list, int)
 * (torch.Storage storage)
 * (Tensor other)
 * (tuple of ints size, *, torch.device device)
 * (object data, *, torch.device device)

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 18 (4 by maintainers)

Most upvoted comments

Hi! I encountered a similar issue when trying to define a MultiLoss.

I tried several setups:

  • pytorch_forecasting 0.9.0, pytorch_lightning 1.4.2, pytorch 1.9.0, python 3.7.11, linux 18.04.5
  • pytorch_forecasting 0.9.1, pytorch_lightning 1.4.9, pytorch 1.8.0, python 3.8.12, linux 18.04.5

When I initialize the loss as loss=MultiLoss([QuantileLoss(), QuantileLoss(), QuantileLoss(), QuantileLoss(), QuantileLoss(), QuantileLoss()]), I get TypeError: 'int' object is not iterable while initializing the TFT.

How did you solve this issue @jdb78 @QitianMa?

Thank you!

@owoshch I encountered this problem when I set loss to multiple losses but forgot to also set output_size to a list of output sizes, one per target.

But even when loss and output_size are set correctly and the learning rate is small, I still hit the same issue as @QitianMa. Output before the error message:

Validation sanity check:   0%|          | 0/1 [00:00<?, ?it/s]/Users/bytedance/opt/anaconda3/envs/pythonProject/lib/python3.8/site-packages/pytorch_forecasting/metrics.py:555: UserWarning: Loss is not finite. Resetting it to 1e9
  warnings.warn("Loss is not finite. Resetting it to 1e9")
Epoch 0:   0%|          | 0/166 [00:00<?, ?it/s]/Users/bytedance/opt/anaconda3/envs/pythonProject/lib/python3.8/site-packages/pytorch_forecasting/metrics.py:555: UserWarning: Loss is not finite. Resetting it to 1e9
  warnings.warn("Loss is not finite. Resetting it to 1e9")
Epoch 0:   0%|          | 0/166 [00:08<?, ?it/s]

I guess this does not happen immediately, right? It is most likely divergence leading to a NaN without a gradient. So probably the learning rate is too high, or you need to clip your gradients at a lower value.

It happens immediately. I tried gradient_clip_val=0.0001 and a learning rate of 1e-6; the issue persists.