ssd.pytorch: ValueError: not enough values to unpack (expected 2, got 0)
I am getting an error when I try to use the pretrained model:
python demo/live.py --weights ./weights/ssd300_mAP_77.43_v2.pth
/home/cya/git_clones/ssd.pytorch/ssd.py:34: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
self.priors = Variable(self.priorbox.forward(), volatile=True)
/home/cya/git_clones/ssd.pytorch/layers/modules/l2norm.py:17: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
init.constant(self.weight,self.gamma)
[INFO] starting threaded video stream...
Traceback (most recent call last):
  File "demo/live.py", line 82, in <module>
    cv2_demo(net.eval(), transform)
  File "demo/live.py", line 55, in cv2_demo
    frame = predict(frame)
  File "demo/live.py", line 25, in predict
    y = net(x)  # forward pass
  File "/home/cya/anaconda3/envs/tensorflow/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/cya/git_clones/ssd.pytorch/ssd.py", line 103, in forward
    self.priors.type(type(x.data))  # default boxes
  File "/home/cya/git_clones/ssd.pytorch/layers/functions/detection.py", line 54, in forward
    ids, count = nms(boxes, scores, self.nms_thresh, self.top_k)
ValueError: not enough values to unpack (expected 2, got 0)
FATAL: exception not rethrown
[1] 11209 abort (core dumped) python demo/live.py --weights ./weights/ssd300_mAP_77.43_v2.pth
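A minimal sketch (plain Python, no torch required) of why the call site fails: nms is expected to return a pair (ids, count), but if it hands back an empty result instead — for example when nothing survives the score threshold — tuple unpacking raises exactly the error in the traceback. fake_nms below is a hypothetical stand-in, not the real ssd.pytorch function.

```python
def fake_nms(boxes, scores, nms_thresh=0.45, top_k=200):
    """Hypothetical stand-in for ssd.pytorch's nms helper."""
    if not boxes:          # no candidate boxes at all
        return ()          # empty result instead of (ids, count)
    # keep indices of the highest-scoring boxes, best first
    ids = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:top_k]
    return ids, len(ids)

try:
    ids, count = fake_nms([], [])
except ValueError as err:
    print(err)  # not enough values to unpack (expected 2, got 0)
```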
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 12
- Comments: 30
I changed the code as follows and it worked for me. In detection.py, line 62, change

if dets.dim() == 1

to

if dets.size(0) == 1
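The recurring fix in this thread swaps dim() checks for size(0) checks. A toy sketch of the underlying behavior change (pure Python; ToyTensor is a hypothetical stand-in, not the real torch API): from PyTorch 0.4 on, zero-element tensors keep their dimensionality, so an empty 1-D tensor still reports dim() == 1, while size(0) reports the actual number of rows and can tell the empty case apart.

```python
class ToyTensor:
    """Hypothetical stand-in mimicking the two tensor calls used by the guard."""
    def __init__(self, data):
        self.data = list(data)

    def dim(self):
        # In PyTorch 0.4+, an empty 1-D tensor is still 1-dimensional.
        return 1

    def size(self, d):
        # size(0) is the length along the first dimension: 0 when empty.
        return len(self.data)

empty = ToyTensor([])
print(empty.dim())    # 1 -> a guard on dim() no longer catches the empty case
print(empty.size(0))  # 0 -> a guard on size(0) does
```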
After I changed the code, I get:

Traceback (most recent call last):
  File "eval.py", line 438, in <module>
    thresh=args.confidence_threshold)
  File "eval.py", line 395, in test_net
    boxes = dets[:, 1:]
IndexError: too many indices for tensor of dimension 1

How can I solve it?
I had this problem and changed scores.dim() to scores.size(0), but now there is a new problem: I can't detect anything in the video. What should I do? Should I change the BaseTransform initialization? Right now it is: transform = BaseTransform(net.size, (104/256.0, 117/256.0, 123/256.0))
I have the same problem: after changing scores.dim() to scores.size(0), I can't detect anything in the video. How can I solve that?
I had just the same question; restarting the Jupyter notebook and running again made it work for me.
Excuse me, I changed the demo's 'if scores.dim() == 0:' to 'if scores.size(0) == 0:', but I still get the error ValueError: not enough values to unpack (expected 2, got 0). I am using pytorch 0.4.0; the error goes away if I install version 0.3.1. Can you help me with that? Thanks a lot
I have the same problem!
@dongfengxijian just do as @KeyKy did: change the code in eval.py, line 393.
I hit the same error as wynntw. How can I solve it?

Traceback (most recent call last):
  File "eval.py", line 438, in <module>
    thresh=args.confidence_threshold)
  File "eval.py", line 395, in test_net
    boxes = dets[:, 1:]
IndexError: too many indices for tensor of dimension 1