mmdetection: TypeError: forward_train() missing 2 required positional arguments: 'gt_bboxes' and 'gt_labels'
When I use my config to train on my own data with workflow set to [('train', 1)], training runs normally. When workflow is set to [('train', 1), ('val', 1)], the val phase reports the following error:
File "/media/nnir712/F264A15264A119FD/zzh/detect/mmcv/mmcv/runner/runner.py", line 265, in train [0/1828] self.model, data_batch, train_mode=True, **kwargs) File "/media/nnir712/F264A15264A119FD/zzh/detect/mmdetection/mmdet/apis/train.py", line 38, in batch_processor losses = model(**data) File "/home/nnir712/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/home/nnir712/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 151, in forward return self.module(*inputs[0], **kwargs[0]) File "/home/nnir712/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/media/nnir712/F264A15264A119FD/zzh/detect/mmdetection/mmdet/core/fp16/decorators.py", line 49, in new_func return old_func(*args, **kwargs) File "/media/nnir712/F264A15264A119FD/zzh/detect/mmdetection/mmdet/models/detectors/base.py", line 86, in forward return self.forward_train(img, img_meta, **kwargs) TypeError: forward_train() missing 2 required positional arguments: 'gt_bboxes' and 'gt_labels'
About this issue
- State: closed
- Created 5 years ago
- Comments: 23 (2 by maintainers)
Perhaps reducing the learning rate will solve your problem.
Thanks for reporting the bug.
There is no gt in `test_pipeline`, which leads to this error. Here is a temporary solution: define a `val_pipeline` in the config and use it for `data.val.pipeline`: https://github.com/open-mmlab/mmdetection/blob/82c533bee0de1a84f5959e257815eb7df4e69162/configs/mask_rcnn_r50_fpn_1x.py#L156
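For example, a sketch against a 1.x-style config (`dataset_type` and the pipelines are assumed to be defined earlier; dataset fields such as `ann_file` and `img_prefix` are omitted):

```python
# Sketch: give data.val its own pipeline (one that loads annotations)
# instead of reusing test_pipeline.
data = dict(
    imgs_per_gpu=2,
    workers_per_gpu=2,
    train=dict(type=dataset_type, pipeline=train_pipeline),
    val=dict(type=dataset_type, pipeline=val_pipeline),   # changed line
    test=dict(type=dataset_type, pipeline=test_pipeline),
)
```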
Hi, I met the same problem. It may be because the config file uses `test_pipeline` for both the val and test procedures, like this:
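A sketch paraphrasing the default config, with dataset fields omitted:

```python
# test_pipeline never loads annotations, so during the val phase
# forward_train() receives no gt_bboxes/gt_labels.
data = dict(
    imgs_per_gpu=2,
    workers_per_gpu=2,
    train=dict(type=dataset_type, pipeline=train_pipeline),
    val=dict(type=dataset_type, pipeline=test_pipeline),   # same as test
    test=dict(type=dataset_type, pipeline=test_pipeline),
)
```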
But `test_pipeline` does not load `gt_bboxes` and `gt_labels`. So we may need to insert `LoadAnnotations` into the pipeline and add `gt_bboxes` and `gt_labels` to the `Collect` keys.
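A sketch of such a pipeline, following the 1.x-style train/test pipelines (the `Resize`/`Normalize` values are the usual defaults; for mask heads you would also want `with_mask=True` and `gt_masks`):

```python
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
# Like test_pipeline, but with annotation loading and gt keys collected,
# so that losses can be computed on the val set.
val_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),                  # added
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),  # gt keys added
]
```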
Here I explain what the issue is and why it happens. Documentation will be added.
For validation, we can use either losses or other metrics, but the pipelines to compute losses and mAP are different. The former is the same as training, and the latter is the same as testing.
Since the final model is evaluated with mAP, we adopt mAP as the default validation metric in MMDetection, so we need to use `test_pipeline` for the val set. This validation is implemented with the evaluation hook, which is not part of the workflow, so we just specify `workflow = [('train', 1)]` instead of `workflow = [('train', 1), ('val', 1)]`. This is the recommended and default validation setting.

However, if we just want to observe losses on the val set instead of evaluating mAP, we can specify `workflow = [('train', 1), ('val', 1)]` and use the `train_pipeline` for the val set.
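Putting the two options side by side, a sketch (only the relevant keys shown; `train_pipeline` as defined in the config):

```python
# Default, recommended: evaluate mAP via the evaluation hook; the val
# phase is not part of the workflow.
workflow = [('train', 1)]

# Alternative: compute losses on the val set every epoch. The val split
# must then use a training-style pipeline so gt_bboxes/gt_labels exist.
# workflow = [('train', 1), ('val', 1)]
# data['val']['pipeline'] = train_pipeline
```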