mmdetection: TypeError: forward_train() missing 2 required positional arguments: 'gt_bboxes' and 'gt_labels'

When I use a config to train my own data, with workflow set to [('train', 1)] the training runs normally. When workflow is set to [('train', 1), ('val', 1)], the val phase reports the following error:

  File "/media/nnir712/F264A15264A119FD/zzh/detect/mmcv/mmcv/runner/runner.py", line 265, in train
    self.model, data_batch, train_mode=True, **kwargs)
  File "/media/nnir712/F264A15264A119FD/zzh/detect/mmdetection/mmdet/apis/train.py", line 38, in batch_processor
    losses = model(**data)
  File "/home/nnir712/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nnir712/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 151, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/nnir712/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/nnir712/F264A15264A119FD/zzh/detect/mmdetection/mmdet/core/fp16/decorators.py", line 49, in new_func
    return old_func(*args, **kwargs)
  File "/media/nnir712/F264A15264A119FD/zzh/detect/mmdetection/mmdet/models/detectors/base.py", line 86, in forward
    return self.forward_train(img, img_meta, **kwargs)
TypeError: forward_train() missing 2 required positional arguments: 'gt_bboxes' and 'gt_labels'
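For clarity, the only config change between the working and the failing run is the workflow entry (a sketch of the two settings):

# runs normally:
workflow = [('train', 1)]
# raises the TypeError above:
workflow = [('train', 1), ('val', 1)]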

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 23 (2 by maintainers)

Most upvoted comments

Perhaps reducing the learning rate will solve your problem.
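For example, in a standard mmdetection config the learning rate is set in the optimizer dict; a sketch of lowering it (the value is illustrative, the stock 1x schedule uses lr=0.02 for 8 GPUs):

optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)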

Thanks for reporting the bug.

There is no GT (ground truth) loading in test_pipeline, which leads to this error. Here is a temporary solution:

  1. Add a val_pipeline to the config:
val_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.0),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
]
  2. Modify data.val.pipeline to use it:
pipeline=val_pipeline,

https://github.com/open-mmlab/mmdetection/blob/82c533bee0de1a84f5959e257815eb7df4e69162/configs/mask_rcnn_r50_fpn_1x.py#L156
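Putting the two steps together, the val entry of the data dict at the linked line would then read (a sketch reusing the COCO paths from the default config):

val=dict(
    type=dataset_type,
    ann_file=data_root + 'annotations/instances_val2017.json',
    img_prefix=data_root + 'val2017/',
    pipeline=val_pipeline),  # was pipeline=test_pipeline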

Hi, I met the same problem. It may be because the config file uses test_pipeline for both the val and test procedures, like this:

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    imgs_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'train2017/',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline))

But test_pipeline does not load gt_bboxes and gt_labels. So we need to insert LoadAnnotations into the pipeline and add gt_bboxes and gt_labels to the Collect keys.
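Concretely, for a bbox-only config like the one above, those insertions give a val pipeline that mirrors train_pipeline but with deterministic transforms (a sketch; flip_ratio=0.0 turns off random flipping during validation):

val_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),  # inserted: loads gt_bboxes/gt_labels
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.0),       # deterministic for validation
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),  # gt keys added
]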

Here I explain what the issue is and why it happens. Documentation will be added.

For validation, we can use either losses or other metrics, but the pipelines to compute losses and mAP are different. The former is the same as training, and the latter is the same as testing.

Since the final model is evaluated with mAP, we adopt mAP as the default validation metric in MMDetection, so we need to use test_pipeline for the val set.

The above validation is implemented with the evaluation hook, which is not included in the workflow, so we just specify workflow=[('train', 1)] instead of workflow=[('train', 1), ('val', 1)]. This is the recommended and default validation setting.
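In config terms, the recommended setup is therefore (a sketch; in mmdetection v1 the evaluation hook is enabled with the --validate flag of tools/train.py rather than through the workflow):

workflow = [('train', 1)]  # no explicit val phase; mAP comes from the eval hook

# e.g. python tools/train.py configs/mask_rcnn_r50_fpn_1x.py --validate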

However, if we just want to observe losses on the val set instead of evaluating mAP, we can specify workflow=[('train', 1), ('val', 1)] and use the train_pipeline.
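A sketch of that alternative setup (the val set then goes through the training-style pipeline so that losses can be computed):

workflow = [('train', 1), ('val', 1)]
# and in the data dict:
val=dict(
    type=dataset_type,
    ann_file=data_root + 'annotations/instances_val2017.json',
    img_prefix=data_root + 'val2017/',
    pipeline=train_pipeline),  # gt is loaded, so forward_train() gets its arguments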