mmdetection: AssertionError: The `num_classes` (3) in Shared2FCBBoxHead of MMDataParallel does not matches the length of `CLASSES` (80) in CocoDataset

loading annotations into memory... Done (t=0.00s)
creating index... index created!
2021-03-24 16:40:44,706 - mmdet - INFO - Start running, host: jingduo@jingduo-laptop, work_dir: /media/jingduo/8aeddfe4-d52c-4516-85ec-aa500a9390d1/jingduo/mmdetection/work_dirs/cascade_rcnn_x101_32x4d_fpn_1x_coco
2021-03-24 16:40:44,707 - mmdet - INFO - workflow: [('train', 1)], max: 12 epochs
Traceback (most recent call last):
  File "./tools/train.py", line 190, in <module>
    main()
  File "./tools/train.py", line 186, in main
    meta=meta)
  File "/home/jingduo/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmdet/apis/train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/jingduo/mmcv/mmcv/runner/epoch_based_runner.py", line 125, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/jingduo/mmcv/mmcv/runner/epoch_based_runner.py", line 45, in train
    self.call_hook('before_train_epoch')
  File "/home/jingduo/mmcv/mmcv/runner/base_runner.py", line 307, in call_hook
    getattr(hook, fn_name)(self)
  File "/home/jingduo/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmdet/datasets/utils.py", line 150, in before_train_epoch
    self._check_head(runner)
  File "/home/jingduo/anaconda3/envs/mmdet/lib/python3.7/site-packages/mmdet/datasets/utils.py", line 137, in _check_head
    (f'The `num_classes` ({module.num_classes}) in '
AssertionError: The `num_classes` (3) in Shared2FCBBoxHead of MMDataParallel does not matches the length of `CLASSES` (80) in CocoDataset

Can you help answer this question? Thanks.

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 15

Most upvoted comments

2021-03-25 16:23:29,621 - mmdet - INFO - Environment info:

sys.platform: linux
Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0]
CUDA available: True
GPU 0: GeForce RTX 2070 Super
CUDA_HOME: /usr/local/cuda-11.0
NVCC: Build cuda_11.0_bu.TC445_37.28540450_0
GCC: gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
PyTorch: 1.7.0
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel® Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel® 64 architecture applications
  • Intel® MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 11.0
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_37,code=compute_37
  • CuDNN 8.0.3
  • Magma 2.5.2
  • Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.8.0

OpenCV: 4.5.1
MMCV: 1.2.7
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.0
MMDetection: 2.10.0+61e27d2
2021-03-25 16:23:29,846 - mmdet - INFO - Distributed training: False
2021-03-25 16:23:30,069 - mmdet - INFO - Config:
model = dict(
    type='FasterRCNN',
    pretrained=None,
    backbone=dict(type='ResNet', depth=50, num_stages=4, out_indices=(0, 1, 2, 3), frozen_stages=1, norm_cfg=dict(type='BN', requires_grad=True), norm_eval=True, style='pytorch'),
    neck=dict(type='FPN', in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5),
    rpn_head=dict(type='RPNHead', in_channels=256, feat_channels=256, anchor_generator=dict(type='AnchorGenerator', scales=[8], ratios=[0.5, 1.0, 2.0], strides=[4, 8, 16, 32, 64]), bbox_coder=dict(type='DeltaXYWHBBoxCoder', target_means=[0.0, 0.0, 0.0, 0.0], target_stds=[1.0, 1.0, 1.0, 1.0]), loss_cls=dict(type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    roi_head=dict(type='StandardRoIHead', bbox_roi_extractor=dict(type='SingleRoIExtractor', roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), out_channels=256, featmap_strides=[4, 8, 16, 32]), bbox_head=dict(type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=1, bbox_coder=dict(type='DeltaXYWHBBoxCoder', target_means=[0.0, 0.0, 0.0, 0.0], target_stds=[0.1, 0.1, 0.2, 0.2]), reg_class_agnostic=False, loss_cls=dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
    train_cfg=dict(
        rpn=dict(assigner=dict(type='MaxIoUAssigner', pos_iou_thr=0.7, neg_iou_thr=0.3, min_pos_iou=0.3, match_low_quality=True, ignore_iof_thr=-1), sampler=dict(type='RandomSampler', num=256, pos_fraction=0.5, neg_pos_ub=-1, add_gt_as_proposals=False), allowed_border=-1, pos_weight=-1, debug=False),
        rpn_proposal=dict(nms_pre=2000, max_per_img=1000, nms=dict(type='nms', iou_threshold=0.7), min_bbox_size=0),
        rcnn=dict(assigner=dict(type='MaxIoUAssigner', pos_iou_thr=0.5, neg_iou_thr=0.5, min_pos_iou=0.5, match_low_quality=False, ignore_iof_thr=-1), sampler=dict(type='RandomSampler', num=512, pos_fraction=0.25, neg_pos_ub=-1, add_gt_as_proposals=True), pos_weight=-1, debug=False)),
    test_cfg=dict(
        rpn=dict(nms_pre=1000, max_per_img=1000, nms=dict(type='nms', iou_threshold=0.7), min_bbox_size=0),
        rcnn=dict(score_thr=0.05, nms=dict(type='nms', iou_threshold=0.5), max_per_img=100)))
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='MultiScaleFlipAug', img_scale=(1333, 800), flip=False, transforms=[dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict(type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img'])])]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(type='CocoDataset', ann_file='data/coco/annotations/instances_train2017.json', img_prefix='data/coco/train2017/', pipeline=[dict(type='LoadImageFromFile'), dict(type='LoadAnnotations', with_bbox=True), dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), dict(type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])]),
    val=dict(type='CocoDataset', ann_file='data/coco/annotations/instances_val2017.json', img_prefix='data/coco/val2017/', pipeline=[dict(type='LoadImageFromFile'), dict(type='MultiScaleFlipAug', img_scale=(1333, 800), flip=False, transforms=[dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict(type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img'])])]),
    test=dict(type='CocoDataset', ann_file='data/coco/annotations/instances_val2017.json', img_prefix='data/coco/val2017/', pipeline=[dict(type='LoadImageFromFile'), dict(type='MultiScaleFlipAug', img_scale=(1333, 800), flip=False, transforms=[dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict(type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img'])])]))
evaluation = dict(interval=1, metric='bbox')
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(policy='step', warmup='linear', warmup_iters=500, warmup_ratio=0.001, step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)
checkpoint_config = dict(interval=2)
log_config = dict(interval=10, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
work_dir = './work_dirs/faster_rcnn_r50_fpn_1x_coco'
gpu_ids = range(0, 1)

loading annotations into memory... Done (t=0.00s)
creating index... index created!
loading annotations into memory... Done (t=0.00s)
creating index... index created!
2021-03-25 16:23:31,823 - mmdet - INFO - Start running, host: wwh@wwh, work_dir: /opt/ai/projects/github/mmdetection/mmdetection/work_dirs/faster_rcnn_r50_fpn_1x_coco
2021-03-25 16:23:31,823 - mmdet - INFO - workflow: [('train', 1)], max: 12 epochs
Traceback (most recent call last):
  File "tools/train.py", line 187, in <module>
    main()
  File "tools/train.py", line 183, in main
    meta=meta)
  File "/opt/ai/projects/github/mmdetection/mmdetection/mmdet/apis/train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/wwh/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 125, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/wwh/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 45, in train
    self.call_hook('before_train_epoch')
  File "/home/wwh/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 308, in call_hook
    getattr(hook, fn_name)(self)
  File "/opt/ai/projects/github/mmdetection/mmdetection/mmdet/datasets/utils.py", line 150, in before_train_epoch
    self._check_head(runner)
  File "/opt/ai/projects/github/mmdetection/mmdetection/mmdet/datasets/utils.py", line 137, in _check_head
    (f'The `num_classes` ({module.num_classes}) in '
AssertionError: The `num_classes` (1) in Shared2FCBBoxHead of MMDataParallel does not matches the length of `CLASSES` (4) in CocoDataset

Don't write it like this: ('person')

You must write the class name like this: ('person',)

If you have only one class, you must put a comma after the class name so that Python treats it as a one-element tuple rather than a plain string.
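The distinction matters because Python only creates a tuple when the comma is present; without it, the parentheses are just grouping and you get a plain string, whose length is its character count:

```python
wrong = ('person')    # no trailing comma: this is just the string 'person'
right = ('person',)   # trailing comma: a one-element tuple

print(type(wrong).__name__, len(wrong))  # str 6
print(type(right).__name__, len(right))  # tuple 1
```

With the first form, the length check compares num_classes against 6 (the number of characters in 'person'), so the assertion can never pass.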

Please post all your configuration files

model = dict(
    type='CascadeRCNN',
    pretrained='torchvision://resnet50',
    backbone=dict(type='ResNet', depth=50, num_stages=4, out_indices=(0, 1, 2, 3), frozen_stages=1, norm_cfg=dict(type='BN', requires_grad=True), norm_eval=True, style='pytorch'),
    neck=dict(type='FPN', in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5),
    rpn_head=dict(type='RPNHead', in_channels=256, feat_channels=256, anchor_generator=dict(type='AnchorGenerator', scales=[8], ratios=[0.5, 1.0, 2.0], strides=[4, 8, 16, 32, 64]), bbox_coder=dict(type='DeltaXYWHBBoxCoder', target_means=[.0, .0, .0, .0], target_stds=[1.0, 1.0, 1.0, 1.0]), loss_cls=dict(type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
    roi_head=dict(
        type='CascadeRoIHead',
        num_stages=3,
        stage_loss_weights=[1, 0.5, 0.25],
        bbox_roi_extractor=dict(type='SingleRoIExtractor', roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), out_channels=256, featmap_strides=[4, 8, 16, 32]),
        bbox_head=[
            dict(type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict(type='DeltaXYWHBBoxCoder', target_means=[0., 0., 0., 0.], target_stds=[0.1, 0.1, 0.2, 0.2]), reg_class_agnostic=True, loss_cls=dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
            dict(type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict(type='DeltaXYWHBBoxCoder', target_means=[0., 0., 0., 0.], target_stds=[0.05, 0.05, 0.1, 0.1]), reg_class_agnostic=True, loss_cls=dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
            dict(type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=3, bbox_coder=dict(type='DeltaXYWHBBoxCoder', target_means=[0., 0., 0., 0.], target_stds=[0.033, 0.033, 0.067, 0.067]), reg_class_agnostic=True, loss_cls=dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))]),

    # model training and testing settings

    train_cfg=dict(
        rpn=dict(assigner=dict(type='MaxIoUAssigner', pos_iou_thr=0.7, neg_iou_thr=0.3, min_pos_iou=0.3, match_low_quality=True, ignore_iof_thr=-1), sampler=dict(type='RandomSampler', num=256, pos_fraction=0.5, neg_pos_ub=-1, add_gt_as_proposals=False), allowed_border=0, pos_weight=-1, debug=False),
        rpn_proposal=dict(nms_pre=2000, max_per_img=2000, nms=dict(type='nms', iou_threshold=0.7), min_bbox_size=0),
        rcnn=[
            dict(assigner=dict(type='MaxIoUAssigner', pos_iou_thr=0.5, neg_iou_thr=0.5, min_pos_iou=0.5, match_low_quality=False, ignore_iof_thr=-1), sampler=dict(type='RandomSampler', num=512, pos_fraction=0.25, neg_pos_ub=-1, add_gt_as_proposals=True), pos_weight=-1, debug=False),
            dict(assigner=dict(type='MaxIoUAssigner', pos_iou_thr=0.6, neg_iou_thr=0.6, min_pos_iou=0.6, match_low_quality=False, ignore_iof_thr=-1), sampler=dict(type='RandomSampler', num=512, pos_fraction=0.25, neg_pos_ub=-1, add_gt_as_proposals=True), pos_weight=-1, debug=False),
            dict(assigner=dict(type='MaxIoUAssigner', pos_iou_thr=0.7, neg_iou_thr=0.7, min_pos_iou=0.7, match_low_quality=False, ignore_iof_thr=-1), sampler=dict(type='RandomSampler', num=512, pos_fraction=0.25, neg_pos_ub=-1, add_gt_as_proposals=True), pos_weight=-1, debug=False)]),
    test_cfg=dict(
        rpn=dict(nms_pre=1000, max_per_img=1000, nms=dict(type='nms', iou_threshold=0.7), min_bbox_size=0),
        rcnn=dict(score_thr=0.05, nms=dict(type='nms', iou_threshold=0.5), max_per_img=100)))

This is the content of cascade_rcnn_r50_fpn.py.

Please check your dataset's number of classes and class names in "mmdet/datasets/coco.py" and "mmdet/core/evaluation/class_names.py". You must replace the default class names with your own in both files.
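As a rough sketch of why that edit works (simplified stand-ins for the real mmdet classes, not the actual source): the class names live in a class attribute, so the hook's length check sees whatever tuple you put there.

```python
# Simplified stand-in for mmdet's dataset hierarchy (not the real source).
# CLASSES is a class attribute, so redefining it in CocoDataset changes what
# len(dataset.CLASSES) reports when NumClassCheckHook runs its check.
class CustomDataset:
    CLASSES = None

class CocoDataset(CustomDataset):
    # mmdet's default here is the 80 COCO category names; replace them with
    # your own, keeping the tuple form (trailing comma if only one class).
    CLASSES = ('person', 'car', 'dog')

print(len(CocoDataset.CLASSES))  # 3 -> must equal num_classes in the bbox head
```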

Have you solved the problem? AssertionError: The num_classes (5) in Shared2FCBBoxHead of MMDataParallel does not matches the length of CLASSES (80) in CocoDataset

Hi, I solved it. Before you build the datasets, you must run this:

import mmdet
mmdet.datasets.coco.CocoDataset.CLASSES = ('class_1', 'class2')

You can adjust it to match the dataset you have 😃

It seems that only num_classes was modified, but the class names in the dataset were not. The config should also specify classes for the dataset.
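A minimal sketch of that (the per-dataset `classes` option is how MMDetection 2.x configs override the default 80 COCO names; the four class names here are hypothetical placeholders):

```python
# Keep one source of truth so num_classes and len(classes) cannot drift apart.
classes = ('cat', 'dog', 'bird', 'fish')  # hypothetical 4-class dataset

data = dict(
    train=dict(
        type='CocoDataset',
        classes=classes,  # overrides the default 80 COCO class names
        ann_file='data/coco/annotations/instances_train2017.json',
        img_prefix='data/coco/train2017/'),
    val=dict(
        type='CocoDataset',
        classes=classes,
        ann_file='data/coco/annotations/instances_val2017.json',
        img_prefix='data/coco/val2017/'))

model = dict(
    roi_head=dict(
        bbox_head=dict(
            type='Shared2FCBBoxHead',
            num_classes=len(classes))))  # 4, matching len(CLASSES)
```

With both derived from the same tuple, the NumClassCheckHook comparison (num_classes vs. len(CLASSES)) passes by construction.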