mmdetection: AttributeError: 'COCO' object has no attribute 'get_cat_ids'
Thanks for your error report and we appreciate it a lot.
Checklist
- I have searched related issues but cannot get the expected help.
Describe the bug I was trying to train SSD300 on a custom dataset with COCO-style annotations on my local system and encountered this error during training.
Reproduction
- What command or script did you run?
python tools/train.py configs/custom_training/ssd300_coco.py
- Did you make any modifications on the code or config? Did you understand what you have modified?
- I copied ssd300_coco.py directly and only modified the data and annotation paths
- What dataset did you use?
- A subset of open images
Environment
- Please run
python mmdet/utils/collect_env.py to collect necessary environment information and paste it here.
sys.platform: linux
Python: 3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 10.1, V10.1.243
GPU 0: GeForce GTX 1070
GCC: gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0
PyTorch: 1.5.0
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
TorchVision: 0.6.0
OpenCV: 4.2.0
MMCV: 0.5.9
MMDetection: 2.0.0+8fc0542
MMDetection Compiler: GCC 9.3
MMDetection CUDA Compiler: 10.1
- You may add additional information that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source]
- Other environment variables that may be related (such as
$PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)
Error traceback If applicable, paste the error traceback here.
...
...
load_from = None
resume_from = None
workflow = [('train', 1)]
work_dir = './work_dirs/ssd300_coco'
gpu_ids = range(0, 1)
2020-06-06 00:18:26,654 - root - INFO - load model from: open-mmlab://vgg16_caffe
2020-06-06 00:18:26,691 - mmdet - WARNING - The model and loaded state dict do not match exactly
missing keys in source state_dict: extra.0.weight, extra.0.bias, extra.1.weight, extra.1.bias, extra.2.weight, extra.2.bias, extra.3.weight, extra.3.bias, extra.4.weight, extra.4.bias, extra.5.weight, extra.5.bias, extra.6.weight, extra.6.bias, extra.7.weight, extra.7.bias, l2_norm.weight
loading annotations into memory...
Done (t=0.09s)
creating index...
index created!
Traceback (most recent call last):
File "tools/train.py", line 161, in <module>
main()
File "tools/train.py", line 136, in main
datasets = [build_dataset(cfg.data.train)]
File "/home/yyr/Documents/github/mmdetection/mmdet/datasets/builder.py", line 56, in build_dataset
build_dataset(cfg['dataset'], default_args), cfg['times'])
File "/home/yyr/Documents/github/mmdetection/mmdet/datasets/builder.py", line 63, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "/home/yyr/anaconda3/lib/python3.7/site-packages/mmcv/utils/registry.py", line 168, in build_from_cfg
return obj_cls(**args)
File "/home/yyr/Documents/github/mmdetection/mmdet/datasets/custom.py", line 71, in __init__
self.data_infos = self.load_annotations(self.ann_file)
File "/home/yyr/Documents/github/mmdetection/mmdet/datasets/coco.py", line 38, in load_annotations
self.cat_ids = self.coco.get_cat_ids(cat_names=self.CLASSES)
AttributeError: 'COCO' object has no attribute 'get_cat_ids'
Bug fix If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 4
- Comments: 16 (11 by maintainers)
Commits related to this issue
- fix bug in py3 infer (#2913) — committed to liuhuiCNN/mmdetection by jerrywgz 5 years ago
- Add refit_full logging (#2913) — committed to FANGAreNotGnu/mmdetection by Innixma a year ago
I think the problem comes from this commit - https://github.com/open-mmlab/mmdetection/pull/2088, which changed the API names for COCO in the coco.py file. You can either:
- revert coco.py to the previous version via git checkout 206107 -- mmdet/datasets/coco.py, or
- install the open-mmlab fork of pycocotools via pip install -U "git+https://github.com/open-mmlab/cocoapi.git#subdirectory=pycocotools"

Same question here. I think pycocotools was updated to some higher version. My solution is to change: get_cat_ids → getCatIds, get_img_ids → getImgIds, … If you are lazy to change them one by one, copy from here (I provide the code under the variable CLASSES = ('…', '…', '…',))
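The rename-them-one-by-one fix above can also be automated. Below is a minimal sketch of that idea: a helper that attaches a snake_case alias (get_cat_ids) for each camelCase method (getCatIds) on a class. The helper and the FakeCOCO stand-in are hypothetical, not part of mmdetection or pycocotools; a stand-in is used so the example runs without pycocotools installed.

```python
import re


def add_snake_case_aliases(cls):
    """Attach a get_cat_ids-style alias for every getCatIds-style method."""
    for name in list(vars(cls)):  # snapshot, since we mutate the class
        if name.startswith("_") or not callable(getattr(cls, name)):
            continue
        # getCatIds -> get_cat_ids: insert "_" before interior capitals, lowercase
        snake = re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()
        if snake != name and not hasattr(cls, snake):
            setattr(cls, snake, getattr(cls, name))
    return cls


# Stand-in for pycocotools.coco.COCO exposing only the camelCase API.
class FakeCOCO:
    def getCatIds(self, catNms=None):
        return [1, 2, 3]


add_snake_case_aliases(FakeCOCO)
print(FakeCOCO().get_cat_ids(catNms=["person"]))  # prints [1, 2, 3]
```

Note one limitation: mmdetection calls get_cat_ids(cat_names=...), while the official getCatIds takes catNms=, so aliasing the method name alone does not rename keyword arguments; the open-mmlab fork handles both.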
We have taken back control of the name "pycocotools" on PyPI. The package is now updated to be the same as the GitHub version.
Can’t mmlab keep aliases to the old function names in their fork? That way people who use the git version of the official coco api (which is up to date with numpy changes) don’t have to change the coco file in mmdet?
I personally don’t think forcing people to use your fork of the coco api is the way to go.
@Mxbonn We do not want to keep our own fork at all if the official one was well maintained.
Our fork contains both the original and the snake_case method names. It solves the following problems, and we think the benefits outweigh the drawbacks.
(For example, the PyPI version pins dependencies such as numpy==xxx, matplotlib==xxxx that are unnecessary, and the authors are not responding to issues.) When pycocotools (either the PyPI or official GitHub version) already exists in the environment, running pip install "git+https://github.com/open-mmlab/cocoapi.git#subdirectory=pycocotools" may not work. This issue should already be fixed in https://github.com/open-mmlab/cocoapi/pull/5.
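The dual-name approach the maintainers describe above can be sketched in plain Python: the snake_case method is a thin wrapper that renames the keyword arguments and delegates to the camelCase original, so both spellings keep working. DualNameCOCO and its trivial method body are illustrative stand-ins, not the fork's actual source.

```python
class DualNameCOCO:
    """Illustrative stand-in for a COCO class exposing both naming styles."""

    def getCatIds(self, catNms=None, supNms=None, catIds=None):
        # Original camelCase entry point (trivial body for the example).
        return sorted(catIds or [])

    def get_cat_ids(self, cat_names=None, sup_names=None, cat_ids=None):
        # snake_case wrapper: maps the new keyword names onto the old ones.
        return self.getCatIds(catNms=cat_names, supNms=sup_names, catIds=cat_ids)


coco = DualNameCOCO()
# Both spellings work, including the renamed keyword arguments.
print(coco.getCatIds(catIds=[3, 1]), coco.get_cat_ids(cat_ids=[3, 1]))  # [1, 3] [1, 3]
```

With this shape, callers written against either API (mmdetection's get_cat_ids(cat_names=...) or the official getCatIds(catNms=...)) keep working against the same object.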