torchmetrics: MeanAveragePrecision is slow

🐛 Bug

Computing mean average precision is extremely slow in torchmetrics versions newer than 0.6.0.

To Reproduce

I noticed that my training times have almost doubled since I upgraded torchmetrics from 0.6.0, because validation using the MAP / MeanAveragePrecision metric is so much slower. During the validation steps I call update(), and at the end of a validation epoch I call compute() on the MeanAveragePrecision object.
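For context, a minimal sketch of that usage pattern with dummy data (dummy_preds and dummy_targets stand in for the model outputs and ground truths, they are not part of my actual code):

import torch
from torchmetrics.detection import MeanAveragePrecision  # named MAP in older versions

def dummy_preds(n):
    boxes = torch.rand(n, 4) * 100
    boxes[:, 2:] += boxes[:, :2]  # valid xyxy boxes
    return {"boxes": boxes, "scores": torch.rand(n), "labels": torch.randint(0, 5, (n,))}

def dummy_targets(n):
    boxes = torch.rand(n, 4) * 100
    boxes[:, 2:] += boxes[:, :2]
    return {"boxes": boxes, "labels": torch.randint(0, 5, (n,))}

metric = MeanAveragePrecision()

for _ in range(10):                                # one update() per validation step
    metric.update([dummy_preds(20)], [dummy_targets(5)])

results = metric.compute()                         # once at the end of the validation epoch
print(results["map"])
metric.reset()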

I measured the time spent inside compute() with different torchmetrics versions:

  • torchmetrics 0.6.0: 12 s
  • torchmetrics 0.6.1: didn’t work for some reason
  • torchmetrics 0.6.2: 9.5 min
  • torchmetrics 0.7.0: 9.4 min
  • torchmetrics 0.7.1: 1.9 min
  • torchmetrics 0.7.2: 2.0 min
  • torchmetrics 0.7.3: 1.9 min
  • torchmetrics 0.8.0: 4.5 min
  • torchmetrics 0.8.1: 4.6 min
  • torchmetrics 0.8.2: 4.6 min

It seems that after 0.6.0 the time to run compute() increased from about 10 seconds to 9.5 minutes. In 0.7.1 it improved and took 2 minutes. Then in 0.8.0 things got worse again and compute() took 4.5 minutes. This is more than 20x slower than with 0.6.0 and, for example, when training for 100 epochs it adds another 7 hours to the training time.

Environment

  • TorchMetrics version (and how you installed TM, e.g. conda, pip, build from source): 0.6.0 through 0.8.2, installed using pip
  • Python & PyTorch Version (e.g., 1.0): Python 3.8.11, PyTorch 1.10.0
  • Any other relevant information such as OS (e.g., Linux): Linux


Most upvoted comments

Hi there,

I just fixed it. 😃 PR coming your way.

Run 1 - GPU:

Total time in init: 1.0306465921457857
Total time in update: 0.0780688391532749
Total time in compute: 241.5502170859836

Run 2 - GPU:

Total time in init: 1.086872072191909
Total time in update: 0.07920253812335432
Total time in compute: 2.4084888100624084

Here’s a rough benchmark script:

import time
import torch

try:
    from torchmetrics.detection import MeanAveragePrecision
except ImportError:
    from torchmetrics.detection import MAP
    MeanAveragePrecision = MAP

total_time = dict()

class UpdateTime:
    """Context manager that accumulates wall-clock time per named section."""

    def __init__(self, name):
        self._name = name

    def __enter__(self):
        self._start_time = time.perf_counter()

    def __exit__(self, exc_type, exc_val, exc_tb):
        end_time = time.perf_counter()
        if self._name in total_time:
            total_time[self._name] += end_time - self._start_time
        else:
            total_time[self._name] = end_time - self._start_time
        return False  # don't suppress exceptions raised inside the block

def generate(n):
    # random detections/targets: valid xyxy boxes, 10 classes, random confidence scores
    boxes = torch.rand(n, 4) * 1000
    boxes[:, 2:] += boxes[:, :2]  # ensure x2 >= x1 and y2 >= y1
    labels = torch.randint(0, 10, (n,))
    scores = torch.rand(n)
    return {"boxes": boxes, "labels": labels, "scores": scores}

with UpdateTime("init"):
    map = MeanAveragePrecision()

for batch_idx in range(100):
    with UpdateTime("update"):
        detections = [generate(100) for _ in range(10)]
        targets = [generate(10) for _ in range(10)]
        map.update(detections, targets)

with UpdateTime("compute"):
    map.compute()

for name, elapsed in total_time.items():
    print(f"Total time in {name}: {elapsed}")

My results:

$ pip install torchmetrics==0.6.0
$ ./map_benchmark.py
Total time in init: 1.5747292000014568
Total time in update: 0.1246876999939559
Total time in compute: 6.245588799996767
$ pip install torchmetrics==0.8.2
$ ./map_benchmark.py
Total time in init: 0.0003580999909900129
Total time in update: 0.08986139997432474
Total time in compute: 151.69804470000963

@DataAndi, I was having the same problem with 0.8.2 until I found this thread, then I downgraded to 0.6.0, since mAP is the only thing I use from torchmetrics. I hope the calculation in 0.6.0 is correct, because it is much faster.

Does anyone else have very long compute times for metric.compute() (MeanAveragePrecision)? Using the above-mentioned script to evaluate the computation time, I get:

  • 0.11.0: 420 s
  • 0.11.1: 394 s
  • 0.11.2: 382 s
  • 0.11.3: 371 s
  • 0.11.4: 396 s

Or is there maybe a way to compute it faster with CUDA or something?
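Not a fix, but for anyone who wants to experiment with CUDA: torchmetrics metrics behave like nn.Modules and can be moved to a device, as long as the inputs live on the same device. A minimal sketch (I have not verified that this actually makes compute() faster):

import torch

try:
    from torchmetrics.detection import MeanAveragePrecision
except ImportError:
    from torchmetrics.detection import MAP
    MeanAveragePrecision = MAP

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def generate(n):
    # random but valid xyxy boxes, same scheme as the benchmark script above
    boxes = torch.rand(n, 4, device=device) * 1000
    boxes[:, 2:] += boxes[:, :2]
    labels = torch.randint(0, 10, (n,), device=device)
    scores = torch.rand(n, device=device)
    return {"boxes": boxes, "labels": labels, "scores": scores}

metric = MeanAveragePrecision().to(device)  # move the metric states to the GPU
metric.update([generate(100)], [generate(10)])
print(metric.compute())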

@stancld Hmm. The data’s not public. I wonder if you could debug it using random boxes, like in the speed test. I modified it to make the task a little bit easier and to make sure that the results are deterministic:

import torch
from torchmetrics.detection import MeanAveragePrecision

torch.manual_seed(1)

def generate(n):
    boxes = torch.rand(n, 4) * 10
    boxes[:, 2:] += boxes[:, :2] + 10
    labels = torch.randint(0, 2, (n,))
    scores = torch.rand(n)
    return {"boxes": boxes, "labels": labels, "scores": scores}

batches = []
for _ in range(100):
    detections = [generate(100) for _ in range(10)]
    targets = [generate(10) for _ in range(10)]
    batches.append((detections, targets))

map = MeanAveragePrecision()
for detections, targets in batches:
    map.update(detections, targets)
print(map.compute())

With torchmetrics 0.10.0 I get:

map: 0.1534, map_50: 0.5260, map_75: 0.0336, map_small: 0.1534,
mar_1: 0.0449, mar_10: 0.3039, mar_100: 0.5445, mar_small: 0.5445

With the code from your PR I get:

map: 0.2222, map_50: 0.7135, map_75: 0.0594, map_small: 0.2222,
mar_1: 0.0449, mar_10: 0.4453, mar_100: 2.2028, mar_small: 2.2028

Some of the recall values are also > 1, which should not be possible, since recall is bounded by 1.

@24hours I think the way to go here would be to first try and clean up the code before we decide to dispatch to C++

@tkupek yeah, I am more and more moving in that direction. I would actually reintroduce pycocotools as a required dependency of MAP, because we are seeing multiple issues that indicate something is wrong with our implementation.

As the primary maintainer of TM, I do not have the expertise in MAP required to solve these issues, and the metric is therefore unmaintainable at the moment. Relying on contributions from experts does not seem to be the solution either (because you have other things to do). I would therefore much rather accept defeat and revert to something where all the details of the calculation are handled by pycocotools and I only need to worry about the user interface.

One consequence of this is that since v0.6 we have introduced the iou_type argument. Is it possible to convert the input when using iou_type="segm" to iou_type="bbox" (a possible conversion is sketched after the list below), such that we can rely on pycocotools to do the calculation in both cases? Otherwise I would propose that we:

  1. Reintroduce the implementation from v0.6 as BBoxMeanAveragePrecision, corresponding to iou_type="bbox"
  2. Refactor the current implementation into a new metric SegmMeanAveragePrecision, corresponding to iou_type="segm"
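If such a conversion is acceptable, one possible way to get from iou_type="segm" inputs to iou_type="bbox" inputs is torchvision's masks_to_boxes. A minimal sketch, assuming instance masks are given as an (N, H, W) boolean tensor; note that IoU would then be computed on boxes rather than masks, so this only approximates the segm behaviour:

import torch
from torchvision.ops import masks_to_boxes

# two dummy instance masks of shape (N, H, W)
masks = torch.zeros(2, 64, 64, dtype=torch.bool)
masks[0, 10:20, 10:30] = True
masks[1, 40:60, 5:25] = True

# tight xyxy boxes around each mask, usable with iou_type="bbox"
boxes = masks_to_boxes(masks)
print(boxes)  # one xyxy box per mask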

Pinging @Borda, @justusschock, @senarvi, @wilderrodrigues for opinions.

@Borda we can either try to improve our own version to get the computational time down (if that is possible), or offer users the option to use pycocotools as a backend.

@SkafteNicki How about this solution from your first response? Maybe we can make an optional pycocotools backend available so users can manually switch without downgrading to 0.6.0?
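For anyone who needs the speed today, this is roughly the pycocotools call that such a backend would wrap. A minimal sketch, assuming ground truths and detections are already serialized in COCO JSON format (the file names here are placeholders):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# placeholder paths: COCO-format ground-truth annotations and detection results
coco_gt = COCO("instances_val.json")
coco_dt = coco_gt.loadRes("detections.json")

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints the standard COCO AP/AR table
# evaluator.stats[0] is mAP@[0.5:0.95]

A backend option in torchmetrics would mostly need to convert the tensors passed to update() into these two COCO objects.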

@heitorrapela @ckyrkou I am very happy about your interest, and we are trying to improve, but this is quite challenging, so any help would be very welcome… see some ongoing PRs: #1389 #1330

The implementation in this repo is quite fast if you want to look into it. https://github.com/MathGaron/mean_average_precision


@ckyrkou yes, they just changed the path and the name definition. You should use it as follows:

# Handle the different names/paths of the mAP metric across torchmetrics versions
try:
    from torchmetrics.detection import MeanAveragePrecision
except ImportError:
    from torchmetrics.detection import MAP
    MeanAveragePrecision = MAP

my_map = MeanAveragePrecision()

@ckyrkou I think there is no problem, at least with using 0.6.0. The main issue is compatibility with other libraries: I tried to upgrade libraries like torchvision and torch to their latest versions, but they are not compatible with torchmetrics 0.6.0, so I will keep the versions I am using for now. I don’t think they will remove 0.6.0 👍🏻

I have just installed 0.6.0 to try it out. When I do

from torchmetrics.detection.mean_ap import MeanAveragePrecision

I get an error that mean_ap does not exist. I guess things changed between versions. Any idea how this used to be used?
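Based on the try/except snippet earlier in this thread, the class was still called MAP in 0.6.0, so on that version the import should look roughly like this:

from torchmetrics.detection import MAP

metric = MAP()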