doods2: Deepstack model error

Love the new rewrite! I’d been using v1 for some time and just moved over to the v2 container recently. Great stuff!

The recent addition of the deepstack models does not seem to be working. I see in issue #28, after it was closed, a report that trying to get a deepstack model working resulted in an error. I’m getting the same error with two different .pt models.

Error:

2022-03-12 21:30:35,452 - uvicorn.access - INFO - 172.17.0.1:36888 - "POST /detect HTTP/1.1" 500
2022-03-12 21:30:35,452 - uvicorn.error - ERROR - Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/fastapi/applications.py", line 208, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/applications.py", line 112, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/base.py", line 57, in __call__
    task_group.cancel_scope.cancel()
  File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 574, in __aexit__
    raise exceptions[0]
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/base.py", line 30, in coro
    await self.app(scope, request.receive, send_stream.send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/exceptions.py", line 82, in __call__
    raise exc
  File "/usr/local/lib/python3.8/dist-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 656, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 259, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 61, in app
    response = await func(request)
  File "/usr/local/lib/python3.8/dist-packages/fastapi/routing.py", line 226, in app
    raw_response = await run_endpoint_function(
  File "/usr/local/lib/python3.8/dist-packages/fastapi/routing.py", line 159, in run_endpoint_function
    return await dependant.call(**values)
  File "/opt/doods/api.py", line 40, in detect
    detect_response = self.doods.detect(detect_request)
  File "/opt/doods/doods.py", line 127, in detect
    ret = detector.detect(image)
  File "/opt/doods/detectors/deepstack.py", line 45, in detect
    results = self.torch_model(image, augment=False)[0]
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/.cache/torch/hub/ultralytics_yolov5_master/models/yolo.py", line 126, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "/root/.cache/torch/hub/ultralytics_yolov5_master/models/yolo.py", line 149, in _forward_once
    x = m(x)  # run
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/.cache/torch/hub/ultralytics_yolov5_master/models/yolo.py", line 61, in forward
    if self.inplace:
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1177, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Detect' object has no attribute 'inplace'
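
For what it’s worth, this looks like the known yolov5 backwards-compatibility problem: a checkpoint trained against an older copy of models/yolo.py has no inplace attribute on its Detect head, while the current hub code (yolo.py line 61 in the traceback) reads self.inplace unconditionally. Below is a rough, untested sketch of the kind of shim newer yolov5 loaders apply after loading a checkpoint; the hub cache path is taken from the traceback and the model path from my config, the rest is just a guess at what deepstack.py might be skipping:

    import sys
    import torch

    # A deepstack .pt is a plain pickled yolov5 training checkpoint, so the yolov5
    # source tree must be importable for torch.load() to rebuild the Model object.
    sys.path.insert(0, "/root/.cache/torch/hub/ultralytics_yolov5_master")

    ckpt = torch.load("external/models/dark.pt", map_location="cpu")
    model = ckpt["model"].float().eval()

    # Checkpoints saved with an older yolov5 lack the `inplace` attribute on the
    # Detect head, but the current forward() does `if self.inplace:`. Newer yolov5
    # loaders (attempt_load) patch the attribute in after loading; doing the same
    # here avoids the AttributeError above.
    for m in model.modules():
        if m.__class__.__name__ == "Detect" and not hasattr(m, "inplace"):
            m.inplace = True

If doods loads the raw checkpoint itself instead of going through the attempt_load helper in yolov5 (which already contains that compatibility loop), that would also explain why the same .pt files work fine in the deepstack service but not here.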

Both of the models work with the deepstack service itself, and doods properly reports the labels embedded in the models:

    {
      "name": "dark",
      "type": "deepstack",
      "model": "external/models/dark.pt",
      "labels": [
        "Bicycle",
        "Boat",
        "Bottle",
        "Bus",
        "Car",
        "Cat",
        "Chair",
        "Cup",
        "Dog",
        "Motorbike",
        "People",
        "Table"
      ],
      "width": 0,
      "height": 0
    },
    {
      "name": "combined",
      "type": "deepstack",
      "model": "external/models/combined.pt",
      "labels": [
        "person",
        "bicycle",
        "car",
        "motorcycle",
        "bus",
        "truck",
        "bird",
        "cat",
        "dog",
        "horse",
        "sheep",
        "cow",
        "bear",
        "deer",
        "rabbit",
        "raccoon",
        "fox",
        "coyote",
        "possum",
        "skunk",
        "squirrel",
        "pig",
        ""
      ],
      "width": 0,
      "height": 0
    }

So the models seem to be intact.

My config for the two models is minimal:

    - name: dark
      type: deepstack
      modelFile: external/models/dark.pt
    - name: combined
      type: deepstack
      modelFile: external/models/combined.pt

Do I need more than that, or is there some issue with the deepstack integration at the moment?

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 42 (10 by maintainers)

Most upvoted comments

Soooo this is interesting… It appears to be some sort of collision between having the pytorch detector and the deepstack detector enabled at the same time. If I disable pytorch, the logo detector works fine. That gives me something reproducible to look into at least.
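
One mechanism that could produce a collision like that is plain Python module caching: whichever detector imports models.yolo first pins that copy of the class definitions for the whole process, and a detector loaded later that expects a different yolov5 revision silently gets the first one. A tiny illustration (the paths are made up and not the actual doods layout):

    import sys

    sys.path.insert(0, "/opt/yolov5_old")  # hypothetical tree the first detector uses
    import models.yolo                     # resolved against /opt/yolov5_old

    sys.path.insert(0, "/opt/yolov5_new")  # hypothetical tree the second detector expects
    import models.yolo                     # cache hit: sys.modules still holds the old copy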

@ozett make sure you are pulling the latest image. I made a change that should fix the NoneType error you see. @JustinGeorgi it should automatically reorder the detectors, so you don’t need to worry about the order they appear in your config.

Okay, I think I fixed part of the issue. It was missing some of the deepstack trainer files it needed to start the dark model. I pushed an image if you want to try that. The issue still remains if you also load the latest yolo model; it will print the other error about mismatched tensor sizes. I am still trying to figure that one out.

I can confirm that deepstack models fail to load if I remove pytorch from my configuration:

2022-03-20 16:41:30,151 - doods.doods - ERROR - Could not create detector deepstack/dark: No module named 'models.yolo'
2022-03-20 16:41:30,151 - doods.doods - ERROR - Could not create detector deepstack/combined: No module named 'models.yolo'
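
That failure fits the same picture: the deepstack .pt files are pickled yolov5 checkpoints, so unpickling them needs the repo's models package to be importable. When a pytorch/yolov5 detector is configured, it presumably loads yolov5 through torch.hub, which imports those modules (and they stay cached in sys.modules for the rest of the process); with it removed, nothing has imported models.yolo yet and torch.load fails. A minimal sketch of forcing the import yourself, again reusing the hub cache path from the traceback rather than anything doods actually does:

    import sys
    import torch

    # Make the yolov5 checkout's `models` package importable before unpickling, so
    # the checkpoint's Model/Detect objects can be reconstructed even when no other
    # detector has already pulled in yolov5.
    sys.path.insert(0, "/root/.cache/torch/hub/ultralytics_yolov5_master")

    ckpt = torch.load("external/models/combined.pt", map_location="cpu")
    print(type(ckpt["model"]))  # <class 'models.yolo.Model'> once the import resolves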