TensorRT: 🐛 [Bug] Cannot convert any detection model from torchvision nor detectron

Bug Description

Hi, I'm having a hard time exporting RetinaNet or Mask R-CNN. I can successfully trace models from both torchvision and detectron, but compilation fails. Since others have also reported problems with torchvision models, I would be happy if you could resolve this problem once and for all. The only object detection model that works is SSD from the NVIDIA torch hub, but supporting standard torchvision models would be a huge win.

To Reproduce

Steps to reproduce the behavior (note that tracing itself succeeds; the failure occurs during compilation):

import torch
import torchvision
import torch_tensorrt

model = torchvision.models.detection.retinanet_resnet50_fpn(num_classes=1, pretrained=False)
model = model.eval().to("cuda")

traced_model = torch.jit.trace(model, torch.randn((1, 3, 640, 640)).to("cuda"), strict=False)
torch.jit.save(traced_model, "resnet_50_traced.jit.pt")

trt_model_fp32 = torch_tensorrt.compile(traced_model, **{
    "inputs": [torch.ones((1, 3, 640, 640), dtype=torch.float32).cuda()],
    "enabled_precisions": {torch.float32}, # Run with FP32
    "workspace_size": 1 << 22
})

Error: “”" Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions “”"

Expected behavior

I expect it to work, like magic 😃 !

Environment

Standard Docker build from your Dockerfile; the base PyTorch image is 21.10.

Additional context

Do you think you could provide us with a tutorial on how to deploy at least one detection model from torchvision or detectron, end to end? Thank you!

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 4
  • Comments: 18 (1 by maintainers)

Most upvoted comments

@narendasan any news on this?

Yes, we are working on an end to end demo for detection models