server: Unclear torch model failure message

Description The above message was observed in the output log. I'm wondering what is causing it and how to fix it.

Triton Information What version of Triton are you using? nvcr.io/nvidia/tritonserver:20.08-py3

Are you using the Triton container or did you build it yourself? container

To Reproduce Not really sure how to reproduce it.

Describe the models (framework, inputs, outputs), ideally include the model configuration file (if using an ensemble include the model configuration file for that as well).

Framework: pytorch_libtorch

config.pbtxt:

platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: 3
    dims: -1
    dims: -1
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: 1
    dims: 100
  },
  {
    name: "output__1"
    data_type: TYPE_FP32
    dims: 1
    dims: 100
  },
  {
    name: "output__2"
    data_type: TYPE_FP32
    dims: 1
    dims: 100
    dims: 4
  }
]
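The config above declares a FP32 input of shape 3×H×W (the `-1` dims are variable) and three fixed-shape outputs. A minimal sketch of exporting a TorchScript model that matches the input declaration — `ToyModel` here is hypothetical, standing in for the real network, which is not shown in the issue:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real network; it accepts a
# 3 x H x W FP32 tensor, matching dims: 3, -1, -1 above.
class ToyModel(nn.Module):
    def forward(self, x):
        return x * 2.0

model = ToyModel().eval()
example = torch.rand(3, 8, 8)  # example input for tracing

# Trace and save in the layout Triton expects:
# <model-repository>/<model-name>/1/model.pt
traced = torch.jit.trace(model, example)
traced.save("model.pt")
```

Note that a model traced on the CPU carries CPU device placements in the saved TorchScript, which is one common reason CPU references show up in the server log for a model intended to run on the GPU.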

Expected behavior There should be no reference to the CPU in the log for this model.

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 15 (8 by maintainers)

Most upvoted comments

Did you use the --gpus=1 flag when running the container?
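The suggestion above can be applied as follows — a sketch of launching the container with GPU access enabled (the model-repository path and port mappings are illustrative):

```shell
# Expose one GPU to the container. Without --gpus, the container has
# no GPU devices and Triton falls back to CPU-only execution, which
# can surface as CPU references or load failures for GPU models.
docker run --gpus=1 --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:20.08-py3 \
  tritonserver --model-repository=/models
```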