coremltools: Error converting from PyTorch to CoreML

I’m trying to convert a UNet model from PyTorch to Core ML and I’m getting the following error:

Traceback (most recent call last):
  File "convert_coreml.py", line 24, in <module>
    ctModel = ct.convert(trace,
  File "C:\Miniconda3\envs\lines\lib\site-packages\coremltools\converters\_converters_entry.py", line 292, in convert
    proto_spec = _convert(
  File "C:\Miniconda3\envs\lines\lib\site-packages\coremltools\converters\mil\converter.py", line 120, in _convert
    prog = frontend_converter(model, **kwargs)
  File "C:\Miniconda3\envs\lines\lib\site-packages\coremltools\converters\mil\converter.py", line 62, in __call__
    return load(*args, **kwargs)
  File "C:\Miniconda3\envs\lines\lib\site-packages\coremltools\converters\mil\frontend\torch\load.py", line 73, in load
    converter = TorchConverter(torchscript, inputs, outputs, cut_at_symbols)
  File "C:\Miniconda3\envs\lines\lib\site-packages\coremltools\converters\mil\frontend\torch\converter.py", line 140, in __init__
    raw_graph, params_dict = self._expand_and_optimize_ir(self.torchscript)
  File "C:\Miniconda3\envs\lines\lib\site-packages\coremltools\converters\mil\frontend\torch\converter.py", line 354, in _expand_and_optimize_ir
    _torch._C._jit_pass_canonicalize_ops(graph)
AttributeError: module 'torch._C' has no attribute '_jit_pass_canonicalize_ops'
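
A quick way to check whether the installed torch build still exposes the private JIT pass that coremltools 4.0b1 calls (nightly builds can remove such internals; pinning to a stable torch release is a hedged guess, not a confirmed fix):

import torch

# Hedged diagnostic: coremltools 4.0b1 calls this private pass directly, so
# if this prints False the installed torch build (e.g. a nightly) does not
# provide it and conversion fails with the AttributeError above.
print(hasattr(torch._C, "_jit_pass_canonicalize_ops"))
print(torch.__version__)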

I’m using PyTorch nightly and coremltools 4.0b1 on Windows. Here’s a simple script to reproduce it:

import torch
import coremltools as ct

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
    in_channels=3, out_channels=1, init_features=32, pretrained=True)
model = model.to(device)

model.eval()
dummy = torch.randn(1, 3, 512, 512).to(device)
trace = torch.jit.trace(model, dummy)

ctModel = ct.convert(trace,
                     inputs=[ct.ImageType(name="input", shape=dummy.shape)])
# Commented out because coremltools rejects outputs= for PyTorch models:
# outputs=[ct.ImageType(name="output", shape=ct.Shape(shape=(1, 512, 512)))]

ctModel.save('C:\\unet.mlmodel')
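
Side note: a hedged variant of the trace step that runs on CPU, in case tracing on CUDA bakes device-specific behaviour into the TorchScript graph (an assumption, not a confirmed cause of the error):

# Hedged variant: move the model and example input to CPU before tracing so
# the captured graph contains no CUDA-specific behaviour.
model_cpu = model.to("cpu").eval()
trace = torch.jit.trace(model_cpu, dummy.to("cpu"))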

Any ideas why this code gets that error? There are no special layers, and the UNet ops are pretty standard. Also, if I try to set the outputs parameter I get this exception: ValueError: outputs must not be specified for PyTorch. Any idea when this will be enabled? I appreciate any help.
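
Until outputs is supported for PyTorch models, one workaround (a sketch, assuming the converted model has a single output; "output" is just the desired new name) is to rename the auto-generated output on the spec after conversion:

import coremltools as ct

# Hedged sketch: rename the auto-generated output name after conversion,
# assuming a single-output model.
spec = ctModel.get_spec()
old_name = spec.description.output[0].name
ct.utils.rename_feature(spec, old_name, "output")
ctModel = ct.models.MLModel(spec)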

Most upvoted comments

Hi @DawerG

So here’s the output after using the current master of coremltools:

Converting Frontend ==> MIL Ops:  24%|██████████▋          | 183/750 [00:00<00:01, 435.77 ops/s]
WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops:  29%|████████████▋        | 216/750 [00:00<00:01, 394.56 ops/s]
WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops:  41%|██████████████████▏  | 310/750 [00:00<00:01, 304.28 ops/s]
WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops:  45%|███████████████████▉ | 339/750 [00:00<00:01, 281.43 ops/s]
WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops:  54%|███████████████████████▉ | 408/750 [00:01<00:01, 315.19 ops/s]
Traceback (most recent call last):
  File "convert_coreml.py", line 25, in <module>
    ctModel = ct.convert(trace,
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\_converters_entry.py", line 292, in convert
    proto_spec = _convert(
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\converter.py", line 120, in _convert
    prog = frontend_converter(model, **kwargs)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\converter.py", line 62, in __call__
    return load(*args, **kwargs)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\frontend\torch\load.py", line 86, in load
    raise e
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\frontend\torch\load.py", line 76, in load
    prog = converter.convert()
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\frontend\torch\converter.py", line 302, in convert
    convert_nodes(self.context, self.graph)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 55, in convert_nodes
    _add_op(context, node)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 301, in add
    add_node = mb.add(x=add_inputs[0], y=add_inputs[1], name=node.name)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\mil\ops\registry.py", line 62, in add_op
    return cls._add_op(op_cls, **kwargs)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\mil\builder.py", line 191, in _add_op
    new_op.type_value_inference()
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\mil\operation.py", line 181, in type_value_inference
    output_types = self.type_inference()
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\mil\ops\defs\elementwise_binary.py", line 43, in type_inference
    ret_shape = broadcast_shapes(shapea, shapeb)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\mil\ops\defs\_utils.py", line 42, in broadcast_shapes
    raise ValueError(
ValueError: Incompatible dim 2 in shapes (1, 128, -128, -128) vs. (1, 128, 128, 128)

I used float16 where possible during training to enable larger batch sizes, but I don’t know why the converter is trying to store float16 values in an i8 type during conversion. Also, why is it producing these incompatible shapes? Thanks.
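
A hedged guess on the shapes: 128 stored into a signed 8-bit integer wraps to -128, which matches the -128 dims in the error, so the lossy float16 → i8 warnings and the broadcast failure may be the same problem. A minimal sketch (not verified on this model) that casts the half-precision weights back to float32 before tracing:

import torch
import coremltools as ct

model = torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
    in_channels=3, out_channels=1, init_features=32, pretrained=True)

# Hedged sketch: .float() casts any float16 parameters/buffers back to
# float32 so the converter never sees half-precision constants.
model = model.float().eval()
dummy = torch.randn(1, 3, 512, 512)
trace = torch.jit.trace(model, dummy)
ctModel = ct.convert(trace, inputs=[ct.ImageType(name="input", shape=dummy.shape)])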