coremltools: Unable to register MIL op alias of `matmul` (named `my_matmul`) using pure NumPy
❓Question
I need to register the `np.linalg.svd` function as a MIL op so that I can then use it to register a PyTorch op with `@register_torch_op`. I am taking this approach because NumPy's SVD is written in Fortran under the hood (I think), so I cannot translate it into existing MIL ops to register the PyTorch op directly.
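For reference, `@register_torch_op` is normally used to decompose an unsupported PyTorch op into existing MIL builder calls (the composite-operator pattern from the coremltools docs). A minimal sketch of the mechanics, with a hypothetical op name:

```python
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

# Hypothetical op name: register_torch_op keys the mapping on the function
# name, which must match the TorchScript op being translated.
@register_torch_op
def my_op(context, node):
    x = context[node.inputs[0]]           # MIL var for the torch node's input
    y = mb.sigmoid(x=x)                   # compose existing MIL ops...
    z = mb.mul(x=x, y=y, name=node.name)  # ...naming the output after the node
    context.add(z)                        # make the result visible downstream
```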
So, as a first step, I am trying to write my own `mb.my_matmul` (an alias of `mb.matmul`). Once this works, I will implement `mb.svd` the same way. Here’s what I do.
Looking at the coremltools source code, it appears that the MIL op (`mb.matmul`) is itself written using `np.matmul`:
https://github.com/apple/coremltools/blob/534a5079d22e1320bcf5d42c1c0385f3f2330d4f/coremltools/converters/mil/mil/ops/defs/linear.py#L81
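Paraphrasing the linked definition: a MIL op is an `Operation` subclass registered with `@register_op`, and NumPy appears only in `value_inference`, which computes the op's value when its inputs are known constants. A simplified sketch (imports and input-spec details approximate for that commit):

```python
import numpy as np
# imports approximate for the linked commit
from coremltools.converters.mil.mil import Operation
from coremltools.converters.mil.mil.input_type import InputSpec, TensorInputType
from coremltools.converters.mil.mil.operation import precondition, VALUE
from coremltools.converters.mil.mil.ops.defs._op_reqs import register_op

@register_op(doc_str="")
class matmul(Operation):
    input_spec = InputSpec(
        x=TensorInputType(),
        y=TensorInputType(),
    )

    def type_inference(self):
        ...  # output shape/dtype computation elided

    @precondition(allow=VALUE)
    def value_inference(self):
        # NumPy runs here only when both inputs are known constants,
        # i.e. at conversion time, never on device.
        return np.matmul(self.x.val, self.y.val)
```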
It’s imported in the `__init__.py` file: https://github.com/apple/coremltools/blob/534a5079d22e1320bcf5d42c1c0385f3f2330d4f/coremltools/converters/mil/mil/ops/defs/__init__.py#L113
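That `__init__.py` line is just a re-export that makes the op visible to the builder; roughly:

```python
# coremltools/converters/mil/mil/ops/defs/__init__.py (paraphrased)
from .linear import linear, matmul  # plus the other ops defined in linear.py
```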
Then it’s exposed to the NN builder using `@register_mil_to_nn_mapping`:
https://github.com/apple/coremltools/blob/534a5079d22e1320bcf5d42c1c0385f3f2330d4f/coremltools/converters/mil/backend/nn/op_mapping.py#L1493
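That function translates the MIL op into a NeuralNetwork layer; a rough paraphrase (argument plumbing simplified, exact import path may differ at that commit):

```python
# paraphrased from op_mapping.py
from coremltools.converters.mil.backend.nn.mil_to_nn_mapping_registry import (
    register_mil_to_nn_mapping,
)

@register_mil_to_nn_mapping
def matmul(const_context, builder, op):
    # emit a NeuralNetwork batched-matmul layer for this MIL op
    builder.add_batched_mat_mul(
        name=op.name,
        input_names=[op.x.name, op.y.name],
        output_name=op.outputs[0].name,
    )
```

Note that this NN mapping only matters for `convert_to="neuralnetwork"`; with `convert_to="mlprogram"` (as in the snippet below), MIL ops are serialized directly, and the Core ML compiler must recognize each op name.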
So I simply copy the bodies of both `class matmul(Operation):` and `def matmul(const_context, builder, op):`, rename them to `my_matmul`, and also import the new op in `__init__.py`.
Then I uninstall coremltools and go to the root directory of my edited coremltools repository. In the terminal I run the following code to use my custom `my_matmul` implementation, but no luck 😞.
```python
import coremltools as ct
from coremltools.converters.mil import Builder as mb

@mb.program(input_specs=[mb.TensorSpec(shape=(1, 3, 32, 32))])
def model(input_data):
    x = mb.my_matmul(x=input_data, y=input_data)
    return x

coreml_model = ct.convert(
    model,
    inputs=[ct.TensorType("input", shape=(1, 3, 32, 32))],
    convert_to="mlprogram",
)
```
The following error is thrown.
```
Running MIL Common passes: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 34/34 [00:00<00:00, 3354.02 passes/s]
Running MIL FP16ComputePrecision pass: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 476.68 passes/s]
Running MIL Clean up passes: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 1965.47 passes/s]
/Users/rahulbhalley/Desktop/coremltools/coremltools/models/model.py:121: RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: Error compiling model: "compiler error: Encountered an error while compiling a neural network model: in operation of type my_matmul: Unknown operator 'my_matmul'.".
  _warnings.warn(
```
It simply says: `compiler error: Encountered an error while compiling a neural network model: in operation of type my_matmul: Unknown operator 'my_matmul'.`
I don’t understand how to expose my NumPy operation to the MIL builder (`mb.custom_function`). Any help is highly appreciated!!
System Information
- N/A
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 23
@RahulBhalley actually, this isn’t within my skillset. I have no experience with coremltools, PyTorch, or any Python libraries for that matter (except for part of an intro tutorial to numpy). Also, I’m racing against time to finish MetalXLA before PyTorch finishes their Metal backend. If there’s some way my particular skillset is suited to helping you solve this problem, please let me know.
I agree this is confusing. MIL ops only use NumPy for value inference, which is run at conversion time. They are not used (and not available) when models are being executed on device.
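To make that concrete: since the op set available at runtime is fixed, a MIL program that sticks to registered ops converts and compiles fine, while a new op name is rejected by the Core ML compiler no matter how its NumPy `value_inference` is written. A minimal sketch reusing the shapes from the question, with the built-in `matmul`:

```python
import coremltools as ct
from coremltools.converters.mil import Builder as mb

@mb.program(input_specs=[mb.TensorSpec(shape=(1, 3, 32, 32))])
def model(input_data):
    # mb.matmul is a registered op the Core ML runtime knows how to execute;
    # its internal np.matmul is used only for value inference at conversion time.
    return mb.matmul(x=input_data, y=input_data)

coreml_model = ct.convert(model, convert_to="mlprogram")
```

By the same logic, a `@register_torch_op` implementation of something like SVD would have to be decomposed into the built-in MIL ops; a pure-NumPy body cannot be shipped as a new runtime op.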