pytorch_geometric: Not found error for torch_sparse::ptr2ind in torchscript
❓ Questions & Help
I tried to use a PyTorch model with a MessagePassing layer in C++ code.
As described in the pytorch_geometric documentation,
I generated a TorchScript model with my own MessagePassing layer and successfully converted it.
But when executing the C++ code, I hit the error below:
```
Unknown builtin op: torch_sparse::ptr2ind.
Could not find any similar ops to torch_sparse::ptr2ind. This op may not exist or may not be currently supported in TorchScript.
:
  File "/home/sr6/kyuhyun9.lee/env_ML/lib/python3.6/site-packages/torch_sparse/storage.py", line 166
        rowptr = self._rowptr
        if rowptr is not None:
            row = torch.ops.torch_sparse.ptr2ind(rowptr, self._col.numel())
                  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            self._row = row
            return row
Serialized   File "code/__torch__/torch_sparse/storage.py", line 825
    if torch.__isnot__(rowptr, None):
      rowptr13 = unchecked_cast(Tensor, rowptr)
      row15 = ops.torch_sparse.ptr2ind(rowptr13, torch.numel(self._col))
              ~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
      self._row = row15
      _150, _151 = True, row15
'SparseStorage.row' is being compiled since it was called from 'SparseStorage.__init__'
  File "/home/sr6/kyuhyun9.lee/env_ML/lib/python3.6/site-packages/torch_sparse/storage.py", line 133
        if not is_sorted:
            idx = self._col.new_zeros(self._col.numel() + 1)
            idx[1:] = self._sparse_sizes[1] * self.row() + self._col
                                              ~~~~~~~~ <--- HERE
            if (idx[1:] < idx[:-1]).any():
                perm = idx[1:].argsort()
Serialized   File "code/__torch__/torch_sparse/storage.py", line 267
    idx = torch.new_zeros(self._col, [_29], dtype=None, layout=None, device=None, pin_memory=None)
    _30 = (self._sparse_sizes)[1]
    _31 = torch.add(torch.mul((self).row(), _30), self._col, alpha=1)
                    ~~~~~~~~~~ <--- HERE
    _32 = torch.slice(idx, 0, 1, 9223372036854775807, 1)
    _33 = torch.copy_(_32, _31, False)
'SparseStorage.__init__' is being compiled since it was called from 'GINLayerJittable_d54f76.__check_input____1'
Serialized   File "code/__torch__/GINLayerJittable_d54f76.py", line 40
    pass
    return the_size
  def __check_input____1(self: __torch__.GINLayerJittable_d54f76.GINLayerJittable_d54f76,
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~... <--- HERE
        edge_index: __torch__.torch_sparse.tensor.SparseTensor,
        size: Optional[Tuple[int, int]]) -> List[Optional[int]]:
Aborted (core dumped)
```
Since I have no experience with PyTorch JIT, I cannot find any clue to solving this. How can I handle this error?
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 34 (14 by maintainers)
I finally fixed this issue, both for `torch-scatter` and `torch-sparse`:

- `torch-scatter`: https://github.com/rusty1s/pytorch_scatter/pull/278
- `torch-sparse`: https://github.com/rusty1s/pytorch_sparse/pull/212

Finally, I added a fully-working "PyG in C++" example to `pytorch_geometric/examples/cpp`. This example saves a "jittable" GNN model in Python, and loads and executes it in C++.

Thanks for all the help and sorry that it took me so long to fix 😃 Hope that all issues are now resolved!
I have the same issue, but when loading a TorchScript model from Python code. Found a workaround: install torch_sparse / torch_scatter and `import torch_sparse` before loading the model.

Thanks for the report and sorry for the delay in fixing this. @mananshah99 and I will look into this ASAP.
Thank you for the prompt response. I was not able to resolve it by adding namespaces. However, I found a solution: compiling the C++ code together with the source files of torch-sparse.
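The same idea can be expressed at build time by linking the op libraries into the C++ binary, so their static registration code runs at startup. A CMake sketch roughly along the lines of the `examples/cpp` setup mentioned above — the package and target names (`TorchScatter::TorchScatter`, `TorchSparse::TorchSparse`) are assumptions that depend on how torch-scatter/torch-sparse were built and installed:

```cmake
# Sketch: link the custom-op libraries so their op registration runs
# before torch::jit::load tries to resolve torch_sparse::* ops.
find_package(Torch REQUIRED)
find_package(TorchScatter REQUIRED)  # assumed package name
find_package(TorchSparse REQUIRED)   # assumed package name

add_executable(gnn_main main.cpp)
target_link_libraries(gnn_main
  "${TORCH_LIBRARIES}"
  TorchScatter::TorchScatter
  TorchSparse::TorchSparse)
```

If the ops still fail to register, check that the linker is not discarding the "unused" op libraries (e.g. via `-Wl,--no-as-needed` on GNU ld).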