onnx: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
I am converting a PyTorch model to ONNX:
example = torch.rand(10, 3, 224, 224)
torch.onnx.export(net,                          # model being run
                  example,                      # model input (or a tuple for multiple inputs)
                  "./infer/tsm_resnet50.onnx",  # where to save the model (can be a file or file-like object)
                  export_params=True,           # store the trained parameter weights inside the model file
                  opset_version=10,             # the ONNX version to export the model to
                  do_constant_folding=True,     # whether to execute constant folding for optimization
                  input_names=['input'],        # the model's input names
                  output_names=['output'],      # the model's output names
                  # operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
                  dynamic_axes={'input': {0: 'batch_size'},     # variable length axes
                                'output': {0: 'batch_size'}})
And then it shows me that warning. This is my log file: log.txt. The problem snippet:
out = torch.zeros_like(x)
out[:, :-1, :fold] = x[:, 1:, :fold] # shift left
out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold] # shift right
out[:, :, 2 * fold:] = x[:, :, 2 * fold:] # not shift
How can I replace it? Thanks. My versions: Python 3.6, torch 1.2.0, torchvision 0.4.0.
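For reference, here is a minimal sketch (an assumption on my part, not taken from the TSM source) of how the same shift can be written with torch.cat instead of sliced in-place writes into a zeros buffer. It assumes x has shape (N, T, C, ...) and that fold is a plain Python int (e.g. C // 8):

import torch

def temporal_shift(x: torch.Tensor, fold: int) -> torch.Tensor:
    # one-time-step pad, fold channels wide, matching x's dtype and device
    pad = torch.zeros_like(x[:, :1, :fold])
    left = torch.cat([x[:, 1:, :fold], pad], dim=1)             # shift left in time
    right = torch.cat([pad, x[:, :-1, fold:2 * fold]], dim=1)   # shift right in time
    keep = x[:, :, 2 * fold:]                                   # channels that are not shifted
    return torch.cat([left, right, keep], dim=2)

Whether this silences the warning depends on where fold comes from: if it is derived from a traced tensor, it will still be baked into the graph as a constant.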
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 22
- Comments: 20 (3 by maintainers)
Can anyone explain it, please? Is it an issue for prediction? How do I solve it?
I'm facing the same warning with Python 3.7.9, pytorch 1.6.0, onnxruntime 1.4.0, onnxruntime-tools 1.4.2 when converting a BERT model.
TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    position_ids = self.position_ids[:, :seq_length]
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    input_tensor.shape == tensor_shape for input_tensor in input_tensors
If you’re still encountering issues on the latest PyTorch nightly, please open an issue in github.com/pytorch/pytorch with full reproduction instructions. This is definitely not a bug in ONNX.
@jcwchen can you please close this issue?
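For anyone wondering whether the warning actually breaks the export: a quick check (a sketch reusing the names from the first post, not something from this thread) is to run the exported model on an input whose dynamic axis differs from the example used at export time and compare against PyTorch:

import numpy as np
import onnxruntime as ort
import torch

sess = ort.InferenceSession("./infer/tsm_resnet50.onnx")
test = torch.rand(4, 3, 224, 224)                     # batch size differs from the traced 10
onnx_out = sess.run(None, {"input": test.numpy()})[0]
with torch.no_grad():
    torch_out = net(test).numpy()                     # 'net' is the model from the export snippet
print(np.abs(onnx_out - torch_out).max())             # a large difference means the trace did not generalize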
Replacing position_ids following 159 can fix the first warning.
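I don't know exactly which change 159 refers to, but a common traced-friendly pattern (a guess on my part, not the referenced fix) is to build position_ids from the input rather than slicing a pre-registered buffer with a Python length:

# hypothetical BERT-style forward snippet; input_ids is the (batch, seq) token tensor
seq_length = input_ids.size(1)
position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
position_ids = position_ids.unsqueeze(0).expand_as(input_ids)

Whether seq_length stays dynamic in the exported graph depends on the PyTorch and opset version, so it is worth re-checking the export after the change.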
I'm still facing the same issue while converting to ONNX.
(yolact) C:\yolo\yolact\yolact>python eval.py --trained_model=weights/yolact_base_35_50000.pth --score_threshold=0.15 --top_k=15 --image=DJI_20220303103545_0005_Z9KytSZw9.JPG
Multiple GPUs detected! Turning off JIT.
Config not specified. Parsed yolact_base_config from the file name.
Loading model... Done.
C:\yolo\yolact\yolact\yolact.py:221: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    if self.last_img_size != (cfg._tmp_img_w, cfg._tmp_img_h):
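For context, a toy illustration (not yolact code) of what the Python-boolean warning means: the branch chosen at trace time is frozen into the graph, so other inputs silently follow the same path:

import torch

def f(x):
    if x.sum() > 0:          # tensor -> Python bool: decided once, at trace time
        return x * 2
    return x * -1

traced = torch.jit.trace(f, torch.ones(3))
print(traced(torch.ones(3)))     # tensor([2., 2., 2.])
print(traced(-torch.ones(3)))    # tensor([-2., -2., -2.]) -- still the *2 branch; eager f would return [1., 1., 1.]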
Uh. I accidentally posted this at the wrong place…
I didn't have time to look over this carefully. I am trying to rewrite resnet2d so that it is compatible with my video card, in case it is never made compatible by any other means. Trying to figure out where this problem originates, I feel like it might be due to some missing libraries for my video card? The errors say that I could try to use torch.Tensor instead of plain Python values, and this was my failed attempt at doing that (might be close):
Gives the output:
ImportError: cannot import name 'Downsample2D' from 'diffusers.models.resnet' (path/to/resnet.py)

I'll chime in here to say that I get the same message when exporting a PyTorch model for use in C++, not using ONNX. So this might belong in the PyTorch repo instead.