openvino: "RuntimeError: Cannot get dims for non static shape" while inferencing openvino
Hello! I have a strange issue while running inference with OpenVINO (version 2022.1.0). Here is a simple class:
import openvino.runtime as ov
import numpy as np

class OpenVinoEngine():
    def __init__(self, model_fpath, num_workers):
        core = ov.Core()
        raw_model = core.read_model(model_fpath, "AUTO")
        self.model = core.compile_model(raw_model, "CPU", config={"INFERENCE_NUM_THREADS": str(num_workers)})
        self.infer_request = self.model.create_infer_request()  # line 1

    def process(self, batch):
        # self.infer_request = self.model.create_infer_request()  # line 2
        self.infer_request.infer([batch])
        output = [out.data[:] for out in self.infer_request.output_tensors]
        if len(output) > 1:
            return output
        return output[0]

eng = OpenVinoEngine('model.onnx', 8)
out = eng.process(np.random.rand(1, 3, 32, 256))
The engine works when I pass my model with a dynamic input shape {?,3,?,?} (model.onnx.zip), and it works with other models too. But there is a huge memory leak: my full pipeline (which contains a couple of these engines with different models) took about 3 GB with onnxruntime inference, but with OpenVINO it grows to 5-6 GB after only 10 requests and keeps increasing.
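A minimal sketch of one possible workaround, assuming the runtime input size is always [1, 3, 32, 256] (the shape from the example above): pin the dynamic dims before compiling so the compiled model is fully static. Whether this also avoids the leak is an assumption, not something confirmed here.

import openvino.runtime as ov
import numpy as np

core = ov.Core()
model = core.read_model("model.onnx")
# Pin the dynamic dims {?,3,?,?} to the concrete size used at runtime
model.reshape([1, 3, 32, 256])
compiled = core.compile_model(model, "CPU", config={"INFERENCE_NUM_THREADS": "8"})
# infer_new_request creates a one-off request internally and returns the results
results = compiled.infer_new_request([np.random.rand(1, 3, 32, 256).astype(np.float32)])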
So I tried commenting out line 1 and uncommenting line 2, but then I get an error on the very first request:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_2333043/2507853659.py in <module>
----> 1 out = eng.process(np.random.rand(1, 3, 32, 256))

/tmp/ipykernel_2333043/1809003609.py in process(self, batch)
      9
     10     def process(self, batch):
---> 11         self.infer_request = self.model.create_infer_request()
     12         self.infer_request.infer([batch])
     13         output_tensors = [out.data[:] for out in self.infer_request.output_tensors]

~/env/lib/python3.8/site-packages/openvino/runtime/ie_api.py in create_infer_request(self)
    157         :rtype: openvino.runtime.InferRequest
    158         """
--> 159         return InferRequest(super().create_infer_request())
    160
    161     def infer_new_request(self, inputs: Union[dict, list] = None) -> dict:
RuntimeError: Cannot get dims for non static shape
The error suggests that something is wrong with the dynamic shape, yet with line 1 the same model works perfectly. Has anyone faced similar problems? Or is there perhaps a simpler way to deal with this memory issue? Any clues would be highly appreciated.
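In case it helps to narrow this down, a minimal sketch of a memory check, assuming psutil is available: run repeated requests against the engine above and print the process RSS after each one.

import os
import numpy as np
import psutil  # assumption: psutil is installed, used only for this check

proc = psutil.Process(os.getpid())
eng = OpenVinoEngine('model.onnx', 8)
for i in range(50):
    eng.process(np.random.rand(1, 3, 32, 256))
    rss_mb = proc.memory_info().rss / 2**20
    print(f"request {i}: RSS = {rss_mb:.1f} MiB")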
About this issue
- State: closed
- Created 2 years ago
- Comments: 17 (3 by maintainers)
@korotaS I have opened a bug with the development team to look further into it. Converting your ONNX model to IR with static shapes does not show this issue. The error is seen when using your ONNX model directly and when using the model converted to IR with dynamic shapes. I will let you know what I find out.
Ref. 86887
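For reference, a rough sketch of exporting a static-shape IR from the ONNX model in Python (the [1, 3, 32, 256] shape is the example input from the issue, not something stated above; in 2022.1 serialize lives under openvino.offline_transformations, while newer releases also expose it as openvino.runtime.serialize):

import openvino.runtime as ov
from openvino.offline_transformations import serialize  # openvino.runtime.serialize in newer releases

core = ov.Core()
model = core.read_model("model.onnx")
model.reshape([1, 3, 32, 256])  # freeze the dynamic dims to a static shape
serialize(model, "model_static.xml", "model_static.bin")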