tensorflow: TF ConvertedModel: Invoke fails with "Node number X (CONCATENATION) failed to prepare" error
System information
- OS: Windows 10
- TensorFlow: 2.4.0
Code used for inference:

```python
import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()

# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
```
Output:

```
INFO: TfLiteFlexDelegate delegate: 15 nodes delegated out of 188 nodes with 2 partitions.
INFO: TfLiteFlexDelegate delegate: 5 nodes delegated out of 12 nodes with 1 partitions.
INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 0 nodes with 0 partitions.
INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 3 nodes with 0 partitions.
INFO: TfLiteFlexDelegate delegate: 2 nodes delegated out of 35 nodes with 2 partitions.
INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 3 nodes with 0 partitions.
INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 17 nodes with 0 partitions.

Traceback (most recent call last):
  File "src\models\net_converters\test_tflite_model.py", line 127, in <module>
    test(args.model, args.test)
  File "src\models\net_converters\test_tflite_model.py", line 31, in test
    interpreter.invoke()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\interpreter.py", line 540, in invoke
    self._interpreter.Invoke()
RuntimeError: tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (400 != 1)
Node number 33 (CONCATENATION) failed to prepare.
Node number 3 (WHILE) failed to invoke.
Node number 187 (WHILE) failed to invoke.
```
TF Model summary:

```
Layer (type)                    Output Shape        Param #     Connected to
input_1 (InputLayer) [(None, 300, 300, 3) 0
identity_layer (Lambda) (None, 300, 300, 3) 0 input_1[0][0]
input_mean_normalization (Lambd (None, 300, 300, 3) 0 identity_layer[0][0]
input_channel_swap (Lambda) (None, 300, 300, 3) 0 input_mean_normalization[0][0]
conv1_1 (Conv2D) (None, 300, 300, 64) 1792 input_channel_swap[0][0]
conv1_2 (Conv2D) (None, 300, 300, 64) 36928 conv1_1[0][0]
pool1 (MaxPooling2D) (None, 150, 150, 64) 0 conv1_2[0][0]
conv2_1 (Conv2D) (None, 150, 150, 128 73856 pool1[0][0]
conv2_2 (Conv2D) (None, 150, 150, 128 147584 conv2_1[0][0]
pool2 (MaxPooling2D) (None, 75, 75, 128) 0 conv2_2[0][0]
conv3_1 (Conv2D) (None, 75, 75, 256) 295168 pool2[0][0]
conv3_2 (Conv2D) (None, 75, 75, 256) 590080 conv3_1[0][0]
conv3_3 (Conv2D) (None, 75, 75, 256) 590080 conv3_2[0][0]
pool3 (MaxPooling2D) (None, 38, 38, 256) 0 conv3_3[0][0]
conv4_1 (Conv2D) (None, 38, 38, 512) 1180160 pool3[0][0]
conv4_2 (Conv2D) (None, 38, 38, 512) 2359808 conv4_1[0][0]
conv4_3 (Conv2D) (None, 38, 38, 512) 2359808 conv4_2[0][0]
pool4 (MaxPooling2D) (None, 19, 19, 512) 0 conv4_3[0][0]
conv5_1 (Conv2D) (None, 19, 19, 512) 2359808 pool4[0][0]
conv5_2 (Conv2D) (None, 19, 19, 512) 2359808 conv5_1[0][0]
conv5_3 (Conv2D) (None, 19, 19, 512) 2359808 conv5_2[0][0]
pool5 (MaxPooling2D) (None, 19, 19, 512) 0 conv5_3[0][0]
fc6 (Conv2D) (None, 19, 19, 1024) 4719616 pool5[0][0]
fc7 (Conv2D) (None, 19, 19, 1024) 1049600 fc6[0][0]
conv6_1 (Conv2D) (None, 19, 19, 256) 262400 fc7[0][0]
conv6_padding (ZeroPadding2D) (None, 21, 21, 256) 0 conv6_1[0][0]
conv6_2 (Conv2D) (None, 10, 10, 512) 1180160 conv6_padding[0][0]
conv7_1 (Conv2D) (None, 10, 10, 128) 65664 conv6_2[0][0]
conv7_padding (ZeroPadding2D) (None, 12, 12, 128) 0 conv7_1[0][0]
conv7_2 (Conv2D) (None, 5, 5, 256) 295168 conv7_padding[0][0]
conv8_1 (Conv2D) (None, 5, 5, 128) 32896 conv7_2[0][0]
conv8_2 (Conv2D) (None, 3, 3, 256) 295168 conv8_1[0][0]
conv9_1 (Conv2D) (None, 3, 3, 128) 32896 conv8_2[0][0]
conv4_3_norm (L2Normalization) (None, 38, 38, 512) 512 conv4_3[0][0]
conv9_2 (Conv2D) (None, 1, 1, 256) 295168 conv9_1[0][0]
conv4_3_norm_mbox_conf (Conv2D) (None, 38, 38, 4220) 19449980 conv4_3_norm[0][0]
fc7_mbox_conf (Conv2D) (None, 19, 19, 5275) 48619675 fc7[0][0]
conv6_2_mbox_conf (Conv2D) (None, 10, 10, 5275) 24312475 conv6_2[0][0]
conv7_2_mbox_conf (Conv2D) (None, 5, 5, 5275) 12158875 conv7_2[0][0]
conv8_2_mbox_conf (Conv2D) (None, 3, 3, 4220) 9727100 conv8_2[0][0]
conv9_2_mbox_conf (Conv2D) (None, 1, 1, 4220) 9727100 conv9_2[0][0]
conv4_3_norm_mbox_loc (Conv2D) (None, 38, 38, 16) 73744 conv4_3_norm[0][0]
fc7_mbox_loc (Conv2D) (None, 19, 19, 20) 184340 fc7[0][0]
conv6_2_mbox_loc (Conv2D) (None, 10, 10, 20) 92180 conv6_2[0][0]
conv7_2_mbox_loc (Conv2D) (None, 5, 5, 20) 46100 conv7_2[0][0]
conv8_2_mbox_loc (Conv2D) (None, 3, 3, 16) 36880 conv8_2[0][0]
conv9_2_mbox_loc (Conv2D) (None, 1, 1, 16) 36880 conv9_2[0][0]
conv4_3_norm_mbox_conf_reshape (None, 5776, 1055) 0 conv4_3_norm_mbox_conf[0][0]
fc7_mbox_conf_reshape (Reshape) (None, 1805, 1055) 0 fc7_mbox_conf[0][0]
conv6_2_mbox_conf_reshape (Resh (None, 500, 1055) 0 conv6_2_mbox_conf[0][0]
conv7_2_mbox_conf_reshape (Resh (None, 125, 1055) 0 conv7_2_mbox_conf[0][0]
conv8_2_mbox_conf_reshape (Resh (None, 36, 1055) 0 conv8_2_mbox_conf[0][0]
conv9_2_mbox_conf_reshape (Resh (None, 4, 1055) 0 conv9_2_mbox_conf[0][0]
conv4_3_norm_mbox_priorbox (Anc (None, 38, 38, 4, 8) 0 conv4_3_norm_mbox_loc[0][0]
fc7_mbox_priorbox (AnchorBoxes) (None, 19, 19, 5, 8) 0 fc7_mbox_loc[0][0]
conv6_2_mbox_priorbox (AnchorBo (None, 10, 10, 5, 8) 0 conv6_2_mbox_loc[0][0]
conv7_2_mbox_priorbox (AnchorBo (None, 5, 5, 5, 8) 0 conv7_2_mbox_loc[0][0]
conv8_2_mbox_priorbox (AnchorBo (None, 3, 3, 4, 8) 0 conv8_2_mbox_loc[0][0]
conv9_2_mbox_priorbox (AnchorBo (None, 1, 1, 4, 8) 0 conv9_2_mbox_loc[0][0]
mbox_conf (Concatenate) (None, 8246, 1055) 0 conv4_3_norm_mbox_conf_reshape[0] fc7_mbox_conf_reshape[0][0] conv6_2_mbox_conf_reshape[0][0] conv7_2_mbox_conf_reshape[0][0] conv8_2_mbox_conf_reshape[0][0] conv9_2_mbox_conf_reshape[0][0]
conv4_3_norm_mbox_loc_reshape ( (None, 5776, 4) 0 conv4_3_norm_mbox_loc[0][0]
fc7_mbox_loc_reshape (Reshape) (None, 1805, 4) 0 fc7_mbox_loc[0][0]
conv6_2_mbox_loc_reshape (Resha (None, 500, 4) 0 conv6_2_mbox_loc[0][0]
conv7_2_mbox_loc_reshape (Resha (None, 125, 4) 0 conv7_2_mbox_loc[0][0]
conv8_2_mbox_loc_reshape (Resha (None, 36, 4) 0 conv8_2_mbox_loc[0][0]
conv9_2_mbox_loc_reshape (Resha (None, 4, 4) 0 conv9_2_mbox_loc[0][0]
conv4_3_norm_mbox_priorbox_resh (None, 5776, 8) 0 conv4_3_norm_mbox_priorbox[0][0]
fc7_mbox_priorbox_reshape (Resh (None, 1805, 8) 0 fc7_mbox_priorbox[0][0]
conv6_2_mbox_priorbox_reshape ( (None, 500, 8) 0 conv6_2_mbox_priorbox[0][0]
conv7_2_mbox_priorbox_reshape ( (None, 125, 8) 0 conv7_2_mbox_priorbox[0][0]
conv8_2_mbox_priorbox_reshape ( (None, 36, 8) 0 conv8_2_mbox_priorbox[0][0]
conv9_2_mbox_priorbox_reshape ( (None, 4, 8) 0 conv9_2_mbox_priorbox[0][0]
mbox_conf_softmax (Activation) (None, 8246, 1055) 0 mbox_conf[0][0]
mbox_loc (Concatenate) (None, 8246, 4) 0 conv4_3_norm_mbox_loc_reshape[0][ fc7_mbox_loc_reshape[0][0] conv6_2_mbox_loc_reshape[0][0] conv7_2_mbox_loc_reshape[0][0] conv8_2_mbox_loc_reshape[0][0] conv9_2_mbox_loc_reshape[0][0]
mbox_priorbox (Concatenate) (None, 8246, 8) 0 conv4_3_norm_mbox_priorbox_reshap fc7_mbox_priorbox_reshape[0][0] conv6_2_mbox_priorbox_reshape[0][ conv7_2_mbox_priorbox_reshape[0][ conv8_2_mbox_priorbox_reshape[0][ conv9_2_mbox_priorbox_reshape[0][
predictions (Concatenate) (None, 8246, 1067) 0 mbox_conf_softmax[0][0] mbox_loc[0][0] mbox_priorbox[0][0]
decoded_predictions (DecodeDete (None, 200, 6) 0 predictions[0][0]
Total params: 147,409,265
Trainable params: 147,409,265
Non-trainable params: 0
```
Do you need any additional data, or is this perhaps a known issue? Thank you & best regards
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 74 (34 by maintainers)
Sorry for the confusion… The root cause of this problem is that, in TF, TensorLists (which are used inside `tf.map_fn`) can support dynamic element shapes, and the shape can be materialized when the `TensorListSetItem` kernel is invoked. But due to restrictions in TF Lite, we don't support TensorLists with a dynamic element shape (although we do some compile-time analysis to acquire the shape wherever possible), so you will need to explicitly instruct the TF code to pass a static shape to `tf.map_fn`.

1. > please pay attention that batch_size = None means that it can vary, this is a legal value

   Yes, this is true in TF. But to make the TF Lite conversion work as expected, you will need to pass a concrete shape for the `fn_output_signature` argument. Also, `[batch_size, 6]` does not seem to be the correct output shape, since you applied a padding of 400 to the first dimension (and this is why `concatenation` is complaining about a dimension mismatch).

2. > nms_max_output_size controls the max number of detections that can be found on one input… So limiting it to 1 is not a solution

   Yes, I agree with you; sorry for the confusion in the original comment, I wasn't suggesting changing this to 1. The main idea here is to supply `fn_output_signature` with a static shape so that we can work around the TensorList issues mentioned above.

We have noticed this design restriction of the TF Lite TensorList and we are improving it. I think very soon we will support running those TensorList ops in flex mode, and that will remove much of the pain here.
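To illustrate the suggestion, here is a minimal sketch of passing a static element shape via `fn_output_signature`. The `per_image_postprocess` function and `MAX_DETECTIONS = 400` are assumptions for illustration (400 mirrors the padding seen in the error message); the point is that the output signature carries a concrete shape instead of `[None, 6]`, so the converter does not need a dynamic-shape TensorList:

```python
import tensorflow as tf

MAX_DETECTIONS = 400  # assumed fixed detection count, matching the 400-row padding

def per_image_postprocess(boxes):
    # Hypothetical per-image step: pad a variable number of detection rows
    # up to a fixed count, then assert the static shape.
    n = tf.shape(boxes)[0]
    padded = tf.pad(boxes, [[0, MAX_DETECTIONS - n], [0, 0]])
    return tf.ensure_shape(padded, [MAX_DETECTIONS, 6])

batch = tf.zeros([2, 10, 6])  # dummy batch of 2 images, 10 detections each
out = tf.map_fn(
    per_image_postprocess,
    batch,
    # Concrete shape here, rather than tf.TensorSpec([None, 6], ...):
    fn_output_signature=tf.TensorSpec(shape=[MAX_DETECTIONS, 6], dtype=tf.float32),
)
print(out.shape)  # (2, 400, 6)
```

With a static signature like this, every TensorList element has a known shape at conversion time, which is the workaround described above.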
Hi @haozha111, thank you for such an elaborate and professional answer; now I understand the root cause. I'll check the batch-size padding. It looks weird, but I have no problem setting the batch size to 1. Regarding nms_max_output_size, I'll take a look at this (`fn_output_signature`) argument.
Thank you & best regards
BTW, what is the ETA for those TensorList ops (just to understand the timelines)?
@abattery @haozha111 @mihaimaruseac any news?
Hey, are there any updates for MaxxTr? It's been a month since the first question. Are there other options for getting support/help? Please advise.
@abattery @haozha111 hello, any news?