onnx2tf: [YOLOX-TI] ERROR: onnx_op_name: /head/ScatterND

Issue Type

Others

onnx2tf version number

1.8.1

onnx version number

1.13.1

tensorflow version number

2.12.0

Download URL for ONNX

yolox_nano_ti_lite_26p1_41p8.zip

Parameter Replacement JSON

{
    "format_version": 1,
    "operations": [
        {
            "op_name": "/head/ScatterND",
            "param_target": "inputs",
            "param_name": "/head/Concat_1_output_0",
            "values": [1,85,52,52]
        }
    ]
}

Description

Hi @PINTO0309. After our lengthy discussion regarding INT8 YOLOX export, I decided to try out TI’s version of these models (https://github.com/TexasInstruments/edgeai-yolox/tree/main/pretrained_models). It looked to me like you managed to INT8-export those, so maybe you could provide some hints 😄. I just downloaded the ONNX version of YOLOX-nano. For this model, the following fails:

onnx2tf -i ./yolox_nano.onnx -o yolox_nano_saved_model

The error I get:

ERROR: input_onnx_file_path: /datadrive/mikel/edgeai-yolox/yolox_nano.onnx
ERROR: onnx_op_name: /head/ScatterND
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
  1. Research
  2. Export error
  3. I tried to overwrite the parameter values with the replacement JSON provided above, with no luck
  4. Project need
  5. The operation that fails can be seen in the image below: Screenshot from 2023-03-24 10-37-02

About this issue

  • State: closed
  • Created a year ago
  • Comments: 70 (70 by maintainers)

Most upvoted comments

This Concat is not necessary by nature and has no benefit for the model quantization, so I think we don’t need to go any deeper into this.

Agree, let’s close this. Enough experimentation on this topic 😄 . Again, thank you both @motokimura, @PINTO0309 for time and guidance during this quantization journey. I learnt a lot, hopefully you got something out of the experiment results posted here as well 🙏

@mikel-brostrom As for the accuracy degradation of YOLOX integer quantization, I think it may be due to the distribution mismatch of xywh and score values.

Just before the last Concat, xywh seems to have a distribution of (min, max)~(0.0, 416.0). On the other hand, scores have a much narrower distribution of (min, max) = (0.0, 1.0) because of sigmoid.

In TFLite quantization, activations are quantized in a per-tensor manner. That is, the combined distribution of xywh and scores, (min, max) = (0.0, 416.0), is mapped to integer values of (min, max) = (0, 255) after the Concat. As a result, even if the score is 1.0, after quantization it is mapped to int(1.0 / 416 * 255) = int(0.61) = 0, resulting in all scores being zero!

A possible solution is to divide xywh tensors by the image size (416) to keep it in the range (min, max) ~ (0.0, 1.0) and then concat with the score tensor so that scores are not “collapsed” due to the per-tensor quantization.

The same workaround is done in YOLOv5: https://github.com/ultralytics/yolov5/blob/b96f35ce75effc96f1a20efddd836fa17501b4f5/models/tf.py#L307-L310
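
A minimal sketch of that idea (this is not the exact TI/YOLOX code; decode_outputs_normalized and the 416 input size are assumptions for illustration):

import torch

def decode_outputs_normalized(outputs, grids, strides, img_size=416):
    # outputs: (1, N, 85) raw head output; grids: (1, N, 2); strides: (1, N, 1)
    xy = (outputs[..., 0:2] + grids) * strides / img_size   # box centers in (0, 1)
    wh = torch.exp(outputs[..., 2:4]) * strides / img_size  # box sizes in (0, 1)
    # Scores are already in (0, 1) thanks to sigmoid, so the concatenated tensor
    # now has one narrow value range and per-tensor quantization no longer
    # collapses the score channels.
    return torch.cat([xy, wh, outputs[..., 4:]], dim=-1)

The consumer of the model then scales xywh back up by the image size, which is what the outputs[:, :, 0:4] * 416 line further down this thread does.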

Screenshot 2023-03-25 1 14 48

Great that we get this into YOLOv8 as well @motokimura! Thank you both for this joint effort ❤️

| Model | input size | mAPval 0.5:0.95 | mAPval 0.5 | model size | calibration images |
|---|---|---|---|---|---|
| YOLOX-TI-nano TFLite FP32 | 416 | 0.261 | 0.418 | 8.7M | N/A |
| YOLOX-TI-nano TFLite INT8 | 416 | 0.242 | 0.408 | 2.4M | 200 |
| YOLOX-TI-nano TFLite INT8 | 416 | 0.243 | 0.408 | 2.4M | 800 |

Going for a full COCO eval now 🚀

@PINTO0309 🚀 ! I just implemented what you explained here: https://github.com/PINTO0309/onnx2tf/issues/269#issuecomment-1488349530. What is the rationale behind this?

| Model | input size | mAPval 0.5:0.95 | mAPval 0.5 | model size | xywh model output | calibration images |
|---|---|---|---|---|---|---|
| YOLOX-TI-nano TFLite FP32 | 416 | 0.390 | 0.653 | 8.7M | [0, 1] | N/A |
| YOLOX-TI-nano TFLite FP16 | 416 | 0.390 | 0.653 | 4.4M | [0, 1] | N/A |
| YOLOX-TI-nano TFLite full_integer_quant | 416 | 0.362 | 0.641 | 2.4M | [0, 1] | 200 |
| YOLOX-TI-nano TFLite full_integer_quant_with_int16_act | 416 | 0 | 0 | 2.4M | [0, 1] | 200 |
| YOLOX-TI-nano TFLite dynamic_range_quant | 416 | 0.389 | 0.652 | 2.4M | [0, 1] | 200 |
| YOLOX-TI-nano TFLite integer_quant | 416 | 0.362 | 0.641 | 2.4M | [0, 1] | 200 |
| YOLOX-TI-nano TFLite integer_quant_with_int16_act | 416 | 0.389 | 0.672 | 2.4M | [0, 1] | 200 |

Just a hunch on my part, but if you do not Concat at the end, maybe there will be no accuracy degradation. I will have to try it out to find out. In the first place, I feel that the difference in value ranges is too large; if so, the Concat itself may not even be the relevant factor.

Ref: https://github.com/PINTO0309/onnx2tf/issues/269#issuecomment-1483090981 image

By the way, _int16_act seems to be an experimental feature of TFLite, so there are still many bugs and unsupported OPs. https://www.tensorflow.org/lite/performance/post_training_integer_quant_16x8

TensorFlow Lite now supports converting activations to 16-bit integer values
and weights to 8-bit integer values during model conversion from TensorFlow 
to TensorFlow Lite's flat buffer format. We refer to this mode as the "16x8 quantization mode".
This mode can improve accuracy of the quantized model significantly, 
when activations are sensitive to the quantization, while still achieving almost 3-4x reduction 
in model size. Moreover, this fully quantized model can be consumed by integer-only hardware accelerators.
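
For reference, a minimal sketch of how this 16x8 mode can be enabled directly with TFLiteConverter (the saved-model path and the random representative dataset are placeholders; onnx2tf’s *_with_int16_act outputs presumably go through this same converter option):

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder: yield real calibration images (NHWC float32) instead of random data.
    for _ in range(10):
        yield [np.random.rand(1, 416, 416, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("yolox_nano_saved_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# 16-bit activations with 8-bit weights: the "16x8 quantization mode".
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8,
]
with open("yolox_nano_16x8.tflite", "wb") as f:
    f.write(converter.convert())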

@mikel-brostrom Thanks for sharing your results! https://github.com/PINTO0309/onnx2tf/issues/269#issuecomment-1486969872 The accuracy degradation because of the decoder is interesting…

You may find something if you compare the fp32/int8 TFLite final outputs. Even without onnx2tf’s new feature, you can do it by saving the output arrays into npy files and then comparing them.

The figure below is the one when I quantized YOLOv3. Left shows the distribution of x channel, and right shows the distribution of w channel. Orange is float, and blue is quantized.

figure_04

In the YOLOv3 case above, the w channel has a large quantization error. If you can visualize the output distribution like this, we may find which channel (x, y, w, h, and/or class) causes this accuracy degradation.
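
A minimal sketch of such a comparison, assuming both final outputs were dumped as (1, N, 85) arrays and the int8 output was already dequantized to float (the file names are hypothetical):

import numpy as np

fp32 = np.load("fp32_out.npy").reshape(-1, 85)  # float model output
int8 = np.load("int8_out.npy").reshape(-1, 85)  # dequantized output of the quantized model

names = {0: "x", 1: "y", 2: "w", 3: "h", 4: "obj"}
for ch, name in names.items():
    diff = np.abs(fp32[:, ch] - int8[:, ch])
    print(f"{name}: float range=({fp32[:, ch].min():.3f}, {fp32[:, ch].max():.3f}) "
          f"mean|diff|={diff.mean():.4f} max|diff|={diff.max():.4f}")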

A workaround has been implemented to avoid ScatterND shape mismatch errors as much as possible. In v1.8.3, the conversion succeeds as is even if ScatterND is included, and the accuracy check now passes without problems.

However, since NMS is included in the post-processing, accuracy verification with random data does not give very good results. For an accurate accuracy check, it is better to use a still image representative of what the model will see at inference time, because accuracy checks using random data may result in zero final output detections.

https://github.com/PINTO0309/onnx2tf/releases/tag/1.8.3

onnx2tf -i xxx.onnx

image

In any case, ScatterND converts to a very verbose OP, so it is still better to create a model that replaces it with Slice as much as possible.

image

@mikel-brostrom As for the accuracy degradation of your static quantized int8 model, I’m concerned your calibration setting might not be correct.

In calibration, representative images called calibration data are input to the model in order to observe the activation value range of each layer. Based on the observed activation range, the quantization parameters (scale and offset), which are used to map fp32 activations into int8, are computed for each layer (all of this is done inside onnx2tf). So, if the calibration data is not correct, these quantization parameters are not computed properly, resulting in catastrophic accuracy degradation of the quantized model.

Since YOLOX models expect unnormalized pixel values from 0 to 255 as input, I generated calibration data from COCO train images without normalization [code link]. Then, I passed it to onnx2tf with the -qcind option, without normalization, as written in the README:

onnx2tf -i yolox_nano_ti_lite.onnx -oiqt -qcind images calib_data_416x416_n200.npy "[[[[0,0,0]]]]" "[[[[1,1,1]]]]"

Did you pass calibration data to onnx2tf like I did? If -qcind is not specified, onnx2tf seems to use sample calibration data as described here. This sample calibration data seems to be normalized so that the pixel values are from 0 to 1, as written here, and to be further normalized with the ImageNet mean and std. As YOLOX models do not expect such normalized pixel values, this causes the problem in the calibration.
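
For example, a rough sketch of how such unnormalized calibration data could be built (the COCO path is a placeholder, and whether BGR or RGB is expected depends on your export, so double-check the channel order):

import glob
import cv2
import numpy as np

# Take 200 COCO train images, resize to 416x416, keep the raw 0-255 pixel values.
paths = sorted(glob.glob("coco/train2017/*.jpg"))[:200]  # hypothetical location
calib = np.stack([
    cv2.resize(cv2.imread(p), (416, 416)).astype(np.float32)  # BGR, 0-255, no normalization
    for p in paths
])  # shape: (200, 416, 416, 3)
np.save("calib_data_416x416_n200.npy", calib)

The resulting npy file is then passed with zero mean and unit std, exactly as in the command above, so that onnx2tf does not normalize it again.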

I won’t have time to check this out today @motokimura. But will report back tomorrow with my findings 😄. Thanks again for your time and guidance

Sorry for my late reply. I spent most of the day creating the benchmark result plot for YOLOX on the specific hardware I am using. I added delegate results as well. Hexagon is skipped as the target device has no Qualcomm chip. INT8 models don’t get a boost on this chip due to the lack of an INT8 ISA. GPU boosts make sense as the Exynos 9810 contains a Mali-G72MP18 GPU, but inference speed is quite similar to using XNNPACK with 4 threads.

Any idea why the memory footprint for the GPU delegate is so big compared to the others? Especially for the quantized one?

Screenshot from 2023-03-27 17-00-00 Exynos 9810 (ARM Mali-G72MP18 GPU). Released: March 01, 2018

Screenshot from 2023-03-28 09-48-20 Exynos 7870 (ARM Mali-T830 MP2 GPU). Released: February 17, 2016

I am very interested. Probably other engineers besides myself as well.

Today and tomorrow will involve travel to distant places for work, which will slow down research and work.

Incidentally, Motoki seems to have succeeded in maintaining accuracy with INT8 quantization.

The model performance did not decrease after the changes and for the first time I got results on one of the quantized models (dynamic_range_quant).

| Model | input size | mAPval 0.5:0.95 | mAPval 0.5 | model size |
|---|---|---|---|---|
| YOLOX-TI-nano ONNX (original model) | 416 | 0.261 | 0.418 | 8.7M |
| YOLOX-TI-nano ONNX (no ScatterND) | 416 | 0.261 | 0.418 | 8.7M |
| YOLOX-nano TFLite FP16 | 416 | 0.261 | 0.418 | 4.4M |
| YOLOX-nano TFLite FP32 | 416 | 0.261 | 0.418 | 8.7M |
| YOLOX-nano TFLite full_integer_quant | 416 | 0 | 0 | 2.3M |
| YOLOX-nano TFLite dynamic_range_quant | 416 | 0.249 | 0.410 | 2.3M |
| YOLOX-nano TFLite integer_quant | 416 | 0 | 0 | 2.3M |

But still nothing for the INT ones though…

Ok. As I didn’t see ScatterND in the original model, I checked what the differences were. I found out that this:

def meshgrid(*tensors):
    if _TORCH_VER >= [1, 10]:
        return torch.meshgrid(*tensors, indexing="ij")
    else:
        return torch.meshgrid(*tensors)


def decode_outputs(self, outputs, dtype):
    grids = []
    strides = []
    for (hsize, wsize), stride in zip(self.hw, self.strides):
        yv, xv = meshgrid([torch.arange(hsize), torch.arange(wsize)])
        grid = torch.stack((xv, yv), 2).view(1, -1, 2)
        grids.append(grid)
        shape = grid.shape[:2]
        strides.append(torch.full((*shape, 1), stride))

    grids = torch.cat(grids, dim=1).type(dtype)
    strides = torch.cat(strides, dim=1).type(dtype)

    # Build the decoded tensor with a single torch.cat instead of writing back
    # into slices of `outputs`; without in-place slice assignment, no ScatterND
    # ends up in the exported ONNX graph.
    outputs = torch.cat([
        (outputs[..., 0:2] + grids) * strides,
        torch.exp(outputs[..., 2:4]) * strides,
        outputs[..., 4:]
    ], dim=-1)
    return outputs

gives:

Screenshot from 2023-03-24 11-49-44

While this:

def decode_outputs(self, outputs, dtype):
    grids = []
    strides = []
    for (hsize, wsize), stride in zip(self.hw, self.strides):
        yv, xv = torch.meshgrid([torch.arange(hsize), torch.arange(wsize)])
        grid = torch.stack((xv, yv), 2).view(1, -1, 2)
        grids.append(grid)
        shape = grid.shape[:2]
        strides.append(torch.full((*shape, 1), stride))

    grids = torch.cat(grids, dim=1).type(dtype)
    strides = torch.cat(strides, dim=1).type(dtype)

    # These in-place slice assignments are what the ONNX exporter turns into
    # ScatterND nodes.
    outputs[..., :2] = (outputs[..., :2] + grids) * strides
    outputs[..., 2:4] = torch.exp(outputs[..., 2:4]) * strides
    return outputs

gives:

Screenshot from 2023-03-24 11-49-26

This as well as some other minor fixes make it possible to get rid of ScatterND completely.

At this point I have no idea beyond this comment about the quantization of Concat and what kind of quantization errors are actually happening inside… This Concat is not necessary by nature and has no benefit for the model quantization, so I think we don’t need to go any deeper into this.

All I can say at this point is that tensors with very different value ranges should not be concatenated, especially in post-processing of the model.

Thank you for doing the experiment and sharing your results!

Interesting. It actually made it worse…

| Model | input size | mAPval 0.5:0.95 | mAPval 0.5 | model size | calibration images |
|---|---|---|---|---|---|
| YOLOX-TI-nano TFLite XY, WH, PROBS OUTPUT | 416 | 0.242 | 0.408 | 2.4M | 8 |
| YOLOX-TI-nano SINGLE OUTPUT | 416 | 0.062 | 0.229 | 2.4M | 8 |
| YOLOX-TI-nano SINGLE OUTPUT (Clamped xywh) | 416 | 0.028 | 0.103 | 2.4M | 8 |

Yup, sorry @motokimura, that’s a typo. It is

outputs[:, :, 0:4] = outputs[:, :, 0:4] * 416

There is nothing left in the model to explain in more detail beyond Motoki’s explanation, but again, take a good look at the quantization parameters around the final output of the model. I think you can see why Concat is a bad idea.

All of them are quantized as 1.7974882125854492 * (q + 128).

The values diverge when inverse quantization (Dequantize) is performed.

onnx2tf -i yolox_nano_no_scatternd.onnx -oiqt -qt per-tensor

image image

Perhaps that is why TI used ScatterND.
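
For intuition, a rough back-of-the-envelope sketch of what a single per-tensor scale of ~1.797 (with the usual int8 zero point of -128) does to the concatenated output:

# Per-tensor affine quantization: real = scale * (q - zero_point)
scale, zero_point = 1.7974882125854492, -128

def quantize(x):
    return int(round(x / scale)) + zero_point

def dequantize(q):
    return scale * (q - zero_point)

# A box coordinate around 400 px survives reasonably well...
print(dequantize(quantize(400.0)))  # ~400.8, small relative error
# ...but sigmoid scores are destroyed: the representable values around (0, 1)
# are only 0.0 and ~1.8, so every score collapses onto one of those two.
print(dequantize(quantize(1.0)))    # ~1.8
print(dequantize(quantize(0.4)))    # 0.0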

I will close this issue now that the original problem has been solved and the INT8 quantization problem seems to have been resolved.

congratulations! 👍

It looks fine to me.

In/out quantization from top-left to bottom-right of the operations you pointed at:

quantization: -3.1056954860687256 ≤ 0.00014265520439948887 * q ≤ 4.674383163452148
quantization: -3.1056954860687256 ≤ 0.00014265520439948887 * q ≤ 4.674383163452148

quantization: -2.3114538192749023 ≤ 0.00010453650611452758 * q ≤ 3.4253478050231934
quantization: 0.00014265520439948887 * q

quantization: -2.2470905780792236 ≤ 0.00011867172725033015 * q ≤ 3.888516426086426
quantization: 0.00014265520439948887 * q

quantization: 0.00014265520439948887 * q
quantization: -3.1056954860687256 ≤ 0.00014265520439948887 * q ≤ 4.674383163452148

Output looks like this now:

Screenshot from 2023-03-29 13-42-03

Errors below 1e-4 can occur in almost any model due to differences in rounding, truncation, and rounding up criteria between ONNX’s internal processing and TensorFlow’s internal processing.

Got it!

Also, this tool does not have the ability to check INT8 accuracy, only Float32 accuracy. Therefore, it should be noted that whether or not Unmatched appears is the result of the precision check in Float32, regardless of whether the model was quantized to INT8 or not.

Good to know 😄

However, I am very concerned about the zero mAP in the last benchmark result. 👀

Will double check everything tomorrow just to make sure there are no errors on my side

I explained it in a very simplified manner because it would be very complicated to explain in detail. You need to understand how onnx2tf checks the final and intermediate outputs.

Once you understand the principles of the accuracy checker, you will realize that minor errors can always occur, even if the model transformation is perfectly normal.

  1. ONNX is NCHW and TensorFlow is NHWC.
  2. Therefore, the intermediate outputs of the model will always be inconsistent with the shape of the tensor.
  3. When comparing the output of ONNX and TensorFlow, the absolute error of the tensor is measured by forcing it to conform to the tensor shape of ONNX.
  4. Errors below 1e-4 can occur in almost any model due to differences in rounding, truncation, and rounding up criteria between ONNX’s internal processing and TensorFlow’s internal processing.
  5. Therefore, when comparing model accuracy, it is best to make sure that the final output shows Matches.
    • Final output
      INFO: onnx_output_name: output tf_output_name: tf.concat_17/concat:0 shape: (1, 3549, 85) dtype: float32 validate_result:  Matches 
      
  6. The reason why onnx2tf deliberately includes the ability to compare the errors of all operations is that onnx2tf sometimes makes mistakes in how it transposes from NCHW to NHWC. It is an auxiliary function to quickly find where unacceptable errors occur, so that errors in the tool’s conversion results can be caught by a final visual inspection.
  7. Also, this tool does not have the ability to check INT8 accuracy, only Float32 accuracy. Therefore, it should be noted that whether or not Unmatched appears is the result of the precision check in Float32, regardless of whether the model was quantized to INT8 or not.

However, I am very concerned about the zero mAP in the last benchmark result. 👀

image

Errors of less than 1e-3 hardly make any difference to the accuracy of the model. Errors introduced by Mul can be caused by slight differences in fraction handling between ONNX and TensorFlow. Ignoring it will only cause a difference that is not noticeable to the human eye.

I tried a complete model export (including --export-det) following @motokimura’s instructions. I am aware of the fact that the post-processing step induces large errors on INT quantized models, as shown here: https://github.com/PINTO0309/onnx2tf/issues/269#issuecomment-1484182307. Despite all this, I decided to check what performance I would get, as I want to do as little post-processing outside of the model as possible. These are my results:

| Model | input size | mAPval 0.5:0.95 | mAPval 0.5 | model size | xywh output | calibration images |
|---|---|---|---|---|---|---|
| YOLOX-TI-nano ONNX (original model) | 416 | 0.261 | 0.418 | 8.7M | [0, 416] | N/A |
| YOLOX-TI-nano ONNX (no ScatterND) | 416 | 0.261 | 0.418 | 8.7M | [0, 416] | N/A |
| YOLOX-nano TFLite FP32 | 416 | 0.261 | 0.418 | 8.7M | [0, 416] | N/A |
| YOLOX-nano TFLite FP16 | 416 | 0.261 | 0.418 | 4.4M | [0, 416] | N/A |
| YOLOX-nano TFLite full_integer_quant | 416 | 0 | 0 | 2.4M | [0, 1] | 0 |
| YOLOX-nano TFLite full_integer_quant | 416 | 0.039 | 0.115 | 2.4M | [0, 1] | 200 |
| YOLOX-nano TFLite full_integer_quant | 416 | 0.033 | 0.098 | 2.4M | [0, 1] | 600 |
| YOLOX-nano TFLite dynamic_range_quant | 416 | 0.259 | 0.416 | 2.4M | [0, 1] | 200 |
| YOLOX-nano TFLite dynamic_range_quant | 416 | 0.259 | 0.416 | 2.4M | [0, 1] | 600 |
| YOLOX-nano TFLite integer_quant | 416 | 0.039 | 0.115 | 2.4M | [0, 1] | 200 |
| YOLOX-nano TFLite integer_quant | 416 | 0.033 | 0.098 | 2.4M | [0, 1] | 600 |
| YOLOX-nano TFLite integer_quant | 416 | 0 | 0 | 2.4M | [0, 416] | 200 |

Sorry for all the experiment results I am dropping here. I hope they can help somebody going through a similar kind of process. Without --export-det I get the same results as @motokimura 😄

I’m going to share how I quantized the nano model tonight. I’ve not yet done qualitative evaluation of the quantized model, but the detection result looks OK.

I compiled the benchmark binary for android_arm64. The device has an Exynos 9810, which is ARM 64-bit. It contains a Mali-G72MP18 GPU. However, I am running the model without GPU accelerators, so the INT8 model must be running on the CPU. The CPU was released in 2018, so that may explain why the quantized model is that slow…

Cortex-A55 may be a bit of an old architecture. I am not very familiar with the details of the CPU architecture, but I think Cortex-A7x may have faster inference because of the implementation of faster operations with Neon instructions. Performance seems to vary considerably depending on whether Arm NN can be called from TFLite.

Apparently the benchmark binary can be run with the NNAPI delegate via --use_nnapi=true and with the GPU delegate via --use_gpu=true (source). This will give a better understanding of how this model actually performs with hardware accelerators. If anybody is interested I can upload those results as well 😄

  • Here is a video of me running an INT8 quantized SSD on a RaspberryPi4 CPU (Debian 64bit) alone in 2020. https://www.youtube.com/watch?v=bd3lTBAYIq4

  • RaspberryPi4 (CPU only) + Python3.7 + Tensorflow Lite + MobileNetV2-SSDLite + Sync + MP4 640x360

  • 15FPS (about 66ms/pred) image


I just cut the model at the point you suggested by:

onnx2tf -i /datadrive/mikel/yolox_tflite_export/yolox_nano.onnx -b 1 -cotof -cotoa 1e-1 -onimc /head/Concat_6_output_0

But I get the following error:

File "/datadrive/mikel/yolox_tflite_export/env/lib/python3.8/site-packages/onnx2tf/utils/common_functions.py", line 3071, in onnx_tf_tensor_validation
    onnx_tensor_shape = onnx_tensor.shape
AttributeError: 'NoneType' object has no attribute 'shape'

I couldn’t find a similar issue, and I had the same problem when I tried to cut YOLOX in our previous discussion. I probably misinterpreted how the tool is supposed to be used…

Anyway, I am using the official TFLite benchmark tool for the exported models, and on the specific Android device I am running this on, the Float32 model is much faster than the dynamically quantized one.

First, let me tell you that your results will vary greatly depending on the architecture of the CPU you are using for your verification. If you are using an Intel x64 (x86) or AMD x64 (x86) architecture CPU, the Float32 model should run inference about 10 times faster than the INT8 model; INT8 models are very slow on the x64 architecture. Perhaps on a Raspberry Pi’s ARM64 CPU with 4 threads it would be about 10 times faster. The keyword XNNPACK is a good way to search for more information. In the case of Intel’s x64 architecture, CPUs of the 10th generation or later differ from CPUs of the 9th generation or earlier in the presence or absence of an optimization mechanism for integer processing. If you are using a 10th generation or later CPU, it should run about 20% faster.

Therefore, when benchmarking using benchmarking tools, it is recommended to try to do so on ARM64 devices.

The benchmarking in the discussion on the ultralytics thread is not appropriate.

Next, let’s look at dynamic range quantization. My tool does per-channel quantization by default; this is due to the TFLiteConverter specification. Per-channel quantization calculates the quantization range for each channel of the tensor, which reduces the accuracy degradation but, at the same time, increases the cost of calculating the quantization range, which slows down inference a little. Also, most current edge devices are not optimized for per-channel quantization; for example, the EdgeTPU only supports per-tensor quantization. Therefore, if quantization is to be performed with the assumption that the model will be put to practical use in the future, it is recommended that per-tensor quantization be performed during the transformation as follows.

onnx2tf -i xxxx.onnx -oiqt -qt per-tensor
  • per-channel quant image
  • per-tensor quant image
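
One way to see the difference is to inspect the tensor quantization parameters of the generated .tflite files (a sketch using the standard TFLite Python API; the file name is a placeholder). Per-channel tensors carry an array of scales, per-tensor ones a single scale:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolox_nano_full_integer_quant.tflite")  # hypothetical name
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    scales = detail["quantization_parameters"]["scales"]
    zero_points = detail["quantization_parameters"]["zero_points"]
    if scales.size > 0:
        kind = "per-channel" if scales.size > 1 else "per-tensor"
        print(f'{detail["name"]}: {kind}, scales={scales[:3]}, zero_points={zero_points[:3]}')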

Next, we discuss the post-quantization accuracy degradation. I think Motoki’s point is mostly correct. I think you should first try to split the model at the red line and see how the accuracy changes.

image

If the Sigmoid in this position does not affect the accuracy, it should work. It is better to think about complex problems by breaking them down into smaller problems without being too hasty.

image

hmm… As PINTO pointed out, it may be better to compare int8 and float model activations before the decoder part.

https://github.com/PINTO0309/onnx2tf/issues/269#issuecomment-1482738822

It may be helpful to export the ONNX model without the --export-det option and compare the int8 and float outputs.

Feel free to play around with it

yolox_nano_no_scatternd.zip

😄