tensorflow: quantization deeplabv3 (mobilenetv2) error

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 1.9.0
  • Python version: 2.7.3
  • Bazel version (if compiling from source): 0.12.0
  • GCC/Compiler version (if compiling from source): c++11
  • CUDA/cuDNN version: 7.5.18
  • GPU model and memory: TITAN, 12GB
  • Exact command to reproduce:N/A

Describe the problem

I want to train a quantized deeplabv3+ (mobilenetv2) model, starting from the "mobilenetv2_coco_voc_trainaug" checkpoint in https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md.

Source code / logs

I added tf.contrib.quantize.create_training_graph(quant_delay=0) at line 315 of https://github.com/tensorflow/models/blob/master/research/deeplab/train.py (a sketch of the change follows the traceback), but I get the error below:

INFO:tensorflow:Training on train set
Traceback (most recent call last):
  File "deeplab/train.py", line 359, in <module>
    tf.app.run()
  File "/home/liufang/deeplab_venv/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "deeplab/train.py", line 281, in main
    tf.contrib.quantize.create_training_graph(quant_delay=0)
  File "/home/liufang/deeplab_venv/local/lib/python2.7/site-packages/tensorflow/contrib/quantize/python/quantize_graph.py", line 112, in create_training_graph
    freeze_bn_delay=freeze_bn_delay)
  File "/home/liufang/deeplab_venv/local/lib/python2.7/site-packages/tensorflow/contrib/quantize/python/quantize_graph.py", line 66, in _create_graph
    is_training=is_training)
  File "/home/liufang/deeplab_venv/local/lib/python2.7/site-packages/tensorflow/contrib/quantize/python/fold_batch_norms.py", line 54, in FoldBatchNorms
    graph, is_training, freeze_batch_norm_delay=freeze_batch_norm_delay)
  File "/home/liufang/deeplab_venv/local/lib/python2.7/site-packages/tensorflow/contrib/quantize/python/fold_batch_norms.py", line 100, in _FoldFusedBatchNorms
    fused_batch_norm=True))
  File "/home/liufang/deeplab_venv/local/lib/python2.7/site-packages/tensorflow/contrib/quantize/python/fold_batch_norms.py", line 323, in _ComputeBatchNormCorrections
    match.moving_variance_tensor + match.batch_epsilon)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'float'
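
For reference, here is a minimal sketch of the modification, assuming the rewrite happens after the training graph is built and before the train op is created (the exact insertion point in train.py varies between versions):

```python
# Sketch only: enable quantization-aware training on an existing TF 1.x graph.
# Assumes tf.contrib is available; quant_delay=0 starts fake quantization
# immediately instead of after a warm-up phase.
import tensorflow as tf

# ... build the model's training graph first, then rewrite it in place:
tf.contrib.quantize.create_training_graph(
    input_graph=tf.get_default_graph(), quant_delay=0)
# ... afterwards create the optimizer / train op as usual.
```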

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 3
  • Comments: 30 (2 by maintainers)

Most upvoted comments

I was able to solve this issue by upgrading TensorFlow from 1.15 to 1.15.3.

I had encountered the same error too. I resolved it by replacing tf.layers.BatchNormalization with tf.contrib.slim.batch_norm, so that the graph's structure matches what _FindFusedBatchNorms is looking for.

PS: _FindFusedBatchNorms is located in tensorflow\contrib\quantize\python\fold_batch_norms.py
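
A minimal sketch of that swap, assuming TF 1.x slim (the layer shapes and names below are placeholders):

```python
# Sketch only: build conv + batch norm through slim instead of
# tf.layers.BatchNormalization, so the resulting ops follow the layout
# the quantize rewriter's pattern matcher expects.
import tensorflow as tf
slim = tf.contrib.slim

def conv_bn_relu(inputs, num_outputs, is_training):
  with slim.arg_scope([slim.batch_norm],
                      is_training=is_training, fused=True, scale=True):
    # normalizer_fn attaches batch norm directly after the conv, giving the
    # Conv2D -> FusedBatchNorm pattern that _FindFusedBatchNorms searches for.
    return slim.conv2d(inputs, num_outputs, [3, 3],
                       normalizer_fn=slim.batch_norm,
                       activation_fn=tf.nn.relu)
```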

Hi, I also have the same issue here. I used the MobileNet v2 Colab example and tried to quantize the model with create_training_graph, and I got the same error. Are there any suggestions on this issue? Thanks!

@saeed68gm I found a solution: use "--fine_tune_batch_norm=true" instead of "--fine_tune_batch_norm=false". The error will disappear.
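
A plausible reason, though this is my assumption rather than something confirmed in the thread: fine_tune_batch_norm feeds batch norm's is_training, and in inference mode FusedBatchNorm consumes precomputed moments, so the rewriter binds no moving-variance tensor and _ComputeBatchNormCorrections sees None. A sketch of the knob (build_backbone is hypothetical):

```python
# Sketch only (assumed mechanism): fine_tune_batch_norm toggles is_training
# on slim.batch_norm. With is_training=True the graph keeps the training-mode
# FusedBatchNorm pattern that the quantize rewriter can fold.
import tensorflow as tf
slim = tf.contrib.slim

def build_backbone(images, fine_tune_batch_norm):
  with slim.arg_scope([slim.batch_norm],
                      is_training=fine_tune_batch_norm,  # true avoids the error
                      fused=True):
    return slim.conv2d(images, 32, [3, 3], normalizer_fn=slim.batch_norm)
```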

@saeed68gm I will also check slim's mobilenetv2.py. If you make any progress, please comment here so we can discuss the problem. Thank you very much!

@saeed68gm I’m getting this error with my own project, but the issue is the same. You can look into tensorflow\contrib\quantize\python\fold_batch_norms.py to compare the graph structure graph_matcher is looking for against what slim's mobilenetv2.py created. I took a quick glance at the mobilenetv2.py graph but didn't see anything wrong with it. One way to do that comparison is sketched below.
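
A hypothetical debugging helper (not from the thread) that dumps each FusedBatchNorm op and its inputs so they can be checked against the pattern in _FindFusedBatchNorms:

```python
# Hypothetical helper: print every FusedBatchNorm op and its input tensors so
# the real graph can be compared with the matcher in fold_batch_norms.py.
import tensorflow as tf

def dump_fused_batch_norms(graph=None):
  graph = graph or tf.get_default_graph()
  for op in graph.get_operations():
    if op.type.startswith('FusedBatchNorm'):
      print(op.name)
      for i, tensor in enumerate(op.inputs):
        print('  input[%d]: %s' % (i, tensor.name))
```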

@suharshs But I see tf.contrib.slim.batch_norm used in slim's mobilenetv2.py, and I still hit this issue.

Indeed it seems that tf.layers.BatchNormalization isn’t getting matched 😦 We will take a look! Thanks!