mace: Got wrong output from Softmax layer after upgrading to MACE v0.11.0-rc0

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): registry.cn-hangzhou.aliyuncs.com/xiaomimace/mace-dev-lite:latest
  • NDK version(e.g., 15c): 18b
  • GCC version(if compiling for host, e.g., 5.4.0):
  • MACE version (Use the command: git describe --long --tags): v0.11.0-rc0
  • Python version(2.7): 2.7
  • Bazel version (e.g., 0.13.0):

Model deploy file (*.yml)

library_name: mylib
target_abis: [arm64-v8a]
model_graph_format: code
model_data_format: code
models:
  mobilenet_v1:
    platform: tensorflow
    model_file_path: https://cnbj1.fds.api.xiaomi.com/mace/miai-models/mobilenet-v1/mobilenet-v1-1.0.pb
    model_sha256_checksum: 71b10f540ece33c49a7b51f5d4095fc9bd78ce46ebf0300487b2ee23d71294e6
    subgraphs:
      - input_tensors:
          - input
        input_shapes:
          - 1,224,224,3
        output_tensors:
          - MobilenetV1/Predictions/Reshape_1
        output_shapes:
          - 1,1001
    runtime: cpu+gpu
    limit_opencl_kernel_time: 0
    obfuscate: 0
    winograd: 0
  mobilenet_v2:
    platform: tensorflow
    model_file_path: https://cnbj1.fds.api.xiaomi.com/mace/miai-models/mobilenet-v2/mobilenet-v2-1.0.pb
    model_sha256_checksum: 369f9a5f38f3c15b4311c1c84c032ce868da9f371b5f78c13d3ea3c537389bb4
    subgraphs:
      - input_tensors:
          - input
        input_shapes:
          - 1,224,224,3
        output_tensors:
          - MobilenetV2/Predictions/Reshape_1
        output_shapes:
          - 1,1001
    runtime: cpu+gpu
    limit_opencl_kernel_time: 0
    obfuscate: 0
    winograd: 0
  pad:
    platform: tensorflow
    model_file_path: /models/pad/frozen_model.pb
    model_sha256_checksum: 0a59298fce4890d26480f8378547e0bd06fcaaab7eeb96a183307ac6616a4c27
    subgraphs:
      - input_tensors:
          - input_1_2
        input_shapes:
          - 1,224,224,3
        output_tensors:
          - lambda_1_1/Softmax:0
        output_shapes:
          - 1,1,1,2
    obfuscate: 0
    runtime: cpu+gpu
    winograd: 0
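
For reference, the model_sha256_checksum values above can be regenerated locally. A minimal Python sketch using only the standard hashlib module (the path shown is the local 'pad' model from the deploy file):

import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    # Stream the file so large .pb models do not have to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of_file("/models/pad/frozen_model.pb"))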

Describe the problem

My 'pad' model is a modified MobilenetV1. I installed the compiled 'pad' model in the example Android application and found that the output is a constant array of [1.0, 1.0], which is impossible for a softmax: it should contain the probabilities of class0 and class1, which must sum to 1. During model conversion I noticed that 4 extra layers were added before the softmax layer compared to the conversion in MACE v0.10.0. The model works fine if I use MACE v0.10.0.
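
For context, a softmax over two classes always sums to 1, so a constant [1.0, 1.0] cannot be a valid softmax output. A minimal numpy sketch of the expected behavior (the logits value is just a placeholder, not taken from the model):

import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=-1, keepdims=True)

logits = np.array([[[[2.0, -1.0]]]])   # placeholder for the [1, 1, 1, 2] pre-softmax output
probs = softmax(logits)
print(probs)                # e.g. [[[[0.9526, 0.0474]]]]
print(probs.sum(axis=-1))   # always 1.0, so a constant [1.0, 1.0] is impossible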

To Reproduce

Steps to reproduce the problem:

1. cd /mace/mace/example/android
2. ./build.sh static 

Error information / logs

Here are the final ops from the 'pad' model conversion in MACE v0.11.0-rc0:

Final ops:
conv1_pad_2/Pad (Pad, index:0): [[1, 225, 225, 3]]
conv1_relu_2/Relu6 (Conv2D, index:1): [[1, 112, 112, 32]]
conv_dw_1_relu_2/Relu6 (DepthwiseConv2d, index:2): [[1, 112, 112, 32]]
conv_pw_1_relu_2/Relu6 (Conv2D, index:3): [[1, 112, 112, 64]]
conv_pad_2_2/Pad (Pad, index:4): [[1, 113, 113, 64]]
conv_dw_2_relu_2/Relu6 (DepthwiseConv2d, index:5): [[1, 56, 56, 64]]
conv_pw_2_relu_2/Relu6 (Conv2D, index:6): [[1, 56, 56, 128]]
conv_dw_3_relu_2/Relu6 (DepthwiseConv2d, index:7): [[1, 56, 56, 128]]
conv_pw_3_relu_2/Relu6 (Conv2D, index:8): [[1, 56, 56, 128]]
conv_pad_4_2/Pad (Pad, index:9): [[1, 57, 57, 128]]
conv_dw_4_relu_2/Relu6 (DepthwiseConv2d, index:10): [[1, 28, 28, 128]]
conv_pw_4_relu_2/Relu6 (Conv2D, index:11): [[1, 28, 28, 256]]
conv_dw_5_relu_2/Relu6 (DepthwiseConv2d, index:12): [[1, 28, 28, 256]]
conv_pw_5_relu_2/Relu6 (Conv2D, index:13): [[1, 28, 28, 256]]
conv_pad_6_2/Pad (Pad, index:14): [[1, 29, 29, 256]]
conv_dw_6_relu_2/Relu6 (DepthwiseConv2d, index:15): [[1, 14, 14, 256]]
conv_pw_6_relu_2/Relu6 (Conv2D, index:16): [[1, 14, 14, 512]]
conv_dw_7_relu_2/Relu6 (DepthwiseConv2d, index:17): [[1, 14, 14, 512]]
conv_pw_7_relu_2/Relu6 (Conv2D, index:18): [[1, 14, 14, 512]]
conv_dw_8_relu_2/Relu6 (DepthwiseConv2d, index:19): [[1, 14, 14, 512]]
conv_pw_8_relu_2/Relu6 (Conv2D, index:20): [[1, 14, 14, 512]]
conv_dw_9_relu_2/Relu6 (DepthwiseConv2d, index:21): [[1, 14, 14, 512]]
conv_pw_9_relu_2/Relu6 (Conv2D, index:22): [[1, 14, 14, 512]]
conv_dw_10_relu_2/Relu6 (DepthwiseConv2d, index:23): [[1, 14, 14, 512]]
conv_pw_10_relu_2/Relu6 (Conv2D, index:24): [[1, 14, 14, 512]]
conv_dw_11_relu_2/Relu6 (DepthwiseConv2d, index:25): [[1, 14, 14, 512]]
conv_pw_11_relu_2/Relu6 (Conv2D, index:26): [[1, 14, 14, 512]]
conv_pad_12_2/Pad (Pad, index:27): [[1, 15, 15, 512]]
conv_dw_12_relu_2/Relu6 (DepthwiseConv2d, index:28): [[1, 7, 7, 512]]
conv_pw_12_relu_2/Relu6 (Conv2D, index:29): [[1, 7, 7, 1024]]
conv_dw_13_relu_2/Relu6 (DepthwiseConv2d, index:30): [[1, 7, 7, 1024]]
conv_pw_13_relu_2/Relu6 (Conv2D, index:31): [[1, 7, 7, 1024]]
max_pooling2d_1_1/MaxPool (Pooling, index:32): [[1, 1, 1, 1024]]
dense_1_2/MatMul (FullyConnected, index:33): [[1, 2]]
dense_1_2/Shape (Shape, index:34): [[4]]
dense_1_2/unstack (Unstack, index:35): [[], [], [], []]
dense_1_2/Reshape_2/shape (Stack, index:36): [[4]]
dense_1_2/Reshape_2 (Reshape, index:37): [[1, 1, 1, 2]]
lambda_1_1/Softmax (Softmax, index:38): [[1, 1, 1, 2]]

Here are the final ops from the 'pad' model conversion in MACE v0.10.0:

Final ops:
conv1_pad_2/Pad (Pad): [[1L, 225L, 225L, 3L]]
conv1_relu_2/Relu6 (Conv2D): [[1L, 112L, 112L, 32L]]
conv_dw_1_relu_2/Relu6 (DepthwiseConv2d): [[1L, 112L, 112L, 32L]]
conv_pw_1_relu_2/Relu6 (Conv2D): [[1L, 112L, 112L, 64L]]
conv_pad_2_2/Pad (Pad): [[1L, 113L, 113L, 64L]]
conv_dw_2_relu_2/Relu6 (DepthwiseConv2d): [[1L, 56L, 56L, 64L]]
conv_pw_2_relu_2/Relu6 (Conv2D): [[1L, 56L, 56L, 128L]]
conv_dw_3_relu_2/Relu6 (DepthwiseConv2d): [[1L, 56L, 56L, 128L]]
conv_pw_3_relu_2/Relu6 (Conv2D): [[1L, 56L, 56L, 128L]]
conv_pad_4_2/Pad (Pad): [[1L, 57L, 57L, 128L]]
conv_dw_4_relu_2/Relu6 (DepthwiseConv2d): [[1L, 28L, 28L, 128L]]
conv_pw_4_relu_2/Relu6 (Conv2D): [[1L, 28L, 28L, 256L]]
conv_dw_5_relu_2/Relu6 (DepthwiseConv2d): [[1L, 28L, 28L, 256L]]
conv_pw_5_relu_2/Relu6 (Conv2D): [[1L, 28L, 28L, 256L]]
conv_pad_6_2/Pad (Pad): [[1L, 29L, 29L, 256L]]
conv_dw_6_relu_2/Relu6 (DepthwiseConv2d): [[1L, 14L, 14L, 256L]]
conv_pw_6_relu_2/Relu6 (Conv2D): [[1L, 14L, 14L, 512L]]
conv_dw_7_relu_2/Relu6 (DepthwiseConv2d): [[1L, 14L, 14L, 512L]]
conv_pw_7_relu_2/Relu6 (Conv2D): [[1L, 14L, 14L, 512L]]
conv_dw_8_relu_2/Relu6 (DepthwiseConv2d): [[1L, 14L, 14L, 512L]]
conv_pw_8_relu_2/Relu6 (Conv2D): [[1L, 14L, 14L, 512L]]
conv_dw_9_relu_2/Relu6 (DepthwiseConv2d): [[1L, 14L, 14L, 512L]]
conv_pw_9_relu_2/Relu6 (Conv2D): [[1L, 14L, 14L, 512L]]
conv_dw_10_relu_2/Relu6 (DepthwiseConv2d): [[1L, 14L, 14L, 512L]]
conv_pw_10_relu_2/Relu6 (Conv2D): [[1L, 14L, 14L, 512L]]
conv_dw_11_relu_2/Relu6 (DepthwiseConv2d): [[1L, 14L, 14L, 512L]]
conv_pw_11_relu_2/Relu6 (Conv2D): [[1L, 14L, 14L, 512L]]
conv_pad_12_2/Pad (Pad): [[1L, 15L, 15L, 512L]]
conv_dw_12_relu_2/Relu6 (DepthwiseConv2d): [[1L, 7L, 7L, 512L]]
conv_pw_12_relu_2/Relu6 (Conv2D): [[1L, 7L, 7L, 1024L]]
conv_dw_13_relu_2/Relu6 (DepthwiseConv2d): [[1L, 7L, 7L, 1024L]]
conv_pw_13_relu_2/Relu6 (Conv2D): [[1L, 7L, 7L, 1024L]]
max_pooling2d_1_1/MaxPool (Pooling): [[1L, 1L, 1L, 1024L]]
dense_1_2/MatMul (FullyConnected): [[1L, 2L]]
lambda_1_1/Softmax (Softmax): [[1L, 1L, 1L, 2L]]
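
The four extra ops in the v0.11.0-rc0 graph (dense_1_2/Shape, dense_1_2/unstack, dense_1_2/Reshape_2/shape, dense_1_2/Reshape_2) look like the Shape -> Unstack -> Stack -> Reshape pattern TensorFlow emits when a reshape target is built from the runtime shape; v0.10.0 apparently folded these away, while v0.11.0-rc0 keeps them in front of the Softmax. Below is only a guess at the kind of TF 1.x code that produces this op sequence; the names and shapes are placeholders, not the actual 'pad' model code:

import tensorflow as tf

x = tf.placeholder(tf.float32, [1, 1, 1, 1024], name="pooled")  # pooled feature map (placeholder)
weights = tf.get_variable("weights", [1024, 2])

shape = tf.shape(x)                   # -> Shape op
n, h, w, c = tf.unstack(shape)        # -> Unstack op
flat = tf.reshape(x, [-1, 1024])
logits = tf.matmul(flat, weights)     # -> FullyConnected / MatMul
out = tf.reshape(logits, tf.stack([n, h, w, 2]))  # -> Stack + Reshape back to [1, 1, 1, 2]
probs = tf.nn.softmax(out)            # -> Softmax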

Additional context

For MobilenetV1 and MobilenetV2, the conversion output is identical between MACE v0.10.0 and v0.11.0-rc0; only the 'pad' model is affected.

Most upvoted comments

@mexeniz, I have identified this problem with @liyinhgqw; it is a bug in the latest release. We will fix it in the next few days, thanks for your feedback.

@mexeniz Thanks, I got it, please give me some time to resolve it.