onnx2keras: KeyError: 'min'

KeyError                                  Traceback (most recent call last)
<ipython-input-22-6be9743dbc2b> in <module>
      1 from onnx2keras import onnx_to_keras
      2 model=onnx.load("optimized_mobile_pydnet.onnx")
----> 3 k_model = onnx_to_keras(onnx_model=model, input_names=['input'])

~/anaconda3/envs/e2r/lib/python3.7/site-packages/onnx2keras/converter.py in onnx_to_keras(onnx_model, input_names, input_shapes, name_policy, verbose, change_ordering)
    179             lambda_funcs,
    180             node_name,
--> 181             keras_names
    182         )
    183         if isinstance(keras_names, list):

~/anaconda3/envs/e2r/lib/python3.7/site-packages/onnx2keras/operation_layers.py in convert_clip(node, params, layers, lambda_func, node_name, keras_name)
     29     input_0 = ensure_tf_type(layers[node.input[0]], name="%s_const" % keras_name)
     30 
---> 31     if params['min'] == 0:
     32         logger.debug("Using ReLU({0}) instead of clip".format(params['max']))
     33         layer = keras.layers.ReLU(max_value=params['max'], name=keras_name)

KeyError: 'min'

Most upvoted comments

I have come here to rescue you guys. This bug is caused by a version conflict between the onnx you have installed and the onnx that torch used to export the model. By inspecting the ONNX file, you can find that the key in your model does not match the newest onnx ops converter. The correct dict is currently:

AVAILABLE_CONVERTERS = {
    'Conv': convert_conv,
    'ConvTranspose': convert_convtranspose,
    'Relu': convert_relu,
    'Elu': convert_elu,
    'LeakyRelu': convert_lrelu,
    'Sigmoid': convert_sigmoid,
    'Tanh': convert_tanh,
    'Selu': convert_selu,
    'Clip': convert_clip,
    'Exp': convert_exp,
    'Log': convert_log,
    'Softmax': convert_softmax,
    'PRelu': convert_prelu,
    'ReduceMax': convert_reduce_max,
    'ReduceSum': convert_reduce_sum,
    'ReduceMean': convert_reduce_mean,
    'Pow': convert_pow,
    'Slice': convert_slice,
    'Squeeze': convert_squeeze,
    'Expand': convert_expand,
    'Sqrt': convert_sqrt,
    'Split': convert_split,
    'Cast': convert_cast,
    'Floor': convert_floor,
    'Identity': convert_identity,
    'ArgMax': convert_argmax,
    'ReduceL2': convert_reduce_l2,
    'Max': convert_max,
    'Min': convert_min,
    'Mean': convert_mean,
    'Div': convert_elementwise_div,
    'Add': convert_elementwise_add,
    'Sum': convert_elementwise_add,
    'Mul': convert_elementwise_mul,
    'Sub': convert_elementwise_sub,
    'Gemm': convert_gemm,
    'MatMul': convert_gemm,
    'Transpose': convert_transpose,
    'Constant': convert_constant,
    'BatchNormalization': convert_batchnorm,
    'InstanceNormalization': convert_instancenorm,
    'Dropout': convert_dropout,
    'LRN': convert_lrn,
    'MaxPool': convert_maxpool,
    'AveragePool': convert_avgpool,
    'GlobalAveragePool': convert_global_avg_pool,
    'Shape': convert_shape,
    'Gather': convert_gather,
    'Unsqueeze': convert_unsqueeze,
    'Concat': convert_concat,
    'Reshape': convert_reshape,
    'Pad': convert_padding,
    'Flatten': convert_flatten,
    'Upsample': convert_upsample,
}

Therefore, you can edit that line to use the correct node_type from the dict (i.e. when it reports min/Resize, map it to Min/Upsample). This can easily be done by editing the source file of onnx2keras (or patched at runtime, see the sketch below). It may also require changing the node_params, since Upsample expects the size parameter as scales. For details, please look here.
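If you would rather not edit the installed package, a runtime monkeypatch achieves the same mapping. This is only a minimal sketch: the module paths onnx2keras.layers and onnx2keras.upsampling_layers are assumptions based on a typical onnx2keras layout, so check your site-packages if the imports fail, and dispatching Resize to convert_upsample only makes sense when the Resize node does a plain scale-factor upsample.

import onnx
from onnx2keras import onnx_to_keras
import onnx2keras.layers as o2k_layers                      # module assumed to hold AVAILABLE_CONVERTERS
from onnx2keras.upsampling_layers import convert_upsample   # assumed location of the Upsample converter

# Dispatch the newer 'Resize' node type to the old Upsample converter,
# so the converter lookup no longer raises KeyError: 'Resize'.
o2k_layers.AVAILABLE_CONVERTERS['Resize'] = convert_upsample

model = onnx.load("optimized_mobile_pydnet.onnx")
k_model = onnx_to_keras(onnx_model=model, input_names=['input'])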

I was able to solve the problem. convert_clip() expects a key min inside params, but in current ONNX the min and max values are passed as inputs (not as attributes).

So we can populate params["min"] and params["max"] ourselves before they are read.

  1. Open the operation_layers.py file. (It may be located at .../envs/.../lib/python3.9/site-packages/onnx2keras/operation_layers.py, or use the VS Code navigator to find it.)
  2. Add the following lines at the beginning of the convert_clip() method:
def convert_clip(node, params, layers, lambda_func, node_name, keras_name):
    if len(node.input) == 3:
        # In newer opsets, min/max arrive as the 2nd and 3rd inputs instead of attributes,
        # so copy them into params where the rest of convert_clip expects them.
        params["min"] = ensure_numpy_type(layers[node.input[1]]).astype(int)
        params["max"] = ensure_numpy_type(layers[node.input[2]]).astype(int)
    else:
        # You can raise an exception here to make sure the assignments above always happen.
        pass

For the Clip operator, it seems that this code supports ONNX operator sets <= 6, where min and max are attributes. However, for operator sets >= 11, min and max are passed as inputs, which causes the KeyError: 'min' above.
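If you control the export step, another way around this is to export with an older opset so Clip keeps its attribute form. A minimal sketch, assuming the model comes from PyTorch; pytorch_model and the dummy input shape are placeholders:

import torch

dummy_input = torch.randn(1, 3, 448, 640)       # placeholder shape, adjust to your network
torch.onnx.export(
    pytorch_model,                              # your torch.nn.Module (placeholder name)
    dummy_input,
    "optimized_mobile_pydnet.onnx",
    input_names=["input"],
    opset_version=10,                           # opset <= 10 still exports Clip with min/max attributes
)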

Also KeyError: 'Resize'

For people with the resize problem, changing this line

from scale = np.uint8(layers[node.input[1]][-2:]) to scale = np.uint8(layers[node.input[-1]][-2:])

solved it for me.

Generally speaking, here is how you might solve this problem for other operators: visualise your ONNX model in Netron to get the node number of the parameters the layer needs, then step through the code with PyCharm (or another debugger) to see how you can use that information. I am not an expert on ONNX, but apparently parameters for certain layers are also stored as nodes, as in the Resize case, where the upscale factors the layer needs are passed that way. A small inspection script is sketched below.
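If you do not want to open a graphical tool, the onnx package itself can list each node's type, inputs and attributes, which is usually enough to tell whether a parameter arrives as an attribute or as an extra input. A minimal sketch (the file name matches the example above):

import onnx

model = onnx.load("optimized_mobile_pydnet.onnx")
print("opset:", [(imp.domain, imp.version) for imp in model.opset_import])

for node in model.graph.node:
    # For ops such as Clip or Resize, values that used to be attributes
    # show up here as additional entries in node.input.
    attr_names = [attr.name for attr in node.attribute]
    print(node.op_type, "| inputs:", list(node.input), "| attributes:", attr_names)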