tensorflow: [Lite] tf.strided_slice sometimes computes wrong indices
System information
- Have I written custom code: yes
- OS Platform and Distribution: Ubuntu 16.04.2 LTS
- TensorFlow installed from: source
- TensorFlow version: v1.8.0-1520-g1f03f82 1.8.0
- Python version: 3.5.2
- Bazel version: 0.13.0
Problem
I tried using the TOCO tool on a graph that contains a strided_slice op.
The code that determines the fixed output size of this op fails an assertion and aborts with an error (see below).
Logs
2018-05-14 00:42:56.816500: F tensorflow/contrib/lite/toco/graph_transformations/propagate_fixed_sizes.cc:1305] Check failed: dim_size > 0 (-1 vs. 0) Output size for an axis must be greater than 0. Axis 0 computes to size -1 for StridedSlice op with output "stft/frame/strided_slice".
Minimal Reproducible Example
Source files (set TF_ROOT in freeze and toco):
- ./mre.py
- ./freeze
- ./toco
This produces a directory "model" with the graph, weights, and frozen graph. The offending error is thrown by the last step.
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 31 (27 by maintainers)
Commits related to this issue
- Ignore stop indices when shrink_axis_mask is set in tf.lite StridedSlice implementation. Due to an issue with negative StridedSlice indices in TensorFlow, the end indices can specify degenerate slice... — committed to case540/tensorflow by rryan 6 years ago
- Ignore stop indices when shrink_axis_mask is set in tf.lite StridedSlice implementation. Due to an issue with negative StridedSlice indices in TensorFlow, the end indices can specify degenerate slice... — committed to tensorflow/tensorflow by rryan 6 years ago
Hm, you always have to compute both the real and imaginary parts of the DFT in order to get the magnitude so I think even if there are no complex-valued tensors the op would be calculating it internally. You can factor the calculation into a real and imaginary part to avoid needing a complex type in tf.lite itself, but this will probably be less efficient than computing them jointly because the real and imaginary parts share the same memory access patterns when they’re being computed.
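To make the factoring idea above concrete, here is a rough numpy sketch (hypothetical illustration, not tf.lite code) of computing a magnitude spectrum from separately computed real and imaginary parts, so no complex dtype is ever materialized:

```python
import numpy as np

def rdft_magnitude(frame):
    """Magnitude spectrum of a real frame, with the DFT factored into
    explicit real and imaginary parts (no complex tensors needed)."""
    n = len(frame)
    k = np.arange(n // 2 + 1)[:, None]   # output frequency bins
    t = np.arange(n)[None, :]            # input time indices
    angle = 2.0 * np.pi * k * t / n
    real = (frame * np.cos(angle)).sum(axis=1)   # Re{X[k]}
    imag = -(frame * np.sin(angle)).sum(axis=1)  # Im{X[k]}
    return np.sqrt(real**2 + imag**2)

frame = np.random.default_rng(0).standard_normal(64)
mag = rdft_magnitude(frame)
```

Note that real and imag above walk the same memory in lockstep, which is the access-pattern sharing mentioned: computing them jointly amortizes that traffic, while splitting them into separate ops would not.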
One thing that might help is doing a fixed-point FFT instead of floating point. I haven’t thought much about how to support that with TensorFlow’s RFFT op, but it should be doable.

If you’re building a mobile algorithm that operates in a streaming fashion, you probably want to pass frames of audio into your tf.lite model one at a time. In this situation, tf.contrib.signal.stft isn’t going to be appropriate, because it will frame the audio you pass in for you, and it doesn’t support being run in a stateful manner where you feed it chunks of audio at a time. In that scenario, you may want to fall back on tf.contrib.signal.hann_window (or any window) and tf.spectral.rfft to window and compute the RFFT of each incoming frame.

@mjmatthews
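A minimal per-frame sketch of that fallback, with numpy standing in for tf.contrib.signal.hann_window and tf.spectral.rfft (the frame length and function names here are hypothetical, just to show the streaming shape of the computation):

```python
import numpy as np

FRAME_LEN = 512  # assumed frame size fed in per streaming step

def process_frame(frame, window=np.hanning(FRAME_LEN)):
    """One streaming step: window the incoming frame yourself, then take
    its real FFT -- instead of letting stft frame the whole signal."""
    return np.fft.rfft(frame * window)

spectrum = process_frame(np.random.default_rng(0).standard_normal(FRAME_LEN))
```

The design point is that framing state (overlap, hop) lives in the caller's streaming loop, not inside the graph, so the model itself stays stateless.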
Yea, the end is just start + 1 in this case, so it makes sense the interval is not length 1. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/array_ops.py#L488

Which violates the requirements of shrink_axis_mask:
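A tiny pure-Python sketch (ordinary list indexing standing in for strided_slice semantics) of why end = start + 1 degenerates when start is negative:

```python
x = list(range(10))
start = -1
end = start + 1            # == 0

empty = x[start:end]       # the interval [-1, 0) is empty, not length 1
wanted = x[start:][:1]     # the single element shrink_axis_mask intends
```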
There is a workaround implemented in C++ here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/util/strided_slice_op.cc#L290-L300
It’s unfortunate, but tf.lite should probably follow suit, right?
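The gist of that C++ workaround, sketched as a hypothetical Python helper (not the actual tf.lite code): when shrink_axis_mask is set for an axis, ignore the supplied stop entirely, canonicalize the negative start, and slice exactly one element with stop = start + 1:

```python
def shrink_axis_slice(x, start):
    """Sketch of the strided_slice_op.cc workaround: with shrink_axis_mask
    set, the stop index is ignored; start is normalized to a non-negative
    index first, then the slice is forced to length 1."""
    n = len(x)
    if start < 0:
        start += n              # canonicalize before computing stop
    return x[start:start + 1][0]
```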
/cc: @aselle
It confused me too, but I think it’s ok:
tf.stack(x, axis=0)       -> [len(x), d0, ..., dn]
tf.stack(x, axis=1)       -> [d0, len(x), ..., dn]
tf.stack(x, axis=rank(x)) -> [d0, ..., dn, len(x)]
tf.stack(x, axis=-1)      -> [d0, ..., dn, len(x)]
So for axis=-1, the code computes axis = rank(shape) + axis + 1 -> rank(shape) + -1 + 1 -> rank(shape), which is the same as the tf.stack(x, axis=rank(x)) case.
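That normalization can be checked with a small sketch (hypothetical helper; symbolic dimension names used in place of real tensor shapes). Because stacking adds one dimension, a negative axis is offset by rank + 1, not rank:

```python
def stack_output_shape(shape, axis):
    """Output shape of stacking len(x) tensors of the given shape: the
    result has one extra dimension, so negative axes normalize with
    rank(shape) + axis + 1."""
    rank = len(shape)
    if axis < 0:
        axis = rank + axis + 1
    return shape[:axis] + ('len(x)',) + shape[axis:]
```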