scipy: Hilbert transforms are extremely slow on odd-length signals
Starting at input array lengths in the tens of thousands, scipy.signal.hilbert is orders of magnitude slower on odd-length arrays than on even-length ones. The problem is especially bad when the length is far from a power of two. To be clear: it's not that the Hilbert transform is slow for all lengths far from a power of 2; it's only slow for odd lengths that are far from a power of 2. Here's an example on my desktop:
In [1]: import numpy as np
In [2]: import scipy
In [3]: np.__version__, scipy.__version__
Out[3]: ('1.10.4', '0.17.1')
In [4]: from scipy.signal import hilbert
In [5]: signal = np.random.randn(50000)
In [6]: %timeit hilbert(signal)
100 loops, best of 3: 1.95 ms per loop
In [7]: %timeit hilbert(signal[:-1])
1 loops, best of 3: 6.24 s per loop
That's about 3200 times slower on an array with one fewer element. I don't believe this behavior is an inherent feature of computing the Hilbert transform.
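For what it's worth, the slowdown seems to track the prime factorization of the FFT length: scipy.signal.hilbert here goes through FFTPACK, which, as far as I know, falls back to a much slower path when the length has a large prime factor, and 49999 happens to be prime while 50000 = 2^4 * 5^5. A quick way to check the factorizations:

```python
# Small helper (mine, just for illustration) to factorize the two lengths.
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(50000))  # [2, 2, 2, 2, 5, 5, 5, 5, 5] -- small factors, fast FFT
print(prime_factors(49999))  # [49999] -- prime, so FFTPACK hits its slow path
```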
About this issue
- State: closed
- Created 8 years ago
- Reactions: 1
- Comments: 15 (8 by maintainers)
If the results are the same, it would make sense to zero-pad automatically and cut the result back to the same size, unlike what the `N` parameter does. This is what e.g. `fftconvolve` does.

This solves it:
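A sketch of that idea: pass a power-of-two `N` to `hilbert` so the internal FFT runs at a fast length, then slice back to the original length. (Caveat: the padded result is not bit-identical to the unpadded one, since zero-padding perturbs the analytic signal, mostly near the edges.)

```python
import numpy as np
from scipy.signal import hilbert

x = np.random.randn(49999)

# Zero-pad the FFT to the next power of two via hilbert's N argument,
# then keep only the samples corresponding to the original signal.
nfft = 2 ** int(np.ceil(np.log2(len(x))))
analytic = hilbert(x, N=nfft)[:len(x)]
```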
Don't call it `hilbert2`, because it already exists for something else 😉

Hi, @endolith was right. I think padding with zeros and chunking the result is the best option, or at least the best I found. The FFT is only efficient when the signal length is a power of 2 (or has only small prime factors), so padding with zeros until the next power of 2 should solve this problem.
I just coded a simple function to do that; it returns only the chunk corresponding to the original signal:
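A minimal sketch of such a function (the name `hilbert_padded` is hypothetical):

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_padded(x):
    """Zero-pad to the next power of two so the FFT stays fast,
    then return only the chunk matching the original input."""
    n = len(x)
    nfft = 1 << (n - 1).bit_length()  # next power of two >= n
    return hilbert(x, N=nfft)[:n]
```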
Thanks for the discussion, guys. I spent something like three hours trying to understand why I did not get the same result as in Matlab (by result I mean speed). 😃
I agree with @pv - it’s a good idea to emulate the behavior of fftconvolve.
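For reference, `fftconvolve` doesn't pad all the way to a power of two; it rounds up to the nearest "fast" length whose factors are all 2, 3, or 5. `scipy.fftpack.next_fast_len` (added in scipy 0.18) exposes that helper, so an fftconvolve-style version could look roughly like this:

```python
from scipy.fftpack import next_fast_len  # scipy >= 0.18
from scipy.signal import hilbert

def hilbert_fast(x):
    # Hypothetical sketch: pad to the nearest fast FFT length
    # (composite of 2, 3, and 5), the same strategy fftconvolve
    # uses, then truncate back to the input length.
    n = len(x)
    return hilbert(x, N=next_fast_len(n))[:n]
```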