Improve min/max downsampling
I managed to get improved performance by using the output of arg(min|max) directly to index the data arrays. This comes at the cost of an extra call to np.sort, but the axis to be sorted has only two elements (the min/max indices) and the overall array is rather small (the number of bins):
import numpy as np

def min_max_downsample_v3(x, y, num_bins):
    pts_per_bin = x.size // num_bins
    # Truncate to a multiple of the bin size and view each bin as a row.
    x_view = x[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    y_view = y[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    # Per-bin indices of the minimum and maximum y values.
    i_min = np.argmin(y_view, axis=1)
    i_max = np.argmax(y_view, axis=1)
    # Fancy-index the min/max of each bin, keeping the pair in temporal order.
    r_index = np.repeat(np.arange(num_bins), 2)
    c_index = np.sort(np.stack((i_min, i_max), axis=1)).ravel()
    return x_view[r_index, c_index], y_view[r_index, c_index]
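For reference, a minimal usage sketch on synthetic data (the array sizes here are illustrative, not the benchmark arrays from the question):

x = np.arange(1_000_000, dtype=float)
y = np.random.randn(1_000_000)
x_ds, y_ds = min_max_downsample_v3(x, y, num_bins=2000)
# Each bin contributes its min and its max (in temporal order),
# so the result has 2 * num_bins points.
assert x_ds.size == y_ds.size == 2 * 2000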
I checked the timings for your example and obtained:

min_max_downsample_v1 : 110 ms ± 5 ms
min_max_downsample_v2 : 240 ms ± 8.01 ms
min_max_downsample_v3 : 164 ms ± 1.23 ms
I also checked returning directly after the calls to arg(min|max), and the result was likewise 164 ms, i.e. there is no real overhead after that point.
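To reproduce that check, one can time a stripped-down variant that stops right after the arg(min|max) calls (the function name here is mine):

def min_max_argonly(x, y, num_bins):
    # Same reshaping as v3, but return right after argmin/argmax to see
    # how much of the total runtime they account for.
    pts_per_bin = x.size // num_bins
    y_view = y[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    return np.argmin(y_view, axis=1), np.argmax(y_view, axis=1)

# %timeit min_max_argonly(x_big, y_big, 2000)   # compare to the v3 timing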
So this doesn't address speeding up the specific function in question, but it does show a few ways of plotting a line with a large number of points somewhat effectively. This assumes that the x points are ordered and uniformly (or close to uniformly) sampled.
Setup
from pylab import *
Here's a function I like that reduces the number of points by randomly choosing one in each interval. It isn't guaranteed to show every peak in the data, but it doesn't have as many problems as directly decimating the data, and is fast.
def calc_rand(y, factor):
    # Keep one randomly chosen sample from each block of `factor` points.
    split = y[:len(y)//factor*factor].reshape(-1, factor)
    idx = randint(0, split.shape[-1], split.shape[0])
    return split[arange(split.shape[0]), idx]
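For example, with a reduction factor of 100 it keeps one sample per block of 100 (a quick sanity check, with a size chosen just for illustration):

y = randn(1_000_000)
y_small = calc_rand(y, 100)
print(y_small.shape)   # (10000,)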
And here are the min and max to see the signal envelope:
def calc_env(y, factor):
    """
    Calculate envelope (interleaved min and max points) for y.

    y : 1D signal
    factor : amount to reduce y by (actually returns twice this for min and max)
    """
    split = y[:len(y)//factor*factor].reshape(-1, factor)
    upper = split.max(axis=-1)
    lower = split.min(axis=-1)
    return c_[upper, lower].flatten()
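A quick check of the envelope shape (sizes again just for illustration): reducing 100,000 samples by a factor of 1000 yields 100 blocks, and the interleaved max/min gives 200 points:

y = sin(2*pi*linspace(0, 1, 100_000))
env = calc_env(y, 1000)
print(env.shape)   # (200,) - a max/min pair for each of the 100 blocks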
The following function can take either of these, and uses them to reduce the data being drawn. The number of points actually taken is 5000 by default, which should far exceed a monitor's resolution. Data is cached after it's reduced. Memory may be an issue, especially with large amounts of data, but it shouldn't exceed the amount required by the original signal.
def plot_bigly(x, y, *, ax=None, M=5000, red=calc_env, **kwargs):
    """
    x : the x data
    y : the y data
    ax : axis to plot on
    M : the maximum number of line points to display at any given time
    kwargs : passed to the line
    """
    assert x.shape == y.shape, "x and y data must have same shape!"

    if ax is None:
        ax = gca()

    cached = {}

    # Set up the line to be drawn beforehand. Note this doesn't increment line
    # properties, so style needs to be passed in explicitly.
    line = plt.Line2D([], [], **kwargs)

    def update(xmin, xmax):
        """
        Update line data.

        Precomputes and caches the entire line at each level, so the initial
        display may be slow but panning and zooming should speed up after that.
        """
        # Find the nearest power of two as a factor to downsample by.
        imin = max(np.searchsorted(x, xmin) - 1, 0)
        imax = min(np.searchsorted(x, xmax) + 1, y.shape[0])
        L = imax - imin + 1
        factor = max(2**int(round(np.log(L/M) / np.log(2))), 1)

        # Only calculate the reduction if it hasn't been cached.
        if factor not in cached:
            cached[factor] = red(y, factor=factor)

        ## Make sure lengths match correctly here, by ensuring at least
        # "factor" points for each x point, then matching the y length.
        # This assumes x has uniform sample spacing - but could be modified.
        newx = x[imin:imin + ((imax - imin)//factor)*factor:factor]
        start = imin//factor
        newy = cached[factor][start:start + newx.shape[-1]]
        assert newx.shape == newy.shape, "decimation error {}/{}!".format(newx.shape, newy.shape)

        ## Update line data
        line.set_xdata(newx)
        line.set_ydata(newy)

    update(x[0], x[-1])
    ax.add_line(line)

    ## Manually update the limits of the axis, as adding the line doesn't do this.
    # If drawing multiple lines this can quickly slow things down, and some
    # sort of check should be included to prevent unnecessary changes in limits
    # when a line is first drawn.
    ax.set_xlim(min(ax.get_xlim()[0], x[0]), max(ax.get_xlim()[1], x[-1]))
    ax.set_ylim(min(ax.get_ylim()[0], np.min(y)), max(ax.get_ylim()[1], np.max(y)))

    def callback(*ignore):
        lims = ax.get_xlim()
        update(*lims)

    ax.callbacks.connect('xlim_changed', callback)

    return [line]
Here's some test code:

L = int(100e6)
x = linspace(0, 1, L)
y = 0.1*randn(L) + sin(2*pi*18*x)
plot_bigly(x, y, red=calc_env)
On my machine this displays very quickly. Zooming has a bit of lag, especially when it's by a large amount. Panning has no issues. Using random selection instead of the min and max is quite a bit faster, and only has issues on very high levels of zoom.
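To try the random-selection reducer instead, swap it in via the red argument:

plot_bigly(x, y, red=calc_rand)   # random pick per interval instead of the min/max envelope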
EDIT: Added parallel=True to numba ... even faster
I ended up making a hybrid of a single-pass argmin+max routine and the improved indexing from @a_guest's answer and the linked related simultaneous min/max question.
This version returns the correct x-values for each min/max y pair and, thanks to numba, is actually a little faster than the "fast but not quite correct" version.
import numpy as np
from numba import jit, prange

@jit(parallel=True)
def min_max_downsample_v4(x, y, num_bins):
    pts_per_bin = x.size // num_bins
    x_view = x[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    y_view = y[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    i_min = np.zeros(num_bins, dtype='int64')
    i_max = np.zeros(num_bins, dtype='int64')

    # Single pass over each bin (bins processed in parallel) to find the
    # indices of both the minimum and the maximum.
    for r in prange(num_bins):
        min_val = y_view[r, 0]
        max_val = y_view[r, 0]
        for c in range(pts_per_bin):
            if y_view[r, c] < min_val:
                min_val = y_view[r, c]
                i_min[r] = c
            elif y_view[r, c] > max_val:
                max_val = y_view[r, c]
                i_max[r] = c

    # Same fancy indexing as v3 to pull out the min/max pairs in order.
    r_index = np.repeat(np.arange(num_bins), 2)
    c_index = np.sort(np.stack((i_min, i_max), axis=1)).ravel()
    return x_view[r_index, c_index], y_view[r_index, c_index]
Comparing the speeds using timeit shows the numba code is roughly 2.6x faster and provides better results than v1. It is a little over 10x faster than doing numpy's argmin & argmax in series.
%timeit min_max_downsample_v1(x_big ,y_big ,2000)
96 ms ± 2.46 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit min_max_downsample_v2(x_big ,y_big ,2000)
507 ms ± 4.75 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit min_max_downsample_v3(x_big ,y_big ,2000)
365 ms ± 1.27 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit min_max_downsample_v4(x_big ,y_big ,2000)
36.2 ms ± 487 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
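The benchmark arrays x_big and y_big come from the original question and aren't reproduced here; to rerun the comparison, any large, uniformly sampled signal works. A sketch (the size is an assumption, so absolute timings will differ):

import numpy as np
N = 10_000_000                      # assumed size, not the question's exact value
x_big = np.linspace(0, 1, N)
y_big = 0.1*np.random.randn(N) + np.sin(2*np.pi*18*x_big)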