Counting the number of non-NaN elements in a numpy ndarray in Python
Quick-to-write alternative
Even though it is not the fastest choice, if performance is not an issue you can use:
sum(~np.isnan(data))
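For instance, on a small one-dimensional array (the values below are made up for illustration, not the asker's data), inverting the NaN mask and summing the booleans counts the valid entries directly:

import numpy as np

# Illustrative 1-D array with two NaNs among five values.
data = np.array([1.0, np.nan, 3.0, np.nan, 5.0])

# np.isnan(data) flags the NaNs; ~ flips the flags, and summing the
# resulting booleans counts the non-NaN elements.
print(sum(~np.isnan(data)))  # 3

# Note: on a 2-D array the built-in sum adds the rows element-wise and
# returns per-column counts; use np.sum(~np.isnan(data)) for a grand total.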
Performance:
In [7]: %timeit data.size - np.count_nonzero(np.isnan(data))
10 loops, best of 3: 67.5 ms per loop
In [8]: %timeit sum(~np.isnan(data))
10 loops, best of 3: 154 ms per loop
In [9]: %timeit np.sum(~np.isnan(data))
10 loops, best of 3: 140 ms per loop
np.count_nonzero(~np.isnan(data))
~ inverts the boolean matrix returned from np.isnan.
np.count_nonzero counts values that are not 0/False. .sum should give the same result, but count_nonzero may express the intent more clearly.
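A minimal sketch of the two steps on a small array (the array contents here are made up for illustration):

import numpy as np

data = np.array([[1.0, np.nan, 3.0],
                 [np.nan, 5.0, 6.0]])

mask = np.isnan(data)   # True where the element is NaN
valid = ~mask           # inverted: True where the element is a real number

print(np.count_nonzero(valid))  # 4 -> number of non-NaN elements
print(valid.sum())              # 4 -> same count via .sum()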
Testing speed:
In [23]: data = np.random.random((10000,10000))
In [24]: data[[np.random.random_integers(0,10000, 100)],:][:, [np.random.random_integers(0,99, 100)]] = np.nan
In [25]: %timeit data.size - np.count_nonzero(np.isnan(data))
1 loops, best of 3: 309 ms per loop
In [26]: %timeit np.count_nonzero(~np.isnan(data))
1 loops, best of 3: 345 ms per loop
In [27]: %timeit data.size - np.isnan(data).sum()
1 loops, best of 3: 339 ms per loop
data.size - np.count_nonzero(np.isnan(data))
seems to be marginally the fastest here. Other data might give different relative speeds.
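If this ends up in reusable code, a minimal sketch is below. The helper name count_non_nan is my own, and the setup swaps the now-deprecated np.random.random_integers call for the newer Generator API; it also uses np.ix_ so the NaNs are written into data itself (the chained fancy indexing in In [24] assigns into a copy produced by the first fancy index, so the NaNs never reach data there).

import numpy as np

def count_non_nan(arr):
    # Fastest variant from the timings above: total size minus NaN count.
    return arr.size - np.count_nonzero(np.isnan(arr))

# Smaller array than the 10000x10000 benchmark, just to keep the sketch light.
rng = np.random.default_rng(0)
data = rng.random((1000, 1000))

# Scatter NaNs into data in place; np.ix_ builds an open mesh from the
# row and column indices so the assignment hits the original array.
rows = rng.integers(0, 1000, 50)
cols = rng.integers(0, 1000, 50)
data[np.ix_(rows, cols)] = np.nan

print(count_non_nan(data))  # number of non-NaN elements out of 1,000,000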