Interpolate (or extrapolate) only small gaps in pandas dataframe
According to the interpolate documentation, limit_area as used below is new in version 0.23.0. I'm not sure whether this is the desired output for columns e and g, since you haven't specified the desired output in detail.
import numpy as np
import pandas as pd
from datetime import datetime, timedelta

start = datetime(2014, 2, 21, 14, 50)
df = data = pd.DataFrame(
    index=[start + timedelta(minutes=x) for x in range(8)],
    data={'a': [123.5, np.nan, 136.3, 164.3, 213.0, 164.3, 213.0, 221.1],
          'b': [433.5, 523.2, 536.3, 464.3, 413.0, 164.3, 213.0, 221.1],
          'c': [123.5, 132.3, 136.3, 164.3] + [np.nan] * 4,
          'd': [np.nan] * 8,
          'e': [np.nan] * 7 + [2330.3],
          'f': [np.nan] * 4 + [2763.0, 2142.3, 2127.3, 2330.3],
          'g': [2330.3] + [np.nan] * 7,
          'h': [2330.3] + [np.nan] * 6 + [2777.7]})

df.interpolate(
    limit=5,
    inplace=True,
    limit_direction='both',
    limit_area='outside',
)
print(df)
Output:
a b c d e f g h
2014-02-21 14:50:00 123.5 433.5 123.5 NaN NaN 2763.0 2330.3 2330.3
2014-02-21 14:51:00 NaN 523.2 132.3 NaN NaN 2763.0 2330.3 NaN
2014-02-21 14:52:00 136.3 536.3 136.3 NaN 2330.3 2763.0 2330.3 NaN
2014-02-21 14:53:00 164.3 464.3 164.3 NaN 2330.3 2763.0 2330.3 NaN
2014-02-21 14:54:00 213.0 413.0 164.3 NaN 2330.3 2763.0 2330.3 NaN
2014-02-21 14:55:00 164.3 164.3 164.3 NaN 2330.3 2142.3 2330.3 NaN
2014-02-21 14:56:00 213.0 213.0 164.3 NaN 2330.3 2127.3 NaN NaN
2014-02-21 14:57:00 221.1 221.1 164.3 NaN 2330.3 2330.3 NaN 2777.7
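For reference, the difference between the two limit_area modes is easiest to see on a small hand-made Series (my own toy example, not from the question):

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 1.0, np.nan, 3.0, np.nan])

# 'inside' fills only NaNs that lie between valid values;
# 'outside' only extends the edge values outward (extrapolation).
inside = s.interpolate(limit_direction='both', limit_area='inside')
outside = s.interpolate(limit_direction='both', limit_area='outside')

print(inside.tolist())   # middle NaN interpolated, edges untouched
print(outside.tolist())  # edges extended, middle NaN untouched
```

With 'outside', the edge NaNs are simply extended with the nearest valid value, which is why columns e and g above get filled with a constant.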
I went ahead and adapted @JohnE's solution into a function (with some tweaks/improvements). I'm using Python 3.8, and I believe type hinting changed for 3.9, so you may have to adapt.
from typing import Union


def fill_with_hard_limit(
        df_or_series: Union[pd.DataFrame, pd.Series], limit: int,
        fill_method='interpolate',
        **fill_method_kwargs) -> Union[pd.DataFrame, pd.Series]:
    """The fill methods from Pandas such as ``interpolate`` or ``bfill``
    will fill ``limit`` number of NaNs, even if the total number of
    consecutive NaNs is larger than ``limit``. This function instead
    does not fill any data when the number of consecutive NaNs
    is > ``limit``.

    Adapted from: https://stackoverflow.com/a/30538371/11052174

    :param df_or_series: DataFrame or Series to perform interpolation
        on.
    :param limit: Maximum number of consecutive NaNs to allow. Any
        occurrences of more consecutive NaNs than ``limit`` will have no
        filling performed.
    :param fill_method: Filling method to use, e.g. 'interpolate',
        'bfill', etc.
    :param fill_method_kwargs: Keyword arguments to pass to the
        fill_method, in addition to the given limit.
    :returns: A filled version of the given df_or_series according
        to the given inputs.
    """
    # Keep things simple, ensure we have a DataFrame.
    try:
        df = df_or_series.to_frame()
    except AttributeError:
        df = df_or_series

    # Initialize our mask.
    mask = pd.DataFrame(True, index=df.index, columns=df.columns)

    # Get cumulative sums of consecutive NaNs.
    grp = (df.notnull() != df.shift().notnull()).cumsum()

    # Add a column of ones for counting.
    grp['ones'] = 1

    # Loop through columns and update the mask.
    for col in df.columns:
        mask.loc[:, col] = (
            (grp.groupby(col)['ones'].transform('count') <= limit)
            | df[col].notnull()
        )

    # Now, interpolate and use the mask to create NaNs for the larger
    # gaps.
    method = getattr(df, fill_method)
    out = method(limit=limit, **fill_method_kwargs)[mask]

    # Be nice to the caller and return a Series if that's what they
    # provided.
    if isinstance(df_or_series, pd.Series):
        return out.loc[:, out.columns[0]]

    return out
Usage:
>>> data_filled = fill_with_hard_limit(data, 5)
>>> data_filled
a b c d e f g h
2014-02-21 14:50:00 123.5 433.5 123.5 NaN NaN NaN 2330.3 2330.3
2014-02-21 14:51:00 129.9 523.2 132.3 NaN NaN NaN NaN NaN
2014-02-21 14:52:00 136.3 536.3 136.3 NaN NaN NaN NaN NaN
2014-02-21 14:53:00 164.3 464.3 164.3 NaN NaN NaN NaN NaN
2014-02-21 14:54:00 213.0 413.0 164.3 NaN NaN 2763.0 NaN NaN
2014-02-21 14:55:00 164.3 164.3 164.3 NaN NaN 2142.3 NaN NaN
2014-02-21 14:56:00 213.0 213.0 164.3 NaN NaN 2127.3 NaN NaN
2014-02-21 14:57:00 221.1 221.1 164.3 NaN 2330.3 2330.3 NaN 2777.7
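The core of that function can also be seen on a single Series: label each NaN run, measure its length, and keep interpolated values only where the run is short enough. A minimal standalone sketch of the idea (toy data of my own; note a Series must be masked with .where, since s[mask] on a Series drops rows instead of NaN-ing them):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0, np.nan, np.nan, np.nan, 8.0])

# Label each run of consecutive NaN/non-NaN values, then measure run lengths.
run_id = s.notna().ne(s.notna().shift()).cumsum()
run_len = s.groupby(run_id).transform('size')

limit = 2
keep = s.notna() | (run_len <= limit)

# .where keeps the index and turns masked-out positions into NaN.
filled = s.interpolate().where(keep)
```

The 2-long gap gets interpolated, while the 3-long gap stays NaN.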
I had to solve a similar problem and came up with a numpy-based solution before I found the answer above. Since my code is approximately ten times faster, I provide it here in case it is useful to somebody in the future. It handles NaNs at the end of the series differently than JohnE's solution above: if a series ends with NaNs, it flags this last gap as invalid.
Here is the code:
def bfill_nan(arr):
    """Backward-fill NaNs."""
    mask = np.isnan(arr)
    idx = np.where(~mask, np.arange(mask.shape[0]), mask.shape[0] - 1)
    idx = np.minimum.accumulate(idx[::-1], axis=0)[::-1]
    out = arr[idx]
    return out


def calc_mask(arr, maxgap):
    """Mask NaN gaps longer than `maxgap`."""
    isnan = np.isnan(arr)
    cumsum = np.cumsum(isnan).astype('float')
    diff = np.zeros_like(arr)
    diff[~isnan] = np.diff(cumsum[~isnan], prepend=0)
    diff[isnan] = np.nan
    diff = bfill_nan(diff)
    return (diff < maxgap) | ~isnan
mask = data.copy()
for column_name in data:
    x = data[column_name].values
    mask[column_name] = calc_mask(x, 5)

print('data:')
print(data)
print('\nmask:')
print(mask)
Output:
data:
a b c d e f g h
2014-02-21 14:50:00 123.5 433.5 123.5 NaN NaN NaN 2330.3 2330.3
2014-02-21 14:51:00 NaN 523.2 132.3 NaN NaN NaN NaN NaN
2014-02-21 14:52:00 136.3 536.3 136.3 NaN NaN NaN NaN NaN
2014-02-21 14:53:00 164.3 464.3 164.3 NaN NaN NaN NaN NaN
2014-02-21 14:54:00 213.0 413.0 NaN NaN NaN 2763.0 NaN NaN
2014-02-21 14:55:00 164.3 164.3 NaN NaN NaN 2142.3 NaN NaN
2014-02-21 14:56:00 213.0 213.0 NaN NaN NaN 2127.3 NaN NaN
2014-02-21 14:57:00 221.1 221.1 NaN NaN 2330.3 2330.3 NaN 2777.7
mask:
a b c d e f g h
2014-02-21 14:50:00 True True True False False True True True
2014-02-21 14:51:00 True True True False False True False False
2014-02-21 14:52:00 True True True False False True False False
2014-02-21 14:53:00 True True True False False True False False
2014-02-21 14:54:00 True True False False False True False False
2014-02-21 14:55:00 True True False False False True False False
2014-02-21 14:56:00 True True False False False True False False
2014-02-21 14:57:00 True True False False True True False True
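To see why these two helpers work, here is a step-by-step trace on a short array (my own toy example, with the bfill_nan index trick inlined so the snippet runs on its own):

```python
import numpy as np

arr = np.array([1.0, np.nan, np.nan, 4.0, np.nan, 6.0])
isnan = np.isnan(arr)

# cumsum counts NaNs seen so far; its jump at each valid point equals
# the length of the NaN gap immediately before that point.
cumsum = np.cumsum(isnan).astype(float)            # [0, 1, 2, 2, 3, 3]
diff = np.zeros_like(arr)
diff[~isnan] = np.diff(cumsum[~isnan], prepend=0)  # gap length before each valid point
diff[isnan] = np.nan

# bfill_nan's trick: valid points keep their own index, NaNs get the
# sentinel len-1; a reversed minimum.accumulate propagates the next
# valid index backward, so indexing back-fills each NaN.
idx = np.where(~isnan, np.arange(arr.size), arr.size - 1)
idx = np.minimum.accumulate(idx[::-1])[::-1]
diff = diff[idx]                                   # [0, 2, 2, 2, 1, 1]

# Every NaN now carries the length of its own gap; with maxgap = 2 the
# strict `<` masks out the 2-long gap and keeps the 1-long one.
mask = (diff < 2) | ~isnan                         # [T, F, F, T, T, T]
```

The sentinel also explains the end-of-series behavior: a trailing NaN gap back-fills from itself, stays NaN, and so is flagged invalid.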
So here is a mask that ought to solve the problem. Just interpolate
and then apply the mask to reset appropriate values to NaN. Honestly, this was a bit more work than I realized it would be because I had to loop through each column but then groupby didn't quite work without me providing some dummy columns like 'ones'.
Anyway, I can explain if anything is unclear but really only a couple of the lines are somewhat hard to understand. See here for a little bit more of an explanation of the trick on the df['new']
line or just print out individual lines to better see what is going on.
mask = data.copy()
for i in list('abcdefgh'):
    df = pd.DataFrame(data[i])
    df['new'] = (df.notnull() != df.shift().notnull()).cumsum()
    df['ones'] = 1
    mask[i] = (df.groupby('new')['ones'].transform('count') < 5) | data[i].notnull()
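The df['new'] line is the part worth unpacking: comparing the null-mask with its shifted self marks each point where a run of NaNs starts or ends, and the cumulative sum turns those change points into a distinct label per run (my own toy example):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0, np.nan, 6.0])

# True wherever the null-ness changes, i.e. a new run begins.
change = s.notnull() != s.shift().notnull()
labels = change.cumsum()
# labels -> [1, 2, 2, 3, 4, 5]: the two-NaN run shares label 2,
# so grouping by the labels gives each gap's length.
```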
In [7]: data
Out[7]:
a b c d e f g h
2014-02-21 14:50:00 123.5 433.5 123.5 NaN NaN NaN 2330.3 2330.3
2014-02-21 14:51:00 NaN 523.2 132.3 NaN NaN NaN NaN NaN
2014-02-21 14:52:00 136.3 536.3 136.3 NaN NaN NaN NaN NaN
2014-02-21 14:53:00 164.3 464.3 164.3 NaN NaN NaN NaN NaN
2014-02-21 14:54:00 213.0 413.0 NaN NaN NaN 2763.0 NaN NaN
2014-02-21 14:55:00 164.3 164.3 NaN NaN NaN 2142.3 NaN NaN
2014-02-21 14:56:00 213.0 213.0 NaN NaN NaN 2127.3 NaN NaN
2014-02-21 14:57:00 221.1 221.1 NaN NaN 2330.3 2330.3 NaN 2777.7
In [8]: mask
Out[8]:
a b c d e f g h
2014-02-21 14:50:00 True True True False False True True True
2014-02-21 14:51:00 True True True False False True False False
2014-02-21 14:52:00 True True True False False True False False
2014-02-21 14:53:00 True True True False False True False False
2014-02-21 14:54:00 True True True False False True False False
2014-02-21 14:55:00 True True True False False True False False
2014-02-21 14:56:00 True True True False False True False False
2014-02-21 14:57:00 True True True False True True False True
It's easy from there if you don't do anything fancier with respect to extrapolation:
In [9]: data.interpolate().bfill()[mask]
Out[9]:
a b c d e f g h
2014-02-21 14:50:00 123.5 433.5 123.5 NaN NaN 2763.0 2330.3 2330.3
2014-02-21 14:51:00 129.9 523.2 132.3 NaN NaN 2763.0 NaN NaN
2014-02-21 14:52:00 136.3 536.3 136.3 NaN NaN 2763.0 NaN NaN
2014-02-21 14:53:00 164.3 464.3 164.3 NaN NaN 2763.0 NaN NaN
2014-02-21 14:54:00 213.0 413.0 164.3 NaN NaN 2763.0 NaN NaN
2014-02-21 14:55:00 164.3 164.3 164.3 NaN NaN 2142.3 NaN NaN
2014-02-21 14:56:00 213.0 213.0 164.3 NaN NaN 2127.3 NaN NaN
2014-02-21 14:57:00 221.1 221.1 164.3 NaN 2330.3 2330.3 NaN 2777.7
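One detail worth noting about the data.interpolate().bfill()[mask] step: indexing a DataFrame with a boolean DataFrame keeps the shape and turns False positions into NaN, rather than dropping rows (a quick standalone check on toy data of my own):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1.0, 2.0, 3.0]})
m = pd.DataFrame({'x': [True, False, True]})

out = df[m]  # same shape; False positions become NaN
```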
Edit to add: Here's a faster (about 2x on this sample data) and slightly simpler way, by moving some stuff outside of the loop:
mask = data.copy()
grp = (mask.notnull() != mask.shift().notnull()).cumsum()
grp['ones'] = 1
for i in list('abcdefgh'):
    mask[i] = (grp.groupby(i)['ones'].transform('count') < 5) | data[i].notnull()
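If you'd rather avoid the dummy 'ones' column entirely, the same mask can be built per column with transform('size'), which counts all rows in each run including the NaNs. This is a sketch of my own variant, which I believe is equivalent to the loop above:

```python
import numpy as np
import pandas as pd

def gap_mask(df, limit):
    """True where a value is present or its NaN run is shorter than `limit`."""
    # Label runs of consecutive NaN/non-NaN values per column.
    grp = (df.notnull() != df.shift().notnull()).cumsum()
    # 'size' counts every row in a run, NaNs included, so no dummy
    # column is needed.
    run_len = pd.DataFrame(
        {col: df[col].groupby(grp[col]).transform('size') for col in df.columns})
    return (run_len < limit) | df.notnull()
```

The strict < mirrors the transform('count') < 5 comparison in the original.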