Multiple sets of duplicate records from a pandas dataframe
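None of the answers below show the frame they run against, so here is a minimal, hypothetical setup consistent with the outputs that follow (rows 2 and 5 share flight_id 4, rows 3 and 6 share flight_id 9); the other values are assumptions, only the duplicate pattern matters.
import pandas as pd

# hypothetical sample data: rows 2 and 5 share flight_id 4, rows 3 and 6 share flight_id 9;
# the remaining values are made up and only need to be unique
df = pd.DataFrame({'flight_id': [7, 1, 4, 9, 3, 4, 9]})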
Performing a groupby on df.index can take you places.
# collect the index labels for each flight_id into a list
v = df.index.to_series().groupby(df.flight_id).apply(pd.Series.tolist)
# keep only the flight_ids that own more than one label
v[v.str.len().gt(1)]
flight_id
4 [2, 5]
9 [3, 6]
dtype: object
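If a plain list of index tuples is what you are after (the form the later answers produce), the filtered series converts directly; this line is an add-on, not part of the answer above:
[tuple(x) for x in v[v.str.len().gt(1)]]   # [(2, 5), (3, 6)]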
You can also get cute with just groupby on df.index directly.
# Index.groupby returns a dict of {flight_id: index labels}
v = pd.Series(df.index.groupby(df.flight_id))
v[v.str.len().gt(1)].to_dict()
{4: [2, 5], 9: [3, 6]}
Is this what you need? duplicated + groupby
# keep only rows whose flight_id is duplicated, then gather their original index labels per id
(df.loc[df['flight_id'].duplicated(keep=False)].reset_index()).groupby('flight_id')['index'].apply(tuple)
Out[510]:
flight_id
4 (2, 5)
9 (3, 6)
Name: index, dtype: object
Adding tolist at the end
(df.loc[df['flight_id'].duplicated(keep=False)].reset_index()).groupby('flight_id')['index'].apply(tuple).tolist()
Out[511]: [(2, 5), (3, 6)]
And another solution, for fun only (it re-filters the frame once per duplicated flight_id, so it does not scale well)
s = df['flight_id'].value_counts()
# for each flight_id that occurs more than once, scan the frame again and collect its index labels
list(map(lambda x: tuple(df[df['flight_id'] == x].index.tolist()), s[s.gt(1)].index))
Out[519]: [(2, 5), (3, 6)]
Using apply and a lambda
# return each group's index labels as a tuple, or None for singletons, then drop the Nones
df.groupby('flight_id').apply(
    lambda d: tuple(d.index) if len(d.index) > 1 else None
).dropna()
flight_id
4 (2, 5)
9 (3, 6)
dtype: object
Or better, with an iteration through the groupby object
{k: tuple(d.index) for k, d in df.groupby('flight_id') if len(d) > 1}
{4: (2, 5), 9: (3, 6)}
Just the tuples
[tuple(d.index) for k, d in df.groupby('flight_id') if len(d) > 1]
[(2, 5), (3, 6)]
Leaving this for posterity, but I now highly dislike this approach; it's just too gross. I was messing around with itertools.groupby, which only groups consecutive equal keys, so the index has to be sorted by flight_id first. Others may find this fun.
from itertools import groupby

# sort the index labels by their flight_id so that equal ids sit next to each other
key = df.flight_id.get
s = sorted(df.index, key=key)

# group the sorted labels by flight_id and keep only the groups with more than one label
dict(filter(
    lambda t: len(t[1]) > 1,
    ((k, tuple(g)) for k, g in groupby(s, key))
))
{4: (2, 5), 9: (3, 6)}