How to compute the Pearson correlation of selected columns of a pandas DataFrame

pd.DataFrame.corrwith() can be used instead of df.corr().

Pass in the column for which you want correlations with the rest of the columns.

For the specific example above, the code would be: df.corrwith(df['special_col'])

Or simply use df.corr()['special_col'] to compute the correlation of every column with every other column and then subset what you need.
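
For instance, a minimal self-contained sketch (the DataFrame below is made up for illustration; only 'special_col' matches the example above):

import pandas as pd

# Hypothetical data; 'special_col' is the column of interest
df = pd.DataFrame({
    'stem1': [1.0, 2.0, 3.0, 4.0],
    'stem2': [4.0, 3.0, 2.0, 1.0],
    'special_col': [1.5, 2.5, 3.0, 4.5],
})

# Pearson correlation of every column with 'special_col'
# (includes the trivial correlation of 'special_col' with itself, which is 1.0)
print(df.corrwith(df['special_col']))

# Equivalent: take the 'special_col' column of the full correlation matrix
print(df.corr()['special_col'])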


Note there is a mistake in your data: special_col is all 3s, so no correlation can be computed with it.
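
To illustrate that point with a hypothetical two-column frame: a constant column has zero variance, so pandas returns NaN for its Pearson correlation.

import pandas as pd

# 'special_col' is constant (all 3s), so its standard deviation is zero
df = pd.DataFrame({
    'stem1': [1.0, 2.0, 3.0],
    'special_col': [3, 3, 3],
})

# Pearson correlation with a constant column is undefined; pandas yields NaN
print(df['stem1'].corr(df['special_col']))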

If you remove the column selection at the end, you'll get a correlation matrix of all the other columns you are analysing. The final [:-1] removes the correlation of 'special_col' with itself.

In [15]: data[data.columns[1:]].corr()['special_col'][:-1]
Out[15]: 
stem1    0.500000
stem2   -0.500000
stem3   -0.999945
b1       0.500000
b2       0.500000
b3      -0.500000
Name: special_col, dtype: float64

If you are interested in speed, this is slightly faster on my machine:

In [33]: np.corrcoef(data[data.columns[1:]].T)[-1][:-1]
Out[33]: 
array([ 0.5       , -0.5       , -0.99994535,  0.5       ,  0.5       ,
       -0.5       ])

In [34]: %timeit np.corrcoef(data[data.columns[1:]].T)[-1][:-1]
1000 loops, best of 3: 437 µs per loop

In [35]: %timeit data[data.columns[1:]].corr()['special_col']
1000 loops, best of 3: 526 µs per loop

But obviously, it returns a NumPy array, not a pandas Series/DataFrame.
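
If you want the speed of NumPy but still prefer a labelled result, a small sketch (assuming the same data layout as above) is to wrap the array back into a Series yourself:

import numpy as np
import pandas as pd

# Columns used for the correlation; 'special_col' is assumed to be the last one
cols = data.columns[1:]

# Last row of the correlation matrix = correlations of 'special_col',
# dropping the final entry (its correlation with itself)
corr = np.corrcoef(data[cols].T)[-1][:-1]

# Re-attach the column labels so the result matches the pandas output
result = pd.Series(corr, index=cols[:-1], name='special_col')
print(result)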


Why not just do:

In [34]: df.corr().iloc[:-1,-1]
Out[34]:
stem1    0.500000
stem2   -0.500000
stem3   -0.999945
b1       0.500000
b2       0.500000
b3      -0.500000
Name: special_col, dtype: float64

or:

In [39]: df.corr().ix['special_col', :-1]
Out[39]:
stem1    0.500000
stem2   -0.500000
stem3   -0.999945
b1       0.500000
b2       0.500000
b3      -0.500000
Name: special_col, dtype: float64
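
Note that .ix has been removed from recent pandas versions; a label-based sketch of the same selection (assuming the same df) is:

# Select the 'special_col' row by label, then drop the self-correlation
# by label rather than by position
df.corr().loc['special_col'].drop('special_col')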

Timings

In [35]: %timeit df.corr().iloc[-1,:-1]
1000 loops, best of 3: 576 µs per loop

In [40]: %timeit df.corr().ix['special_col', :-1]
1000 loops, best of 3: 634 µs per loop

In [36]: %timeit df[df.columns[1:]].corr()['special_col']
1000 loops, best of 3: 968 µs per loop

In [37]: %timeit df[df.columns[1:-1]].apply(lambda x: x.corr(df['special_col']))
100 loops, best of 3: 2.12 ms per loop

You can apply a lambda over your column range that calls corr and passes in the 'special_col' Series:

In [126]:
df[df.columns[1:-1]].apply(lambda x: x.corr(df['special_col']))

Out[126]:
stem1    0.500000
stem2   -0.500000
stem3   -0.999945
b1       0.500000
b2       0.500000
b3      -0.500000
dtype: float64

Timings

Actually, the other method is quicker, so I expect it to scale better:

In [130]:
%timeit df[df.columns[1:-1]].apply(lambda x: x.corr(df['special_col']))
%timeit df[df.columns[1:]].corr()['special_col']

1000 loops, best of 3: 1.75 ms per loop
1000 loops, best of 3: 836 µs per loop

Tags: Python, Pandas