plyr or dplyr in Python

The most similar way to use dplyr in Python is with the dfply package. Here is an example.
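
dfply is distributed on PyPI, so (assuming a standard pip setup) it can be installed with:

pip install dfply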

R dplyr

library(nycflights13)
library(dplyr)

flights %>%
  filter(hour > 10) %>% # step 1
  mutate(speed = distance / (air_time * 60)) %>% # step 2
  group_by(origin) %>% # step 3a
  summarize(mean_speed = sprintf("%0.6f",mean(speed, na.rm = T))) %>% # step 3b
  arrange(desc(mean_speed)) # step 4

# A tibble: 3 x 2
  origin mean_speed
  <chr>  <chr>     
1 EWR    0.109777  
2 JFK    0.109427  
3 LGA    0.107362 

Python dfply

from dfply import *
import pandas as pd

flight_data = pd.read_csv('nycflights13.csv')

(flight_data >>
  mask(X.hour > 10) >> # step 1
  mutate(speed = X.distance / (X.air_time * 60)) >> # step 2
  group_by(X.origin) >> # step 3a
  summarize(mean_speed = X.speed.mean()) >> # step 3b
  arrange(X.mean_speed, ascending=False) # step 4
)


Out[1]: 
  origin  mean_speed
0    EWR    0.109777
1    JFK    0.109427
2    LGA    0.107362

Python Pandas

# steps 1 & 2: compute speed only where hour > 10 (other rows stay NaN)
flight_data.loc[flight_data['hour'] > 10, 'speed'] = flight_data['distance'] / (flight_data['air_time'] * 60)
# step 3: group by origin and take the mean speed (NaN rows are ignored)
result = flight_data.groupby('origin', as_index=False)['speed'].mean()
# step 4: sort by descending mean speed
result.sort_values('speed', ascending=False)

Out[2]: 
  origin     speed
0    EWR  0.109777
1    JFK  0.109427
2    LGA  0.107362
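
The same result can also be written as a single pandas method chain that mirrors the dfply pipeline above. This is a sketch, assuming pandas >= 0.25 for the named-aggregation syntax in .agg():

import pandas as pd

flight_data = pd.read_csv('nycflights13.csv')

result = (
    flight_data
    .query('hour > 10')                                            # filter / mask
    .assign(speed=lambda d: d['distance'] / (d['air_time'] * 60))  # mutate
    .groupby('origin', as_index=False)                             # group_by
    .agg(mean_speed=('speed', 'mean'))                             # summarize
    .sort_values('mean_speed', ascending=False)                    # arrange
)
print(result)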

Note: For more information you can check the following link.


I think you're looking for the agg function, which is applied to groupby objects.

From the docs:

In [48]: grouped = df.groupby('A')

In [49]: grouped['C'].agg([np.sum, np.mean, np.std])
Out[49]: 
          sum      mean       std
A                                
bar  0.443469  0.147823  0.301765
foo  2.529056  0.505811  0.96
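
The docs snippet above assumes an existing df; here is a self-contained sketch of the same pattern on a small made-up frame (column names A and C chosen to mirror the docs example):

import pandas as pd

df = pd.DataFrame({
    'A': ['foo', 'bar', 'foo', 'bar', 'foo'],
    'C': [0.1, 0.2, 0.3, 0.4, 0.5],
})

grouped = df.groupby('A')
print(grouped['C'].agg(['sum', 'mean', 'std']))  # one row per group, one column per aggregate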

I'm also a big fan of dplyr for R and am working to improve my knowledge of Pandas. Since you don't have a specific problem, I'd suggest checking out the post below that breaks down the entire introductory dplyr vignette and shows how all of it can be done with Pandas.

For example, the author demonstrates chaining with the pipe operator in R:

 flights %>%
   group_by(year, month, day) %>%
   select(arr_delay, dep_delay) %>%
   summarise(
      arr = mean(arr_delay, na.rm = TRUE),
      dep = mean(dep_delay, na.rm = TRUE)
       ) %>%
   filter(arr > 30 | dep > 30)

And here is the Pandas implementation:

(flights.groupby(['year', 'month', 'day'])
   [['arr_delay', 'dep_delay']]
   .mean()
   .query('arr_delay > 30 | dep_delay > 30'))

There are many more comparisons of how to implement dplyr-like operations with Pandas in the original post: http://nbviewer.ipython.org/gist/TomAugspurger/6e052140eaa5fdb6e8c0


One could simply use dplyr from Python.

There is an interface to dplyr in rpy2 (introduced with rpy2-2.7.0) that lets you write things like:

from rpy2.robjects.packages import importr, data
from rpy2.robjects.lib.dplyr import DataFrame

mtcars = data(importr('datasets')).fetch('mtcars')['mtcars']  # R's built-in mtcars dataset

dataf = (DataFrame(mtcars).
         filter('gear>3').
         mutate(powertoweight='hp*36/wt').
         group_by('gear').
         summarize(mean_ptw='mean(powertoweight)'))

There is an example in the documentation; that part of the documentation is also available as a Jupyter notebook. Look for the links near the top of the page.

Another answer to this question compares R's dplyr and pandas (see @lgallen's). The same chain of dplyr statements is written essentially the same way in rpy2's interface to dplyr.

R:

flights %>%
   group_by(year, month, day) %>%
   select(arr_delay, dep_delay) %>%
   summarise(
      arr = mean(arr_delay, na.rm = TRUE),
      dep = mean(dep_delay, na.rm = TRUE)
      ) %>%
   filter(arr > 30 | dep > 30)

Python+rpy2:

(DataFrame(flights).
 group_by('year', 'month', 'day').
 select('arr_delay', 'dep_delay').
 summarize(arr = 'mean(arr_delay, na.rm=TRUE)',
           dep = 'mean(dep_delay, na.rm=TRUE)').
 filter('arr > 30 | dep > 30'))
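
For completeness, a minimal sketch of how flights could be fetched for that pipeline, assuming the nycflights13 package is installed in R:

from rpy2.robjects.packages import importr, data
from rpy2.robjects.lib.dplyr import DataFrame

# fetch the flights data.frame from the R package nycflights13
nycflights13 = importr('nycflights13')
flights = data(nycflights13).fetch('flights')['flights']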