Combining Rolling Origin Forecast Resampling and Group V-Fold Cross-Validation in rsample
As discussed in the comments on the solution from @missuse, the way to achieve this is documented in this GitHub issue: https://github.com/tidymodels/rsample/issues/42
Essentially, the idea is to first nest the data by your "blocks"; rolling_origin()
will then roll over the nested rows, keeping each block intact.
library(dplyr)
library(lubridate)
library(rsample)
library(tidyr)
library(tibble)
# same data generation as before
my_dates = seq(as.Date("2018/1/1"), as.Date("2018/8/20"), "days")
some_data = tibble(dates = my_dates)
some_data$values = runif(length(my_dates))
some_data = some_data %>% mutate(month = as.factor(month(dates)))
# nest by month, then resample
rset <- some_data %>%
  group_by(month) %>%
  nest() %>%
  rolling_origin(initial = 1)
# doesn't show which month is which :(
rset
#> # Rolling origin forecast resampling
#> # A tibble: 7 x 2
#> splits id
#> <list> <chr>
#> 1 <S3: rsplit> Slice1
#> 2 <S3: rsplit> Slice2
#> 3 <S3: rsplit> Slice3
#> 4 <S3: rsplit> Slice4
#> 5 <S3: rsplit> Slice5
#> 6 <S3: rsplit> Slice6
#> 7 <S3: rsplit> Slice7
# only January (31 days)
analysis(rset$splits[[1]])$data
#> [[1]]
#> # A tibble: 31 x 2
#> dates values
#> <date> <dbl>
#> 1 2018-01-01 0.373
#> 2 2018-01-02 0.0389
#> 3 2018-01-03 0.260
#> 4 2018-01-04 0.803
#> 5 2018-01-05 0.595
#> 6 2018-01-06 0.875
#> 7 2018-01-07 0.273
#> 8 2018-01-08 0.180
#> 9 2018-01-09 0.662
#> 10 2018-01-10 0.849
#> # ... with 21 more rows
# only February (28 days)
assessment(rset$splits[[1]])$data
#> [[1]]
#> # A tibble: 28 x 2
#> dates values
#> <date> <dbl>
#> 1 2018-02-01 0.402
#> 2 2018-02-02 0.556
#> 3 2018-02-03 0.764
#> 4 2018-02-04 0.134
#> 5 2018-02-05 0.0333
#> 6 2018-02-06 0.907
#> 7 2018-02-07 0.814
#> 8 2018-02-08 0.0973
#> 9 2018-02-09 0.353
#> 10 2018-02-10 0.407
#> # ... with 18 more rows
Created on 2018-08-28 by the reprex package (v0.2.0).
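As a small workaround for the "doesn't show which month is which" comment above, the month each slice holds out can be read from the split itself. This is only a sketch building on the rset object from the reprex; the assessed_month column name is made up here.

library(purrr)

# Sketch (not part of the original answer): label each slice with the month it
# holds out for assessment. assessment() returns the held-out nested row, so
# the month can be read from that row. `assessed_month` is an arbitrary name.
rset_labeled <- rset %>%
  mutate(assessed_month = map_chr(splits,
                                  ~ as.character(assessment(.x)$month[1])))
rset_labeled

The splits themselves are untouched; this just adds a readable label next to the Slice ids.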
If I understand correctly, you would like to create resamples where, for every month, you train on all data up to that month and evaluate on that month.
I am not an rsample user, but this can be achieved quite easily with base R. Here is one approach.
Split the data into a list by month:
df <- split(some_data, some_data$month)
lapply along the list elements, defining train and test sets for each slice:
df <- lapply(seq_along(df)[-length(df)], function(x){
  train <- do.call(rbind, df[1:x])  # all months up to and including month x
  test  <- df[[x + 1]]              # the following month ([[ ]] to get a data frame)
  return(list(train = train,
              test = test))
})
The result, df, is a list of 7 elements, each containing a train and a test data frame.
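To make that concrete, here is a minimal sketch of how the list might be used; the lm() model is purely illustrative and not part of the original answer.

# Sketch: fit on each training block and score the held-out month with
# out-of-sample RMSE. The lm() formula is only a placeholder model.
rmse_by_slice <- sapply(df, function(s) {
  fit  <- lm(values ~ as.numeric(dates), data = s$train)
  pred <- predict(fit, newdata = s$test)
  sqrt(mean((s$test$values - pred)^2))
})
rmse_by_slice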
This can also be accomplished with tidyroll, which is a small R package with a collection of convenience functions for working with time-series data with irregular time slices.
rolling_origin_nested is a wrapper around rolling_origin and has a number of nice features, including allowing the user to select the unit (minute, day, week, month, etc.) over which to roll, a start and end date/time, and whether or not to temporarily extend the data so that all observations between start and end are predicted assess number of times.
# devtools::install_github("gacolitti/tidyroll")
library(tidyverse)
library(lubridate)
library(rsample)
library(tidyroll)
my_dates = seq(as.Date("2018/1/1"), as.Date("2018/8/20"), "days")
some_data = data.frame(dates = my_dates)
some_data$values = runif(length(my_dates))
roll <- rolling_origin_nested(some_data,
                              time_var = "dates",
                              unit = "month",
                              start = "2018-01-01")
roll
#> # Rolling origin forecast resampling
#> # A tibble: 7 x 2
#> splits id
#> <list> <chr>
#> 1 <split [1/1]> Slice1
#> 2 <split [2/1]> Slice2
#> 3 <split [3/1]> Slice3
#> 4 <split [4/1]> Slice4
#> 5 <split [5/1]> Slice5
#> 6 <split [6/1]> Slice6
#> 7 <split [7/1]> Slice7
analysis(roll$splits[[1]])$data[[1]] %>% tail
#> # A tibble: 6 x 2
#> dates values
#> <dttm> <dbl>
#> 1 2018-01-26 00:00:00 0.0929
#> 2 2018-01-27 00:00:00 0.536
#> 3 2018-01-28 00:00:00 0.194
#> 4 2018-01-29 00:00:00 0.600
#> 5 2018-01-30 00:00:00 0.449
#> 6 2018-01-31 00:00:00 0.754
assessment(roll$splits[[1]])$data[[1]] %>% head
#> # A tibble: 6 x 2
#> dates values
#> <dttm> <dbl>
#> 1 2018-02-01 00:00:00 0.945
#> 2 2018-02-02 00:00:00 0.733
#> 3 2018-02-03 00:00:00 0.626
#> 4 2018-02-04 00:00:00 0.585
#> 5 2018-02-05 00:00:00 0.303
#> 6 2018-02-06 00:00:00 0.767
There are a couple of other convenience functions, such as fit_rsample_nested and predict_rsample_nested, that facilitate working with objects created with rolling_origin_nested and data preprocessing with recipes.
One really cool feature of predict_rsample_nested is the ability to pass additional recipe steps to impute predictor values that might not be available depending on the prediction date.
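As a rough, recipes-only illustration of the kind of imputation step that last point refers to: the toy data and the choice of step_impute_mean are assumptions, and how such a step is actually handed to predict_rsample_nested is not shown here.

library(dplyr)
library(recipes)

# Toy data with a predictor that might be missing at prediction time.
toy <- data.frame(
  y = rnorm(10),
  x = c(rnorm(9), NA)
)

# A recipe step that fills the missing predictor with the training mean.
rec <- recipe(y ~ x, data = toy) %>%
  step_impute_mean(x)

bake(prep(rec), new_data = toy)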