How to use warm_start
The basic pattern (taken from Miriam's answer):
clf = RandomForestClassifier(warm_start=True)
clf.fit(*get_data())       # assuming get_data() returns an (X, y) pair
clf.fit(*get_more_data())  # likewise
would be the correct usage API-wise.
But there is an issue here.
As the docs say:
When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new forest.
this means that the only thing warm_start
can do for you is add new DecisionTrees. All the previous trees are left untouched!
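A quick empirical check of that claim (a sketch on synthetic data, not from the original answer): after a warm-start refit, the first trees in `estimators_` are the very same objects as before.

```python
# Verify that a warm-start fit only appends trees and leaves the
# existing ones untouched.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)

clf = RandomForestClassifier(n_estimators=5, warm_start=True, random_state=0)
clf.fit(X, y)
old_trees = list(clf.estimators_)  # keep references to the first 5 trees

clf.n_estimators = 10              # request 5 more trees
clf.fit(X, y)

# The first 5 estimators are identical objects to the ones fit earlier.
assert all(a is b for a, b in zip(old_trees, clf.estimators_[:5]))
print(len(clf.estimators_))
```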
Let's check this against the source:
n_more_estimators = self.n_estimators - len(self.estimators_)

if n_more_estimators < 0:
    raise ValueError('n_estimators=%d must be larger or equal to '
                     'len(estimators_)=%d when warm_start==True'
                     % (self.n_estimators, len(self.estimators_)))
elif n_more_estimators == 0:
    warn("Warm-start fitting without increasing n_estimators does not "
         "fit new trees.")
This basically tells us that you would need to increase the number of estimators before each new call to fit!
I have no idea what kind of usage sklearn expects here. I'm not sure whether fitting, mutating internal variables, and fitting again is correct usage, but I somehow doubt it (especially as n_estimators
is a constructor parameter, not something obviously meant to be changed between fits).
Your basic approach (with regard to this library and this classifier) is probably not a good idea for your out-of-core learning here! I would not pursue it further.
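For actual out-of-core learning, the route sklearn supports is an estimator with partial_fit, such as SGDClassifier. A minimal sketch (batching via np.array_split is illustrative, not from the original answer):

```python
# Sketch of sklearn's supported out-of-core pattern: estimators with
# partial_fit consume data one mini-batch at a time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, random_state=0)
classes = np.unique(y)  # partial_fit needs the full class list up front

clf = SGDClassifier(random_state=0)
for X_batch, y_batch in zip(np.array_split(X, 10), np.array_split(y, 10)):
    clf.partial_fit(X_batch, y_batch, classes=classes)

print(clf.score(X, y))
```

Unlike the warm_start trick, every update here refines the same model rather than bolting new, independent trees onto an ensemble.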
Just to add to @sascha's excellent answer, this hacky method works:
rf = RandomForestClassifier(n_estimators=1, warm_start=True)
rf.fit(X_train, y_train)
rf.n_estimators += 1          # request one more tree
rf.fit(X_train, y_train)      # only the new tree is trained
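The same trick generalizes to a loop over successive data batches (a sketch on synthetic data; batch sizes and tree counts are illustrative). Each fit trains only the newly requested trees, and each of those trees sees only the current batch:

```python
# Grow the forest by 10 trees per batch using the warm_start trick.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, random_state=0)
batches = list(zip(np.array_split(X, 3), np.array_split(y, 3)))

rf = RandomForestClassifier(n_estimators=10, warm_start=True, random_state=0)
for i, (X_batch, y_batch) in enumerate(batches):
    if i > 0:
        rf.n_estimators += 10  # must grow before every refit
    rf.fit(X_batch, y_batch)   # trains only the 10 new trees

print(len(rf.estimators_))
```

Caveat: every batch must contain all class labels (the forest's classes_ are fixed by the data it sees), and earlier trees are never revisited, so this is not equivalent to training one forest on the full dataset.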