Using sklearn voting ensemble with partial fit
It's not too hard to implement the voting. Here's my implementation:
import numpy as np

class VotingClassifier(object):
    """Implements a voting classifier for pre-trained classifiers"""

    def __init__(self, estimators):
        self.estimators = estimators

    def predict(self, X):
        # collect each estimator's predictions as one column
        Y = np.zeros([X.shape[0], len(self.estimators)], dtype=int)
        for i, clf in enumerate(self.estimators):
            Y[:, i] = clf.predict(X)
        # apply majority voting row by row
        y = np.zeros(X.shape[0])
        for i in range(X.shape[0]):
            y[i] = np.argmax(np.bincount(Y[i, :]))
        return y
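Since the question is about partial_fit, here is a minimal usage sketch for the class above; the SGDClassifier choice, the synthetic data, and the chunking are my assumptions, not part of the answer. Note that np.bincount requires the labels to be non-negative integers.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, random_state=0)
classes = np.unique(y)  # partial_fit needs every class on the first call

clf1, clf2, clf3 = (SGDClassifier(random_state=i) for i in range(3))
# feed the data in chunks, the way partial_fit is meant to be used
for chunk in np.array_split(np.arange(len(y)), 10):
    for clf in (clf1, clf2, clf3):
        clf.partial_fit(X[chunk], y[chunk], classes=classes)

eclf = VotingClassifier(estimators=[clf1, clf2, clf3])
print(eclf.predict(X[:5]))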
The Mlxtend library has an EnsembleVoteClassifier implementation which allows you to pass in pre-fitted models. For example, if you have three pre-trained models clf1, clf2, and clf3, the following code would work:
from mlxtend.classifier import EnsembleVoteClassifier
import copy
eclf = EnsembleVoteClassifier(clfs=[clf1, clf2, clf3], weights=[1,1,1], fit_base_estimators=False)
When set to False, the fit_base_estimators argument of EnsembleVoteClassifier ensures that the classifiers are not refit.
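A short usage sketch under the assumption that clf1, clf2, and clf3 are already trained: if I read the mlxtend docs correctly, fit still has to be called once so the ensemble can learn the class labels, but with fit_base_estimators=False the base classifiers themselves are left untouched.

# fit() here should only set up the label encoding; the pre-trained
# clf1, clf2, clf3 are not refit because fit_base_estimators=False
eclf.fit(X, y)
y_pred = eclf.predict(X)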
In general, when looking for more advanced features that scikit-learn does not provide, mlxtend is a good first point of reference.
Unfortunately, this is currently not possible with scikit-learn's VotingClassifier.
But you can use http://sebastianraschka.com/Articles/2014_ensemble_classifier.html (the article from which VotingClassifier was implemented) to implement your own voting classifier that can take pre-fitted models.
We can also look at the source code here and modify it for our use:
from sklearn.preprocessing import LabelEncoder
import numpy as np

le_ = LabelEncoder()

# When you do partial_fit, the first fit of any classifier requires
# all available labels (output classes), so you should supply all
# of the labels here in y.
le_.fit(y)

# Fill the list below with fitted or partially fitted estimators
clf_list = [clf1, clf2, clf3, ... ]

# Fill weights -> array-like, shape = [n_classifiers], or None for
# uniform weighting
weights = [clf1_wgt, clf2_wgt, ... ]
weights = None

# For hard voting (encode the predictions first so that np.bincount
# gets non-negative integers):
pred = np.asarray([le_.transform(clf.predict(X)) for clf in clf_list]).T
pred = np.apply_along_axis(lambda x:
                           np.argmax(np.bincount(x, weights=weights)),
                           axis=1,
                           arr=pred.astype('int'))

# For soft voting:
pred = np.asarray([clf.predict_proba(X) for clf in clf_list])
pred = np.average(pred, axis=0, weights=weights)
pred = np.argmax(pred, axis=1)

# Finally, reverse transform the labels for correct output:
pred = le_.inverse_transform(pred)
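Put together, the recipe above could be wrapped in a small helper. This is just a sketch (the function name and signature are mine, not from the sklearn source), with predictions label-encoded before voting so that np.bincount sees non-negative integers:

import numpy as np
from sklearn.preprocessing import LabelEncoder

def voting_predict(clf_list, X, y_all, weights=None, voting='hard'):
    """Combine pre-fitted classifiers; y_all must contain every label."""
    le_ = LabelEncoder().fit(y_all)
    if voting == 'hard':
        pred = np.asarray([le_.transform(clf.predict(X))
                           for clf in clf_list]).T
        pred = np.apply_along_axis(
            lambda x: np.argmax(np.bincount(x, weights=weights)),
            axis=1, arr=pred)
    else:  # soft voting: average the predicted probabilities
        pred = np.asarray([clf.predict_proba(X) for clf in clf_list])
        pred = np.argmax(np.average(pred, axis=0, weights=weights), axis=1)
    return le_.inverse_transform(pred)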
Workaround:
VotingClassifier checks that estimators_ is set in order to understand whether it is fitted, and uses the estimators in the estimators_ list for prediction. If you have pre-trained classifiers, you can put them in estimators_ directly, as in the code below.
However, it also uses LabelEncoder, so it assumes labels are like 0, 1, 2, ..., and you also need to set le_ and classes_ (see below).
from sklearn.ensemble import VotingClassifier
from sklearn.preprocessing import LabelEncoder

clf_list = [clf1, clf2, clf3]

eclf = VotingClassifier(estimators=[('1', clf1), ('2', clf2), ('3', clf3)], voting='soft')
eclf.estimators_ = clf_list
eclf.le_ = LabelEncoder().fit(y)
eclf.classes_ = eclf.le_.classes_

# Now it will work without calling fit
eclf.predict(X)
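As a hypothetical end-to-end check tying this back to partial_fit (GaussianNB is my choice here because it supports both partial_fit and the predict_proba needed for voting='soft'; the data and names are not from the answer). Since this relies on VotingClassifier internals, it may break across scikit-learn versions:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import LabelEncoder

X, y = make_classification(n_samples=600, random_state=0)
classes = np.unique(y)

# pre-train each classifier incrementally on its own slice of the data
clf_list = [GaussianNB(), GaussianNB(), GaussianNB()]
for clf, part in zip(clf_list, np.array_split(np.arange(len(y)), 3)):
    clf.partial_fit(X[part], y[part], classes=classes)

eclf = VotingClassifier(
    estimators=[('1', clf_list[0]), ('2', clf_list[1]), ('3', clf_list[2])],
    voting='soft')
eclf.estimators_ = clf_list
eclf.le_ = LabelEncoder().fit(y)
eclf.classes_ = eclf.le_.classes_

print(eclf.predict(X[:5]))  # works without ever calling eclf.fit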