When should one use LinearSVC or SVC?
Mathematically, optimizing an SVM is a convex optimization problem, usually with a unique minimizer, so there is essentially a single solution to the underlying mathematical problem.
The differences in results come from several aspects: SVC and LinearSVC are supposed to optimize the same problem, but in fact all liblinear estimators penalize the intercept, whereas libsvm ones don't (IIRC). This leads to a different mathematical optimization problem and thus different results. There may also be other subtle differences such as scaling and the default loss function (edit: make sure you set loss='hinge' in LinearSVC). Next, in multiclass classification, liblinear does one-vs-rest by default whereas libsvm does one-vs-one.
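To make that concrete, here is a minimal sketch (on assumed synthetic data) of how one might bring LinearSVC closer to SVC(kernel='linear'): hinge loss instead of the default squared hinge, and a large intercept_scaling so the penalty on the intercept becomes negligible. The values are illustrative assumptions, not a guaranteed equivalence.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

# Synthetic binary problem (assumption, just for illustration).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# libsvm-based solver: plain hinge loss, intercept is not penalized.
svc = SVC(kernel='linear', C=1.0).fit(X, y)

# liblinear-based solver: switch to hinge loss and scale the intercept
# so that its regularization has little effect.
lsvc = LinearSVC(loss='hinge', C=1.0, intercept_scaling=10.0,
                 max_iter=10000).fit(X, y)

# The solutions should be close, but not necessarily identical.
print(svc.coef_, svc.intercept_)
print(lsvc.coef_, lsvc.intercept_)
```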
SGDClassifier(loss='hinge') is different from the other two in the sense that it uses stochastic gradient descent rather than an exact solver, so it may not converge to the same solution. However, the solution it obtains may generalize better.
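A hedged sketch of that variant (the synthetic data and hyperparameters are assumptions): SGDClassifier with hinge loss optimizes a regularized linear SVM objective stochastically, so the result depends on the random seed and the number of epochs, and feature scaling matters much more than for the exact solvers.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data (assumption).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Standardize first: SGD is sensitive to feature scales.
sgd = make_pipeline(
    StandardScaler(),
    SGDClassifier(loss='hinge', alpha=1e-4, max_iter=1000,
                  tol=1e-3, random_state=0),
)
sgd.fit(X, y)
print(sgd.score(X, y))
```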
Between SVC and LinearSVC, one important decision criterion is that LinearSVC tends to converge faster as the number of samples grows. This is because the linear kernel is a special case that liblinear is specifically optimized for, whereas libsvm is not.
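A rough timing sketch of that effect (the dataset size is an arbitrary assumption; actual numbers will depend on your machine and data):

```python
import time

from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

# A moderately large synthetic problem (assumption).
X, y = make_classification(n_samples=10000, n_features=50, random_state=0)

for name, clf in [('SVC(kernel=linear)', SVC(kernel='linear')),
                  ('LinearSVC', LinearSVC(max_iter=10000))]:
    start = time.perf_counter()
    clf.fit(X, y)
    print(name, 'took', round(time.perf_counter() - start, 2), 'seconds')
```

Typically the liblinear-based estimator finishes much sooner once the sample count reaches the tens of thousands.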
The actual problem lies with the scikit-learn approach, where they call something an SVM which is not quite an SVM. LinearSVC actually minimizes the squared hinge loss instead of the plain hinge loss; furthermore, it penalizes the size of the bias (which is not part of the SVM formulation). For more details refer to this other question: Under what parameters are SVC and LinearSVC in scikit-learn equivalent?
So which one should you use? It is purely problem specific. Due to the no free lunch theorem it is impossible to say "this loss function is best, period". Sometimes the squared hinge loss will work better, sometimes the plain hinge.
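One pragmatic way to decide is simply to cross-validate both losses on your own data; a minimal sketch (the synthetic data and the C grid are arbitrary assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# Stand-in data (assumption); replace with your own problem.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

grid = GridSearchCV(
    LinearSVC(max_iter=10000),
    param_grid={'loss': ['hinge', 'squared_hinge'],
                'C': [0.1, 1.0, 10.0]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```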