How to train a cascade properly

If you have a small amount of data, you need fewer stages to reach the required false alarm rate you set. This means the cascade classifier is "good enough", so it doesn't have to grow further. The overall false positive rate is the product of every stage's rate, so after a certain number of stages the target value is reached.
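To make the multiplication concrete, here is a minimal Python sketch; the per-stage rate of 0.9 and the overall target of 1e-3 are illustrative values, not parameters taken from the question.

```python
import math

# Illustrative numbers (not taken from the question): a per-stage false
# alarm rate of 0.9 and an overall target of 1e-3.
per_stage_rate = 0.9
overall_target = 1e-3

# Each stage keeps at most `per_stage_rate` of the remaining negatives,
# so the overall false alarm rate after n stages is about per_stage_rate**n.
for n in (5, 10, 20, 40):
    print(f"{n:2d} stages -> overall rate ~ {per_stage_rate ** n:.2e}")

# Smallest n whose product drops below the target.
needed = math.ceil(math.log(overall_target) / math.log(per_stage_rate))
print("stages needed to reach the target:", needed)  # 66 for these numbers
```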

In your options you set it to 0.9. Consider making it higher, like 0.95 or more.

Apart from that, your datasets are small, so it's easier for the algorithm to get good results when validating on them during training. The smaller the dataset, the easier the classifier is to train, so fewer stages are required. But this doesn't mean it will perform better on real data. Also, if you keep the training set small and set a higher rate, the classifier will need more stages to finish and will be more complex, but it is quite likely to end up overfitted to the training set.

To conclude, if the nature of your positives and negatives makes them easy to separate, then you don't need that many samples. Of course, that depends on what you are training the algorithm for. With your number of samples, the 10 stages you set are a lot, so the algorithm terminates earlier (which is not necessarily bad).

When I was training on faces, I think I had around a thousand positives (including all the rotations/deviations) and 2-3 thousand negatives, and needed a classifier of around 11-13 stages, if I remember correctly.

The tutorial by Naotoshi Seo helped me a lot.

Also, something I noticed now, as Safir mentioned: you have too few negative samples compared to the positive ones. They should be at least equal in number, preferably around 1.5-2 times as many as the positives.


The number of negatives is too small compared to the number of positives and the number of stages.


You set maxFalseAlarmRate=0.9.
This means that in each stage no more than 90% of the 40 negative samples (i.e. 36 samples) should lie inside the boundary of the positives. When the algorithm manages to push at least 4 samples outside that boundary, it can move on to the next stage.
This worked for a few stages, until it happened (by mere chance) that fewer than 36 samples were inside the positive boundary from the very beginning (remember that negative-sample extraction is a random process). So when the algorithm was supposed to perform the separation, its job was already done and it did not know how to proceed.
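A small sketch of that per-stage bookkeeping (the 40 negatives and the 0.9 rate come from above; which negatives get rejected is made up for illustration):

```python
# Numbers from the answer above: 40 negative samples, maxFalseAlarmRate = 0.9.
num_negatives = 40
max_false_alarm_rate = 0.9

# Hypothetical result of one stage: True means the stage pushed that
# negative outside the positive boundary (rejected it).
rejected = [True] * 4 + [False] * 36

false_alarm = rejected.count(False) / num_negatives   # 36 / 40 = 0.9
print(f"stage false alarm rate: {false_alarm:.2f}")
print("stage criterion met:", false_alarm <= max_false_alarm_rate)

# If fewer than 36 negatives lie inside the positive boundary before the
# stage has learned anything, this criterion is already satisfied and the
# stage has nothing left to separate -- the situation described above.
```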


I have achieved my goal and trained a good cascade.

  1. First you need several original samples (don't use one image and multiply it with opencv_createsamples). I used 10 different photos of beer bottles; for each I created 200 samples, then I combined all samples into one vector file with 2000 samples (a sketch of this loop follows the list).
  2. -w 20 -h 35 should match the aspect ratio of your original images.
  3. The ratio of positive to negative samples should be around 2:1 (there should be more positive samples).
  4. The number of stages you should choose yourself (for me it is 12-13). The more stages you set, the more precise your cascade will be, but you can also overtrain your cascade so that it won't find anything. The precision of your cascade is indicated by the acceptanceRatio of the last stage; it should be around 0.000412662 or less.
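A rough sketch of step 1, assuming hypothetical file names (bottle_00.jpg ... bottle_09.jpg, negatives.txt) and driving the standard opencv_createsamples tool from Python:

```python
import subprocess

# Hypothetical file names: ten photos bottle_00.jpg ... bottle_09.jpg and a
# background description file negatives.txt; the flags are the standard
# opencv_createsamples ones, counts and sizes mirror the steps above.
photos = [f"bottle_{i:02d}.jpg" for i in range(10)]

for i, photo in enumerate(photos):
    subprocess.run([
        "opencv_createsamples",
        "-img", photo,                     # one original photo
        "-bg", "negatives.txt",            # backgrounds to paste it onto
        "-vec", f"samples_{i:02d}.vec",    # per-photo output
        "-num", "200",                     # 200 distorted samples per photo
        "-w", "20", "-h", "35",
    ], check=True)

# The ten per-photo .vec files still have to be combined into one
# 2000-sample vector file with a separate merge tool (e.g. one of the
# community "mergevec" scripts); opencv_createsamples does not merge them.
```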

But if you get an acceptanceRatio like 7.83885e-07, your cascade is probably overtrained and it won't find anything; try setting fewer stages.
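For intuition about where numbers of this magnitude come from: under the rough assumption that each stage accepts about half of the remaining negative windows (a -maxFalseAlarmRate of 0.5), the acceptance ratio shrinks to roughly 0.5^n after n stages:

```python
# A rough approximation, not the exact traincascade computation: if every
# stage ends up accepting about half of the remaining negative windows
# (maxFalseAlarmRate = 0.5), the acceptance ratio after n stages is ~0.5**n.
max_false_alarm_rate = 0.5

for n in (12, 13, 20):
    print(f"{n} stages -> acceptanceRatio on the order of "
          f"{max_false_alarm_rate ** n:.3g}")

# 12 stages -> ~2.4e-4, the same order as the 0.000412662 quoted above;
# 20 stages -> ~9.5e-7, the order of the 7.83885e-07 that signals overtraining.
```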

!!! And one more important thing: when you train your cascade, each stage should use more than one feature, starting from about the 2nd or 3rd stage. If a stage has only one feature, you won't get a good cascade; in that case you should work on your training images (negative and positive samples). Normal training output looks like this:

[screenshot of the training output]

For training I used opencv_traincascade with the following parameters: -data imgs/cascade/ -vec imgs/vector.vec -bg imgs/negat.dat -numPos 1900 -numNeg 900 -numStages 12 -featureType HAAR -minHitRate 0.999 -maxFalseAlarmRate 0.5 -w 24 -h 30

Both feature types work almost equally well; sometimes HAAR is a little better, but it is significantly slower than LBP.
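If you want to check this on your own data, a minimal comparison could look like the sketch below; the cascade and image file names are placeholders for whatever your training produced.

```python
import time
import cv2

# Placeholder file names: two cascades trained with -featureType HAAR and
# -featureType LBP respectively, plus any test image.
cascades = {
    "HAAR": cv2.CascadeClassifier("cascade_haar.xml"),
    "LBP": cv2.CascadeClassifier("cascade_lbp.xml"),
}

gray = cv2.cvtColor(cv2.imread("test.jpg"), cv2.COLOR_BGR2GRAY)

for name, cascade in cascades.items():
    start = time.perf_counter()
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{name}: {len(hits)} detections in {elapsed_ms:.1f} ms")
```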
