How should the "BatchNorm" layer be used in Caffe?
After each BatchNorm layer, we have to add a Scale layer in Caffe. The reason is that the Caffe BatchNorm layer only subtracts the mean from the input data and divides by its variance; it does not include the γ and β parameters that respectively scale and shift the normalized distribution. Conversely, the Keras BatchNormalization layer includes and applies all of those parameters. Using a Scale layer with the parameter "bias_term" set to true in Caffe provides a safe trick to reproduce the exact behavior of the Keras version. https://www.deepvisionconsulting.com/from-keras-to-caffe/
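A minimal prototxt sketch of this pairing (the layer and blob names here are illustrative):

    layer {
      name: "conv1_bn"
      type: "BatchNorm"
      bottom: "conv1"
      top: "conv1"
    }
    layer {
      name: "conv1_scale"
      type: "Scale"
      bottom: "conv1"
      top: "conv1"
      scale_param {
        bias_term: true  # learn the shift (beta) as well as the scale (gamma)
      }
    }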
If you follow the original paper, batch normalization should be followed by scale and bias layers (the bias can be included via the Scale layer, although this makes the bias parameters inaccessible). use_global_stats should also be changed from training (false) to testing/deployment (true) - which is the default behavior at test time, since Caffe infers use_global_stats from the phase when it is not set explicitly. Note that the first example you give is a prototxt for deployment, so it is correct for it to be set to true.
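Written out explicitly, the two settings look like this (a sketch, with illustrative layer names):

    # training net: normalize with mini-batch statistics
    layer {
      name: "conv1_bn"
      type: "BatchNorm"
      bottom: "conv1"
      top: "conv1"
      batch_norm_param { use_global_stats: false }
    }

    # deploy net: normalize with the accumulated running statistics
    layer {
      name: "conv1_bn"
      type: "BatchNorm"
      bottom: "conv1"
      top: "conv1"
      batch_norm_param { use_global_stats: true }
    }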
I'm not sure about the shared parameters.
I made a pull request to improve the documentation on batch normalization, but closed it because I wanted to revise it, and then never got back to it.
Note that I think lr_mult: 0 for "BatchNorm" is no longer required (perhaps not allowed?), although I'm not finding the corresponding PR now.
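For context, the older idiom being referred to froze BatchNorm's three internal blobs (mean, variance, and the moving-average factor) by zeroing their learning rate - a sketch of that pattern:

    layer {
      name: "conv1_bn"
      type: "BatchNorm"
      bottom: "conv1"
      top: "conv1"
      # freeze the three internal blobs: mean, variance, moving-average factor
      param { lr_mult: 0 }
      param { lr_mult: 0 }
      param { lr_mult: 0 }
    }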