How to understand SpatialDropout1D and when to use it?
To make it simple, I would first note that the so-called feature maps (1D, 2D, etc.) are our regular channels. Let's look at some examples:
Dropout(): Let's define a 2D input: [[1, 1, 1], [2, 2, 2]]. Dropout considers every element independently and may result in something like [[1, 0, 1], [0, 2, 2]].
SpatialDropout1D(): In this case the result will look like [[1, 0, 1], [2, 0, 2]]. Notice that the 2nd value was zeroed in both rows: the layer drops a whole channel (feature map) at once rather than individual elements.
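For concreteness, here is a minimal runnable sketch with tf.keras (an assumption; the question doesn't name a specific backend) that reproduces the two behaviours on the toy input above. Keras uses inverted dropout, so surviving values are rescaled by 1/(1 - rate); it is the pattern of zeros that matters here.

```python
import tensorflow as tf

# Toy input: 1 sample, 2 rows (timesteps), 3 channels -> shape (1, 2, 3)
x = tf.constant([[[1., 1., 1.],
                  [2., 2., 2.]]])

# Regular dropout: each of the 6 values is kept or dropped independently.
dropout = tf.keras.layers.Dropout(rate=0.5)
print(dropout(x, training=True))
# e.g. [[[2., 0., 2.], [0., 4., 4.]]]  (kept values are scaled by 1/(1 - rate))

# Spatial dropout: an entire channel is kept or dropped across both rows.
spatial = tf.keras.layers.SpatialDropout1D(rate=0.5)
print(spatial(x, training=True))
# e.g. [[[2., 0., 2.], [4., 0., 4.]]]  (the whole 2nd channel is zeroed)
```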
The noise shape
In order to understand SpatialDropout1D, you should get used to the notion of the noise shape. In plain vanilla dropout, each element is kept or dropped independently. For example, if the tensor has shape [2, 2, 2], each of the 8 elements can be zeroed out depending on a random coin flip (with a certain "heads" probability); in total, there will be 8 independent coin flips, and any number of values may become zero, from 0 to 8.
Sometimes there is a need to do more than that. For example, one may need to drop a whole slice along the 0-axis. The noise_shape in this case is [1, 2, 2], and the dropout involves only 4 independent random coin flips: the mask is broadcast along the first axis, so elements that differ only in their first coordinate are kept or dropped together. The number of zeroed elements can therefore be 0, 2, 4, 6 or 8; it cannot be 1 or 5.
Another way to view this is to imagine that the input tensor is in fact of shape [2, 2], but each value is double-precision (or multi-precision). Instead of dropping individual bytes in the middle of a value, the layer drops the full multi-byte value.
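As a sketch of the same idea in code (assuming TensorFlow, whose tf.nn.dropout accepts a noise_shape argument), the [1, 2, 2] case looks like this:

```python
import tensorflow as tf

x = tf.ones([2, 2, 2])  # 8 elements in total

# Plain dropout: 8 independent coin flips, 0 to 8 values may be zeroed.
y_plain = tf.nn.dropout(x, rate=0.5)

# noise_shape=[1, 2, 2]: only 4 coin flips; the mask is broadcast along
# axis 0, so x[0, i, j] and x[1, i, j] are always kept or dropped together.
y_paired = tf.nn.dropout(x, rate=0.5, noise_shape=[1, 2, 2])

print(y_paired)
# Zeros come in pairs: the count of zeroed elements is 0, 2, 4, 6 or 8,
# never 1 or 5. Kept values are scaled to 2.0 by the 1/(1 - rate) factor.
```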
Why is it useful?
The example above is just for illustration and isn't common in real applications. A more realistic example is this: shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n]. In this case, each batch and channel component is kept or dropped independently, but each row and column is kept or dropped together; in other words, the whole [l, m] feature map is either kept or dropped.
You may want to do this to account for the correlation between adjacent pixels, especially in the early convolutional layers. Effectively, you want to prevent co-adaptation of pixels with their neighbors across the feature maps, and make them learn as if no other feature maps existed. This is exactly what SpatialDropout2D is doing: it promotes independence between feature maps.
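A quick sketch of that behaviour (again assuming tf.keras and the default channels_last layout, where the input is [batch, rows, cols, channels]):

```python
import tensorflow as tf

x = tf.ones([1, 4, 4, 3])  # 1 sample, 4x4 feature maps, 3 channels

sdrop2d = tf.keras.layers.SpatialDropout2D(rate=0.5)
y = sdrop2d(x, training=True)

# Summing each channel over its spatial dimensions shows that a feature map
# is either fully kept (16 * 2.0 = 32.0 after rescaling) or fully zeroed.
print(tf.reduce_sum(y, axis=[1, 2]))  # e.g. [[32., 0., 32.]]
```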
The SpatialDropout1D is very similar: given shape(x) = [k, l, m], it uses noise_shape = [k, 1, m] and drops entire 1-D feature maps.
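Put differently, SpatialDropout1D(rate) on an input of shape [k, l, m] produces the same kind of dropout pattern as a plain Dropout(rate, noise_shape=[k, 1, m]). A small sketch of that equivalence (the two layers draw different random masks, but the structure of the zeros is identical):

```python
import tensorflow as tf

x = tf.ones([8, 10, 16])  # [k, l, m] = (batch, timesteps, channels)

spatial = tf.keras.layers.SpatialDropout1D(rate=0.5)
manual = tf.keras.layers.Dropout(rate=0.5, noise_shape=[8, 1, 16])

for layer in (spatial, manual):
    y = layer(x, training=True)
    # Count non-zero entries along the timestep axis for every (sample, channel):
    # each 1-D feature map is either fully kept (10) or fully dropped (0).
    nonzero = tf.reduce_sum(tf.cast(tf.not_equal(y, 0.0), tf.int32), axis=1)
    print(nonzero.numpy())
```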
Reference: Efficient Object Localization Using Convolutional Networks by Jonathan Tompson et al.