hybrid of max pooling and average pooling
I now use a different solution to combine the two pooling variants:
- feed the tensor to both pooling functions
- concatenate the two results along the channel axis
- use a small convolutional layer to learn how to combine them
This approach, of course, has a higher computational cost, but it is also more flexible. The conv layer after the concatenation can learn to simply blend the two pooling results with a single alpha, but it can also end up using different alphas for different features and, as conv layers do, even combine the pooled features in a completely new way.
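To make the "blend with an alpha" case concrete: on its own, such a fixed blend could be written as a tiny custom layer, roughly like this sketch (AlphaBlendPool2D is a made-up name, and I leave the scalar unconstrained for simplicity):

import tensorflow as tf
from tensorflow.keras.layers import Layer, MaxPooling2D, AveragePooling2D

class AlphaBlendPool2D(Layer):
    # blends max and average pooling with a single learnable scalar
    def __init__(self, pool_size=(2, 2), **kwargs):
        super().__init__(**kwargs)
        self.max_pool = MaxPooling2D(pool_size)
        self.avg_pool = AveragePooling2D(pool_size)

    def build(self, input_shape):
        # one trainable scalar, starting as an even 50/50 blend
        self.alpha = self.add_weight(
            name='alpha', shape=(),
            initializer=tf.keras.initializers.Constant(0.5),
            trainable=True)

    def call(self, x):
        return self.alpha * self.max_pool(x) + (1.0 - self.alpha) * self.avg_pool(x)

The 1x1 conv in the actual solution subsumes this special case and can do strictly more.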
The code (Keras functional API) looks as follows:
from tensorflow.keras.layers import Input, MaxPooling2D, Conv2D
from tensorflow.keras.layers import Concatenate, AveragePooling2D
from tensorflow.keras.models import Model

# implementation of the described custom pooling layer
def hybrid_pool_layer(pool_size=(2, 2)):
    def apply(x):
        # pool with both variants, concatenate along the channel axis,
        # then let a 1x1 convolution learn how to merge them back
        # into the original number of channels
        return Conv2D(int(x.shape[-1]), (1, 1))(
            Concatenate()([
                MaxPooling2D(pool_size)(x),
                AveragePooling2D(pool_size)(x)]))
    return apply
# usage example
inputs = Input(shape=(256, 256, 3))
x = inputs
x = Conv2D(8, (3, 3))(x)
x = hybrid_pool_layer((2, 2))(x)
model = Model(inputs=inputs, outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='nadam')
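A quick sanity check of the resulting dimensions: with the 256x256x3 input above, the valid 3x3 convolution gives 254x254x8 and the hybrid pooling halves that, so:

print(model.output_shape)  # (None, 127, 127, 8)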
Surely one could also leave out the Conv2D and just return the concatenation of the two poolings, leaving the merging work to the next layer. But the implementation above makes sure that the tensor resulting from this hybrid pooling has the same shape one would also expect from a normal, single pooling operation.
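For comparison, that concatenation-only variant could look like the following sketch (concat_pool_layer is my own name for it, reusing the imports from above); note that the channel dimension doubles, so the example model would end in (None, 127, 127, 16) instead:

def concat_pool_layer(pool_size=(2, 2)):
    def apply(x):
        # no 1x1 conv afterwards: the next layer has to merge the two poolings
        return Concatenate()([
            MaxPooling2D(pool_size)(x),
            AveragePooling2D(pool_size)(x)])
    return apply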