Tensor indexing in custom loss function
Usually, you work only with the backend functions, and you never need to know the actual values of the tensors. For example:
from keras import backend as K
from keras.losses import mean_squared_error

def new_mse(y_true, y_pred):
    # swap elements 1 and 3 by concatenating slices of the original tensor
    swapped = K.concatenate([y_pred[:1], y_pred[3:], y_pred[2:3], y_pred[1:2]])

    # if the tensors are shaped like (batchSize, 4), use this instead:
    # swapped = K.concatenate([y_pred[:,:1], y_pred[:,3:], y_pred[:,2:3], y_pred[:,1:2]])

    # losses
    regularLoss = mean_squared_error(y_true, y_pred)
    swappedLoss = mean_squared_error(y_true, swapped)
    # take the minimum of the two losses for each sample
    # (K.minimum is element-wise; concatenating the losses and taking
    #  K.min would collapse the whole batch into a single scalar)
    return K.minimum(regularLoss, swappedLoss)
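Once defined, this function can be passed to compile like any built-in loss. A minimal sketch; the model architecture and input shape here are just placeholders, with the Dense(4) output matching the 4-element vectors the loss assumes:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(4, input_shape=(10,))])
model.compile(optimizer='adam', loss=new_mse)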
So, for your items:
You're totally right: avoid numpy at all costs in tensor operations (loss functions, activations, custom layers, etc.).
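To illustrate the difference (this toy sketch is not from the original answer): the backend version below stays symbolic and differentiable, while the numpy version, shown commented out, would try to pull concrete values out of symbolic tensors and fail during graph construction.

from keras import backend as K

# fine: backend ops stay symbolic and differentiable
def abs_loss(y_true, y_pred):
    return K.mean(K.abs(y_true - y_pred), axis=-1)

# broken: numpy needs concrete values, but y_true/y_pred are symbolic
# import numpy as np
# def broken_abs_loss(y_true, y_pred):
#     return np.mean(np.abs(y_true - y_pred))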
The result of K.shape() is also a tensor. It probably has shape (2,) because it holds two values: one will be 7032, the other 6. But you can only see these values when you eval this tensor, and doing that inside loss functions is often a bad idea.
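If you do want to inspect those values for debugging, evaluate the shape tensor outside the loss function. A small sketch; the 7032 x 6 variable just mirrors the shape mentioned above:

import numpy as np
from keras import backend as K

x = K.variable(np.zeros((7032, 6)))
shapeTensor = K.shape(x)       # a tensor, not a tuple of numbers
print(K.eval(shapeTensor))     # -> [7032    6]
print(K.int_shape(x))          # -> (7032, 6), the static shape as a plain tuple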