How to apply Guided BackProp in Tensorflow 2.0?

First of all, you have to change how the gradient is computed through a ReLU, i.e. apply the Guided BackProp rule

R^l = (f^l > 0) * (R^(l+1) > 0) * R^(l+1)

where f^l is the forward activation of the ReLU in layer l and R^(l+1) is the gradient coming in from the layer above.

The paper (Springenberg et al., "Striving for Simplicity: The All Convolutional Net") also illustrates this rule with a graphical example.

This formula can be implemented with the following code:

@tf.RegisterGradient("GuidedRelu")
def _GuidedReluGrad(op, grad):
    gate_f = tf.cast(op.outputs[0] > 0, "float32")  # forward gate: f^l > 0
    gate_R = tf.cast(grad > 0, "float32")           # backward gate: R^(l+1) > 0
    return gate_f * gate_R * grad
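
In words: the gradient is only let through at positions where both the forward activation (gate_f) and the incoming gradient (gate_R) are positive. Plain backpropagation would use only gate_f, and a deconvnet would use only gate_R; Guided BackProp combines both gates.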

Now you have to override the gradient of TF's original ReLU with this one:

with tf.compat.v1.get_default_graph().gradient_override_map({'Relu': 'GuidedRelu'}):
    # build the model and compute the gradient here (see the sketch below)
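
Note that gradient_override_map is a graph-mode feature and only affects Relu ops that are created inside the with block, so the forward pass has to be built there. Below is a minimal sketch of what could go inside; the tiny stand-in network, the input shape and the neuron index chosen are just placeholders for your own model:

import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # gradient_override_map needs graph mode

with tf.compat.v1.get_default_graph().gradient_override_map({'Relu': 'GuidedRelu'}):
    x = tf.compat.v1.placeholder(tf.float32, shape=(1, 8, 8, 3), name="input")
    # tiny stand-in network: one conv layer + ReLU + global average pooling
    w = tf.compat.v1.get_variable("w", shape=(3, 3, 3, 4))
    conv = tf.nn.conv2d(x, w, strides=1, padding="SAME")
    relu = tf.nn.relu(conv)                     # this Relu now uses the GuidedRelu gradient
    logits = tf.reduce_mean(relu, axis=(1, 2))  # shape (1, 4), one score per channel

    chosen = 2                                  # index of the neuron/class to visualize
    target = logits[:, chosen]                  # keep only the chosen neuron
    guided_grads = tf.gradients(target, x)[0]   # Guided BackProp map w.r.t. the input

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    image = np.random.rand(1, 8, 8, 3).astype("float32")
    saliency = sess.run(guided_grads, feed_dict={x: image})
    print(saliency.shape)  # (1, 8, 8, 3)

If you work with a Keras model instead of the stand-in network, call the model for the first time inside the with block so that its Relu ops are created under the override.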

After computing the gradient, you can visualize the result. One last remark: the visualization is computed for a single class, i.e. you take the activation of the chosen neuron and set the activations of all other neurons to zero as the starting point of Guided BackProp.
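
As a sketch of that last step, reusing the hypothetical logits, chosen and x from the example above, zeroing out every activation except the chosen one can be written as:

mask = tf.one_hot([chosen], depth=logits.shape[-1])  # 1 for the chosen neuron, 0 elsewhere
target = tf.reduce_sum(logits * mask, axis=-1)       # all other activations are set to zero
guided_grads = tf.gradients(target, x)[0]            # backprop only the chosen activation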