Pytorch custom activation functions?

There are four possibilities depending on what you are looking for. You will need to ask yourself two questions:

Q1) Will your activation function have learnable parameters?

If yes, you have no choice but to create your activation function as an nn.Module class because you need to store those weights.

If no, you are free to simply create a normal function, or a class, depending on what is convenient for you.

Q2) Can your activation function be expressed as a combination of existing PyTorch functions?

If yes, you can simply write it as a combination of existing PyTorch functions and won't need to create a backward function which defines the gradient.

If no, you will need to write the gradient by hand.

Example 1: SiLU function

The SiLU function f(x) = x * sigmoid(x) does not have any learned weights and can be written entirely with existing PyTorch functions, thus you can simply define it as a function:

def silu(x):
    return x * torch.sigmoid(x)

and then simply use it as you would torch.relu or any other activation function.
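For instance, you can drop silu between the layers of an ordinary model (a minimal sketch; the layer sizes here are arbitrary):

```python
import torch
import torch.nn as nn

def silu(x):
    return x * torch.sigmoid(x)

# Hypothetical two-layer network using silu as the activation
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 1)

    def forward(self, x):
        return self.fc2(silu(self.fc1(x)))

out = Net()(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 1])
```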

Example 2: SiLU with learned slope

In this case you have one learned parameter, the slope, so you need to make it a class.

class LearnedSiLU(nn.Module):
    def __init__(self, slope = 1):
        super().__init__()
        # register the slope as a learnable parameter, initialized to `slope`
        # (note: writing `slope * nn.Parameter(...)` would produce a plain
        # tensor and the slope would never be registered or trained)
        self.slope = torch.nn.Parameter(slope * torch.ones(1))

    def forward(self, x):
        return self.slope * x * torch.sigmoid(x)
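A quick sanity check (a sketch, with the slope registered via nn.Parameter) that the slope actually shows up in the module's parameters and receives a gradient:

```python
import torch
import torch.nn as nn

class LearnedSiLU(nn.Module):
    def __init__(self, slope=1):
        super().__init__()
        # registered parameter, so the optimizer will see and update it
        self.slope = nn.Parameter(slope * torch.ones(1))

    def forward(self, x):
        return self.slope * x * torch.sigmoid(x)

act = LearnedSiLU(slope=2)
print(len(list(act.parameters())))  # 1 — the slope is registered

out = act(torch.randn(5))
out.sum().backward()
print(act.slope.grad is not None)   # True — the slope gets a gradient
```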

Example 3: with backward

If you have something for which you need to create your own gradient function, you can look at this example: Pytorch: define custom function
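The general pattern is to subclass torch.autograd.Function and implement static forward and backward methods. Below is a sketch of that pattern (the step function and its surrogate gradient are illustrative assumptions, not taken from the linked answer):

```python
import torch

class MyStep(torch.autograd.Function):
    """A step activation with a hand-written (surrogate) gradient."""

    @staticmethod
    def forward(ctx, x):
        # save the input for use in backward
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # the true gradient is zero almost everywhere, so pass the
        # incoming gradient through only where |x| < 1 (straight-through
        # style surrogate; this choice is an assumption for illustration)
        x, = ctx.saved_tensors
        return grad_output * (x.abs() < 1).float()

step = MyStep.apply  # use .apply, not the class itself, in forward passes

x = torch.randn(4, requires_grad=True)
y = step(x)
y.sum().backward()
print(x.grad is not None)  # True — gradients flow via our backward
```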


You can write a customized activation function like below (e.g. weighted Tanh).

class weightedTanh(nn.Module):
    def __init__(self, weights = 1):
        super().__init__()
        self.weights = weights

    def forward(self, input):
        # this is exactly torch.tanh(self.weights * input),
        # written out via its exponential form
        ex = torch.exp(2 * self.weights * input)
        return (ex - 1) / (ex + 1)

You don't need to worry about backpropagation as long as you use autograd-compatible operations; autograd derives the backward pass for you.
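As a quick sanity check (a sketch with arbitrary random inputs), autograd differentiates the class above with no extra work, and its output matches the built-in torch.tanh:

```python
import torch
import torch.nn as nn

class weightedTanh(nn.Module):
    def __init__(self, weights=1):
        super().__init__()
        self.weights = weights

    def forward(self, input):
        # exponential form of tanh(weights * input)
        ex = torch.exp(2 * self.weights * input)
        return (ex - 1) / (ex + 1)

act = weightedTanh(weights=2)
x = torch.randn(6, requires_grad=True)
y = act(x)
y.sum().backward()          # autograd builds the backward pass itself

print(x.grad is not None)                     # True
print(torch.allclose(y, torch.tanh(2 * x)))   # True — same as built-in tanh
```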