Choosing between different cost functions and activation functions of a neural network
I will answer your questions a little bit out of order, starting with more general answers, and finishing with those specific to your particular experiment.
Activation functions. Different activation functions do, in fact, have different properties. Let's first consider an activation function between two layers of a neural network. The only purpose of an activation function there is to serve as a nonlinearity. If you do not put an activation function between two layers, then the two layers together will be no better than one, because their combined effect is still just a linear transformation. For a long while people used the sigmoid function and tanh, chosen pretty much arbitrarily, with sigmoid being more popular, until recently, when ReLU became the dominant nonlinearity. The reason people use ReLU between layers is that it is non-saturating (and is also faster to compute). Think about the graph of the sigmoid function. If the absolute value of x is large, then the derivative of the sigmoid function is small, which means that as we propagate the error backwards, the gradient of the error will vanish very quickly as we go back through the layers. With ReLU the derivative is 1 for all positive inputs, so the gradient of those neurons that fired will not be changed by the activation unit at all and will not slow down gradient descent.
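To see the difference concretely, here is a tiny NumPy sketch (my own illustration, not from your code) comparing the two derivatives:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # derivative of sigmoid: s * (1 - s), which shrinks toward 0 as |x| grows
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # derivative of max(0, x): 1 for positive inputs, 0 otherwise
    return (x > 0).astype(float)

for x in np.array([0.5, 2.0, 5.0, 10.0]):
    print(x, sigmoid_grad(x), relu_grad(x))
# sigmoid's gradient: ~0.24, ~0.10, ~0.007, ~0.00005 -- it vanishes quickly
# ReLU's gradient stays at 1 for every positive input
```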
For the last layer of the network the activation unit also depends on the task. For regression you will want to use the sigmoid or tanh activation, because you want the result to be between 0 and 1. For classification you will want only one of your outputs to be 1 and all the others to be 0, but there is no differentiable way to achieve precisely that, so you will want to use a softmax to approximate it.
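For intuition, here is a minimal NumPy sketch of softmax (the example values are my own) showing how it approximates a one-hot output while staying smooth and differentiable:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; does not change the result
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([1.0, 2.0, 6.0])
print(softmax(logits))  # roughly [0.007, 0.018, 0.976] -- close to one-hot, but smooth
```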
Your example. Now let's look at your example. Your first example tries to compute the output of AND in the following form:

sigmoid(W1 * x1 + W2 * x2 + B)
Note that W1 and W2 will always converge to the same value, because the output for (x1, x2) should be equal to the output for (x2, x1). Therefore, the model that you are fitting is:

sigmoid(W * (x1 + x2) + B)

x1 + x2 can only take one of three values (0, 1 or 2), and you want to return 0 for the case when x1 + x2 < 2 and 1 for the case when x1 + x2 = 2. Since the sigmoid function is rather smooth, it will take very large values of W and B to get the output close to the desired one, but because of the small learning rate they can't reach those large values quickly. Increasing the learning rate in your first example will increase the speed of convergence.
Your second example converges better because the softmax function is good at making precisely one output equal to 1 and all others equal to 0. Since this is precisely your case, it does converge quickly. Note that sigmoid would also eventually converge to good values, but it would take significantly more iterations (or a higher learning rate).
What to use. Now to the last question: how does one choose which activation and cost functions to use? The following advice will work for the majority of cases:
- If you do classification, use softmax for the last layer's nonlinearity and cross entropy as a cost function (see the sketch after this list).
- If you do regression, use sigmoid or tanh for the last layer's nonlinearity and squared error as a cost function.
- Use ReLU as the nonlinearity between layers.
- Use better optimizers (AdamOptimizer, AdagradOptimizer) instead of GradientDescentOptimizer, or use momentum for faster convergence.
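As a concrete sketch along these lines (written against the TensorFlow 1.x API that those optimizer class names come from; the one-hot encoding, learning rate and step count are my own choices, not taken from your code), learning AND with a softmax output, cross-entropy cost and AdamOptimizer could look like this:

```python
import tensorflow as tf  # TF 1.x style API

x = tf.placeholder(tf.float32, [None, 2])   # two boolean inputs
y = tf.placeholder(tf.float32, [None, 2])   # one-hot targets: [1, 0] = false, [0, 1] = true

W = tf.Variable(tf.zeros([2, 2]))
b = tf.Variable(tf.zeros([2]))
logits = tf.matmul(x, W) + b

# softmax + cross entropy for classification, optimized with Adam
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.AdamOptimizer(0.1).minimize(loss)

data = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [[1, 0], [1, 0], [1, 0], [0, 1]]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        sess.run(train_step, feed_dict={x: data, y: labels})
    # the softmax outputs should be close to the one-hot targets for each input
    print(sess.run(tf.nn.softmax(logits), feed_dict={x: data}))
```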