Activation function of the neurons
Calculates the derivative of the activation function
Color map for the activities
Optionally, one can specify how to reshape the neuron activities for visualization (height, width).
Calculates the gradient by pointwise multiplying the backpropagated error passed from above with the pointwise application of the derivative function to the cachedInputPlusBias, and summing the rows.
The matrix obtained before summing the rows is the new backpropagated error.
The error propagated from above, formatted the same way as the input and output (one row for each example).
gradient (Layer-valued) and the next backpropagated error
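The backward step described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the names `backward`, `derivative`, and `cached_input_plus_bias` are assumptions mirroring the documentation, and plain nested lists stand in for whatever matrix type the library uses.

```python
def backward(error, cached_input_plus_bias, derivative):
    """error: one row per example; returns (bias_gradient, next_error).

    Hypothetical sketch of the documented backward pass.
    """
    # Pointwise product of the backpropagated error with the derivative
    # applied pointwise to the cached input-plus-bias matrix.
    next_error = [
        [e * derivative(x) for e, x in zip(err_row, cache_row)]
        for err_row, cache_row in zip(error, cached_input_plus_bias)
    ]
    # Summing the rows (i.e. over examples) yields the bias gradient.
    gradient = [sum(col) for col in zip(*next_error)]
    return gradient, next_error
```

The matrix computed before the row summation is exactly what gets passed on as the next backpropagated error.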
Adds biases to each row of the input and applies the activation function pointwise.
Caches the input matrix with added biases in cachedInputPlusBias
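A minimal sketch of this forward step, assuming plain nested lists as the matrix representation; the function name `propagate` follows the documentation, while the rest of the signature is a guess.

```python
def propagate(input_matrix, biases, activation):
    """Adds biases to each row and applies the activation pointwise.

    Hypothetical sketch: input_matrix has one row per example; the
    biased input is returned alongside the output so the backward
    pass can reuse it (the documented cachedInputPlusBias).
    """
    cached_input_plus_bias = [
        [x + b for x, b in zip(row, biases)] for row in input_matrix
    ]
    output = [
        [activation(x) for x in row] for x_row in [None] for row in cached_input_plus_bias
    ]
    return output, cached_input_plus_bias
```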
The reversal is trivial, just
Does exactly the same as the propagate method.
Abstract layer representing a single row of units with a differentiable activation function and some biases (one bias value for each unit).
One only has to override the activation and derivative methods, as well as the abstract methods inherited from Layer (reparameterized); everything else is already implemented.
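The override pattern described above can be sketched like this. The class and method names (`ActivationLayer`, `SigmoidLayer`) and the use of Python's `abc` machinery are illustrative assumptions; only `activation`, `derivative`, and the cached biased input come from the documentation.

```python
import math
from abc import ABC, abstractmethod


class ActivationLayer(ABC):
    """One row of units with a differentiable activation and per-unit biases.

    Hypothetical sketch: subclasses override only activation/derivative.
    """

    def __init__(self, biases):
        self.biases = biases
        self.cached_input_plus_bias = None

    @abstractmethod
    def activation(self, x):
        ...

    @abstractmethod
    def derivative(self, x):
        ...

    def propagate(self, input_matrix):
        # Add biases to each row, cache the result, apply the activation.
        self.cached_input_plus_bias = [
            [x + b for x, b in zip(row, self.biases)] for row in input_matrix
        ]
        return [
            [self.activation(x) for x in row]
            for row in self.cached_input_plus_bias
        ]


class SigmoidLayer(ActivationLayer):
    """Example subclass: only the two activation methods are overridden."""

    def activation(self, x):
        return 1.0 / (1.0 + math.exp(-x))

    def derivative(self, x):
        s = self.activation(x)
        return s * (1.0 - s)
```

As the documentation states, a concrete layer supplies just the activation function and its derivative; forward propagation and caching come for free from the base class.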