Propagates the input from the visible layer up to the top layer
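The upward pass amounts to folding the visible-layer activations through each layer in order; a minimal Python sketch, where the list-of-callables layer interface and the `propagate_up` name are assumptions of the sketch, not the library's API:

```python
import numpy as np

def propagate_up(layers, visible):
    # Fold the visible-layer activations upward through each layer in turn.
    activations = visible
    for layer in layers:
        activations = layer(activations)
    return activations

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Example: two affine maps, each followed by a sigmoid nonlinearity.
w1 = np.array([[1.0, -1.0], [0.5, 0.5]])
w2 = np.array([[2.0, 0.0]])
layers = [lambda v: sigmoid(w1 @ v), lambda h: sigmoid(w2 @ h)]
top = propagate_up(layers, np.array([1.0, 0.0]))
```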
Builds a neural net of the right type and of the right shape out of specified layers.
Note that this method depends on the instance, not just the class:
for example, the Autoencoder has to know which layer is its
'central' one.
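Why the builder must be instance-dependent can be sketched in Python with a hypothetical `Autoencoder` class: the `central_index` below is per-instance state that a purely class-level builder could not know.

```python
class Autoencoder:
    # Sketch: the net builder lives on the instance because only the
    # instance knows which of its layers is the 'central' (code) layer.
    def __init__(self, layer_sizes, central_index):
        self.layer_sizes = layer_sizes
        self.central_index = central_index  # instance state, not class state

    def build(self):
        # Connection shapes pair each layer size with the next one.
        return list(zip(self.layer_sizes[:-1], self.layer_sizes[1:]))

    def central_size(self):
        return self.layer_sizes[self.central_index]

net = Autoencoder([4, 3, 2, 3, 4], central_index=2)
shapes = net.build()
```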
Enumerates layers of this (linear) neural net.
TODO: generalize this to arbitrary directed acyclic graphs; what's so special about lists?
Performs optimization of all parameters of the neural network
using the specified input and output, and the specified factory
for the error function (defaults to SquareErrorFunctionFactory).
The standard feed-forward algorithm is used to evaluate the function; backpropagation is used to compute the gradient.
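A minimal Python sketch of such a training loop for a single linear layer under the square error: the forward pass evaluates the prediction, and the update applies the hand-derived backpropagated gradient. The `train` helper and its parameters are illustrative, not the library's API.

```python
import numpy as np

def train(weights, inputs, targets, learning_rate=0.1, epochs=200):
    # Gradient descent on the mean squared error of one linear layer.
    w = weights.astype(float).copy()
    for _ in range(epochs):
        predictions = inputs @ w          # feed-forward pass
        errors = predictions - targets    # dE/dprediction for square error
        gradient = inputs.T @ errors / len(inputs)
        w -= learning_rate * gradient     # step against the gradient
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w
w = train(np.zeros(2), X, y)
```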
Assumes that this is a "usual" neural net with alternating unit and connection layers and prepends an affine linear transformation to it.
Why the heck did I implement biased layers at all, why didn't I stuff all this cruft into something like "AffineLinearTransform" or so... Damn
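Folding the bias into a separate prepended affine map, as the comment above suggests, can be sketched like this; the `prepend_affine` helper and the identity `net` are hypothetical, for illustration only.

```python
import numpy as np

def prepend_affine(net_forward, A, b):
    # Wrap an existing net with an affine map x -> A @ x + b applied first.
    # Keeping bias in a dedicated affine transform avoids building bias
    # handling into every layer.
    return lambda x: net_forward(A @ x + b)

net = lambda x: x  # stand-in for an existing net's forward function
A = np.array([[2.0, 0.0], [0.0, 3.0]])
b = np.array([1.0, -1.0])
wrapped = prepend_affine(net, A, b)
out = wrapped(np.array([1.0, 1.0]))
```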
Propagates the output from top layer down to the visible layer
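The downward pass mirrors the upward one, folding the top-layer activations through the layers in reverse; a sketch assuming each layer's downward map is a callable (a hypothetical interface):

```python
def propagate_down(backward_layers, top):
    # Fold the top-layer activations down through each layer's
    # downward map, from the top layer back to the visible layer.
    activations = top
    for backward in reversed(backward_layers):
        activations = backward(activations)
    return activations

# Toy downward maps for a two-layer net.
down = propagate_down([lambda h: 2 * h, lambda t: t + 1], 5)
```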
Creates a new autoencoder with an additional central layer.
Stack of Rbms that approximates the identity function
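How a stack of Rbm encoder/decoder pairs approximates the identity can be sketched as an encode-then-decode fold. The toy maps below are chosen as exact inverses so the identity holds exactly; the names and interface are illustrative assumptions.

```python
import numpy as np

def reconstruct(encoders, decoders, visible):
    # Push the input up through every encoder, then back down through
    # the decoders in reverse, so the stack approximates the identity.
    h = visible
    for enc in encoders:
        h = enc(h)
    for dec in reversed(decoders):
        h = dec(h)
    return h

# One (encoder, decoder) pair per toy "Rbm" in the stack.
encoders = [lambda v: v * 2, lambda h: h - 1]
decoders = [lambda h: h / 2, lambda t: t + 1]
x = np.array([1.0, 4.0])
out = reconstruct(encoders, decoders, x)
```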