Builds a neural net of the right type and of the right shape out of specified layers.
Note that this method depends on the instance, not just the class:
for example, an Autoencoder has to know which of its layers is the
'central' one.
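A minimal sketch of why `build` must be instance-dependent; the names `Layer`, `Autoencoder`, and `centralIndex` here are illustrative assumptions, not the actual API:

```scala
// Hypothetical sketch: Layer, NeuralNet and the build signature are
// assumptions modeled on the description above.
case class Layer(name: String)

trait NeuralNet { def layers: List[Layer] }

// An Autoencoder must remember which layer is the "central" (bottleneck)
// one, so build depends on instance state, not only on the class.
case class Autoencoder(layers: List[Layer], centralIndex: Int) extends NeuralNet {
  def central: Layer = layers(centralIndex)
  // Rebuild with new layers, preserving the instance-specific central index.
  def build(newLayers: List[Layer]): Autoencoder =
    Autoencoder(newLayers, centralIndex)
}
```

Calling `build` on two different Autoencoder instances with the same layer list can thus yield nets with different central layers.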
Enumerates layers of this (linear) neural net.
TODO: generalize this to arbitrary directed acyclic graphs; what's so special about lists?
Propagates the input from the visible layer up to the top layer.
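Upward propagation over a linear layer list amounts to a left fold. A sketch under assumed layer types (the names `PropLayer`, `Connection`, `TanhUnits` are hypothetical):

```scala
// Minimal feed-forward sketch; layer names and types are assumptions,
// modeled on a net that alternates unit and connection layers.
trait PropLayer { def forward(x: Array[Double]): Array[Double] }

// A connection layer multiplies the input by a weight matrix.
final case class Connection(w: Array[Array[Double]]) extends PropLayer {
  def forward(x: Array[Double]): Array[Double] =
    w.map(row => row.zip(x).map { case (a, b) => a * b }.sum)
}

// A unit layer applies an elementwise activation, here tanh.
case object TanhUnits extends PropLayer {
  def forward(x: Array[Double]): Array[Double] = x.map(math.tanh)
}

// Propagating from the visible layer up to the top is a fold
// over the enumerated layers.
def propagate(layers: List[PropLayer], input: Array[Double]): Array[Double] =
  layers.foldLeft(input)((x, layer) => layer.forward(x))
```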
Performs optimization of all parameters of the neural network
using the specified input and output and the specified method
for defining an error function (defaults to SquareErrorFunctionFactory).
The standard feed-forward algorithm is used to evaluate the function; backpropagation is used to compute the gradient.
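The evaluate/backpropagate/update cycle can be sketched for the smallest case, a single linear layer with square error; plain gradient descent stands in here for conjugate gradient, and all names are assumptions:

```scala
// Square error of a one-layer linear net: E = 0.5 * (w·x - target)^2.
def squareError(w: Array[Double], x: Array[Double], target: Double): Double = {
  val y = w.zip(x).map { case (a, b) => a * b }.sum // feed forward
  0.5 * (y - target) * (y - target)
}

// Backpropagation for this one-layer case: dE/dw_i = (y - target) * x_i.
def gradient(w: Array[Double], x: Array[Double], target: Double): Array[Double] = {
  val y = w.zip(x).map { case (a, b) => a * b }.sum
  x.map(_ * (y - target))
}

// Gradient descent as a stand-in for the real minimization algorithm.
def optimize(w0: Array[Double], x: Array[Double], target: Double,
             lr: Double = 0.1, steps: Int = 200): Array[Double] =
  (1 to steps).foldLeft(w0) { (w, _) =>
    val g = gradient(w, x, target)
    w.zip(g).map { case (wi, gi) => wi - lr * gi }
  }
```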
Assumes that this is a "usual" neural net with alternating unit and connection layers and prepends an affine linear transformation to it.
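Prepending an affine map x ↦ Ax + b means the map runs before the existing net. A hedged sketch; `AffineTransform` and `prependAffine` are assumed names, not the actual API:

```scala
// An affine linear transformation: x ↦ A x + b.
final case class AffineTransform(a: Array[Array[Double]], b: Array[Double]) {
  def apply(x: Array[Double]): Array[Double] =
    a.map(row => row.zip(x).map { case (p, q) => p * q }.sum)
      .zip(b).map { case (ax, bi) => ax + bi }
}

// Prepending: the affine map is applied first, then the original net.
def prependAffine(t: AffineTransform, net: Array[Double] => Array[Double])
    : Array[Double] => Array[Double] =
  x => net(t(x))
```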
TODO: implementing biased layers was arguably a mistake; all that cruft should be folded into something like an "AffineLinearTransform".
Propagates the output from the top layer down to the visible layer.
Implementation trait for all subclasses of neural net. Implements everything necessary to perform parameter optimization (feed-forward, backpropagation, minimization with non-linear conjugate gradient or a similar algorithm).
For those not familiar with the "-Like" implementation trait pattern: its main purpose is to ensure that the results of optimization have the right type for all subclasses of neural net (so that one gets an optimized
Autoencoder after running the minimization, not just some abstract neural net). For this, it keeps the information about the concrete representation type Repr. Subclasses that inherit the backpropagation functionality from this implementation trait have to provide a
build method, which takes a list of Layers and returns the right flavor of neural net built from those layers.
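The pattern can be sketched with F-bounded polymorphism; the names `NeuralNetLike`, `Layer`, `Autoencoder` and the `optimized` method are assumptions modeled on the description above, not the actual API:

```scala
case class Layer(size: Int)

// Hedged sketch of the "-Like" implementation trait pattern: Repr tracks
// the concrete representation type of the subclass.
trait NeuralNetLike[Repr <: NeuralNetLike[Repr]] {
  def layers: List[Layer]
  // Subclasses say how to rebuild "themselves" from a list of layers.
  def build(layers: List[Layer]): Repr
  // Shared functionality returns Repr, so minimization of an Autoencoder
  // yields an Autoencoder. (A stand-in for the real optimization.)
  def optimized: Repr = build(layers)
}

case class Autoencoder(layers: List[Layer], centralIndex: Int)
    extends NeuralNetLike[Autoencoder] {
  def build(ls: List[Layer]): Autoencoder = Autoencoder(ls, centralIndex)
}
```

This is the same device the Scala collections library uses (e.g. `SeqLike`-style traits) so that transforming a concrete collection returns the same concrete type.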