org.kramerlab.autoencoder.neuralnet

NeuralNetLike

trait NeuralNetLike[+Repr <: NeuralNetLike[Repr]] extends (Mat) ⇒ Mat with Visualizable

Implementation trait for all subclasses of neural net. Implements everything necessary to perform parameter optimization (feed-forward, backpropagation, minimization with non-linear conjugate gradient or a similar algorithm).

For those not familiar with the "-Like" implementation-trait pattern: its main purpose is to ensure that the results of optimization have the right type for all subclasses of neural net (so that one gets an optimized Autoencoder after running the minimization, not just some abstract neural net). To this end, the trait keeps track of the concrete representation type Repr.

Subclasses that inherit the backpropagation functionality from this implementation trait must provide a build method, which takes a list of Layers and returns the right flavor of neural net built from them.
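
The pattern can be sketched as follows. This is a minimal, self-contained sketch: `Layer` is a placeholder and `NetLike`/`withExtraLayer` are hypothetical simplified names; the real trait additionally carries the optimization machinery.

```scala
// Minimal sketch of the "-Like" implementation-trait pattern.
// `Layer` is a placeholder here; the real class is richer.
case class Layer(size: Int)

trait NetLike[+Repr <: NetLike[Repr]] { self: Repr =>
  def layers: List[Layer]

  // Subclasses provide `build`, so that methods defined in the
  // trait can return the concrete representation type.
  def build(layers: List[Layer]): Repr

  // Any operation defined here preserves the concrete type:
  def withExtraLayer(layer: Layer): Repr = build(layers :+ layer)
}

// A concrete flavor: the result of `withExtraLayer` is statically
// an Autoencoder, not just some abstract NetLike.
case class Autoencoder(layers: List[Layer]) extends NetLike[Autoencoder] {
  def build(layers: List[Layer]): Autoencoder = Autoencoder(layers)
}
```

The self type `Repr` is what allows the trait to hand `this` to methods expecting the concrete type.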

Self Type
Repr
Linear Supertypes
Visualizable, (Mat) ⇒ Mat, AnyRef, Any

Abstract Value Members

  1. abstract def build(layers: List[Layer]): Repr

    Builds a neural net of the right type and of the right shape out of the specified layers.

    Note that this method depends on the instance, not just the class: for example, an Autoencoder has to know which of its layers is the 'central' one.

  2. abstract def layers: List[Layer]

    Enumerates the layers of this (linear) neural net.

    TODO: generalize this to arbitrary directed acyclic graphs; what's so special about lists?

Concrete Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def activities(input: Mat): List[Mat]

  7. def andThen[A](g: (Mat) ⇒ A): (Mat) ⇒ A

    Definition Classes
    Function1
    Annotations
    @unspecialized()
  8. def apply(input: Mat): Mat

    Propagates the input from the visible layer up to the top layer.

    Definition Classes
    NeuralNetLike → Function1
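
    A sketch of what apply does, assuming a simplified Mat (here `Vector[Double]`) and layers that can propagate a signal; the `propagate` method and `feedForward` name are assumptions for the sketch, not the actual Layer interface.

    ```scala
    // Simplified stand-ins: the real Mat is a matrix type, and the real
    // Layer interface is richer; these names are assumptions for the sketch.
    type Mat = Vector[Double]
    trait Layer { def propagate(x: Mat): Mat }

    // Feed-forward: thread the input through each layer in order,
    // from the visible layer up to the top layer.
    def feedForward(layers: List[Layer], input: Mat): Mat =
      layers.foldLeft(input)((signal, layer) => layer.propagate(signal))
    ```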
  9. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  10. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  11. def compose[A](g: (A) ⇒ Mat): (A) ⇒ Mat

    Definition Classes
    Function1
    Annotations
    @unspecialized()
  12. var dataSample: Option[Mat]

  13. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  14. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  15. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  16. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  17. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  18. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  19. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  20. final def notify(): Unit

    Definition Classes
    AnyRef
  21. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  22. def optimize(input: Mat, output: Mat, errorFunctionFactory: DifferentiableErrorFunctionFactory[Mat] = SquareErrorFunctionFactory, relativeValidationSetSize: Double, maxEvals: Int, trainingObservers: List[TrainingObserver]): Repr

    Performs optimization of all parameters of the neural network using the specified input and output, and the specified method to construct an error function (defaults to SquareErrorFunctionFactory).

    The standard feed-forward algorithm is used to evaluate the function; backpropagation is used to calculate the gradient.
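
    As an illustration of the evaluate-by-feed-forward / differentiate-by-backpropagation split, here is a toy one-parameter version: plain gradient descent stands in for the conjugate-gradient minimizer, and the square error is differentiated by hand (backpropagation degenerates to a single chain-rule step). The name `optimizeWeight` and all details are assumptions for the sketch.

    ```scala
    // Toy version: one weight w, the net computes x => w * x, square error.
    // Plain gradient descent replaces the conjugate-gradient minimizer.
    def optimizeWeight(input: Vector[Double], target: Vector[Double],
                       maxEvals: Int, learningRate: Double = 0.1): Double = {
      var w = 0.0
      for (_ <- 1 to maxEvals) {
        val predictions = input.map(_ * w)          // feed forward
        // gradient of 0.5 * sum((w*x - t)^2) with respect to w
        val gradient = predictions.zip(target).zip(input)
          .map { case ((p, t), x) => (p - t) * x }
          .sum
        w -= learningRate * gradient                // descent step
      }
      w
    }
    ```

    The real method differs in that the error function is pluggable, part of the data is held out as a validation set, and the gradient over all layers is obtained by backpropagation.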

  23. def prependAffineLinearTransformation(factor: Mat, offset: Mat): Repr

    Assumes that this is a "usual" neural net with alternating unit and connection layers, and prepends an affine linear transformation to it.

    Why the heck did I implement biased layers at all? Why didn't I put all this cruft into something like an "AffineLinearTransform"... Damn.

  24. def reverse(output: Mat): Mat

    Propagates the output from the top layer down to the visible layer.

  25. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  26. def toImage: BufferedImage

    Definition Classes
    NeuralNetLike → Visualizable
  27. def toImage(w: Int, h: Int): BufferedImage

    Definition Classes
    Visualizable
  28. def toImage(colormap: (Double) ⇒ Int): BufferedImage

    Definition Classes
    Visualizable
  29. def toString(): String

    Definition Classes
    Function1 → AnyRef → Any
  30. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  31. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  32. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
