org.kramerlab.autoencoder.neuralnet

NeuralNetLike

object NeuralNetLike

Linear Supertypes
AnyRef, Any

Type Members

  1. case class ParameterVector[Repr <: NeuralNetLike[Repr]](net: Repr) extends VectorSpace[ParameterVector[Repr]] with Product with Serializable

    An ad hoc vector space structure on the parameters of the layers of the neural nets.
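A minimal sketch of the "ad hoc vector space" idea, using illustrative stand-in types (`ToyNet` and `ParamVec` are hypothetical, not part of this API): wrapping a net's parameters so that whole nets can be added and scaled componentwise is exactly what gradient-based optimizers need.

```scala
// Hypothetical toy net: its parameters are just a vector of weights.
case class ToyNet(weights: Vector[Double])

// Wrapper giving the parameters a vector space structure:
// addition and scalar multiplication act componentwise on the weights.
case class ParamVec(net: ToyNet) {
  def +(other: ParamVec): ParamVec =
    ParamVec(ToyNet(net.weights.zip(other.net.weights).map { case (a, b) => a + b }))
  def *(scalar: Double): ParamVec =
    ParamVec(ToyNet(net.weights.map(_ * scalar)))
}

val params   = ParamVec(ToyNet(Vector(1.0, 2.0)))
val gradient = ParamVec(ToyNet(Vector(3.0, 4.0)))
// A gradient-descent step expressed in vector space terms: params - 0.5 * gradient
val step = params + gradient * -0.5
// step.net.weights == Vector(-0.5, 0.0)
```

The real `ParameterVector` presumably operates on the layer parameters of an actual `NeuralNetLike` instance rather than a flat weight vector, but the algebraic structure it provides is the same.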

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. def differentiableComposition[Repr <: NeuralNetLike[Repr]](data: Mat, errorFunction: DifferentiableFunction[Mat]): DifferentiableFunction[ParameterVector[Repr]]

    Given fixed data and a differentiable error function on matrix-valued outputs, returns a differentiable function that takes wrapped neural nets as arguments and returns neural-net-valued gradients (again wrapped as ParameterVector).

    For this, the argument neural net is composed with the error function and applied to the fixed data.

    Symbolically, if x is our neural net, d the data, and E our error function, this method returns the function x => E(x(d)), which is evaluated in the usual feed-forward manner; its gradient is computed with the classical backpropagation algorithm. The composition might look somewhat odd, but keep in mind that we want a function that takes neural nets as inputs and returns the error on the data as output. Furthermore, it computes neural-net-valued gradients, which simply store the gradients with respect to the parameters of the neural net in a data structure shaped exactly like the net itself.
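A minimal sketch of this composition on a one-parameter "net", with illustrative names (`Params`, `composed`) that are not part of this API: the net x_w(d) = w*d is composed with the squared error E(y) = (y - t)^2, giving f(w) = (w*d - t)^2, and the gradient df/dw = 2*(w*d - t)*d is returned in the same shape as the parameters.

```scala
// Hypothetical parameter container (stands in for ParameterVector).
case class Params(w: Double)

// Compose the net x_w(d) = w*d with the squared error E(y) = (y - t)^2,
// for fixed data d and target t. Returns the value function and its gradient.
def composed(d: Double, t: Double): (Params => Double, Params => Params) = {
  val value = (p: Params) => { val y = p.w * d; (y - t) * (y - t) }          // feed-forward
  val gradient = (p: Params) => { val y = p.w * d; Params(2 * (y - t) * d) } // chain rule (backprop)
  (value, gradient)
}

val (f, grad) = composed(d = 2.0, t = 3.0)
// f(Params(1.0))    == (2 - 3)^2           == 1.0
// grad(Params(1.0)) == Params(2 * -1 * 2)  == Params(-4.0)
```

Note that the gradient is itself a `Params`, mirroring how `differentiableComposition` stores gradients in a structure that looks exactly like the neural net.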

  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  11. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  13. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  14. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  15. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  16. final def notify(): Unit

    Definition Classes
    AnyRef
  17. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  18. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  19. def toString(): String

    Definition Classes
    AnyRef → Any
  20. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  21. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  22. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
