org.kramerlab.autoencoder

package autoencoder

Linear Supertypes
AnyRef, Any

Value Members

  1. val HintonsMiraculousStrategy: () ⇒ RbmTrainingStrategy

  2. val Linear: Int

  3. val NoObservers: List[TrainingObserver]

    Empty list of observers, for convenient method calls from Java code.

  4. val NoPretraining: () ⇒ RbmTrainingStrategy

    One of several predefined pretraining strategy factories; this one performs no pretraining.

  5. val RandomRetryStrategy: () ⇒ RbmTrainingStrategy

  6. val Sigmoid: Int

  7. val TournamentStrategy: () ⇒ RbmTrainingStrategy

  8. def deepAutoencoderStream(layerType: Int, maxDepth: Int, hiddenLayerDims: Seq[Int], data: Mat, useL2Error: Boolean, pretrainingStrategyFactory: () ⇒ RbmTrainingStrategy, finetuneInnerLayers: Boolean, trainingObservers: List[TrainingObserver]): Stream[Autoencoder]

    Our deviation from Hinton's training procedure, based on the idea of successive fine-tuning of "almost-isomorphisms".

  9. def deepAutoencoderStream_java(layerType: Int, maxDepth: Int, compressionFactor: Double, data: Mat, useL2Error: Boolean, pretrainingStrategyFactory: () ⇒ RbmTrainingStrategy, finetuneInnerLayers: Boolean, trainingObservers: List[TrainingObserver]): Iterable[Autoencoder]

    Same as deepAutoencoderStream, but with a Java-compatible Iterable as return type, and with the hidden layer dimensions derived from a compression factor instead of an explicit sequence.
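A hedged usage sketch of the stream variant. All argument values below are illustrative assumptions, not library defaults, and the import location of Mat is guessed:

```scala
import org.kramerlab.autoencoder._
import org.kramerlab.autoencoder.math.Mat  // assumed location of Mat

// Build a lazy stream of autoencoders of increasing depth.
def growingAutoencoders(data: Mat): Stream[Autoencoder] =
  deepAutoencoderStream(
    layerType = Sigmoid,                       // or Linear
    maxDepth = 4,
    hiddenLayerDims = Seq(500, 250, 100, 30),  // illustrative sizes
    data = data,                               // one instance per row
    useL2Error = true,
    pretrainingStrategyFactory = NoPretraining,
    finetuneInnerLayers = true,
    trainingObservers = NoObservers
  )
```

Since the result is a Stream, deeper autoencoders are presumably trained lazily: taking only the first two elements should avoid training the deeper ones.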

  10. package demo

  11. package experiments

  12. def layerDims(numVis: Int, numHid: Int, n: Int, alpha: Double): List[Int]

    Returns the dimensions of the layers for a given input dimension, hidden layer dimension, number of layers, and a parameter alpha that determines the "convexity" of the [layer-index -> layer-size] function (alpha = 1 corresponds to linear interpolation between the number of visible and the number of hidden units, alpha < 1 yields "slim" networks, alpha > 1 yields "fat" networks).
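The interpolation described above can be sketched as follows. This is a guess at the formula reconstructed from the documentation, not the library's actual implementation:

```scala
// Hypothetical reimplementation of the documented behaviour of layerDims;
// the actual formula in org.kramerlab.autoencoder may differ.
def layerDimsSketch(numVis: Int, numHid: Int, n: Int, alpha: Double): List[Int] =
  (0 to n).toList.map { i =>
    // alpha = 1: linear interpolation; alpha < 1: sizes shrink early ("slim");
    // alpha > 1: sizes stay large longer ("fat")
    val t = math.pow(i.toDouble / n, alpha)
    math.round(numVis + t * (numHid - numVis)).toInt
  }
```

With this sketch, the first element is always numVis and the last is always numHid, with alpha shaping the path between them.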

  13. package math

  14. def mkLayer(layerType: Int, layerDim: Int): RbmLayer with BiasedUnitLayer { ... /* 2 definitions in type refinement */ }

    Attributes: protected
  15. package mnist

  16. package neuralnet

  17. def setParallelismGlobally(numThreads: Int): Unit

    Sets the number of threads in the thread pool used by all parallel collections, globally.
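For comparison, the standard library also supports setting parallelism per collection rather than globally. A minimal sketch using documented Scala API (on Scala 2.12, where ForkJoinTaskSupport accepts a java.util.concurrent.ForkJoinPool; earlier versions use scala.concurrent.forkjoin.ForkJoinPool instead):

```scala
import java.util.concurrent.ForkJoinPool
import scala.collection.parallel.ForkJoinTaskSupport

// Assign a task support with a fixed-size pool to one parallel collection,
// leaving the global default untouched.
val xs = (1 to 1000).toVector.par
xs.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(4))
val sumOfSquares = xs.map(x => x * x).sum
```

The global setter above is convenient when the library's internal parallel operations should all share one pool size; the per-collection route is preferable when only your own collections should be affected.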

  18. def trainAutoencoder(data: Mat, compressionDimension: Int, numberOfHiddenLayers: Int, useL2Error: Boolean, trainingStrategyFactory: () ⇒ RbmTrainingStrategy, observers: List[TrainingObserver]): Autoencoder

    Trains a single autoencoder with the algorithm proposed by Hinton.

    data
      input data with one instance per row
    compressionDimension
      dimension of the central layer
    numberOfHiddenLayers
      number of hidden layers between the input and the central bottleneck
    useL2Error
      whether to use the L2 error (true) or the cross-entropy error (false); if unsure, set it to true
    trainingStrategyFactory
      one of the predefined strategy factories; if unsure, pick HintonsMiraculousStrategy for slightly faster training, or TournamentStrategy for slightly better accuracy
    observers
      training observers that can be used to display information about the training progress; use NoObservers if you don't need any
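A hedged call sketch, assuming `data` has already been built as a Mat (how to construct one depends on this library's math package, and the import location of Mat is an assumption):

```scala
import org.kramerlab.autoencoder._
import org.kramerlab.autoencoder.math.Mat  // assumed location of Mat

// All numeric values are illustrative, not recommended defaults.
def compressTo30(data: Mat): Autoencoder =
  trainAutoencoder(
    data,                      // one instance per row
    compressionDimension = 30, // central bottleneck size
    numberOfHiddenLayers = 2,
    useL2Error = true,         // the safe default per the docs
    trainingStrategyFactory = HintonsMiraculousStrategy,
    observers = NoObservers
  )
```

The returned Autoencoder can then be used to map instances down to the 30-dimensional bottleneck representation.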

  19. def trainAutoencoder_Stream(data: Mat, compressionDimension: Int, numberOfHiddenLayers: Int, useL2Error: Boolean, trainingStrategyFactory: () ⇒ RbmTrainingStrategy, observers: List[TrainingObserver]): Autoencoder

    Trains a single Autoencoder using our Autoencoder-Stream strategy.

    data
      input data with one instance per row
    compressionDimension
      dimension of the central layer
    numberOfHiddenLayers
      number of hidden layers between the input and the central bottleneck
    useL2Error
      whether to use the L2 error (true) or the cross-entropy error (false); if unsure, set it to true
    trainingStrategyFactory
      one of the predefined strategy factories; if unsure, pick HintonsMiraculousStrategy for slightly faster training, or TournamentStrategy for slightly better accuracy
    observers
      training observers that can be used to display information about the training progress; use NoObservers if you don't need any

  20. package visualization
