org.kramerlab.autoencoder.neuralnet.rbm

Rbm

class Rbm extends NeuralNet with NeuralNetLike[Rbm]

Represents a restricted Boltzmann machine, which consists of two layers of neurons connected by a complete bipartite graph.

Linear Supertypes
NeuralNet, Serializable, Serializable, NeuralNetLike[Rbm], Visualizable, (Mat) ⇒ Mat, AnyRef, Any

Instance Constructors

  1. new Rbm(visible: RbmLayer, connection: FullBipartiteConnection, hidden: RbmLayer)

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def activities(input: Mat): List[Mat]

    Definition Classes
    NeuralNetLike
  7. def andThen[A](g: (Mat) ⇒ A): (Mat) ⇒ A

    Definition Classes
    Function1
    Annotations
    @unspecialized()
  8. def apply(input: Mat): Mat

Propagates the input from the visible layer up to the top layer.

    Definition Classes
    NeuralNetLike → Function1
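To illustrate what this propagation computes, here is a minimal, self-contained sketch in plain Scala. It is not the library's implementation: `Array` stands in for `Mat`, and the weights, biases, and logistic activation are illustrative assumptions.

```scala
// Conceptual sketch of one upward propagation step of an RBM:
// hidden activation h_j = sigmoid(sum_i v_i * w(i)(j) + bH(j)).
// Plain arrays stand in for the library's Mat type; numbers are made up.
def sigmoid(x: Double): Double = 1.0 / (1.0 + math.exp(-x))

val w  = Array(Array(0.5, -0.2, 0.1),   // 2 visible x 3 hidden weights
               Array(0.3,  0.8, -0.4))
val bH = Array(0.0, 0.1, -0.1)          // hidden biases
val v  = Array(1.0, 0.0)                // input clamped to the visible layer

val hiddenActivation: Array[Double] =
  bH.indices.map { j =>
    val net = v.indices.map(i => v(i) * w(i)(j)).sum + bH(j)
    sigmoid(net)
  }.toArray
```

A full network applies this step layer by layer, from the visible layer up to the top.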
  9. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  10. def build(ls: List[Layer]): Rbm

    Builds a neural net of the right type and of the right shape out of specified layers.

Note that this method depends on the instance, not just the class: for example, an Autoencoder has to know which of its layers is the 'central' one.

    Definition Classes
Rbm → NeuralNet → NeuralNetLike
  11. def clone(): Rbm

    Definition Classes
    Rbm → AnyRef
  12. def compose[A](g: (A) ⇒ Mat): (A) ⇒ Mat

    Definition Classes
    Function1
    Annotations
    @unspecialized()
  13. def confabulate(hiddenActivation: Mat, sampleVisibleUnitsDeterministically: Boolean = false): Mat

    Given hidden activation, samples a hidden state, calculates visible activation, and samples visible state.

If sampleVisibleUnitsDeterministically is set to true, the activations of the visible neurons are returned directly; no random sampling occurs for the visible neurons in that case.
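The sampling sequence described above can be sketched as follows. This is a hedged illustration, not the library's code: arrays stand in for `Mat`, the weights are invented, and a fixed RNG seed is used.

```scala
// Sketch of confabulation: sample binary hidden states from their
// activation probabilities, then propagate down to visible activations
// v_i = sigmoid(sum_j h_j * w(i)(j) + bV(i)). Illustrative values only.
val rng = new scala.util.Random(42)
def sigmoid(x: Double): Double = 1.0 / (1.0 + math.exp(-x))

val w  = Array(Array(0.5, -0.2),        // 2 visible x 2 hidden weights
               Array(0.3,  0.8))
val bV = Array(0.0, -0.1)               // visible biases
val hiddenActivation = Array(0.9, 0.2)  // given hidden activation

// stochastic step: turn each hidden activation into a 0/1 state
val hiddenState =
  hiddenActivation.map(p => if (rng.nextDouble() < p) 1.0 else 0.0)

// downward pass to visible activations
val visibleActivation: Array[Double] = bV.indices.map { i =>
  sigmoid(hiddenState.indices.map(j => hiddenState(j) * w(i)(j)).sum + bV(i))
}.toArray

// with sampleVisibleUnitsDeterministically = true these activations would
// be returned directly; otherwise a visible state would be sampled from them
```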

  14. val connection: FullBipartiteConnection

  15. def contrastiveDivergence(minibatch: Mat, steps: Int = 1, sampleVisibleUnitsDeterministically: Boolean = false): (Mat, Mat, Mat)

Calculates (very coarse) approximations of the partial derivatives of the log probability of the minibatch w.r.t. the biases of the unit layers and the weights of the connection layer.

    returns

A tuple containing three matrices that correspond to the derivatives w.r.t. the biases of the visible units, the weights, and the biases of the hidden units, respectively.

    Attributes
    protected
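The shape of the returned tuple can be illustrated with the textbook contrastive-divergence estimate, where each gradient is a difference between positive (data) and negative (reconstruction) statistics. The vectors below are invented stand-ins, not output of the library.

```scala
// Sketch of the CD gradient estimate for a single example:
// dW ~ v0 h0^T - v1 h1^T (positive minus negative statistics),
// with analogous differences for the bias gradients.
def outer(v: Array[Double], h: Array[Double]): Array[Array[Double]] =
  v.map(vi => h.map(hj => vi * hj))

val v0 = Array(1.0, 0.0)  // data clamped to the visible layer
val h0 = Array(0.8, 0.3)  // hidden activations given v0
val v1 = Array(0.7, 0.2)  // reconstruction after one Gibbs step
val h1 = Array(0.6, 0.4)  // hidden activations given v1

val pos = outer(v0, h0)
val neg = outer(v1, h1)
val dW  = pos.zip(neg).map { case (p, n) =>
  p.zip(n).map { case (a, b) => a - b }
}
val dBv = v0.zip(v1).map { case (a, b) => a - b }  // visible-bias gradient
val dBh = h0.zip(h1).map { case (a, b) => a - b }  // hidden-bias gradient
```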
  16. var dataSample: Option[Mat]

    Definition Classes
    NeuralNetLike
  17. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  18. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  19. def extractStatistics(visibleStates: Mat, hiddenActivations: Mat): (Mat, Mat, Mat)

Extracts the average visible unit state, the average hidden unit activation, and the average products of visible states and hidden activations from the given samples of visible units and hidden activations.

The dimensions of the extracted matrices correspond to those of the visible biases, the weight Mat, and the hidden biases, respectively.

    Attributes
    protected
  20. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  21. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  22. def gibbsSampling(visibleStates: Mat, steps: Int = 1, sampleVisibleUnitsDeterministically: Boolean = false): (Mat, Mat)

Performs steps rounds of Gibbs sampling, starting with the input clamped to the visible layer.

If steps is zero, the visible input together with the hidden layer activation is returned (useful for collecting positive statistics in contrastive divergence).

If steps is greater than zero, the visible reconstruction and the exact hidden layer activations after the specified number of steps are returned.

If sampleVisibleUnitsDeterministically is set to true, the activations of the visible units are used instead of random samples.

    The hidden units are always updated randomly.
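The alternating updates described above can be sketched in a few lines. This is an illustrative toy, not the library's implementation; it uses the deterministic-visible variant and invented weights.

```scala
// Sketch of block Gibbs sampling in an RBM: alternately update the
// hidden layer given the visible one and vice versa. Hidden states
// are always sampled; visible units here use their activations
// directly (the deterministic variant). Illustrative values only.
def sigmoid(x: Double): Double = 1.0 / (1.0 + math.exp(-x))
val rng = new scala.util.Random(0)

val w  = Array(Array(0.5, -0.2), Array(0.3, 0.8))  // 2 visible x 2 hidden
val bV = Array(0.0, 0.0)
val bH = Array(0.0, 0.0)

def up(v: Array[Double]): Array[Double] =
  bH.indices.map(j => sigmoid(v.indices.map(i => v(i) * w(i)(j)).sum + bH(j))).toArray
def down(h: Array[Double]): Array[Double] =
  bV.indices.map(i => sigmoid(h.indices.map(j => h(j) * w(i)(j)).sum + bV(i))).toArray
def sample(p: Array[Double]): Array[Double] =
  p.map(x => if (rng.nextDouble() < x) 1.0 else 0.0)

var v = Array(1.0, 0.0)
var h = up(v)          // steps = 0 would return (v, h) right here
for (_ <- 1 to 3) {    // steps > 0: alternate between the two layers
  v = down(sample(h))  // deterministic visible update from sampled hidden state
  h = up(v)
}
```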

  23. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  24. val hidden: RbmLayer

  25. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  26. val layers: List[Layer]

    Enumerates layers of this (linear) neural net.

TODO: generalize this to arbitrary directed acyclic graphs; what is so special about lists?

    Definition Classes
NeuralNet → NeuralNetLike
  27. def layersToString(layers: List[Layer]): String

    Attributes
    protected
    Definition Classes
    NeuralNet
  28. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  29. final def notify(): Unit

    Definition Classes
    AnyRef
  30. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  31. def optimize(input: Mat, output: Mat, errorFunctionFactory: DifferentiableErrorFunctionFactory[Mat] = SquareErrorFunctionFactory, relativeValidationSetSize: Double, maxEvals: Int, trainingObservers: List[TrainingObserver]): Rbm

Optimizes all parameters of the neural network using the specified input and output and the specified error-function factory (defaults to SquareErrorFunctionFactory).

The standard feed-forward algorithm is used to evaluate the function; backpropagation is used to calculate the gradient.

    Definition Classes
    NeuralNetLike
  32. def prependAffineLinearTransformation(factor: Mat, offset: Mat): Rbm

Assumes that this is a "usual" neural net with alternating unit and connection layers and prepends an affine linear transformation to it.

    Definition Classes
    NeuralNetLike
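Functionally, prepending an affine linear transformation turns a net f into x ↦ f(factor · x + offset). A scalar sketch, with Double standing in for Mat purely for illustration:

```scala
// Sketch of prepending an affine linear transformation: the resulting
// net first applies x -> factor * x + offset, then the original net.
// Scalars stand in for the library's Mat factor and offset.
val net: Double => Double = x => x * x  // stand-in "network"

def prependAffine(factor: Double, offset: Double)(f: Double => Double): Double => Double =
  x => f(factor * x + offset)

val extended = prependAffine(2.0, 1.0)(net)  // computes (2x + 1)^2
```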
  33. def reinitialize(config: RbmTrainingConfiguration): Rbm

  34. def reverse(output: Mat): Mat

Propagates the output from the top layer down to the visible layer.

    Definition Classes
    NeuralNetLike
  35. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  36. def toImage: BufferedImage

    Definition Classes
NeuralNetLike → Visualizable
  37. def toImage(w: Int, h: Int): BufferedImage

    Definition Classes
    Visualizable
  38. def toImage(colormap: (Double) ⇒ Int): BufferedImage

    Definition Classes
    Visualizable
  39. def toString(): String

    Definition Classes
Rbm → NeuralNet → Function1 → AnyRef → Any
  40. def train[Fitness](trainingSet: Mat, configuration: RbmTrainingConfiguration, trainingObservers: List[TrainingObserver] = Nil, terminationCriterion: TerminationCriterion[Rbm, Int], resultSelector: ResultSelector[Rbm, Fitness])(implicit arg0: Ordering[Fitness]): Rbm

Trains this Rbm with the data contained in the minibatches, using the parameters specified in the configuration.

Returns the trained Rbm selected by the resultSelector.
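A hedged sketch of the kind of update such training performs per minibatch: a contrastive-divergence gradient estimate nudges the weights by a learning rate. The gradient and rate below are invented; the library's actual loop also involves the configuration, observers, termination criterion, and result selector.

```scala
// Sketch of one CD-style parameter update: w := w + learningRate * dW.
// The gradient estimate dW is made up here; in the real training loop it
// would come from contrastive divergence on the current minibatch.
val learningRate = 0.1
val w0 = Array(Array(0.5, -0.2), Array(0.3, 0.8))        // current weights
val dW = Array(Array(0.38, -0.04), Array(-0.14, -0.08))  // CD estimate (invented)

val w1 = w0.zip(dW).map { case (row, d) =>
  row.zip(d).map { case (a, b) => a + learningRate * b }
}
```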

  41. val visible: RbmLayer

  42. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  43. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  44. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
