This training strategy generates multiple RBM training configurations at
random and selects the RBM with the best reconstruction error on a
held-out validation set. It differs from the
RandomRetryTrainingStrategy in that training of a new random RBM can be
interrupted much earlier, if the progress of the currently trained RBM
does not look promising compared to the best RBM seen so far.
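The strategy itself is not shown here, so the following is a minimal Python sketch of the early-interruption idea: sample random configurations, track the best validation error seen so far, and abandon a candidate whose error stays behind the best for a few consecutive epochs. All names (`train_with_early_interruption`, `patience`, the toy one-parameter "model" standing in for an RBM) are illustrative assumptions, not the library's API.

```python
import random

def reconstruction_error(params, data):
    # Toy stand-in for an RBM's reconstruction error on a
    # held-out validation set; lower is better.
    return sum((x - params) ** 2 for x in data) / len(data)

def train_with_early_interruption(data, n_configs=5, epochs=10,
                                  patience=2, seed=0):
    """Try n_configs random configurations; interrupt a candidate early
    once its error has trailed the best error seen so far for
    `patience` consecutive epochs."""
    rng = random.Random(seed)
    best_err, best_params = float("inf"), None
    for _ in range(n_configs):
        params = rng.uniform(-1.0, 1.0)  # random initial configuration
        behind = 0
        for _epoch in range(epochs):
            # Toy training step: move the parameter toward the data mean.
            params += 0.5 * (sum(data) / len(data) - params)
            err = reconstruction_error(params, data)
            if err >= best_err:
                behind += 1
                if behind >= patience:  # not promising: interrupt early
                    break
            else:
                behind = 0
        if err < best_err:
            best_err, best_params = err, params
    return best_params, best_err

data = [0.2, 0.4, 0.6]
params, err = train_with_early_interruption(data)
```

The first candidate always runs to completion (nothing beats an infinite best error), after which later candidates are typically cut off within `patience` epochs unless they genuinely improve on the incumbent.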