Sometimes it is possible to learn something about the problem while executing line searches and to adjust parameters accordingly.
A hybrid line search algorithm that starts with the dumb & brutal backtracking algorithm and switches to a faster strategy when it detects favorable conditions.
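As a rough illustration of the backtracking phase, here is a minimal sketch (class and parameter names are hypothetical, not from this library): the trial step is halved until the Armijo sufficient-decrease condition holds along the slice phi(t) = f(x + t*d), where phiPrime0 = phi'(0) must be negative for a descent direction.

```java
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch of a backtracking line search on the 1-D slice
// phi(t) = f(x + t*d). Halves the trial step until the Armijo condition
// phi(t) <= phi(0) + c * t * phi'(0) holds, or maxHalvings is exhausted.
public final class BacktrackingLineSearch {
    public static double search(DoubleUnaryOperator phi, double phiPrime0,
                                double t0, double c, int maxHalvings) {
        double phi0 = phi.applyAsDouble(0.0);
        double t = t0;
        for (int i = 0; i < maxHalvings; i++) {
            // Accept t once the sufficient-decrease condition is met.
            if (phi.applyAsDouble(t) <= phi0 + c * t * phiPrime0) break;
            t *= 0.5;
        }
        return t;
    }
}
```

For example, minimizing f(x) = x² at x = 1 along d = −1 gives phi(t) = (1 − t)² and phi'(0) = −2; the full step t = 1 already satisfies the condition and is returned unchanged.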
Finds a local minimum of a differentiable function f using its
values and gradient.
If implemented, this method lets the minimization process serve as a
source of candidate solutions, where the solution actually kept is not
the point that minimizes f, but some other point that maximizes a
separate fitness function. This can be very useful for learning
algorithms in which candidate solutions are tested on a separate
validation set in order to avoid overfitting. By default, this method
just throws UnsupportedOperationException.
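The idea above can be sketched as follows; this is a hypothetical illustration, not this library's API (all names here are invented). A gradient-descent minimizer reports every iterate to a caller-supplied hook, and the caller keeps whichever iterate maximizes its own fitness function instead of the final minimizer of f:

```java
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch: a 1-D minimizer that reports every iterate as a
// candidate solution, so the caller can keep whichever iterate maximizes a
// separate fitness function (e.g., accuracy on a held-out validation set)
// rather than the final minimizer of f.
public final class CandidateMinimizer {
    public interface Candidate { void report(double x); }

    // Plain gradient descent on f via its derivative df; each iterate,
    // including the final one, is passed to the hook.
    public static double minimize(DoubleUnaryOperator df, double x0,
                                  double step, int iters, Candidate hook) {
        double x = x0;
        for (int i = 0; i < iters; i++) {
            hook.report(x);
            x -= step * df.applyAsDouble(x);
        }
        hook.report(x);
        return x;
    }
}
```

For instance, minimizing f(x) = x² from x = 1 produces the iterates 1, 0.8, 0.64, 0.512, …; a hook scoring candidates by closeness to 0.5 would keep 0.512 even though the descent continues toward 0.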
Implementation of conjugate gradient descent as described in the article "A new conjugate gradient method with guaranteed descent and an efficient line search" by William W. Hager and Hongchao Zhang.
It uses a Polak-Ribiere-Polyak-like update to compute the next direction, and an inexact line search with approximate Wolfe conditions.
This implementation seems broken: it does not outperform naive gradient descent even on fairly simple 2D functions.
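For reference, a minimal sketch of nonlinear CG with the Polak-Ribiere-Polyak update is shown below. This is not the library's implementation: it substitutes a plain Armijo backtracking line search for the approximate Wolfe search of Hager and Zhang, and clamps beta at zero (the PRP+ restart), so it only illustrates the direction update the paragraph refers to.

```java
import java.util.function.Function;

// Hypothetical sketch of nonlinear conjugate gradient with the PRP update
//   beta = gNew . (gNew - g) / (g . g),  clamped at 0 (PRP+ restart),
// using Armijo backtracking instead of an approximate Wolfe line search.
public final class PrpConjugateGradient {
    static double dot(double[] a, double[] b) {
        double s = 0; for (int i = 0; i < a.length; i++) s += a[i] * b[i]; return s;
    }

    public static double[] minimize(Function<double[], Double> f,
                                    Function<double[], double[]> grad,
                                    double[] x0, int iters) {
        double[] x = x0.clone();
        double[] g = grad.apply(x);
        double[] d = new double[x.length];
        for (int i = 0; i < x.length; i++) d[i] = -g[i];
        for (int k = 0; k < iters; k++) {
            double fx = f.apply(x), slope = dot(g, d);
            // Fall back to steepest descent if d is not a descent direction.
            if (slope >= 0) {
                for (int i = 0; i < x.length; i++) d[i] = -g[i];
                slope = dot(g, d);
            }
            // Armijo backtracking along d.
            double t = 1.0;
            double[] xNew = new double[x.length];
            for (int tries = 0; tries < 50; tries++) {
                for (int i = 0; i < x.length; i++) xNew[i] = x[i] + t * d[i];
                if (f.apply(xNew) <= fx + 1e-4 * t * slope) break;
                t *= 0.5;
            }
            double[] gNew = grad.apply(xNew);
            if (dot(gNew, gNew) < 1e-18) return xNew;  // gradient vanished
            // PRP+ update of the search direction.
            double num = 0;
            for (int i = 0; i < x.length; i++) num += gNew[i] * (gNew[i] - g[i]);
            double beta = Math.max(0, num / dot(g, g));
            for (int i = 0; i < x.length; i++) d[i] = -gNew[i] + beta * d[i];
            x = xNew; g = gNew;
        }
        return x;
    }
}
```

On a mildly ill-conditioned quadratic such as f(x, y) = x² + 10y², even this simple variant drives the function value toward zero, which is the kind of sanity check the broken implementation reportedly fails.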