Quasi-Newton method


In optimization, quasi-Newton methods (also known as variable metric methods) are well-known algorithms for finding local maxima and minima of functions. Quasi-Newton methods are based on Newton's method for finding the stationary point of a function, where the gradient is 0. Newton's method assumes that the function can be locally approximated as a quadratic in the region around the optimum, and uses the first and second derivatives (gradient and Hessian) to find the stationary point.

In quasi-Newton methods the Hessian matrix of second derivatives of the function to be minimized does not need to be computed. The Hessian is updated by analyzing successive gradient vectors instead. Quasi-Newton methods are a generalization of the secant method to find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation is under-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian.
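Concretely, written in iterate notation, the secant equation requires the updated approximation B_{k+1} to satisfy

B_{k+1}(x_{k+1}-x_k)=\nabla f(x_{k+1})-\nabla f(x_k),

which supplies only n conditions for the n(n+1)/2 independent entries of a symmetric n × n matrix; the update formulas given below fix the remaining freedom in different ways.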

The first quasi-Newton algorithm was proposed in 1959 by W.C. Davidon, a physicist working at Argonne National Laboratory. As with the story of QuickSort, the method grew out of a practical need: Davidon had trouble applying existing optimization algorithms to his research and set out to devise a better one. The result was the DFP updating formula, which was later popularized by Fletcher and Powell in 1963 but is rarely used today. The most common quasi-Newton algorithms are currently the SR1 formula (for "symmetric rank one") and the widespread BFGS method, suggested independently by Broyden, Fletcher, Goldfarb, and Shanno in 1970. The Broyden class is a linear combination of the DFP and BFGS methods.

The SR1 formula does not guarantee that the update matrix maintains positive-definiteness, and it can be used for indefinite problems. Broyden's method does not require the update matrix to be symmetric and is used to find the root of a general system of equations (rather than the gradient) by updating the Jacobian (rather than the Hessian).
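As an illustration of the last point, the sketch below applies Broyden's rank-one Jacobian update to root finding for a general system F(x) = 0. It is a minimal, illustrative implementation rather than a reference one: the names broyden_root, F, and x0 are made up for the example, the initial Jacobian guess is simply the identity, and no safeguards (line search, restarts) are included.

```python
import numpy as np

def broyden_root(F, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 with Broyden's rank-one update of the Jacobian estimate J."""
    x = np.asarray(x0, dtype=float)
    J = np.eye(len(x))                    # initial Jacobian approximation J_0 = I
    Fx = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -Fx)      # quasi-Newton step: J_k dx = -F(x_k)
        x_new = x + dx
        Fx_new = F(x_new)
        if np.linalg.norm(Fx_new) < tol:
            return x_new
        y = Fx_new - Fx                   # change in the residual
        # Broyden update (same form as the formula listed below, with J in place of B):
        # J_{k+1} = J_k + (y - J_k dx) dx^T / (dx^T dx)
        J += np.outer(y - J @ dx, dx) / (dx @ dx)
        x, Fx = x_new, Fx_new
    return x

def F(v):
    # mildly nonlinear 2x2 system with a unique root near (0.71, 0.62)
    return np.array([v[0] + 0.5 * np.sin(v[1]) - 1.0,
                     v[1] + 0.5 * np.cos(v[0]) - 1.0])

root = broyden_root(F, x0=[0.0, 0.0])
print(root, np.linalg.norm(F(root)))      # residual norm should be near zero
```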


Description of the method

As in Newton's method, one uses a second-order approximation to find the minimum of a function f(x). The Taylor series of f(x) around an iterate x_0 is:

f(x_0+\Delta x)=f(x_0)+\nabla f(x_0)^T \Delta x+\frac{1}{2} \Delta x^T {B} \Delta x ,

where \nabla f is the gradient and B the Hessian matrix. The Taylor series of the gradient itself:

\nabla f(x_0+\Delta x)=\nabla f(x_0)+B \Delta x,

is called the secant equation. Setting \nabla f(x_0+\Delta x)=0 and solving for \Delta x provides the Newton step:

\Delta x=-B^{-1}\nabla f(x_0),

but B is unknown. In one dimension, solving for B and applying the Newton step with the updated value is equivalent to the secant method. In multiple dimensions B is under-determined. Various methods are used to find the solution of the secant equation that is symmetric (B^T=B) and closest to the current approximate value B_k according to some metric, \min_B \|B-B_k\|. An approximate initial value B_0=I is often sufficient to achieve rapid convergence. The unknown x_k is updated by applying the Newton step calculated with the current approximate Hessian matrix B_k:

  • \Delta x_k=-\alpha_k B_k^{-1}\nabla f(x_k), with \alpha_k chosen to satisfy the Wolfe conditions;
  • x_{k+1}=x_k+\Delta x_k;
  • the gradient is computed at the new point, \nabla f(x_{k+1}), and
y_k=\nabla f(x_{k+1})-\nabla f(x_{k})
is used to update the approximate Hessian B_{k+1} (or directly its inverse H_{k+1}=B_{k+1}^{-1}), as sketched below.
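The sketch below puts these steps together, assuming a user-supplied objective f and gradient grad_f (hypothetical names) and substituting a simple Armijo backtracking line search for a full Wolfe-condition search; it updates the inverse approximation H_k=B_k^{-1} directly with the BFGS formula listed below.

```python
import numpy as np

def quasi_newton_bfgs(f, grad_f, x0, tol=1e-8, max_iter=200):
    """Minimize f by the quasi-Newton iteration, updating H_k = B_k^{-1} with BFGS."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    H = np.eye(n)                              # H_0 = I (i.e. B_0 = I)
    g = grad_f(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                             # step direction -B_k^{-1} grad f(x_k)
        # Backtracking line search (Armijo condition only; a Wolfe-condition
        # search is normally used so that y_k^T dx_k > 0 is guaranteed).
        alpha, c, rho = 1.0, 1e-4, 0.5
        for _ in range(30):
            if f(x + alpha * p) <= f(x) + c * alpha * (g @ p):
                break
            alpha *= rho
        dx = alpha * p                         # Delta x_k
        x_new = x + dx                         # x_{k+1} = x_k + Delta x_k
        g_new = grad_f(x_new)
        y = g_new - g                          # y_k = grad f(x_{k+1}) - grad f(x_k)
        if y @ dx > 1e-12:                     # skip update if curvature is not positive
            r = 1.0 / (y @ dx)
            I = np.eye(n)
            # BFGS update of the inverse Hessian approximation (see formulas below)
            H = (I - r * np.outer(dx, y)) @ H @ (I - r * np.outer(y, dx)) \
                + r * np.outer(dx, dx)
        x, g = x_new, g_new
    return x
```

In practice one would normally call a library routine such as scipy.optimize.minimize(f, x0, jac=grad_f, method='BFGS') rather than re-implementing this loop.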

The most popular update formulas are:

Each method is listed with its update of B_{k+1} and, where available, the corresponding update of the inverse H_{k+1}=B_{k+1}^{-1}.

DFP
  B_{k+1}=\left(I-\frac{y_k \Delta x_k^T}{y_k^T \Delta x_k}\right) B_k \left(I-\frac{\Delta x_k y_k^T}{y_k^T \Delta x_k}\right)+\frac{y_k y_k^T}{y_k^T \Delta x_k}
  H_{k+1}=H_k+\frac{\Delta x_k \Delta x_k^T}{y_k^T \Delta x_k}-\frac{H_k y_k y_k^T H_k^T}{y_k^T H_k y_k}

BFGS
  B_{k+1}=B_k+\frac{y_k y_k^T}{y_k^T \Delta x_k}-\frac{B_k \Delta x_k (B_k \Delta x_k)^T}{\Delta x_k^T B_k \Delta x_k}
  H_{k+1}=\left(I-\frac{y_k \Delta x_k^T}{y_k^T \Delta x_k}\right)^T H_k \left(I-\frac{y_k \Delta x_k^T}{y_k^T \Delta x_k}\right)+\frac{\Delta x_k \Delta x_k^T}{y_k^T \Delta x_k}

Broyden
  B_{k+1}=B_k+\frac{y_k-B_k \Delta x_k}{\Delta x_k^T \Delta x_k} \Delta x_k^T

Broyden family
  B_{k+1}=(1-\varphi_k) B_{k+1}^{BFGS}+\varphi_k B_{k+1}^{DFP}, \quad \varphi_k\in[0,1]

SR1
  B_{k+1}=B_k+\frac{(y_k-B_k \Delta x_k)(y_k-B_k \Delta x_k)^T}{(y_k-B_k \Delta x_k)^T \Delta x_k}
  H_{k+1}=H_k+\frac{(\Delta x_k-H_k y_k)(\Delta x_k-H_k y_k)^T}{(\Delta x_k-H_k y_k)^T y_k}
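As a quick sanity check on these formulas (using arbitrary, made-up vectors purely for illustration), the snippet below applies the BFGS and SR1 updates of B_k and verifies that both updated matrices satisfy the secant equation B_{k+1}\Delta x_k=y_k:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = np.eye(n)                        # current approximation B_k
dx = rng.standard_normal(n)          # step Delta x_k
y = rng.standard_normal(n)           # gradient difference y_k

# BFGS update of B_k (from the formulas above)
B_bfgs = B + np.outer(y, y) / (y @ dx) \
           - np.outer(B @ dx, B @ dx) / (dx @ B @ dx)

# SR1 update of B_k (from the formulas above)
r = y - B @ dx
B_sr1 = B + np.outer(r, r) / (r @ dx)

# Both reproduce the secant equation B_{k+1} Delta x_k = y_k
print(np.allclose(B_bfgs @ dx, y), np.allclose(B_sr1 @ dx, y))   # True True
```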

References

  • Davidon, William C. (1991). "Variable Metric Method for Minimization". SIAM Journal on Optimization (SIOPT), 1 (1): 1–17. (Davidon's original report, eventually published.)
  • Nocedal, Jorge & Wright, Stephen J. (1999). Numerical Optimization. Springer-Verlag. ISBN 0-387-98793-2.
  • Chong, Edwin K. P. & Zak, Stanislaw H. (2001). An Introduction to Optimization, 2nd ed. John Wiley & Sons.