Gauss-Newton Optimization Algorithm
A Gauss-Newton Optimization Algorithm is an optimization algorithm that solves non-linear least squares problems using only first derivatives of the residuals (no second derivatives are required).
- Context:
- It can be used to solve Non-Linear Least Squares (a brief problem sketch follows this list).
- …
- Counter-Example(s):
- See: Quasi-Newton Method, Newton's Method in Optimization.
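As a brief sketch of the problem this algorithm targets (a standard formulation, stated here in the notation of the references below): given [math]\displaystyle{ m }[/math] residual functions [math]\displaystyle{ r_i(\beta) }[/math] of [math]\displaystyle{ n \le m }[/math] parameters [math]\displaystyle{ \beta = (\beta_1,\ldots,\beta_n) }[/math], find the [math]\displaystyle{ \beta }[/math] that minimizes the sum of squares [math]\displaystyle{ S(\beta) = \sum_{i=1}^{m} r_i(\beta)^2 }[/math].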
References
2018a
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Gauss–Newton_algorithm Retrieved:2018-8-19.
- The Gauss–Newton algorithm is used to solve non-linear least squares problems. It is a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.
Non-linear least squares problems arise, for instance, in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.
The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton.
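As a concrete illustration of the passage above, below is a minimal Python/NumPy sketch of the Gauss–Newton iteration [math]\displaystyle{ \beta^{(s+1)} = \beta^{(s)} - (\mathbf{J_r}^\mathsf{T} \mathbf{J_r})^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}(\beta^{(s)}) }[/math], where [math]\displaystyle{ \mathbf{J_r} }[/math] is the Jacobian of the residual vector [math]\displaystyle{ \mathbf{r} }[/math]. The function names (gauss_newton, residuals, jacobian) and the exponential toy model are illustrative assumptions, not taken from the cited source.

```python
import numpy as np

def gauss_newton(residuals, jacobian, beta0, n_iter=20):
    """Minimal Gauss-Newton iteration: at each step, solve the linearized
    least-squares problem J @ delta ~ -r for the parameter update delta."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = residuals(beta)   # residual vector r(beta), shape (m,)
        J = jacobian(beta)    # Jacobian of r w.r.t. beta, shape (m, n); first derivatives only
        # Equivalent to delta = -(J^T J)^{-1} J^T r, but computed as a linear
        # least-squares solve for better numerical behaviour.
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        beta = beta + delta
    return beta

# Illustrative (assumed) toy problem: fit y = b0 * exp(b1 * x) to noiseless data.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.3 * x)

def residuals(beta):
    return beta[0] * np.exp(beta[1] * x) - y

def jacobian(beta):
    # Analytic first derivatives of each residual with respect to b0 and b1.
    e = np.exp(beta[1] * x)
    return np.column_stack([e, beta[0] * x * e])

print(gauss_newton(residuals, jacobian, beta0=[1.0, 0.0]))  # expect roughly [2.0, -1.3]
```

Note that only first derivatives (the Jacobian of the residuals) appear in the sketch; the full Hessian of the sum of squares is never formed, which is the advantage highlighted in the quoted passage.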
2018b
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm#Related_algorithms Retrieved:2018-8-19.
- In a quasi-Newton method, such as that due to Davidon, Fletcher and Powell or Broyden–Fletcher–Goldfarb–Shanno (BFGS method) an estimate of the full Hessian [math]\displaystyle{ \frac{\partial^2 S}{\partial \beta_j \partial\beta_k} }[/math] is built up numerically using first derivatives [math]\displaystyle{ \frac{\partial r_i}{\partial\beta_j} }[/math] only so that after n refinement cycles the method closely approximates to Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. fits only to nonlinear least-squares problems.
Another method for solving minimization problems using only first derivatives is gradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
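To make the comparison concrete, the following sketch (an illustration, not part of the cited text) contrasts a general-purpose quasi-Newton solver (BFGS, which builds up a Hessian approximation from gradients) with a solver that exploits the least-squares structure, using SciPy's scipy.optimize module; the exponential toy problem is again an assumed example.

```python
import numpy as np
from scipy.optimize import minimize, least_squares

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.3 * x)

def residuals(beta):
    return beta[0] * np.exp(beta[1] * x) - y

def sum_of_squares(beta):
    r = residuals(beta)
    return r @ r

beta0 = [1.0, 0.0]

# Quasi-Newton (BFGS): minimizes a general scalar objective, approximating the
# full Hessian from gradients (obtained here by finite differences).
qn = minimize(sum_of_squares, beta0, method="BFGS")

# Gauss-Newton / Levenberg-Marquardt family: exploits the sum-of-squares
# structure and uses only the Jacobian of the residuals.
ls = least_squares(residuals, beta0, method="lm")

print(qn.x, ls.x)
```

Both solvers handle this small problem; the distinction drawn in the quoted passage is that BFGS accepts any smooth real-valued objective, whereas least_squares (like Gauss–Newton and Levenberg–Marquardt) applies only to objectives expressed as a sum of squared residuals.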