Conjugate Gradient-Descent Algorithm
A Conjugate Gradient-Descent Algorithm is a numerical analysis algorithm for solving a system of linear equations whose matrix is a Symmetric Positive-Definite Matrix.
- Context:
- It can be applied by a Conjugate Gradient-Descent System (that solves a conjugate gradient-descent task).
- It can range from being a Linear Conjugate Gradient-Descent Algorithm to being a Non-Linear Conjugate Gradient-Descent Algorithm.
- …
- Counter-Example(s):
- See: Gradient Descent Algorithm, System of Linear Equations, Symmetric Matrix, Positive-Definite Matrix, Iterative Method, Sparse Matrix, Cholesky Decomposition, Partial Differential Equation, Mathematical Optimization.
References
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/conjugate_gradient_method Retrieved:2015-6-24.
- QUOTE: In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems.
The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It was mainly developed by Magnus Hestenes and Eduard Stiefel.
The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear equations.
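The quoted description can be illustrated with a minimal sketch of the linear conjugate gradient iteration for a symmetric positive-definite system Ax = b. The function name, tolerance, and the small 2×2 test system below are illustrative assumptions, not taken from the source.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients (sketch)."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float)
    if max_iter is None:
        max_iter = n                      # in exact arithmetic CG converges in at most n steps
    r = b - A @ x                         # residual
    p = r.copy()                          # first search direction is the residual
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)         # exact line search along the current direction
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        beta = rs_new / rs_old            # update that keeps successive directions A-conjugate
        p = r + beta * p
        rs_old = rs_new
    return x

# Small SPD test system (illustrative values)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x, np.allclose(A @ x, b))
```

Because only matrix-vector products with A are required, the same loop applies unchanged to large sparse matrices where a direct factorization such as Cholesky would be impractical.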
2014
- Noel Black, Shirley Moore and Eric W. Weisstein. “Conjugate Gradient Method.” From MathWorld. Retrieved:2014-5-12.
- QUOTE: The conjugate gradient method is an algorithm for finding the nearest local minimum of a function of n variables which presupposes that the gradient of the function can be computed. It uses conjugate directions instead of the local gradient for going downhill. If the vicinity of the minimum has the shape of a long, narrow valley, the minimum is reached in far fewer steps than would be the case using the method of steepest descent.
For a discussion of the conjugate gradient method on vector and shared memory computers, see Dongarra et al. (1991). For discussions of the method for more general parallel architectures, see Demmel et al. (1993) and Ortega (1988) and the references therein.
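The nonlinear use of conjugate directions described in the MathWorld quote can be sketched with SciPy, whose `minimize(..., method='CG')` routine implements a nonlinear conjugate gradient method and requires the gradient (or approximates it). The Rosenbrock test function and starting point below are illustrative choices, not from the source; its long, narrow curved valley is exactly the geometry where conjugate directions reach the minimum in far fewer steps than steepest descent.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Rosenbrock's function: minimum at (1, 1), approached through a narrow curved valley.
x0 = np.array([-1.2, 1.0])            # illustrative starting point

result = minimize(rosen, x0, jac=rosen_der, method='CG')   # nonlinear conjugate gradient
print(result.x, result.nit)           # approximate minimizer and iteration count
```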