Augmented Lagrangian Algorithm
An Augmented Lagrangian Algorithm is a Lagrangian optimization algorithm that replaces a constrained optimization problem by a series of unconstrained optimization problems whose objective function is augmented with a penalty term and a Lagrange multiplier term.
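For the equality-constrained problem [math]\displaystyle{ \min_x f(x) }[/math] subject to [math]\displaystyle{ c(x) = 0 }[/math], a representative augmented Lagrangian (following the sign convention of the Stanford course notes quoted in the References) is [math]\displaystyle{ L(x,y,\rho) = f(x) - y^T c(x) + \tfrac{\rho}{2} \|c(x)\|^2, }[/math] where [math]\displaystyle{ y }[/math] is a Lagrange multiplier estimate and [math]\displaystyle{ \rho > 0 }[/math] is a penalty parameter; after each unconstrained minimization in [math]\displaystyle{ x }[/math], the multiplier estimate is typically updated as [math]\displaystyle{ y \leftarrow y - \rho\, c(x) }[/math].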
- Context:
- It can be represented as a Shifted Quadratic Penalty Function.
- …
- Example(s):
- Counter-Example(s):
- See: Linear Programming Algorithm, Constrained Optimization, Lagrange Multiplier.
References
2016
- (Wikipedia, 2016) ⇒ http://wikipedia.org/wiki/list_of_numerical_analysis_topics#Nonlinear_programming Retrieved:2016-4-4.
- … Augmented Lagrangian method — replaces constrained problems by unconstrained problems with a term added to the objective function …
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Augmented_Lagrangian_method Retrieved:2015-11-5.
- Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems and add a penalty term to the objective; the difference is that the augmented Lagrangian method adds yet another term, designed to mimic a Lagrange multiplier. The augmented Lagrangian is not the same as the method of Lagrange multipliers.
Viewed differently, the unconstrained objective is the Lagrangian of the constrained problem, with an additional penalty term (the augmentation).
The method was originally known as the method of multipliers, and was studied extensively in the 1970s and 1980s as a good alternative to penalty methods. It was first discussed by Magnus Hestenes in 1969 [1] and by Powell in 1969. [2] The method was studied by R. Tyrrell Rockafellar in relation to Fenchel duality, particularly in relation to proximal-point methods, Moreau–Yosida regularization, and maximal monotone operators; these methods were used in structural optimization. The method was also studied by Dimitri Bertsekas, notably in his 1982 book, [3] together with extensions involving nonquadratic regularization functions, such as entropic regularization, which gives rise to the "exponential method of multipliers," a method that handles inequality constraints with a twice differentiable augmented Lagrangian function. Since the 1970s, sequential quadratic programming (SQP) and interior point methods (IPM) have received increasing attention, in part because they more easily use sparse matrix subroutines from numerical software libraries, and in part because IPMs have proven complexity results via the theory of self-concordant functions. The augmented Lagrangian method was rejuvenated by the optimization systems LANCELOT and AMPL, which allowed sparse matrix techniques to be used on seemingly dense but "partially separable" problems. The method is still useful for some problems.[4]
Around 2007, there was a resurgence of augmented Lagrangian methods in fields such as total-variation denoising and compressed sensing.
In particular, a variant of the standard augmented Lagrangian method that uses partial updates (similar to the Gauss-Seidel method for solving linear equations) known as the alternating direction method of multipliers or ADMM gained some attention.
- ↑ M. R. Hestenes, "Multiplier and gradient methods", Journal of Optimization Theory and Applications, 4, 1969, pp. 303–320.
- ↑ M. J. D. Powell, "A method for nonlinear constraints in minimization problems", in Optimization, ed. by R. Fletcher, Academic Press, New York, NY, 1969, pp. 283–298.
- ↑ Dimitri P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Athena Scientific, 1996 (first published 1982).
- ↑ Jorge Nocedal and Stephen J. Wright, Numerical Optimization (2nd ed.), Springer, 2006, chapter 17.
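The ADMM variant mentioned in the quoted passage alternates a partial minimization over each block of variables with a dual-variable update. The following Python sketch is illustrative only (the lasso instance, the random data, and the parameters lam and rho are hypothetical, not drawn from the sources above); it applies ADMM to [math]\displaystyle{ \min_{x,z}\ \tfrac{1}{2}\|Ax-b\|^2 + \lambda\|z\|_1 }[/math] subject to [math]\displaystyle{ x - z = 0 }[/math]:
```python
import numpy as np

# Minimal ADMM sketch for the lasso problem (hypothetical data):
#   minimize (1/2)||A x - b||^2 + lam ||z||_1   subject to  x - z = 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam, rho = 0.1, 1.0

x = np.zeros(10)
z = np.zeros(10)
u = np.zeros(10)  # scaled dual variable (running sum of residuals)
M = A.T @ A + rho * np.eye(10)  # formed once, reused by every x-update

for k in range(200):
    # x-update: minimize the smooth term plus the quadratic coupling term.
    x = np.linalg.solve(M, A.T @ b + rho * (z - u))
    # z-update: soft-thresholding, the proximal operator of (lam/rho)||.||_1.
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # Dual update: accumulate the constraint residual x - z.
    u = u + x - z

print(np.round(z, 3))
```
The Gauss-Seidel flavor comes from updating x and z one after the other rather than jointly: each partial update is cheap (a fixed linear solve and a closed-form threshold), which is what made the method attractive for total-variation denoising and compressed sensing.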
- http://web.stanford.edu/class/msande318/notes/notes09-BCL.pdf
- QUOTE: The augmented Lagrangian associated with problem NEC is :[math]\displaystyle{ L(x,y,\rho) = \phi(x) - y^T c(x) + \tfrac{1}{2} \rho \|c(x)\|^2. }[/math] It may be thought of as a modification to the Lagrangian, or as a shifted quadratic penalty function.
For a given [math]\displaystyle{ y }[/math] and [math]\displaystyle{ \rho }[/math], its derivatives [math]\displaystyle{ \nabla_x L }[/math] and [math]\displaystyle{ \nabla_{xx}^2 L }[/math] are
[math]\displaystyle{ \begin{align} g_L(x,y,\rho) &= g_0(x) - J(x)^T \hat{y}, \\ H_L(x,y,\rho) &= H_0(x) - {\textstyle\sum} \hat{y}_i H_i(x) + \rho J(x)^T J(x), \\ \hat{y} &\equiv y - \rho c(x). \end{align} }[/math]
- The augmented Lagrangian method for solving problem NEC proceeds by choosing [math]\displaystyle{ y }[/math] and [math]\displaystyle{ \rho }[/math] judiciously and then minimizing [math]\displaystyle{ L(x,y,\rho) }[/math] as a function of [math]\displaystyle{ x }[/math]. The resulting [math]\displaystyle{ x }[/math] is used to choose a new [math]\displaystyle{ y }[/math] and [math]\displaystyle{ \rho }[/math], and the process repeats. The auxiliary vector [math]\displaystyle{ \hat{y} }[/math] simplifies the above notation and proves to be useful in its own right.
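A minimal Python sketch of this outer loop, on a hypothetical toy instance of problem NEC (the objective, constraint, solver choice, fixed [math]\displaystyle{ \rho }[/math], and stopping rule are all illustrative simplifications; practical codes adapt [math]\displaystyle{ \rho }[/math] between outer iterations):
```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy instance: minimize phi(x) = x1^2 + x2^2
# subject to c(x) = x1 + x2 - 1 = 0.  Known solution: x* = (0.5, 0.5), y* = 1.
phi = lambda x: x[0]**2 + x[1]**2
c = lambda x: np.array([x[0] + x[1] - 1.0])

def L(x, y, rho):
    # L(x, y, rho) = phi(x) - y^T c(x) + (rho/2) ||c(x)||^2
    cx = c(x)
    return phi(x) - y @ cx + 0.5 * rho * (cx @ cx)

x, y, rho = np.zeros(2), np.zeros(1), 10.0
for k in range(20):
    # Inner step: minimize L(., y, rho) in x with any unconstrained solver.
    x = minimize(L, x, args=(y, rho)).x
    # Outer step: first-order multiplier update y_hat = y - rho c(x).
    y = y - rho * c(x)
    if np.linalg.norm(c(x)) < 1e-8:  # stop once the constraint is satisfied
        break

print(x, y)  # approximately [0.5 0.5] and [1.0]
```
Each outer iteration performs one unconstrained minimization and then applies the update [math]\displaystyle{ \hat{y} = y - \rho\, c(x) }[/math] from the quoted notes.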
2000
- (Fortin & Glowinski, 2000) ⇒ Michel Fortin, and Roland Glowinski. (2000). “Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-value Problems.” Elsevier.
1991
- (Conn et al., 1991) ⇒ Andrew R. Conn, Nicholas I. M. Gould, and Philippe Toint. (1991). “A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds.” SIAM Journal on Numerical Analysis, 28(2).