Linear Least-Squares Regression Task


A Linear Least-Squares Regression Task is a linear regression task whose objective function is a sum-of-squares function.
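
For concreteness (using illustrative notation not drawn from the references below): given [math]\displaystyle{ n }[/math] observations [math]\displaystyle{ (x_i, y_i) }[/math] with [math]\displaystyle{ x_i \in \mathbb{R}^p }[/math], the task is to find the parameter vector [math]\displaystyle{ \beta }[/math] that minimizes the residual sum of squares [math]\displaystyle{ S(\beta) = \sum_{i=1}^{n} \left(y_i - x_i^T \beta\right)^2 = \|X\beta - y\|^2, }[/math] where [math]\displaystyle{ X }[/math] is the [math]\displaystyle{ n \times p }[/math] design matrix whose rows are [math]\displaystyle{ x_i^T }[/math].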



References

2017a

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Linear_least_squares_(mathematics) Retrieved: 2017-08-27.
    • In statistics and mathematics, linear least squares is an approach to fitting a mathematical or statistical model to data in cases where the idealized value provided by the model for any data point is expressed linearly in terms of the unknown parameters of the model. The resulting fitted model can be used to summarize the data, to predict unobserved values from the same system, and to understand the mechanisms that may underlie the system.

      Mathematically, linear least squares is the problem of approximately solving an overdetermined system of linear equations, where the best approximation is defined as that which minimizes the sum of squared differences between the data values and their corresponding modeled values. The approach is called linear least squares since the assumed function is linear in the parameters to be estimated. Linear least squares problems are convex and have a closed-form solution that is unique, provided that the number of data points used for fitting equals or exceeds the number of unknown parameters, except in special degenerate situations. In contrast, non-linear least squares problems generally must be solved by an iterative procedure, and the problems can be non-convex with multiple optima for the objective function. If prior distributions are available, then even an underdetermined system can be solved using the Bayesian MMSE estimator.

      In statistics, linear least squares problems correspond to a particularly important type of statistical model called linear regression which arises as a particular form of regression analysis. One basic form of such a model is an ordinary least squares model. The present article concentrates on the mathematical aspects of linear least squares problems, with discussion of the formulation and interpretation of statistical regression models and statistical inferences related to these being dealt with in the articles just mentioned. See outline of regression analysis for an outline of the topic.
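
As an illustration of the closed-form solution described in the excerpt above, the following sketch (assuming NumPy, with a synthetic design matrix and illustrative names such as X, y, and beta_true) solves a small over-determined system both via the normal equations and via numpy.linalg.lstsq:

<syntaxhighlight lang="python">
import numpy as np

# Synthetic over-determined problem: n data points, p unknown parameters (n >= p).
rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))                      # design matrix
beta_true = np.array([2.0, -1.0, 0.5])           # illustrative "true" parameters
y = X @ beta_true + 0.1 * rng.normal(size=n)     # noisy observations

# Closed-form solution of the normal equations (X^T X) beta = X^T y.
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# The same minimizer via an orthogonal-factorization-based solver,
# which is numerically preferable when X is ill-conditioned.
beta_lstsq, residual, rank, singular_values = np.linalg.lstsq(X, y, rcond=None)

print(beta_normal)
print(beta_lstsq)
</syntaxhighlight>

Both routines return (up to rounding error) the same unique minimizer, consistent with the uniqueness condition stated in the excerpt when the data points outnumber the parameters and the design matrix has full column rank.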

2017b

  • (Fong & Saunders, ) ⇒ David Fong & Michael Saunders. “LSMR: Sparse Equations and Least Squares" http://web.stanford.edu/group/SOL/software/lsmr/ Retrieved: 2017-08-27
    • QUOTE: Implementation of a conjugate-gradient type method for solving sparse linear equations and sparse least-squares problems: [math]\displaystyle{ \begin{align*} \text{Solve } & Ax=b \\ \text{or minimize } & \|Ax-b\|^2 \\ \text{or minimize } & \|Ax-b\|^2 + \lambda^2 \|x\|^2 \end{align*} }[/math]

      where the matrix [math]\displaystyle{ A }[/math] may be square or rectangular (over-determined or under-determined), and may have any rank. It is represented by a routine for computing [math]\displaystyle{ Av }[/math] and [math]\displaystyle{ A^T u }[/math] for given vectors [math]\displaystyle{ v }[/math] and [math]\displaystyle{ u }[/math].

      The scalar [math]\displaystyle{ \lambda }[/math] is a damping parameter. If [math]\displaystyle{ \lambda \gt 0 }[/math], the solution is “regularized” in the sense that a unique solution always exists, and [math]\displaystyle{ \|x\| }[/math] is bounded.

      The method is based on the Golub-Kahan bidiagonalization process. It is algebraically equivalent to applying MINRES to the normal equation [math]\displaystyle{ (A^T A + \lambda^2 I) x = A^T b, }[/math] but has better numerical properties, especially if [math]\displaystyle{ A }[/math] is ill-conditioned.
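
The damped problem quoted above can be reproduced with SciPy's implementation of this method, scipy.sparse.linalg.lsmr; the sketch below uses synthetic sparse data, and the names A, b, and damp are illustrative:

<syntaxhighlight lang="python">
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsmr

# Synthetic sparse over-determined system A x = b.
rng = np.random.default_rng(0)
A = sparse_random(200, 50, density=0.05, format="csr", random_state=0)
b = rng.normal(size=200)

# Plain least squares: minimize ||A x - b||^2.
x_plain = lsmr(A, b)[0]

# Damped ("regularized") least squares: minimize ||A x - b||^2 + lambda^2 ||x||^2.
damp = 0.1                      # the scalar lambda from the description above
x_damped = lsmr(A, b, damp=damp)[0]

print(np.linalg.norm(x_plain), np.linalg.norm(x_damped))
</syntaxhighlight>

lsmr also accepts a scipy.sparse.linalg.LinearOperator in place of an explicit matrix, which corresponds to representing [math]\displaystyle{ A }[/math] only by routines for computing [math]\displaystyle{ Av }[/math] (matvec) and [math]\displaystyle{ A^T u }[/math] (rmatvec), as described in the quote above.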