Linear Least-Squares L1-Regularized Regression Task


A Linear Least-Squares L1-Regularized Regression Task is a regularized linear least-squares regression task that is based on L1 regularization and can also serve as a feature selection task.



References

2017a

  • (Zhang, 2017) ⇒ Xinhua Zhang (2017). “Regularization”, in “Encyclopedia of Machine Learning and Data Mining” (Sammut & Webb, 2017), pp. 1083–1088. ISBN: 978-1-4899-7687-1. DOI: 10.1007/978-1-4899-7687-1_718
    • QUOTE: A common approach to regularization is to penalize a model by its complexity measured by some real-valued function, e.g., a certain “norm” of [math]\displaystyle{ \mathbf{w} }[/math]. We list some examples below.

      L1 regularization

      L1 regularizer, [math]\displaystyle{ \left \|\mathbf{w}\right \|_{1} :=\sum _{i}\left \vert w_{i}\right \vert }[/math], is a popular approach to finding sparse models, i.e., only a few components of [math]\displaystyle{ \mathbf{w} }[/math] are nonzero, and only a correspondingly small number of features are relevant to the prediction. A well-known example is the LASSO algorithm (Tibshirani, 1996), which uses an L1-regularized least-squares objective:

      [math]\displaystyle{ \displaystyle{\min _{\mathbf{w}\in \mathbb{R}^{p}}\left \|X^{\top }\mathbf{w} -\mathbf{y}\right \|^{2} +\lambda \left \|\mathbf{w}\right \|_{1}} }[/math]
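
As a concrete illustration (not part of the quoted source), the LASSO objective above can be minimized by cyclic coordinate descent, where each one-dimensional subproblem has a closed-form soft-thresholding solution. The sketch below is a minimal NumPy implementation under the conventional (samples × features) design-matrix layout, rather than the transposed [math]\displaystyle{ X^{\top }\mathbf{w} }[/math] convention in the quote; the names lasso_cd and soft_threshold are illustrative, not from the source.

    import numpy as np

    def soft_threshold(z, t):
        # Proximal operator of t * |.|: shrinks z toward zero by t.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def lasso_cd(X, y, lam, n_iters=200):
        # Minimizes ||X w - y||^2 + lam * ||w||_1 by cyclic coordinate
        # descent; assumes X has shape (n_samples, n_features) and that
        # no column of X is identically zero.
        n_samples, n_features = X.shape
        w = np.zeros(n_features)
        col_sq = (X ** 2).sum(axis=0)  # ||X_j||^2 for each column j
        for _ in range(n_iters):
            for j in range(n_features):
                # Residual with feature j's current contribution removed.
                r_j = y - X @ w + X[:, j] * w[j]
                rho = X[:, j] @ r_j
                # Closed-form minimizer of the 1-D subproblem in w_j.
                w[j] = soft_threshold(rho, lam / 2.0) / col_sq[j]
        return w

    # Tiny demo: only 3 of 10 features carry signal.
    rng = np.random.RandomState(0)
    X = rng.randn(50, 10)
    w_true = np.zeros(10)
    w_true[:3] = [2.0, -1.0, 0.5]
    y = X @ w_true + 0.05 * rng.randn(50)
    print(np.round(lasso_cd(X, y, lam=1.0), 2))  # most entries are exactly 0

The exact sparsity in the output is the point of the L1 penalty: the soft-thresholding update sets a coordinate to exactly zero whenever its correlation with the residual falls below the threshold lam / 2.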

2017b

  • (Scikit-Learn, 2017) ⇒ “1.1.3. Lasso” http://scikit-learn.org/stable/modules/linear_model.html#lasso
    • QUOTE: The Lasso is a linear model that estimates sparse coefficients. It is useful in some contexts due to its tendency to prefer solutions with fewer parameter values, effectively reducing the number of variables upon which the given solution is dependent. For this reason, the Lasso and its variants are fundamental to the field of compressed sensing. Under certain conditions, it can recover the exact set of non-zero weights (see Compressive sensing: tomography reconstruction with L1 prior (Lasso)).

      Mathematically, it consists of a linear model trained with [math]\displaystyle{ \ell_1 }[/math] prior as regularizer. The objective function to minimize is:

      [math]\displaystyle{ \underset{w}{min\,} { \frac{1}{2n_{samples}} ||X w - y||_2 ^ 2 + \alpha ||w||_1} }[/math]

      The lasso estimate thus solves the minimization of the least-squares penalty with [math]\displaystyle{ \alpha ||w||_1 }[/math] added, where [math]\displaystyle{ \alpha }[/math] is a constant and [math]\displaystyle{ ||w||_1 }[/math] is the [math]\displaystyle{ \ell_1 }[/math]-norm of the parameter vector.
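
As a usage illustration (not from the quoted documentation), the sketch below fits scikit-learn's Lasso estimator on synthetic data in which only three of twenty features carry signal; the data-generating setup is an assumption made for this example.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.RandomState(0)
    X = rng.randn(100, 20)
    w_true = np.zeros(20)
    w_true[:3] = [1.5, -2.0, 3.0]          # only three features matter
    y = X @ w_true + 0.1 * rng.randn(100)

    # alpha is the constant multiplying the l1 term in the objective above.
    lasso = Lasso(alpha=0.1)
    lasso.fit(X, y)

    # The l1 penalty drives most coefficients to exactly zero.
    print("nonzero coefficients:", int(np.sum(lasso.coef_ != 0)))

Increasing alpha strengthens the penalty and zeroes out more coefficients; alpha=0.1 here is an arbitrary illustrative choice, and in practice it would be tuned, e.g., by cross-validation.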


  1. Tibshirani, Robert. 1996. “Regression Shrinkage and Selection via the Lasso”. Journal of the Royal Statistical Society, Series B (Methodological) 58 (1): 267–288. http://www.jstor.org/stable/2346178.
  2. Breiman, Leo. 1995. “Better Subset Regression Using the Nonnegative Garrote”. Technometrics 37 (4): 373–384. DOI: 10.2307/1269730.
  3. Tibshirani, Robert. 1997. “The Lasso Method for Variable Selection in the Cox Model”. Statistics in Medicine 16: 385–395.