Singular Value Decomposition (SVD) Task
A Singular Value Decomposition (SVD) Task is a matrix decomposition task that requires factoring a matrix [math]\displaystyle{ M }[/math] into a product [math]\displaystyle{ U\Sigma V^T }[/math] (with [math]\displaystyle{ U }[/math] being an orthonormal matrix, [math]\displaystyle{ \Sigma }[/math] being a nonnegative rectangular diagonal matrix, and [math]\displaystyle{ V^T }[/math] being an orthonormal matrix).
- Context:
- Input: a Matrix (real or complex).
- It can be solved by an SVD System (that implements an SVD algorithm).
- It can result, when [math]\displaystyle{ M }[/math] is a square [math]\displaystyle{ m \times m }[/math] matrix with positive determinant, in: 1) [math]\displaystyle{ U }[/math], [math]\displaystyle{ V^* }[/math], and [math]\displaystyle{ \Sigma }[/math] also being [math]\displaystyle{ m \times m }[/math] matrices; 2) [math]\displaystyle{ \Sigma }[/math] acting as a scaling matrix; and 3) [math]\displaystyle{ U }[/math] and [math]\displaystyle{ V^* }[/math] acting as rotation matrices.
- It can produce a Least-Squares Reduced-Rank Approximation of a Matrix (see the sketch after this list).
- It can support:
- an SVD-based Compression Task, such as for image compression;
- a Polar Decomposition Task;
- a Linear Equations System Solving Task (especially for underdetermined systems);
- …
- Example(s):
- [math]\displaystyle{ \operatorname{SVD}\left(\begin{bmatrix} 1 & 0 & 0 & 0 & 2 \\ 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 & 0 \end{bmatrix} \right) \Rightarrow \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 4 & 0 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 \\ 0 & 0 & \sqrt{5} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ \sqrt{0.2} & 0 & 0 & 0 & \sqrt{0.8} \\ 0 & 0 & 0 & 1 & 0 \\ -\sqrt{0.8} & 0 & 0 & 0 & \sqrt{0.2} \end{bmatrix} }[/math]
- [math]\displaystyle{ \operatorname{SVD}\left(\begin{bmatrix}4 & 3\\8 & 6\end{bmatrix}\right) \Rightarrow \begin{bmatrix}\frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}}\\ \frac{2}{\sqrt{5}} & \frac{-1}{\sqrt{5}}\end{bmatrix} \begin{bmatrix}\sqrt{125} & 0\\0 & 0\end{bmatrix} \begin{bmatrix}0.8 & 0.6\\0.6 & -0.8\end{bmatrix} }[/math]
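These worked factorizations can be checked numerically. Note that numpy may return different (but equally valid) sign and column choices for [math]\displaystyle{ U }[/math] and [math]\displaystyle{ V }[/math]; the singular values themselves are unique.

```python
import numpy as np

# The 4 x 5 matrix from the first example above.
M = np.array([[1.0, 0.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 4.0, 0.0, 0.0, 0.0]])

U, s, Vt = np.linalg.svd(M, full_matrices=True)
print(s)   # -> [4. 3. 2.2360679... 0.], i.e. 4, 3, sqrt(5), 0, as above

Sigma = np.zeros(M.shape)
np.fill_diagonal(Sigma, s)
assert np.allclose(M, U @ Sigma @ Vt)   # the factorization reconstructs M
```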
- Counter-Example(s):
- See: Tensor Rank Decomposition, Weighted Matrix Factorization, Feature Hashing, High-Order SVD, Singular Value, Linear Least-Squares, Rectangular Diagonal Matrix, Conjugate Transpose.
References
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/singular_value_decomposition Retrieved:2015-3-1.
- In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It has many useful applications in signal processing and statistics.
Formally, the singular value decomposition of an m × n real or complex matrix M is a factorization of the form M = UΣV∗, where U is an m × m real or complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V∗ (the conjugate transpose of V, or simply the transpose of V if V is real) is an n × n real or complex unitary matrix. The diagonal entries [math]\displaystyle{ \Sigma_{i,i} }[/math] of Σ are known as the singular values of M. The m columns of U and the n columns of V are called the left-singular vectors and right-singular vectors of M, respectively.
The singular value decomposition and the eigendecomposition are closely related. Namely:
- The left-singular vectors of M are eigenvectors of MM∗.
- The right-singular vectors of M are eigenvectors of M∗M.
- The non-zero singular values of M (found on the diagonal entries of Σ) are the square roots of the non-zero eigenvalues of both M∗M and MM∗.
- Applications that employ the SVD include computing the pseudoinverse, least squares fitting of data, multivariable control, matrix approximation, and determining the rank, range and null space of a matrix.
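As a concrete check of the eigendecomposition relationships quoted above, here is a minimal numpy sketch reusing the example matrix from earlier in this article:

```python
import numpy as np

M = np.array([[1.0, 0.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 4.0, 0.0, 0.0, 0.0]])

s = np.linalg.svd(M, compute_uv=False)

# The eigenvalues of M M^* (and the non-zero eigenvalues of M^* M)
# are the squared singular values: here 16, 9, 5, 0.
eig_MMt = np.linalg.eigvalsh(M @ M.T)     # ascending order
assert np.allclose(np.sort(s**2), eig_MMt)
```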
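And a sketch of the first quoted application, the Moore-Penrose pseudoinverse: invert the non-zero singular values and recompose. The helper name `svd_pinv` is ours, chosen for illustration; numpy's own `np.linalg.pinv` works the same way and serves as a cross-check here.

```python
import numpy as np

def svd_pinv(M, tol=1e-12):
    """Pseudoinverse from the SVD: M+ = V Sigma+ U^T, where Sigma+
    inverts the non-zero singular values and zeroes out the rest."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[s > tol] = 1.0 / s[s > tol]
    return Vt.T @ np.diag(s_inv) @ U.T

M = np.array([[1.0, 0.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 4.0, 0.0, 0.0, 0.0]])

assert np.allclose(svd_pinv(M), np.linalg.pinv(M))

# The pseudoinverse yields the minimum-norm least-squares solution of M x = b,
# which is how the SVD supports (under- or over-determined) linear systems.
b = np.array([1.0, 2.0, 0.0, 3.0])
x = svd_pinv(M) @ b
assert np.allclose(M @ x, b)   # b happens to lie in the range of M here
```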
2009
- http://en.wiktionary.org/wiki/singular_value_decomposition
- QUOTE: (linear algebra) A particular type of factorisation of a matrix into a product of three matrices, of which the second is a diagonal matrix that has as the entries on its diagonal the singular values of the original matrix.
2001
- (Luo & Hancock, 2001) ⇒ Bin Luo and Edwin R. Hancock. (2001). “Structural Graph Matching Using the EM Algorithm and Singular Value Decomposition.” In: IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10).
1997
- (Landauer & Dumais, 1997) ⇒ Thomas K. Landauer, and Susan T. Dumais. (1997). “A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge.” In: Psychological Review, 104(2). doi:10.1037/0033-295X.104.2.211
- QUOTE: A brief overview of the mathematics of SVD is given in the appendix. For those who wish to skip it, we note that SVD is the general method for linear decomposition of a matrix into independent principal components of which factor analysis is the special case for square matrices with the same entities as columns and rows. Factor analysis finds a parsimonious representation of all the intercorrelations between a set of variables in terms of a new set of variables each of which is unrelated to any other but which can be combined to regenerate the original data. SVD does the same thing for an arbitrarily shaped rectangular matrix in which the columns and rows stand for different things, as in the present case one stands for words, the other for contexts in which the words appear. (For those with yet other vocabularies, SVD is a form of Eigenvalue-Eigenvector analysis or principal components decomposition and, in a more general sense, of multi-dimensional scaling. See Carroll & Arabie, in press.)
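To make the word-by-context use concrete, here is a toy LSA-style sketch. The count matrix is invented purely for illustration, and the choice of k = 2 retained dimensions is arbitrary.

```python
import numpy as np

# Toy word-by-context count matrix (hypothetical counts):
# rows = words, columns = contexts the words appear in.
X = np.array([[2.0, 0.0, 1.0, 0.0],    # word 0
              [1.0, 0.0, 2.0, 0.0],    # word 1: shares contexts with word 0
              [0.0, 3.0, 0.0, 1.0],    # word 2: different contexts
              [0.0, 1.0, 0.0, 2.0],    # word 3
              [1.0, 1.0, 1.0, 1.0]])   # word 4: appears everywhere

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the k strongest components, as LSA does, giving each word
# a k-dimensional vector in the reduced "semantic" space.
k = 2
word_vecs = U[:, :k] * s[:k]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words 0 and 1 share contexts, so their reduced vectors should be
# nearly parallel; words 0 and 2 share none, so theirs should not be.
print(cosine(word_vecs[0], word_vecs[1]))   # close to 1
print(cosine(word_vecs[0], word_vecs[2]))   # much smaller
```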
1990
- (Deerwester et al., 1990) ⇒ Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. (1990). “Indexing by Latent Semantic Analysis.” In: Journal of the American Society for Information Science, 41(6).
1981
- (Gerbrands, 1981) ⇒ Jan J. Gerbrands. (1981). “On the Relationships Between SVD, KLT and PCA.” In: Pattern Recognition, 14(1). doi:10.1016/0031-3203(81)90082-0
- ABSTRACT: In recent literature on digital image processing much attention is devoted to the singular value decomposition (SVD) of a matrix. Many authors refer to the Karhunen-Loeve transform (KLT) and principal components analysis (PCA) while treating the SVD. In this paper we give definitions of the three transforms and investigate their relationships. It is shown that in the context of multivariate statistical analysis and statistical pattern recognition the three transforms are very similar if a specific estimate of the column covariance matrix is used. In the context of two-dimensional image processing this similarity still holds if one single matrix is considered. In that approach the use of the names KLT and PCA is rather inappropriate and confusing. If the matrix is considered to be a realization of a two-dimensional random process, the SVD and the two statistically defined transforms differ substantially.
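The SVD/PCA similarity this abstract describes can be illustrated in a few lines: for a centered data matrix, the squared singular values (scaled by n − 1) equal the eigenvalues of the sample column covariance matrix. The sketch below uses random data purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))     # rows = observations, columns = variables

# PCA route: eigendecomposition of the sample column covariance matrix.
Xc = X - X.mean(axis=0)           # center each column
C = Xc.T @ Xc / (len(Xc) - 1)
eigvals = np.linalg.eigvalsh(C)   # ascending order

# SVD route: singular values of the centered data matrix itself.
s = np.linalg.svd(Xc, compute_uv=False)
assert np.allclose(np.sort(s**2) / (len(Xc) - 1), eigvals)
```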