Matrix Decomposition Algorithm
A Matrix Decomposition Algorithm is a matrix processing algorithm that can be applied by a matrix decomposition system (to solve a matrix decomposition task).
- AKA: Matrix Factorization Method.
- Context:
- It can range from being an Exact Matrix Decomposition Algorithm to being an Approximate Matrix Decomposition Algorithm.
- It can range from being a Generic Matrix Decomposition Algorithm to being a Specific Matrix Decomposition Algorithm.
- It can range from being a Global Matrix Decomposition Algorithm to being a Local Matrix Decomposition Algorithm.
- It can range from being a Nonnegative Matrix Factorization Algorithm to being a Positive Matrix Factorization Algorithm to being ...
- It can range from being a Regularized Matrix Factorization Algorithm to being a Non-Regularized Matrix Factorization Algorithm.
- It can range from being a Weighted Matrix Decomposition Algorithm to being ...
- It can range from being a Boolean Matrix Decomposition Algorithm to being an Integer Matrix Decomposition Algorithm to being a Real Matrix Decomposition Algorithm.
- It can range from being a Static Matrix Decomposition Algorithm to being a Dynamic Matrix Decomposition Algorithm.
- It can be a Low-Rank Factorization Algorithm (see the sketch after this list).
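The exact-versus-approximate and low-rank distinctions above can be made concrete with a small sketch. The following is a minimal illustration (assuming NumPy; the matrix A and rank k are illustrative assumptions, not from the source): a full SVD reconstructs A exactly, while keeping only the k leading singular values yields an approximate, low-rank decomposition.

```python
# Minimal sketch (assuming NumPy) contrasting an exact decomposition with an
# approximate, low-rank one; the matrix A and rank k are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

# Exact decomposition: the full SVD reconstructs A (up to floating-point error).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_exact = U @ np.diag(s) @ Vt
print(np.allclose(A, A_exact))        # True

# Approximate decomposition: keep only the k largest singular values (low rank).
k = 2
A_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.linalg.norm(A - A_approx))   # nonzero reconstruction error
```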
- Example(s):
- Triangular Factorization Algorithm (for triangular factorization), such as an LU Decomposition Algorithm (see the code sketch after this list).
- Orthogonal Factorization Algorithm (for orthogonal factorization).
- Singular Value Decomposition Algorithm (for singular value decomposition).
- QR Decomposition Algorithm (for QR decomposition).
- Eigen Decomposition Algorithm (for eigen decomposition).
- Polar Decomposition Algorithm (for polar decomposition).
- PCA Algorithm (for principal components decomposition).
- Matrix Factorization-based Item Recommendation Algorithm.
- …
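Several of the decomposition algorithms listed above are available in standard numerical libraries. The following is a hedged sketch (assuming NumPy and SciPy; the matrix A is an illustrative example) showing LU, QR, eigen, and singular value decompositions, each verified by reconstructing A from its factors.

```python
# Sketch (assuming NumPy/SciPy) of several decompositions listed above,
# applied to an illustrative symmetric matrix A.
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])

# LU decomposition (triangular factorization): A = P L U
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))

# QR decomposition (orthogonal factorization): A = Q R
Q, R = np.linalg.qr(A)
print(np.allclose(A, Q @ R))

# Eigen decomposition (A is symmetric here): A = V diag(w) V^T
w, V = np.linalg.eigh(A)
print(np.allclose(A, V @ np.diag(w) @ V.T))

# Singular value decomposition: A = U_s diag(s) Vt
U_s, s, Vt = np.linalg.svd(A)
print(np.allclose(A, U_s @ np.diag(s) @ Vt))
```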
- Counter-Example(s):
- See: Supervised Matrix Factorization Algorithm, Latent Factor Model Fitting.
References
2019
- "2.5. Decomposing signals in components (matrix factorization problems)." In: scikit-learn documentation.
- QUOTE: 2.5. Decomposing signals in components (matrix factorization problems)
  - 2.5.1. Principal component analysis (PCA): 2.5.1.1. Exact PCA and probabilistic interpretation; 2.5.1.2. Incremental PCA; 2.5.1.3. PCA using randomized SVD; 2.5.1.4. Kernel PCA; 2.5.1.5. Sparse principal components analysis (SparsePCA and MiniBatchSparsePCA)
  - 2.5.2. Truncated singular value decomposition and latent semantic analysis
  - 2.5.3. Dictionary Learning: 2.5.3.1. Sparse coding with a precomputed dictionary; 2.5.3.2. Generic dictionary learning; 2.5.3.3. Mini-batch dictionary learning
  - 2.5.4. Factor Analysis
  - 2.5.5. Independent component analysis (ICA)
  - 2.5.6. Non-negative matrix factorization (NMF or NNMF): 2.5.6.1. NMF with the Frobenius norm; 2.5.6.2. NMF with a beta-divergence
  - 2.5.7. Latent Dirichlet Allocation (LDA)
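The scikit-learn decompositions listed in that section share a common fit/transform interface. The following is a minimal sketch of two of them, PCA and NMF; the toy data matrix X is an assumption made only for illustration.

```python
# Hedged sketch of two scikit-learn matrix decompositions (PCA and NMF);
# the toy nonnegative matrix X is illustrative only.
import numpy as np
from sklearn.decomposition import PCA, NMF

X = np.abs(np.random.default_rng(0).standard_normal((10, 5)))  # nonnegative for NMF

# PCA: project X onto orthogonal components capturing maximal variance.
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)          # shape (10, 2)

# NMF: factor X ≈ W @ H with W, H >= 0.
nmf = NMF(n_components=2, init='random', random_state=0, max_iter=500)
W = nmf.fit_transform(X)              # shape (10, 2)
H = nmf.components_                   # shape (2, 5)
print(np.linalg.norm(X - W @ H))      # Frobenius reconstruction error
```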
2016
- (Bayer, 2016) ⇒ Immanuel Bayer. (2016). “fastFM: A Library for Factorization Machines.” In: The Journal of Machine Learning Research, 17(1).
- QUOTE: Factorization Machines (FM) are currently only used in a narrow range of applications and are not yet part of the standard machine learning toolbox, despite their great success in collaborative filtering and click-through rate prediction. However, Factorization Machines are a general model to deal with sparse and high dimensional features.
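The factorization machine model quoted above scores a feature vector with a global bias, linear weights, and pairwise feature interactions whose weights are factorized through low-rank latent vectors. Below is a minimal NumPy sketch of that prediction equation using the standard O(k·n) reformulation of the pairwise term; it is not the fastFM API, and the parameters w0, w, V and the input x are illustrative.

```python
# Minimal NumPy sketch of the order-2 factorization machine prediction
#   y(x) = w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j
# computed via 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ].
# Parameters w0, w, V and the input x are illustrative only.
import numpy as np

def fm_predict(x, w0, w, V):
    """Score a single feature vector x with an order-2 factorization machine."""
    linear = w0 + w @ x
    Vx = V.T @ x                                   # shape (k,)
    pairwise = 0.5 * np.sum(Vx ** 2 - (V ** 2).T @ (x ** 2))
    return linear + pairwise

rng = np.random.default_rng(0)
n, k = 8, 3                                        # n features, k latent factors
x = rng.random(n)
w0, w, V = 0.1, rng.standard_normal(n), rng.standard_normal((n, k))
print(fm_predict(x, w0, w, V))
```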
2008
- (Salakhutdinov & Mnih, 2008) ⇒ Ruslan Salakhutdinov, and Andriy Mnih. (2008). “Bayesian probabilistic matrix factorization using Markov chain Monte Carlo.” In: Proceedings of the 25th International Conference on Machine learning (ICML 2008).
2003
- (Xu et al., 2003) ⇒ Wei Xu, Xin Liu, and Yihong Gong. (2003). “Document Clustering Based on Non-Negative Matrix Factorization.” In: Proceedings of the 26th ACM SIGIR Conference (SIGIR 2003). doi:10.1145/860435.860485