Manifold Dimensionality Compression Algorithm
A Manifold Dimensionality Compression Algorithm is a nonlinear dimensionality reduction algorithm that ...
- AKA: Manifold Learning Algorithm
- Context:
- It can range from being an Unsupervised Manifold Learning Algorithm to being a Semi-Supervised Manifold Learning Algorithm.
- Example(s):
  - an Isomap Algorithm, a Locally Linear Embedding Algorithm, a Laplacian Eigenmaps Algorithm, ...
- See: Locally Linear Embedding.
References
2012
- http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Manifold_learning_algorithms
- Some of the more prominent manifold learning algorithms are listed below (in approximately chronological order). An algorithm may learn an internal model of the data, which can be used to map points unavailable at training time into the embedding in a process often called out-of-sample extension.
- Sammon's Mapping.
- Kohonen Maps.
- Principal Curves and Manifolds.
- Autoencoders.
- Gaussian Process Latent Variable Models.
- Curvilinear Component Analysis.
- Curvilinear Distance Analysis.
- Diffeomorphic Dimensionality Reduction.
- Kernel Principal Component Analysis.
- Isomap.
- Locally-Linear Embedding.
- Laplacian Eigenmaps.
- Diffusion Maps.
- Hessian LLE.
- Modified LLE.
- Local Tangent Space Alignment.
- Local Multidimensional Scaling.
- Maximum Variance Unfolding.
- Nonlinear PCA.
- Data-Driven High Dimensional Scaling.
- Manifold Sculpting.
- RankVisu.
- Topologically Constrained Isometric Embedding.
- Relational Perspective Map.
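As an illustrative sketch of the "internal model" and out-of-sample extension described above (not from the cited sources; it assumes scikit-learn's `sklearn.manifold` module and its `Isomap` estimator), the following fits a manifold model on training data and then maps previously unseen points into the learned embedding:

```python
# Hedged sketch: manifold learning with out-of-sample extension,
# assuming scikit-learn is available.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Sample points from a "swiss roll", a 2-D manifold embedded in 3-D space.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Learn an internal model of the data and embed it in 2 dimensions.
iso = Isomap(n_neighbors=10, n_components=2)
embedding = iso.fit_transform(X)
print(embedding.shape)  # (1000, 2)

# Out-of-sample extension: map points unavailable at training time
# into the same embedding without refitting.
X_new, _ = make_swiss_roll(n_samples=5, random_state=1)
print(iso.transform(X_new).shape)  # (5, 2)
```

Not every algorithm in the list above supports this: methods that learn an explicit mapping or kernel model (Isomap, Kernel PCA, autoencoders) extend naturally to new points, whereas purely spectral embeddings typically require a separate Nyström-style extension.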
- (Ma & Fu, 2012) ⇒ Yunqian Ma, and Yun Fu, eds. (2012). “Manifold Learning Theory and Applications." CRC Press.
- BOOK OVERVIEW: Trained to extract actionable information from large volumes of high-dimensional data, engineers and scientists often have trouble isolating meaningful low-dimensional structures hidden in their high-dimensional observations. Manifold learning, a groundbreaking technique designed to tackle these issues of dimensionality reduction, finds widespread application in machine learning, neural networks, pattern recognition, image processing, and computer vision.
Filling a void in the literature, Manifold Learning Theory and Applications incorporates state-of-the-art techniques in manifold learning with a solid theoretical and practical treatment of the subject. Comprehensive in its coverage, this pioneering work explores this novel modality from algorithm creation to successful implementation — offering examples of applications in medical, biometrics, multimedia, and computer vision. Emphasizing implementation, it highlights the various permutations of manifold learning in industry including manifold optimization, large scale manifold learning, semidefinite programming for embedding, manifold models for signal acquisition, compression and processing, and multi scale manifold.
Beginning with an introduction to manifold learning theories and applications, the book includes discussions on the relevance to nonlinear dimensionality reduction, clustering, graph-based subspace learning, spectral learning and embedding, extensions, and multi-manifold modeling. It synergizes cross-domain knowledge for interdisciplinary instructions, offers a rich set of specialized topics contributed by expert professionals and researchers from a variety of fields. Finally, the book discusses specific algorithms and methodologies using case studies to apply manifold learning for real-world problems.
2010
- (Huh et al., 2010) ⇒ Seungil Huh, and Stephen E. Fienberg. (2010). “Discriminative Topic Modeling based on Manifold Learning.” In: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2010). doi:10.1145/1835804.1835888
- QUOTE: Traditional manifold learning algorithms [17, 14, 2] have given way to recent graph-based semi-supervised learning algorithms [19, 18, 3]. The goal of manifold learning is to recover the structure of a given dataset by non-linear mapping into a low-dimensional space. As a manifold learning algorithm, Laplacian Eigenmaps [2] was developed based on spectral graph theory [8].
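The Laplacian Eigenmaps algorithm cited in the quote can be sketched directly from spectral graph theory: build a neighborhood graph over the data, form its graph Laplacian, and embed using the bottom non-trivial generalized eigenvectors. A minimal NumPy sketch (illustrative only; the function name, 0/1 edge weights, and parameter defaults are my assumptions, not from Belkin & Niyogi's paper, which also allows heat-kernel weights):

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=10, n_components=2):
    """Minimal Laplacian Eigenmaps sketch: kNN graph + spectral embedding."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # Symmetric kNN adjacency with simple 0/1 weights
    # (column 0 of the argsort is each point itself, so skip it).
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = 1.0
    W = np.maximum(W, W.T)            # symmetrize the graph
    D = np.diag(W.sum(axis=1))        # degree matrix
    L = D - W                         # graph Laplacian
    # Solve L y = lambda D y via the symmetric normalized Laplacian
    # D^{-1/2} L D^{-1/2} (degrees are positive by construction).
    Ds = np.diag(1.0 / np.sqrt(np.diag(D)))
    vals, vecs = np.linalg.eigh(Ds @ L @ Ds)
    # Drop the trivial constant eigenvector; keep the next n_components.
    return Ds @ vecs[:, 1:n_components + 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = laplacian_eigenmaps(X)
print(Y.shape)  # (100, 2)
```

The dense pairwise-distance matrix keeps the sketch short; a practical implementation would use a nearest-neighbor index and sparse eigensolvers to scale beyond a few thousand points.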
2009
- (Lafferty & Wasserman, 2009) ⇒ John D. Lafferty, and Larry Wasserman. (2009). “Statistical Machine Learning - Course: 10-702." Spring 2009, Carnegie Mellon University.
2004
- (Weinberger & Saul, 2004) ⇒ Kilian Q. Weinberger, and Lawrence K. Saul. (2004). “Unsupervised learning of image manifolds by semidefinite programming.” In: Proceedings of the 2004 IEEE computer society conference on Computer vision and pattern recognition (CVPR 2004). doi:10.1109/CVPR.2004.256
2001
- (Belkin & Niyogi, 2001) ⇒ Mikhail Belkin, and Partha Niyogi. (2001). “Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering.” In: Advances in Neural Information Processing Systems, volume 14.
2000
- (Roweis & Saul, 2000) ⇒ Sam T. Roweis, and Lawrence K. Saul. (2000). “Nonlinear Dimensionality Reduction by Locally Linear Embedding." Science, 290(5500). doi:10.1126/science.290.5500.2323
- (Tenenbaum et al., 2000) ⇒ Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. (2000). “A Global Geometric Framework for Nonlinear Dimensionality Reduction." Science, 290(5500).