sklearn.manifold Module
An sklearn.manifold Module is an sklearn module that contains a collection of Manifold Learning Systems that implement Data Embedding Algorithms.
- Context:
- It can (often) reference a sklearn.manifold system by calling:
sklearn.manifold.Model_Name(self, arguments)
or simply sklearn.manifold.Model_Name()
where Model_Name is the name of the selected Data Embedding Algorithm.
- Example(s):
- sklearn.manifold.Isomap, a Manifold Learning System that implements an Isometric Mapping Algorithm.
- sklearn.manifold.LocallyLinearEmbedding, a Manifold Learning System that implements a Locally-Linear Embedding Algorithm.
- sklearn.manifold.MDS, a Manifold Learning System that implements a Multidimensional Scaling Algorithm.
- sklearn.manifold.SpectralEmbedding, a Manifold Learning System that implements a Spectral Embedding Algorithm to solve a Nonlinear Dimensionality Reduction Task.
- sklearn.manifold.TSNE, a Manifold Learning System that implements a t-distributed Stochastic Neighbor Embedding Algorithm.
- sklearn.manifold.locally_linear_embedding, a Locally-Linear Embedding System.
- sklearn.manifold.smacof, a Manifold Learning System that solves a Multidimensional Scaling Task using a SMACOF Algorithm.
- sklearn.manifold.spectral_embedding, a Spectral Embedding System.
- …
- Counter-Example(s):
- sklearn.tree, a collection of Decision Tree Learning Systems.
- sklearn.ensemble, a collection of Decision Tree Ensemble Learning Systems.
- sklearn.metrics, a collection of Metrics Subroutines.
- sklearn.covariance, a collection of Covariance Estimators.
- sklearn.cluster.bicluster, a collection of Spectral Biclustering Algorithms.
- sklearn.linear_model, a collection of Linear Model Regression Systems.
- sklearn.neighbors, a collection of K Nearest Neighbors Algorithms.
- sklearn.neural_network, a collection of Neural Network Systems.
- See: Mapping Task, Scaling Task.
References
2017A
- (Scikit Learn, 2017) ⇒ http://scikit-learn.org/stable/modules/classes.html#module-sklearn.manifold Retrieved:2017-11-12
- QUOTE: The sklearn.manifold module implements data embedding techniques.
User guide: See the Manifold learning section for further details.
manifold.Isomap([n_neighbors, n_components, …]), Isomap Embedding.
manifold.LocallyLinearEmbedding([…]), Locally Linear Embedding.
manifold.MDS([n_components, metric, n_init, …]), Multidimensional scaling.
manifold.SpectralEmbedding([n_components, …]), Spectral embedding for non-linear dimensionality reduction.
manifold.TSNE([n_components, perplexity, …]), t-distributed Stochastic Neighbor Embedding.
manifold.locally_linear_embedding(X, …[, …]), Perform a Locally Linear Embedding analysis on the data.
manifold.smacof(dissimilarities[, metric, …]), Computes multidimensional scaling using the SMACOF algorithm.
manifold.spectral_embedding(adjacency[, …]), Project the sample on the first eigenvectors of the graph Laplacian.
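Besides the estimator classes, the listing above also includes function-style entry points such as manifold.locally_linear_embedding. A minimal sketch of that interface, assuming scikit-learn is installed (the n_neighbors and n_components values are illustrative):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import locally_linear_embedding

# A small slice of the digits dataset: 200 samples, 64 features each.
X = load_digits().data[:200]

# The function returns the embedded coordinates and the reconstruction error,
# rather than a fitted estimator object.
embedding, err = locally_linear_embedding(X, n_neighbors=12, n_components=2)

print(embedding.shape)  # (200, 2)
```

The functional form is convenient for one-off embeddings; the LocallyLinearEmbedding class is preferable when the mapping must later be applied to new samples via transform.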
2017B
- (Scikit Learn, 2017) ⇒ http://scikit-learn.org/stable/modules/manifold.html Retrieved:2017-11-12
- QUOTE: Manifold learning is an approach to non-linear dimensionality reduction. Algorithms for this task are based on the idea that the dimensionality of many data sets is only artificially high.
(...)
High-dimensional datasets can be very difficult to visualize. While data in two or three dimensions can be plotted to show the inherent structure of the data, equivalent high-dimensional plots are much less intuitive. To aid visualization of the structure of a dataset, the dimension must be reduced in some way.
The simplest way to accomplish this dimensionality reduction is by taking a random projection of the data. Though this allows some degree of visualization of the data structure, the randomness of the choice leaves much to be desired. In a random projection, it is likely that the more interesting structure within the data will be lost.
To address this concern, a number of supervised and unsupervised linear dimensionality reduction frameworks have been designed, such as Principal Component Analysis (PCA), Independent Component Analysis, Linear Discriminant Analysis, and others. These algorithms define specific rubrics to choose an “interesting” linear projection of the data. These methods can be powerful, but often miss important non-linear structure in the data.
Manifold Learning can be thought of as an attempt to generalize linear frameworks like PCA to be sensitive to non-linear structure in data. Though supervised variants exist, the typical manifold learning problem is unsupervised: it learns the high-dimensional structure of the data from the data itself, without the use of predetermined classifications.
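The contrast the quoted passage draws, between a linear projection such as PCA and a manifold method sensitive to non-linear structure, can be sketched as follows (SpectralEmbedding and the S-curve dataset are illustrative choices, not ones the quoted text specifies):

```python
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import SpectralEmbedding

# A 3-D "S-curve": points lying on a curved 2-D manifold.
X, _ = make_s_curve(n_samples=500, random_state=0)

# Linear projection: PCA picks the directions of maximal variance.
X_pca = PCA(n_components=2).fit_transform(X)

# Non-linear manifold embedding: SpectralEmbedding uses a neighborhood graph.
X_se = SpectralEmbedding(n_components=2).fit_transform(X)

print(X_pca.shape, X_se.shape)  # both reduce the 3-D points to 2-D
```

Plotting the two embeddings side by side would show PCA flattening the curve on top of itself, while the manifold method tends to "unroll" it.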