Vector Space Mapping Function
A Vector Space Mapping Function is a mapping function that maps an object to a vector in a vector space model.
- AKA: Vectorizing Function.
- Example(s):
- a Word Vector Space Mapping Function (into a word vector space).
- …
- Counter-Example(s):
- See: Continuous Vector Space Mapping Function.
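The following is a minimal sketch of a word vector space mapping function in Python; the toy vocabulary, dimensionality, and random initialization are illustrative assumptions, not part of the definition above. It simply maps each input object (here, a word string) to a dense vector in a fixed-dimensional vector space.
```python
# Minimal sketch of a Word Vector Space Mapping Function:
# each input object (a word string) is mapped to a dense vector
# in a fixed-dimensional vector space.  The vocabulary, dimensionality,
# and random initialization below are illustrative assumptions only.
import numpy as np

class WordVectorSpaceMapper:
    def __init__(self, vocabulary, dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.dim = dim
        # One dense, low-dimensional vector per known word.
        self.table = {w: rng.normal(size=dim) for w in vocabulary}
        # A single shared vector for out-of-vocabulary words.
        self.unk = rng.normal(size=dim)

    def __call__(self, word):
        """Map a word (the input object) to a point in the vector space."""
        return self.table.get(word, self.unk)

if __name__ == "__main__":
    mapper = WordVectorSpaceMapper(["king", "queen", "apple"], dim=4)
    v = mapper("king")   # a 4-dimensional dense vector
    print(v.shape)       # (4,)
```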
References
2015
- (Vilnis & McCallum, 2015) ⇒ Luke Vilnis, and Andrew McCallum. (2015). “Word Representations via Gaussian Embedding.” In: arXiv preprint arXiv:1412.6623, submitted to ICLR 2015.
- QUOTE: In recent years there has been a surge of interest in learning compact distributed representations or embeddings for many machine learning tasks, including collaborative filtering (Koren et al., 2009), image retrieval (Weston et al., 2011), relation extraction (Riedel et al., 2013), word semantics and language modeling (Bengio et al., 2006; Mnih & Hinton, 2008; Mikolov et al., 2013), and many others. In these approaches input objects (such as images, relations or words) are mapped to dense vectors having lower-dimensionality than the cardinality of the inputs, with the goal that the geometry of this low-dimensional latent embedded space be smooth with respect to some measure of similarity in the target domain. That is, objects associated with similar targets should be mapped to nearby points in the embedded space.
While this approach has proven powerful, representing an object as a single point in space carries some important limitations. An embedded vector representing a point estimate does not naturally express uncertainty about the target concepts with which the input may be associated. Point vectors are typically compared by dot products, cosine-distance or Euclidean distance, none of which provide for asymmetric comparisons between objects.
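To make the quoted limitation concrete, the sketch below (plain NumPy; the vectors, means, and variances are illustrative assumptions, not values from the paper) contrasts a symmetric point-vector comparison, cosine similarity, with an asymmetric comparison between diagonal Gaussian embeddings via KL divergence, in the spirit of the Gaussian embeddings Vilnis & McCallum propose.
```python
# Sketch: cosine similarity between point vectors is symmetric,
# while KL divergence between (diagonal) Gaussian embeddings is not.
# All vectors, means, and variances below are illustrative assumptions.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) )."""
    d = mu0.shape[0]
    return 0.5 * (np.sum(var0 / var1)
                  + np.sum((mu1 - mu0) ** 2 / var1)
                  - d
                  + np.sum(np.log(var1) - np.log(var0)))

a = np.array([1.0, 0.5])
b = np.array([0.5, 1.0])
print(cosine(a, b) == cosine(b, a))          # True: symmetric comparison

mu_a, var_a = a, np.array([0.5, 0.5])        # a "broad" Gaussian embedding
mu_b, var_b = b, np.array([0.1, 0.1])        # a "narrow" Gaussian embedding
print(kl_diag_gaussians(mu_a, var_a, mu_b, var_b))   # ~4.89
print(kl_diag_gaussians(mu_b, var_b, mu_a, var_a))   # ~1.31: asymmetric
```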