Distributional Word Vectorizing Function
A Distributional Word Vectorizing Function is a word vectorizing function that is a distributional text-item vectorizing function (i.e., it maps a word to a distributional word vector in a distributional word vector space).
- AKA: Lexical Distributed Representation.
- Context:
- It can be created by a Distributional Word Vectorizing Function Creation System (that solves a Distributional Word Vectorizing Function Creation Task).
- It can define a Distributional Word Vector Space.
- It can range from being a Continuous Distributional Word Vectorizing Function to being a Discrete Distributional Word Vectorizing Function.
- It can range from being a Dense Distributional Word Vectorizing Function to being a Sparse Distributional Word Vectorizing Function (both contrasts are illustrated in the sketch after this list).
- …
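The dense/sparse and continuous/discrete contrasts can be made concrete with a minimal Python sketch (a toy illustration of our own, not code from any referenced system): raw co-occurrence counts yield a sparse, discrete vectorizing function, while a truncated SVD of the same counts yields a dense, continuous one.

```python
# Minimal sketch (illustrative toy corpus and window size, not from any
# referenced system): a sparse, discrete distributional word vectorizing
# function from co-occurrence counts, and a dense, continuous one from a
# truncated SVD of the same count matrix.
import numpy as np

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts within a +/-1 word window.
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 1), min(len(sent), i + 2)):
            if j != i:
                counts[index[w], index[sent[j]]] += 1

def sparse_vectorize(word):
    """Sparse, discrete variant: the word's raw co-occurrence count row."""
    return counts[index[word]]

# Dense, continuous variant: project the count matrix to k dimensions.
k = 2
u, s, _ = np.linalg.svd(counts, full_matrices=False)
dense_vectors = u[:, :k] * s[:k]

def dense_vectorize(word):
    """Dense, continuous variant: the word's k-dimensional SVD projection."""
    return dense_vectors[index[word]]

print(sparse_vectorize("cat"))  # mostly zeros, integer-valued counts
print(dense_vectorize("cat"))   # k real-valued coordinates
```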
- Example(s):
- a word2vec word vectorizing model (see the usage sketch after this list).
- one created by SemanticVectors.
- …
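A minimal usage sketch for the word2vec example above, assuming the gensim library (version >= 4.0, where the dimensionality parameter is named vector_size) and a toy corpus of our own choosing:

```python
# Minimal sketch: training a word2vec model with gensim so that the trained
# model acts as a dense distributional word vectorizing function.
# The sentences, vector_size, window, and epochs are illustrative choices.
from gensim.models import Word2Vec

sentences = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50)

vector = model.wv["cat"]                     # a 50-dimensional numpy array
print(model.wv.most_similar("cat", topn=3))  # nearest words in the vector space
```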
- Counter-Example(s):
- a One-Hot Word Vectorizing Function (which maps each word to a local indicator vector rather than a distributional representation).
- See: Word Embeddings, Vector Space, Latent Concept.
References
2015
- (Vilnis & McCallum, 2015) ⇒ Luke Vilnis, and Andrew McCallum. (2015). “Word Representations via Gaussian Embedding.” In: arXiv preprint arXiv:1412.6623, submitted to ICLR 2015.
- QUOTE: Current work in lexical distributed representations maps each word to a point vector in low-dimensional space. Mapping instead to a density provides many interesting advantages, including better capturing uncertainty about a representation and its relationships, expressing asymmetries more naturally than dot product or cosine similarity, and enabling more expressive parameterization of decision boundaries.
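As a sketch of the asymmetry point in this quote (our own illustration, not the paper's code): the KL divergence between two diagonal Gaussian word densities differs by direction, which a symmetric measure such as dot product or cosine similarity cannot express.

```python
# Minimal sketch: KL divergence between two diagonal-covariance Gaussian
# word representations. The means/variances below are made-up values chosen
# to mimic a broad (general) term and a narrow (specific) term.
import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) )."""
    return 0.5 * np.sum(
        var0 / var1 + (mu1 - mu0) ** 2 / var1 - 1.0 + np.log(var1 / var0)
    )

mu_general, var_general = np.zeros(2), np.full(2, 4.0)         # broad density
mu_specific, var_specific = np.full(2, 0.5), np.full(2, 0.25)  # narrow density

print(kl_diag_gaussians(mu_specific, var_specific, mu_general, var_general))  # ~1.9
print(kl_diag_gaussians(mu_general, var_general, mu_specific, var_specific))  # ~13.2
# The two directions disagree, so the measure can encode an asymmetric
# relationship (e.g., entailment direction) between the two words.
```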
2013
- https://code.google.com/p/semanticvectors/wiki/Ideas
- QUOTE: There are many reasons for investigating the relationships between semantic vector models and formal ontology models including taxonomies, conceptual graphs, Cyc, RDF, Wikipedia relationships, etc.