Vectorized Word Representation
A Vectorized Word Representation is a text-item vector that represents a word.
- AKA: Word-Vector, Word Embedding.
- Context:
- It can (typically) represent a Corpus Word.
- It can (typically) be a Contextual Word Vector, such as a Distributional Word Vector.
- It can be produced by a Word Vectorization Function, typically based on some text corpus (see the sketch after this list).
- It can be an input to a Word Vector-Input Function.
- It can range from being a Weighted Word Vector to being an Unweighted Word Vector.
- It can range from being a Sparse Word Vector to being a Dense Word Vector.
- It can range from being a Continuous Word Vector to being a Discrete Word Vector.
- It can be an input to a Data-Driven NLP Task, such as a text classification task or a named-entity recognition task.
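The following is a minimal sketch, not part of this entry, of one possible Word Vectorization Function that produces Distributional Continuous Dense Word Vectors; it assumes the gensim library is available, and the toy corpus and parameter values are illustrative.

  from gensim.models import Word2Vec

  # Toy corpus: a list of tokenized sentences (illustrative only).
  corpus = [
      ["the", "queen", "rules", "the", "realm"],
      ["the", "king", "rules", "the", "realm"],
      ["people", "read", "the", "newspaper", "every", "day"],
  ]

  # Train a small skip-gram model; vector_size and window are arbitrary here.
  model = Word2Vec(corpus, vector_size=8, window=2, min_count=1, sg=1, seed=1)

  # f("queen"): a dense, continuous word vector, analogous to the
  # [0.128, 0.208, 0.008] example below.
  print(model.wv["queen"])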
- Example(s):
- a Distributional Continuous Dense Word Vector, such as: [0.128, 0.208, 0.008] ⇐ f("Queen").
- a Bag-of-Words Vector, such as: [0, 0, 1, ..., 0, 0] ⇐ f("Queen").
- a Target-Word Context Window Vector, such as: [0, 17, 0, ..., 1, 0] ⇐ f("Queen").
- a Left-Right Context Window Word Counting Tuple, such as: {targetTerm:newspaper, toTheLeft:{buys/1, this/1, every/2}, toTheRight:{day/2, read/1, a/1}} (a code sketch of these follows below).
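As a companion to the examples above, the following pure-Python sketch, not part of this entry, builds a sparse one-hot Bag-of-Words-style vector and a Left-Right Context Window Word Counting Tuple for a target term; the corpus and window size are illustrative assumptions, and the resulting counts need not match the figures above.

  from collections import Counter

  # Toy token stream; all values are illustrative.
  tokens = ("buys this newspaper every day this newspaper "
            "every day read a newspaper").split()
  vocab = sorted(set(tokens))
  target = "newspaper"

  # Sparse one-hot vector: 1 at the target's vocabulary index, 0 elsewhere.
  one_hot = [1 if w == target else 0 for w in vocab]

  # Left-Right counting tuple with a window of 2 tokens on each side.
  left, right = Counter(), Counter()
  for i, w in enumerate(tokens):
      if w == target:
          left.update(tokens[max(0, i - 2):i])
          right.update(tokens[i + 1:i + 3])

  print(one_hot)
  print({"targetTerm": target, "toTheLeft": dict(left), "toTheRight": dict(right)})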
- Counter-Example(s):
- See: Lexical Semantics Model.
References
2010
- (Turney & Pantel, 2010) ⇒ Peter D. Turney, and Patrick Pantel. (2010). “From Frequency to Meaning: Vector Space Models of Semantics.” In: Journal of Artificial Intelligence Research, 37(1).
2001
- (Turney, 2001) ⇒ Peter D. Turney. (2001). “Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL.” In: Proceedings of the 12th European Conference on Machine Learning (ECML 2001).
1991
- (Miller & Charles, 1991) ⇒ George A. Miller, and Walter G. Charles. (1991). “Contextual Correlates of Semantic Similarity.” In: Language and Cognitive Processes, 6(1).