Word Vector Space
A Word Vector Space is a vector space composed of word vectors.
- AKA: Lexical Item Vector Model, Token Feature Space.
- Context:
- It can be (typically) defined by a Word Vector Space Model (based on some corpus statistics).
- It can range from being a Binary Word Vector Space, to being a Non-Negative Integer Word Vector Space, to being a Continuous Word Vector Space (see the sketch below).
- It can be produced by a Word Vector Space Creation Task (solved by a Word Vector Space Creation System).
- Example(s):
- the vector space associated with the 20 Newsgroups Dataset.
- …
- Counter-Example(s):
- See: Bag-of-Words Vector Space.
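The following is a minimal sketch of the three value ranges named in the Context section, built from the same word-context co-occurrence statistics. The toy corpus and the +/-1-token context window are illustrative assumptions, not part of any particular Word Vector Space Model.

```python
import math
from collections import defaultdict

# Toy corpus and symmetric +/-1-token context window (both assumptions).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]
vocab = sorted({w for doc in corpus for w in doc})

# Non-Negative Integer Word Vector Space: raw co-occurrence counts.
counts = defaultdict(lambda: defaultdict(int))
for doc in corpus:
    for i, w in enumerate(doc):
        for j in (i - 1, i + 1):
            if 0 <= j < len(doc):
                counts[w][doc[j]] += 1
count_space = {w: [counts[w][c] for c in vocab] for w in vocab}

# Binary Word Vector Space: 1 if the pair ever co-occurred, else 0.
binary_space = {w: [int(n > 0) for n in count_space[w]] for w in vocab}

# Continuous Word Vector Space: positive PMI reweighting of the counts
# yields real-valued word vectors.
total = sum(n for row in counts.values() for n in row.values())
w_marg = {w: sum(counts[w].values()) for w in vocab}
c_marg = {c: sum(counts[w][c] for w in vocab) for c in vocab}
ppmi_space = {
    w: [
        max(0.0, math.log(counts[w][c] * total / (w_marg[w] * c_marg[c])))
        if counts[w][c] else 0.0
        for c in vocab
    ]
    for w in vocab
}

print(vocab)
print("sat ->", ppmi_space["sat"])
```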
References
2013
- https://code.google.com/p/word2vec/
- This tool provides an efficient implementation of the continuous bag-of-words and skip-gram architectures for computing vector representations of words. These representations can be subsequently used in many natural language processing applications and for further research.
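A minimal sketch of training such continuous word vectors, assuming the gensim library as a stand-in for the original C word2vec tool; the sentences and hyperparameter values below are illustrative, not the tool's defaults.

```python
from gensim.models import Word2Vec

# Toy training corpus (assumption); real use takes a tokenized text corpus.
sentences = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]

# sg=0 selects the continuous bag-of-words architecture; sg=1 selects skip-gram.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

vec = model.wv["cat"]  # a learned continuous word vector
print(model.wv.most_similar("cat", topn=3))
```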