Count-based Distributional Word Vector
A Count-based Distributional Word Vector is a Word Embedding that is used in a count-based Distributional Semantic Model (i.e., one derived from word–context co-occurrence counts rather than learned by a predictive model).
- AKA: Count Vector.
- Context:
- …
- Example(s):
- Counter-Example(s):
- See: Context-Predicting Model, Count-based Predicting Model, WordSim353 Benchmark Task, Semantic Representation.
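As a minimal sketch of how such a vector is built, the following Python snippet counts word–context co-occurrences over a toy corpus with a symmetric context window; each word's count vector is its row of context counts. The corpus, window size of 2, and the helper `count_vector` are all illustrative assumptions, not part of any specific system described here.

```python
from collections import Counter, defaultdict

# Toy corpus and window size (both illustrative assumptions).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]
window = 2

# Count how often each target word co-occurs with each context word
# within the window -- the core of a count-based distributional model.
cooc = defaultdict(Counter)
for sentence in corpus:
    for i, target in enumerate(sentence):
        lo = max(0, i - window)
        hi = min(len(sentence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[target][sentence[j]] += 1

# A word's count vector is its row of context counts over the vocabulary.
vocab = sorted({w for s in corpus for w in s})

def count_vector(word):
    return [cooc[word][c] for c in vocab]
```

In practice these raw counts are typically reweighted (e.g., with PPMI) and dimensionality-reduced before use, but the raw co-occurrence row above is the count vector in its simplest form.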
References
2014
- (Baroni et al., 2014) ⇒ Marco Baroni, Georgiana Dinu, and Germán Kruszewski. (2014). “Don't Count, Predict! A Systematic Comparison of Context-counting Vs. Context-predicting Semantic Vectors.” In: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014).
- QUOTE: Whatever the reasons, we know of just three works reporting direct comparisons, all limited in their scope. Huang et al. (2012) compare, in passing, one count model and several predict DSMs on the standard WordSim353 benchmark (Table 3 of their paper). In this experiment, the count model actually outperforms the best predictive approach. Instead, in a word-similarity-in-context task (Table 5), the best predict model outperforms the count model, albeit not by a large margin.
Blacoe and Lapata (2012) compare count and predict representations as input to composition functions. Count vectors make for better inputs in a phrase similarity task, whereas the two representations are comparable in a paraphrase classification experiment [1].
- ↑ We refer here to the updated results reported in the erratum at http://homepages.inf.ed.ac.uk/s1066731/pdf/emnlp2012erratum.pdf