Distributional Semantic Modeling System
A Distributional Semantic Modeling System is a data-driven semantic modeling system (equivalently, a data-driven word vectorizing system) that can solve a distributional word vectorizing function training task by implementing a distributional semantic modeling algorithm.
- AKA: Distributional Word Vectorizing Function Training System.
- Context:
- It can range from being a Dense Distributional Semantic Modeling System to being a Sparse Distributional Semantic Modeling System.
- It can range from being a Continuous Distributional Semantic Modeling System to being a Discrete Distributional Semantic Modeling System.
- …
- Example(s):
- a word2vec-based system, which learns low-dimensional dense word vectors (see the 2014 reference below).
- a PPMI-weighted bag-of-words system, which produces sparse, high-dimensional word vectors.
- Counter-Example(s):
- See: Supervised Word Classification System, Word Similarity Function Training System, Word-Space Model.
References
2014
- http://www.marekrei.com/blog/linguistic-regularities-word-representations/
- As the first step, we need to create feature vectors for each word in our vocabulary. The two main ways of doing this, which are also considered by this paper, are:
- BOW: The bag-of-words approach. We count all the words that a certain word appears with, treat the counts as features in a vector, and weight them using something like positive pointwise mutual information (PPMI). (A minimal construction is sketched after this list.)
- word2vec: Vectors created using the word2vec toolkit. Low-dimensional dense vector representations are learned for each word using a neural network. (A small training sketch also follows the list.)
- If you want to learn more details about these models, take a look at an earlier post comparing the two.
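To make the bag-of-words construction concrete, below is a minimal Python sketch that builds sparse PPMI-weighted co-occurrence vectors from a toy corpus. The corpus, window size, and function names are illustrative assumptions, not part of the blog post or any particular toolkit.

```python
import math
from collections import Counter, defaultdict

# Toy corpus; in practice this would be a large tokenized text collection.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog played".split(),
]

window = 2  # symmetric context window size (an illustrative choice)
cooc = defaultdict(Counter)

# Count how often each word co-occurs with each context word.
for sentence in corpus:
    for i, word in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[word][sentence[j]] += 1

# Marginal totals needed for the PMI computation.
total = sum(sum(ctx.values()) for ctx in cooc.values())
word_totals = {w: sum(ctx.values()) for w, ctx in cooc.items()}

def ppmi_vector(word):
    """Return a sparse {context: PPMI} feature vector for one target word."""
    vec = {}
    for ctx, count in cooc[word].items():
        p_wc = count / total
        p_w = word_totals[word] / total
        p_c = word_totals[ctx] / total
        pmi = math.log2(p_wc / (p_w * p_c))
        if pmi > 0:  # "positive" PMI: negative values are clipped to zero
            vec[ctx] = pmi
    return vec

print(ppmi_vector("cat"))
```

Each word's vector has one dimension per vocabulary item, and most entries are zero, which is why this family of models is sparse and high-dimensional.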
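For the second approach, here is a minimal training sketch. It assumes the gensim library's Word2Vec implementation rather than the original C word2vec toolkit, and the hyperparameter values are illustrative only.

```python
from gensim.models import Word2Vec

# Same toy corpus as above; real applications train on millions of sentences.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog played".split(),
]

# Train a small skip-gram model (sg=1); vector_size, window, min_count, and
# epochs are illustrative values, not recommendations.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

print(model.wv["cat"])                    # the learned 50-dimensional dense vector
print(model.wv.similarity("cat", "dog"))  # cosine similarity between two words
```

In contrast to the PPMI vectors above, every dimension here is dense and learned by the neural network rather than counted directly.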
2010
- Magnus Sahlgren. https://www.sics.se/~mange/research.html
- QUOTE: My research is focused on how semantic knowledge is acquired and represented in man and machine. In particular, I study the distributional approach to semantic knowledge acquisition, in which semantic information is extracted from cooccurrence statistics. The underlying idea is that meanings are correlated with the distributional patterns of linguistic entities.
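To illustrate the idea that meanings correlate with distributional patterns, the short sketch below compares hypothetical PPMI-weighted context vectors with cosine similarity; the words, contexts, and weights are invented for illustration only.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse {context: weight} vectors."""
    dot = sum(w * v.get(f, 0.0) for f, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical PPMI-weighted context vectors: "cat" and "dog" share most of
# their contexts, while "car" shares none, so the first similarity is higher.
cat = {"sat": 1.2, "fur": 2.1, "pet": 1.8}
dog = {"sat": 1.0, "fur": 1.9, "pet": 1.7, "bark": 2.3}
car = {"drive": 2.4, "road": 2.0, "fuel": 1.9}

print(cosine(cat, dog))  # high: similar distributional patterns
print(cosine(cat, car))  # 0.0 here: no shared contexts
```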