Semantic Word Similarity Task
A Semantic Word Similarity Task is a semantic similarity task that is restricted to words.
- AKA: Semantic Word Similarity Machine Learning Task, Semantic Word Similarity Analysis Task, Semantic Word Similarity Modelling Task.
- Context:
- Task Input: Word Pair (the two words to be compared).
- Task Output: Word Semantic Similarity Score.
- Task Requirement(s):
- It can be solved by a Semantic Word Similarity System (that implements a Semantic Word Similarity Algorithm).
- It can range from being a Monolingual Semantic Word Similarity Task, to being a Multilingual Semantic Word Similarity Analysis Task, to being a Cross-lingual Semantic Word Similarity Task.
- …
- Example(s):
- how similar are "Queen" and "King".
- a Semantic Word Similarity Benchmark Task.
- SemEval-2012 Task 2, covering 79 distinct relation types: https://sites.google.com/site/semeval2012task2/download
- …
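Below is a minimal sketch of how such a word pair can be scored. The vectors are made up for illustration only; a real Semantic Word Similarity System would use trained embeddings (e.g., GloVe or word2vec) and may use a different similarity function.

```python
import numpy as np

# Made-up, low-dimensional vectors for illustration only;
# a real system would load trained embeddings (e.g., GloVe, word2vec).
embeddings = {
    "king":  np.array([0.80, 0.45, 0.10]),
    "queen": np.array([0.78, 0.50, 0.12]),
    "mug":   np.array([0.05, 0.90, 0.60]),
}

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Word semantic similarity score as the cosine of the angle between vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(embeddings["queen"], embeddings["king"]))  # high score
print(cosine_similarity(embeddings["queen"], embeddings["mug"]))   # lower score
```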
- Counter-Example(s):
- See: Word Embedding Task, Lexical Similarity Function, Semantic Textual Similarity Task, Natural Language Processing, Language Model.
References
2021
- (Chandrasekaran & Mago, 2021) ⇒ Dhivya Chandrasekaran, and Vijay Mago. (2021). “Evolution of Semantic Similarity - A Survey.” In: ACM Computing Surveys, 54(2).
- QUOTE: Semantic similarity methods usually give a ranking or percentage of similarity between texts, rather than a binary decision as similar or not similar. Semantic similarity is often used synonymously with semantic relatedness. However, semantic relatedness not only accounts for the semantic similarity between texts but also considers a broader perspective analyzing the shared semantic properties of two words. For example, the words ‘coffee’ and ‘mug’ may be related to one another closely, but they are not considered semantically similar whereas the words ‘coffee’ and ‘tea’ are semantically similar. Thus, semantic similarity may be considered, as one of the aspects of semantic relatedness. The semantic relationship including similarity is measured in terms of semantic distance, which is inversely proportional to the relationship (...)
2017
- (SemEval, 2017) ⇒ SemEval-2017 Task 2: https://alt.qcri.org/semeval2017/task2/
- QUOTE: Semantic similarity is a core field of Natural Language Processing (NLP) which deals with measuring the extent to which two linguistic items are similar. In particular, the word semantic similarity framework is widely accepted as the most direct in-vitro evaluation of semantic vector space models (e.g., word embeddings) and in general semantic representation techniques. As a result, word similarity datasets play a major role in the advancement of research in lexical semantics. Given the importance of moving beyond the barriers of English language by developing language-independent techniques, the SemEval-2017 Task 2 provides a reliable framework for evaluating both monolingual and multilingual semantic representations, and similarity techniques.
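A minimal sketch of such an in-vitro evaluation is shown below, assuming a toy gold-standard dataset and toy vectors (a real evaluation would load pre-trained embeddings and a benchmark such as the SemEval-2017 Task 2 data); word-similarity benchmarks typically report the Spearman rank correlation between model scores and human judgements.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy gold-standard word-similarity dataset: (word1, word2, human score).
gold = [
    ("coffee", "tea",   0.90),
    ("king",   "queen", 0.85),
    ("coffee", "mug",   0.40),
]

# Toy vectors for illustration; a real evaluation would use trained embeddings.
embeddings = {
    "coffee": np.array([0.70, 0.20, 0.10]),
    "tea":    np.array([0.65, 0.25, 0.10]),
    "king":   np.array([0.10, 0.80, 0.40]),
    "queen":  np.array([0.12, 0.78, 0.45]),
    "mug":    np.array([0.20, 0.10, 0.90]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

model_scores = [cosine(embeddings[a], embeddings[b]) for a, b, _ in gold]
human_scores = [score for _, _, score in gold]

# Spearman rank correlation between model scores and human judgements.
rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman correlation: {rho:.3f}")
```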
2014
- (Pennington et al., 2014) ⇒ Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (2014). “GloVe: Global Vectors for Word Representation.” In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).