QiLu University of Technology (QLUT) Semantic Word Similarity System
A QiLu University of Technology (QLUT) Semantic Word Similarity System is a Monolingual Semantic Word Similarity System that is based on a combination of a word embedding method and a knowledge-based method.
- Context:
- It was initially developed by Meng et al. (2017).
- It was a SemEval-2017 Task 2 participating system and the best-performing system among those trained on the shared Wikipedia corpus in the SemEval-2017 Task 2 (subtask 1) English monolingual SWS benchmark task.
- Example(s):
- Counter-Example(s):
- See: Semantic Similarity Task, Semantic Word Similarity Task, SemEval Task, Multilingual and Cross-lingual Semantic Word Similarity System, Word Embedding System.
References
2017a
- (Meng et al., 2017) ⇒ Fanqing Meng, Wenpeng Lu, Yuteng Zhang, Ping Jian, Shumin Shi, and Heyan Huang. (2017). “QLUT at SemEval-2017 Task 2: Word Similarity Based on Word Embedding and Knowledge Base.” In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval@ACL 2017).
- QUOTE: In the subtask 1 (English monolingual word similarity) of this task, we have submitted two system runs, both of which are unsupervised. We mainly utilize the word embeddings method and the combined method.
The Figure 1 shows the framework of our system runs. In the top part of the figure, word1 and word2 are the input of our systems. Run1 utilizes the word embeddings method. Run2 utilizes the combined method, which is based on the word embeddings and knowledge-based method.
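The quoted description does not spell out how the two methods are combined. The following is a minimal, hypothetical sketch, assuming cosine similarity over word embeddings for Run1 and a simple equal-weight average of the embedding score and a knowledge-based score for Run2; the toy vectors and the kb_similarity callable are placeholders, not the system's actual resources.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def run1_similarity(word1, word2, embeddings):
    """Run1-style score: cosine similarity of the two word embeddings."""
    return cosine(embeddings[word1], embeddings[word2])

def run2_similarity(word1, word2, embeddings, kb_similarity):
    """Run2-style score: combine the embedding score with a knowledge-based
    score. An equal-weight average is assumed here; the paper's actual
    combination rule may differ."""
    emb_score = run1_similarity(word1, word2, embeddings)
    kb_score = kb_similarity(word1, word2)  # e.g., a WordNet-based measure
    return 0.5 * emb_score + 0.5 * kb_score

# Toy usage with made-up vectors and a placeholder knowledge-based scorer.
embeddings = {
    "car":  np.array([0.90, 0.10, 0.00]),
    "auto": np.array([0.85, 0.15, 0.05]),
}
print(run1_similarity("car", "auto", embeddings))
print(run2_similarity("car", "auto", embeddings, lambda a, b: 0.8))
```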
2017b
- (Camacho-Collados et al., 2017) ⇒ Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. (2017). “SemEval-2017 Task 2: Multilingual and Cross-lingual Semantic Word Similarity.” In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval@ACL 2017).
- QUOTE: For English, the only system that came close to Luminoso was QLUT, which was the best-performing system that made use of the shared Wikipedia corpus for training. The best configuration of this system exploits the Skip-Gram model of Word2Vec with an additive compositional function for computing the similarity of multiwords. However, Mahtab and QLUT only performed their experiments in a single language (Farsi and English, respectively).
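The additive compositional function for multiwords mentioned above can be illustrated with a brief hypothetical sketch: each phrase vector is assumed to be the sum of its constituent word vectors, and phrase similarity is the cosine between the composed vectors. The vectors below are toy placeholders rather than actual Skip-Gram embeddings.

```python
import numpy as np

def compose_additive(phrase, embeddings):
    """Represent a multiword expression as the sum of its word vectors
    (an additive compositional function)."""
    return np.sum([embeddings[w] for w in phrase.split()], axis=0)

def multiword_similarity(phrase1, phrase2, embeddings):
    """Cosine similarity between additively composed phrase vectors."""
    u = compose_additive(phrase1, embeddings)
    v = compose_additive(phrase2, embeddings)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy usage with made-up vectors standing in for Skip-Gram embeddings.
embeddings = {
    "black": np.array([0.20, 0.70, 0.10]),
    "hole":  np.array([0.60, 0.20, 0.30]),
    "dark":  np.array([0.25, 0.65, 0.15]),
    "star":  np.array([0.55, 0.25, 0.35]),
}
print(multiword_similarity("black hole", "dark star", embeddings))
```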