Deep Semantic Similarity Neural Network (DSSNN)
A Deep Semantic Similarity Neural Network (DSSNN) is a Deep Neural Network that acts as a semantic similarity network, outputting a similarity score for paired input data.
- AKA: Deep Semantic Similarity Network, Deep Similarity Network.
- Context:
- It was developed by Minaee & Liu (2017) and was inspired by Siamese Networks.
- Example(s):
- Counter-Example(s):
- See: Artificial Neural Network, Deep Learning Neural Network, Natural Language Processing Task, Customer Care Chat System.
References
2021
- (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Semantic_similarity_network Retrieved:2021-7-30.
- A semantic similarity network (SSN) is a special form of semantic network [1] designed to represent concepts and their semantic similarity. Its main contribution is reducing the complexity of calculating semantic distances. Bendeck (2004, 2008) introduced the concept of semantic similarity networks (SSN) as the specialization of a semantic network to measure semantic similarity from ontological representations. [2] Implementations include genetic information handling. The concept is formally defined (Bendeck 2008) as a directed graph, with concepts represented as nodes and semantic similarity relations as edges. [3] The relationships are grouped into relation types. The concepts and relations contain attribute values to evaluate the semantic similarity [4] between concepts. The semantic similarity relationships of the SSN represent several of the general relationship types of the standard Semantic network, reducing the complexity of the (normally, very large) network for calculations of semantics. SSNs define relation types as templates (and a taxonomy of relations) for semantic similarity attributes that are common to relations of the same type. The SSN representation allows propagation algorithms to calculate semantic similarities faster, including stop conditions within a specified threshold. This reduces the computation time and power required for calculation.
- ↑ R. H. Richens: "General program for mechanical translation between any two languages via an algebraic interlingua". Cambridge Language Research Unit. Mechanical Translation, November 1956; p. 37
- ↑ Fawsy Bendeck, "Three Fold Ontology + Model + Instance (OMI) - Semantic Unification Process", In International Conference on Advances in Internet, Processing, System and Interdisciplinary Research (IPSI-2004), Stockholm, Sep 2004.
- ↑ Bendeck, F. (2008). WSM-P Workflow Semantic Matching Platform, PhD dissertation, University of Trier, Germany. Verlag Dr. Hut. ASIN 3899638549
- ↑ P. Resnik. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. Proc. the 14th International Joint Conference on Artificial Intelligence, 448–453, 1995.
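The SSN described above can be read as a directed graph whose edges carry semantic-similarity weights and whose traversal stops once the accumulated similarity falls below a threshold. The following is a minimal, hypothetical sketch of that idea in Python; the concept names, the edge weights, and the propagate_similarity helper are illustrative assumptions that only mirror the definition attributed to Bendeck (2008), not an actual implementation.

```python
# Hypothetical sketch of a semantic similarity network (SSN):
# a directed graph whose nodes are concepts and whose edges carry a
# semantic-similarity weight, with threshold-bounded propagation.

# Adjacency map: concept -> {neighbour concept: similarity in [0, 1]}
ssn = {
    "gene": {"dna_sequence": 0.8, "protein": 0.6},
    "dna_sequence": {"genome": 0.7},
    "protein": {"enzyme": 0.9},
    "genome": {},
    "enzyme": {},
}

def propagate_similarity(graph, source, threshold=0.55):
    """Return concepts reachable from `source` whose accumulated
    (multiplied) similarity stays at or above `threshold`.

    The threshold acts as the stop condition described for SSNs,
    pruning paths once the similarity decays below it."""
    scores = {source: 1.0}
    frontier = [(source, 1.0)]
    while frontier:
        node, score = frontier.pop()
        for neighbour, sim in graph.get(node, {}).items():
            new_score = score * sim
            if new_score < threshold:
                continue  # stop condition: similarity decayed too far
            if new_score > scores.get(neighbour, 0.0):
                scores[neighbour] = new_score
                frontier.append((neighbour, new_score))
    return scores

if __name__ == "__main__":
    # With threshold=0.55, "enzyme" (0.6 * 0.9 = 0.54) is pruned while
    # "dna_sequence" (0.8), "protein" (0.6) and "genome" (0.56) are kept.
    print(propagate_similarity(ssn, "gene"))
```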
2017
- (Minaee & Liu, 2017) ⇒ Shervin Minaee, and Zhu Liu (2017). "Automatic question-answering using a deep similarity neural network". arXiv preprint arXiv:1708.01713.
- QUOTE: After extracting features we need to train a model which takes a pair of question and answer, and outputs a score that shows the properness of that answer for the given question. There are different ways to achieve this goal. In a very simple way one could concatenate the doc2vec features of question and answer and train a classifier on top of that which predicts the probability of matching. In this work, inspired by Siamese network by Lecun and colleagues [22, 23], we propose a deep similarity network that takes the features for a pair of question and answer and feed them into two parallel neural networks, and combines them after a few layers of transformation to make decision. The block diagram of this model is shown in Figure 4.
Fig. 4. The block-diagram of the proposed similarity network.
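As a rough illustration of the architecture quoted above, the sketch below feeds question features and answer features through two parallel feed-forward branches and combines their outputs to predict a matching score. It is a minimal PyTorch sketch under assumed settings; the 100-dimensional input (standing in for doc2vec features), the layer sizes, and the SimilarityNetwork class name are illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SimilarityNetwork(nn.Module):
    """Toy deep similarity network: two parallel branches transform the
    question and answer features, then a small head maps the combined
    representation to a matching score in (0, 1)."""

    def __init__(self, feature_dim=100, hidden_dim=64):
        super().__init__()
        # Parallel branches (same structure, separate weights).
        self.question_branch = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.answer_branch = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Combination head: concatenate the two transformed vectors
        # and predict the probability that the answer matches the question.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid(),
        )

    def forward(self, question_feats, answer_feats):
        q = self.question_branch(question_feats)
        a = self.answer_branch(answer_feats)
        return self.head(torch.cat([q, a], dim=-1))

if __name__ == "__main__":
    model = SimilarityNetwork()
    # Stand-ins for doc2vec features of (question, answer) pairs.
    question = torch.randn(4, 100)  # batch of 4 question vectors
    answer = torch.randn(4, 100)    # batch of 4 answer vectors
    score = model(question, answer)
    print(score.shape)  # torch.Size([4, 1]) -- one matching score per pair
```

Training such a model would then amount to minimizing a binary cross-entropy loss over labelled matching and non-matching question-answer pairs, which is one straightforward reading of the scoring setup described in the quote.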