Semantic Similarity Neural Network (SSNN)
A Semantic Similarity Neural Network (SSNN) is an Artificial Neural Network that can be represented as an edge-weighted graph in which the nodes are concepts and each edge has an associated weight representing the semantic similarity of the paired nodes.
- AKA: Semantic Similarity Network (SSN).
- Context:
- It can range from being a Shallow Semantic Similarity Neural Network to being a Deep Semantic Similarity Neural Network.
- Example(s):
- Counter-Example(s):
- See: Semantic Similarity Measure, Taxonomy, Directed Graph, Semantic Word Similarity, Semantic Word Similarity Benchmark Task.
References
2021
- (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Semantic_similarity_network Retrieved:2021-7-30.
- A semantic similarity network (SSN) is a special form of semantic network[1] designed to represent concepts and their semantic similarity. Its main contribution is reducing the complexity of calculating semantic distances. Bendeck (2004, 2008) introduced the concept of semantic similarity networks (SSN) as the specialization of a semantic network to measure semantic similarity from ontological representations.[2] Implementations include genetic information handling. The concept is formally defined (Bendeck 2008) as a directed graph, with concepts represented as nodes and semantic similarity relations as edges.[3] The relationships are grouped into relation types. The concepts and relations contain attribute values used to evaluate the semantic similarity[4] between concepts. The semantic similarity relationships of the SSN represent several of the general relationship types of the standard semantic network, reducing the complexity of the (normally, very large) network for semantic calculations. SSNs define relation types as templates (and a taxonomy of relations) for semantic similarity attributes that are common to relations of the same type. The SSN representation allows propagation algorithms to calculate semantic similarities faster, including stop conditions within a specified threshold, which reduces the computation time and power required for calculation.
- ↑ R. H. Richens: "General program for mechanical translation between any two languages via an algebraic interlingua". Cambridge Language Research Unit. Mechanical Translation, November 1956; p. 37
- ↑ Fawsy Bendeck, "Three Fold Ontology + Model + Instance (OMI) - Semantic Unification Process", In: International Conference on Advances in Internet, Processing, System and Interdisciplinary Research (IPSI-2004), Stockholm, Sep 2004.
- ↑ Bendeck, F. (2008). WSM-P Workflow Semantic Matching Platform, PhD dissertation, University of Trier, Germany. Verlag Dr. Hut. ASIN 3899638549
- ↑ P. Resnik. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. Proc. the 14th International Joint Conference on Artificial Intelligence, 448–453, 1995.
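The thresholded propagation described in the Wikipedia passage above can be sketched in code. This is a hypothetical, minimal illustration (the class, edge weights, and stop-condition rule are illustrative assumptions, not from any cited source):

```python
# Minimal sketch of an SSN as a directed graph with similarity-weighted edges.
# The stop condition prunes paths whose accumulated similarity falls below a
# threshold, which is what reduces computation time in large networks.
class SSN:
    def __init__(self):
        self.edges = {}  # source concept -> {target concept: similarity weight}

    def add_edge(self, a, b, weight):
        self.edges.setdefault(a, {})[b] = weight

    def propagate(self, start, threshold=0.5):
        """Collect concepts reachable from `start` along paths whose
        accumulated (multiplied) similarity stays at or above `threshold`."""
        found = {start: 1.0}
        frontier = [(start, 1.0)]
        while frontier:
            node, score = frontier.pop()
            for nbr, w in self.edges.get(node, {}).items():
                s = score * w
                if s >= threshold and s > found.get(nbr, 0.0):
                    found[nbr] = s
                    frontier.append((nbr, s))
        return found

ssn = SSN()
ssn.add_edge("dog", "canine", 0.9)
ssn.add_edge("canine", "wolf", 0.8)
ssn.add_edge("dog", "car", 0.1)
print(ssn.propagate("dog", threshold=0.5))
# "wolf" is reached via "canine" (0.9 * 0.8); "car" is pruned at 0.1
```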
2017
- (Minaee & Liu, 2017) ⇒ Shervin Minaee, and Zhu Liu (2017). "Automatic question-answering using a deep similarity neural network". arXiv preprint arXiv:1708.01713.
- QUOTE: After extracting features we need to train a model which takes a pair of question and answer, and outputs a score that shows the properness of that answer for the given question. There are different ways to achieve this goal. In a very simple way one could concatenate the doc2vec features of question and answer and train a classifier on top of that which predicts the probability of matching. In this work, inspired by Siamese network by LeCun and colleagues [22, 23], we propose a deep similarity network that takes the features for a pair of question and answer and feed them into two parallel neural networks, and combines them after a few layers of transformation to make decision. The block diagram of this model is shown in Figure 4.
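The deep similarity idea in the quote above (two parallel branches combined into a match score) can be sketched with a toy forward pass. The layer sizes and random weights are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# Toy sketch of a Siamese-style similarity network: question and answer
# feature vectors pass through two parallel (here, weight-sharing) branches
# and are then combined into a single match score.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # shared branch transformation (illustrative)
W2 = rng.normal(size=(6, 1))   # combination layer over concatenated branches

def branch(x):
    return np.tanh(x @ W1)     # one branch of the parallel network

def match_score(q, a):
    h = np.concatenate([branch(q), branch(a)])   # combine after transformation
    return 1.0 / (1.0 + np.exp(-(h @ W2)[0]))    # sigmoid -> match probability

q = np.array([0.2, 0.5, 0.1, 0.7])  # stand-in for question features (e.g. doc2vec)
a = np.array([0.3, 0.4, 0.2, 0.6])  # stand-in for candidate-answer features
print(match_score(q, a))
```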
2016
- (Francis-Landau et al., 2016) ⇒ Matthew Francis-Landau, Greg Durrett, and Dan Klein. (2016). “Capturing Semantic Similarity for Entity Linking with Convolutional Neural Networks.” In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016).
- QUOTE: Our model focuses on two core ideas: first, that topic semantics at different granularities in a document are helpful in determining the genres of entities for entity linking, and second, that CNNs can distill a block of text into a meaningful topic vector.
(...) As shown in the middle of Figure 1, each feature in $f_C$ is a cosine similarity between a topic vector associated with the source document and a topic vector associated with the target entity. These vectors are computed by distinct CNNs operating over different subsets of relevant text.
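The cosine-similarity features $f_C$ mentioned in the quote above can be illustrated directly. The topic vectors here are made-up stand-ins for CNN outputs:

```python
import numpy as np

# Hypothetical illustration of one feature in f_C: a cosine similarity
# between a topic vector for the source document and one for the target
# entity. In the paper these vectors come from distinct CNNs.
def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

doc_topic = np.array([0.9, 0.1, 0.3])     # stand-in for a document topic vector
entity_topic = np.array([0.8, 0.2, 0.4])  # stand-in for an entity topic vector
print(cosine(doc_topic, entity_topic))
```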
2013
- (Guzzi et al., 2013) ⇒ Pietro Hiram Guzzi, Pierangelo Veltri, and Mario Cannataro (2013). "Thresholding of Semantic Similarity Networks Using a Spectral Graph-Based Technique". In: Proceedings of the Second International Workshop on New Frontiers in Mining Complex Patterns (NFMCP 2013/ECML-PKDD 2013).
- QUOTE: SSNs are edge-weighted graphs where the nodes are concepts (e.g. proteins) and each edge has an associated weight that represents the semantic similarity among related pairs of nodes (...)
As introduced, in a Semantic Similarity Network, nodes represent proteins or genes, and edges represent the value of similarity among them. Starting from a dataset of genes or proteins, an SSN may be built in an iterative way, and once built, algorithms from graph theory may be used to extract topological properties that encode biological knowledge.
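Thresholding an SSN, as in the title of the paper above, can be sketched on a similarity matrix. Note this uses a simple fixed cutoff for illustration, not the spectral graph-based choice the paper proposes; the matrix values are made up:

```python
import numpy as np

# Sketch of SSN thresholding: store the network as a symmetric similarity
# matrix and keep only edges at or above a cutoff (self-loops removed).
sim = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.6],
                [0.2, 0.6, 1.0]])

def threshold_network(sim, cutoff):
    """Return a weighted adjacency matrix keeping edges with similarity >= cutoff."""
    adj = np.where(sim >= cutoff, sim, 0.0)
    np.fill_diagonal(adj, 0.0)   # an SSN has no self-similarity edges
    return adj

print(threshold_network(sim, 0.5))
```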
2011
- (Jiang et al., 2011) ⇒ Rui Jiang, Mingxin Gan, and Peng He (2011). "Constructing a Gene Semantic Similarity Network for the Inference of Disease Genes". In: BMC Systems Biology, 5(S-2). DOI:10.1186/1752-0509-5-S2-S2.
- QUOTE: The procedure of constructing a gene semantic similarity network is illustrated in Figure 1. First, we calculate pairwise semantic similarity scores for GO terms in the biological process domain, obtaining a matrix that contains semantic similarity scores between GO terms. Next, we calculate pairwise semantic similarity scores for human genes using similarity scores of GO terms and annotations of genes, obtaining a matrix that contains semantic similarity scores between genes. Then, we filter out low similarity values in this matrix by keeping only the first $k$ nearest neighbors for each gene and assigning zeros to all other elements. Finally, we obtain a gene semantic similarity network by treating non-zero elements in the resulting matrix as weights of edges between corresponding genes.
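The k-nearest-neighbour filtering step described in the quote above can be sketched as follows. The similarity matrix is made up for illustration; the paper applies this to a matrix of gene semantic similarity scores:

```python
import numpy as np

# Sketch of the kNN filtering step: for each gene (row), keep the k largest
# similarity values and assign zeros to all other elements, yielding the
# weighted edges of the gene semantic similarity network.
def knn_filter(sim, k):
    filtered = np.zeros_like(sim)
    for i, row in enumerate(sim):
        row = row.copy()
        row[i] = -np.inf                 # ignore self-similarity
        keep = np.argsort(row)[-k:]      # indices of the k nearest neighbours
        filtered[i, keep] = sim[i, keep]
    return filtered

sim = np.array([[1.0, 0.9, 0.1, 0.4],
                [0.9, 1.0, 0.3, 0.2],
                [0.1, 0.3, 1.0, 0.7],
                [0.4, 0.2, 0.7, 1.0]])
print(knn_filter(sim, 2))
```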