Reflective Random Indexing Algorithm
A Reflective Random Indexing (RRI) Algorithm is a dimensionality compression algorithm based on Random Indexing (RI) that trains a semantic model iteratively over several cycles, allowing it to infer indirect connections between terms that never co-occur in the same document.
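The iterative training can be sketched as follows. This is a minimal illustration, not the SemanticVectors implementation: documents receive sparse ternary random index vectors (as in standard RI), term vectors are built by summing the vectors of the documents they occur in, and each reflective cycle re-derives document vectors from the learned term vectors before repeating the projection. Function names, the vector dimensionality, and the sparsity level are illustrative choices.

```python
import numpy as np

def random_index_vectors(n, dim=100, nnz=10, seed=0):
    """Sparse ternary random index vectors: nnz entries set to +1 or -1."""
    rng = np.random.default_rng(seed)
    vecs = np.zeros((n, dim))
    for i in range(n):
        idx = rng.choice(dim, size=nnz, replace=False)
        vecs[i, idx] = rng.choice([-1.0, 1.0], size=nnz)
    return vecs

def normalize(rows):
    """Scale each row to unit length (zero rows are left unchanged)."""
    norms = np.linalg.norm(rows, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return rows / norms

def reflective_random_indexing(term_doc, dim=100, cycles=2, seed=0):
    """term_doc: terms x documents count matrix.

    cycles=1 corresponds to plain Random Indexing; additional cycles
    are the 'reflective' training passes that enable indirect inference.
    """
    doc_vecs = random_index_vectors(term_doc.shape[1], dim=dim, seed=seed)
    term_vecs = None
    for _ in range(cycles):
        # Project terms into the current document space.
        term_vecs = normalize(term_doc @ doc_vecs)
        # Reflect: re-derive document vectors from the learned term vectors.
        doc_vecs = normalize(term_doc.T @ term_vecs)
    return term_vecs, doc_vecs
```

After two or more cycles, terms that share no document but share a "bridge" term acquire similar vectors, which is the indirect-inference behavior the method is designed for; with a single cycle (plain RI), such terms remain near-orthogonal.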
References
2014
- https://code.google.com/p/semanticvectors/
- QUOTE: ... The models are created by applying concept mapping algorithms to term-document matrices created using Apache Lucene. The concept mapping algorithms supported by the package include Random Projection, Latent Semantic Analysis (LSA) and Reflective Random Indexing. ...
- https://code.google.com/p/semanticvectors/wiki/ReflectiveRandomIndexing
- Reflective Random Indexing (RRI) is the name given to the process of training a semantic model in several phases, starting with RandomProjection and then using several TrainingCycles.
See Reflective Random Indexing and indirect inference: a scalable method for discovery of implicit connections. RRI has been used to infer accurate connections between concepts that do not occur together explicitly in text - like LSA, but at a much smaller computational cost.
2010
- (Cohen et al., 2010) ⇒ Trevor Cohen, Roger Schvaneveldt, and Dominic Widdows. (2010). “Reflective Random Indexing and indirect inference: A scalable method for discovery of implicit connections.” In: Journal of Biomedical Informatics, 43(2).
- QUOTE: The discovery of implicit connections between terms that do not occur together in any scientific document underlies the model of literature-based knowledge discovery first proposed by Swanson. Corpus-derived statistical models of semantic distance such as Latent Semantic Analysis (LSA) have been evaluated previously as methods for the discovery of such implicit connections. However, LSA in particular is dependent on a computationally demanding method of dimension reduction as a means to obtain meaningful indirect inference, limiting its ability to scale to large text corpora. In this paper, we evaluate the ability of Random Indexing (RI), a scalable distributional model of word associations, to draw meaningful implicit relationships between terms in general and biomedical language. Proponents of this method have achieved comparable performance to LSA on several cognitive tasks while using a simpler and less computationally demanding method of dimension reduction than LSA employs. In this paper, we demonstrate that the original implementation of RI is ineffective at inferring meaningful indirect connections, and evaluate Reflective Random Indexing (RRI), an iterative variant of the method that is better able to perform indirect inference. RRI is shown to lead to more clearly related indirect connections and to outperform existing RI implementations in the prediction of future direct co-occurrence in the MEDLINE corpus.