pLSI Model Training Algorithm
A pLSI Model Training Algorithm is a latent semantic analysis algorithm that can be applied by a pLSI Model Training System (to solve a pLSI Model Training Task that generates a probabilistic latent semantic indexing model).
- AKA: pLSA, Probabilistic Latent Semantic Indexing, Probabilistic Latent Semantic Analysis Algorithm.
- Context:
- It was first introduced by Hofmann (1999).
- It can (typically) perform maximum likelihood estimation of model parameters.
- It can:
- select document [math]\displaystyle{ d }[/math] with a prior probability P(d),
- select a latent class [math]\displaystyle{ z }[/math] from document [math]\displaystyle{ d }[/math] with conditional probability P(z|d),
- select a word [math]\displaystyle{ w }[/math] from the latent class distribution with conditional probability P(w|z).
- It can (typically) make no assumption about how the mixture weights, P(z|d), are generated (its model is not generative at the document level) (Blei, Ng & Jordan, 2003).
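The parameter estimation mentioned above is conventionally done with the EM algorithm. The sketch below is a minimal illustration of EM for the asymmetric model with parameters P(z|d) and P(w|z); the toy count matrix, dimensions, and NumPy parameterization are assumptions for illustration, not part of Hofmann's original presentation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy term-document count matrix n(d, w): 4 documents, 6 words.
n_dw = rng.integers(1, 5, size=(4, 6)).astype(float)
n_docs, n_words = n_dw.shape
n_topics = 2

# Random initial parameters, normalized so each row is a valid distribution.
P_z_given_d = rng.dirichlet(np.ones(n_topics), n_docs)   # shape (d, z): P(z|d)
P_w_given_z = rng.dirichlet(np.ones(n_words), n_topics)  # shape (z, w): P(w|z)

for _ in range(50):
    # E-step: posterior over latent classes, P(z|d,w) ∝ P(z|d) P(w|z).
    joint = P_z_given_d[:, :, None] * P_w_given_z[None, :, :]  # (d, z, w)
    post = joint / joint.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from expected counts n(d,w) P(z|d,w).
    exp_counts = n_dw[:, None, :] * post                       # (d, z, w)
    P_w_given_z = exp_counts.sum(axis=0)
    P_w_given_z /= P_w_given_z.sum(axis=1, keepdims=True)
    P_z_given_d = exp_counts.sum(axis=2)
    P_z_given_d /= P_z_given_d.sum(axis=1, keepdims=True)
```

Each EM iteration is guaranteed not to decrease the log-likelihood of the observed counts; the prior P(d) is typically taken proportional to document length and needs no iterative update.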
- Example(s):
- that described in Hofmann (1999).
- …
- Counter-Example(s):
- an LDA Training Algorithm (Blei, Ng & Jordan, 2003).
- See: Probabilistic Topic Model, Word-level Analysis.
References
2011
- (Wikipedia, 2011) ⇒ http://en.wikipedia.org/wiki/Probabilistic_latent_semantic_analysis
- Probabilistic latent semantic analysis (PLSA), also known as probabilistic latent semantic indexing (PLSI, especially in information retrieval circles) is a statistical technique for the analysis of two-mode and co-occurrence data. PLSA evolved from latent semantic analysis, adding a sounder probabilistic model. PLSA has applications in information retrieval and filtering, natural language processing, machine learning from text, and related areas. It was introduced in 1999 by Jan Puzicha and Thomas Hofmann, and it is related to non-negative matrix factorization.

Compared to standard latent semantic analysis which stems from linear algebra and downsizes the occurrence tables (usually via a singular value decomposition), probabilistic latent semantic analysis is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics.

Considering observations in the form of co-occurrences [math]\displaystyle{ (w,d) }[/math] of words and documents, PLSA models the probability of each co-occurrence as a mixture of conditionally independent multinomial distributions: [math]\displaystyle{ P(w,d) = \sum_c P(c) P(d|c) P(w|c) = P(d) \sum_c P(c|d) P(w|c) }[/math] The first formulation is the symmetric formulation, where [math]\displaystyle{ w }[/math] and [math]\displaystyle{ d }[/math] are both generated from the latent class [math]\displaystyle{ c }[/math] in similar ways (using the conditional probabilities [math]\displaystyle{ P(d|c) }[/math] and [math]\displaystyle{ P(w|c) }[/math]), whereas the second formulation is the asymmetric formulation, where, for each document [math]\displaystyle{ d }[/math], a latent class is chosen conditionally to the document according to [math]\displaystyle{ P(c|d) }[/math], and a word is then generated from that class according to [math]\displaystyle{ P(w|c) }[/math].
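The symmetric and asymmetric formulations in the quoted passage are related by Bayes' rule: P(c|d) = P(c)P(d|c)/P(d) with P(d) = Σ_c P(c)P(d|c). A small numeric sketch (the random toy parameters are illustrative assumptions) confirms that both give the same joint P(w,d):

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_docs, n_words = 3, 4, 5

# Symmetric parameterization: random valid distributions for illustration.
P_c = rng.dirichlet(np.ones(n_classes))                   # P(c)
P_d_given_c = rng.dirichlet(np.ones(n_docs), n_classes)   # rows: P(d|c)
P_w_given_c = rng.dirichlet(np.ones(n_words), n_classes)  # rows: P(w|c)

# Symmetric formulation: P(w,d) = sum_c P(c) P(d|c) P(w|c).
P_wd_sym = np.einsum('c,cd,cw->dw', P_c, P_d_given_c, P_w_given_c)

# Derive the asymmetric parameters via Bayes' rule.
P_d = P_c @ P_d_given_c                             # P(d) = sum_c P(c) P(d|c)
P_c_given_d = (P_c[:, None] * P_d_given_c) / P_d    # P(c|d), shape (c, d)

# Asymmetric formulation: P(w,d) = P(d) sum_c P(c|d) P(w|c).
P_wd_asym = P_d[:, None] * np.einsum('cd,cw->dw', P_c_given_d, P_w_given_c)
```

Since the joint is a proper distribution, its entries also sum to one over all (d, w) pairs.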
2007
- (Steyvers & Griffiths, 2007) ⇒ Mark Steyvers, and Tom Griffiths. (2007). “Probabilistic Topic Models.” In: (Landauer et al., 2007).
2006
- (Ding et al., 2006) ⇒ Chris Ding, Tao Li, and Wei Peng. (2006). “Nonnegative Matrix Factorization and Probabilistic Latent Semantic Indexing: Equivalence Chi-Square Statistic, and a Hybrid Method.” In: Proceedings of AAAI 2006 (AAAI 2006).
2003
- (Blei, Ng & Jordan, 2003) ⇒ David M. Blei, Andrew Y. Ng, and Michael I. Jordan. (2003). “Latent Dirichlet Allocation.” In: The Journal of Machine Learning Research, 3.
1999
- (Hofmann, 1999) ⇒ Thomas Hofmann. (1999). “Probabilistic Latent Semantic Analysis.” In: Proceedings of UAI Conference (UAI 1999).