2009 BPRBayesianPersonalizedRankingf
- (Rendle et al., 2009) ⇒ Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. (2009). “BPR: Bayesian Personalized Ranking from Implicit Feedback.” In: Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. ISBN:978-0-9749039-5-8
Subject Headings: Bayesian Personalized Ranking Algorithm, Weighted Regularized Matrix Factorization.
Notes
Cited By
- http://scholar.google.com/scholar?q=%222009%22+BPR%3A+Bayesian+Personalized+Ranking+from+Implicit+Feedback
- http://dl.acm.org/citation.cfm?id=1795114.1795167&preflayout=flat#citedby
2014
- (Rendle & Freudenthaler, 2014) ⇒ Steffen Rendle, and Christoph Freudenthaler. (2014). “Improving Pairwise Learning for Item Recommendation from Implicit Feedback.” In: Proceedings of the 7th ACM International Conference on Web search and data mining. ISBN:978-1-4503-2351-2 doi:10.1145/2556195.2556248
2011
- (Ning & Karypis, 2011) ⇒ Xia Ning, and George Karypis. (2011). “SLIM: Sparse Linear Methods for Top-N Recommender Systems.” In: Proceedings of the 2011 IEEE 11th International Conference on Data Mining. ISBN:978-0-7695-4408-3 doi:10.1109/ICDM.2011.134
Quotes
Abstract
Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.
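The learning method described in the abstract — stochastic gradient descent over sampled (user, positive item, negative item) triples — can be sketched for the matrix factorization case. This is a minimal illustrative sketch, not the authors' implementation; the function name, learning rate, and regularization constant are assumptions.

```python
import numpy as np

def learn_bpr_step(W, H, u, i, j, lr=0.05, reg=0.01):
    """One SGD step on a bootstrap-sampled triple (u, i, j): user u is
    assumed to prefer observed item i over non-observed item j.
    W: |U| x k user factors; H: |I| x k item factors (updated in place)."""
    w_u, h_i, h_j = W[u].copy(), H[i].copy(), H[j].copy()
    x_uij = w_u @ (h_i - h_j)              # pairwise score difference
    sig = 1.0 / (1.0 + np.exp(x_uij))      # derivative factor of ln sigma(x_uij)
    W[u] += lr * (sig * (h_i - h_j) - reg * w_u)   # ascent on ln sigma(x_uij)
    H[i] += lr * (sig * w_u - reg * h_i)
    H[j] += lr * (-sig * w_u - reg * h_j)
```

Each step increases the preference score difference [math]\displaystyle{ \hat{x}_{uij} }[/math] for the sampled triple while shrinking the factors toward zero via the regularization terms.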
…
5 Relations to other methods
We discuss the relations of our proposed methods for ranking to two further item prediction models.
5.1 Weighted Regularized Matrix Factorization (WR-MF)
Both Pan et al., 2008 and Hu et al., 2008 have presented a matrix factorization method for item prediction from implicit feedback. Thus the model class is the same as we described in Section 4.3.1, i.e. [math]\displaystyle{ \hat{X} := WH^t }[/math] with the matrices [math]\displaystyle{ W : |U| \times k }[/math] and [math]\displaystyle{ H : |I| \times k }[/math]. The optimization criterion and learning method differ substantially from our approach. Their method is an adaptation of SVD that minimizes the square-loss. Their extensions are regularization to prevent overfitting and weights in the error function to increase the impact of positive feedback. In total their optimization criterion is: [math]\displaystyle{ \Sigma_{u \in U} \Sigma_{i \in I} \ c_{ui}( \langle w_u, h_i \rangle - 1)^2 + \lambda \|W\|^2_f + \lambda \|H\|^2_f }[/math] where [math]\displaystyle{ c_{ui} }[/math] are not model parameters but a-priori given weights for each tuple [math]\displaystyle{ (u,i) }[/math]. Hu et al. have additional data to estimate [math]\displaystyle{ c_{ui} }[/math] for positive feedback and they set [math]\displaystyle{ c_{ui} = 1 }[/math] for the rest. Pan et al. suggest to set [math]\displaystyle{ c_{ui} = 1 }[/math] for positive feedback and choose lower constants for the rest.
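For concreteness, the WR-MF criterion above can be written down directly. The following is an illustrative sketch with dense matrices and target value 1 as in the formula; the function name and regularization constant are assumptions, and the original methods use more efficient alternating least-squares solvers rather than evaluating the full loss.

```python
import numpy as np

def wrmf_loss(W, H, C, lam=0.01):
    """Weighted regularized square loss of WR-MF:
    sum_{u,i} c_ui (<w_u, h_i> - 1)^2 + lam ||W||_F^2 + lam ||H||_F^2.
    C: |U| x |I| matrix of a-priori weights c_ui (not model parameters)."""
    pred = W @ H.T                         # X_hat = W H^t
    return float(np.sum(C * (pred - 1.0) ** 2)
                 + lam * (np.sum(W ** 2) + np.sum(H ** 2)))
```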
First of all, it is obvious that this optimization is on the instance level (one item) instead of the pair level (two items) as in BPR. Apart from this, their optimization is least-squares, which is known to correspond to the MLE for normally distributed random variables. However, the task of item prediction is actually not a regression (quantitative) task but a classification (qualitative) one, so the logistic optimization is more appropriate.
A strong point of WR-MF is that it can be learned in [math]\displaystyle{ O(\text{iter} \cdot (|S| \ k^2 + k^3 \ (|I| + |U|))) }[/math] provided that [math]\displaystyle{ c_{ui} }[/math] is constant for non-positive pairs. Our evaluation indicates that LearnBPR usually converges after a subsample of [math]\displaystyle{ m \cdot |S| }[/math] single update steps even though there are many more triples to learn from.
5.2 Maximum Margin Matrix Factorization (MMMF)
Weimer et al. [15] use the maximum margin matrix factorization method (MMMF) for ordinal ranking. Their MMMF is designed for scenarios with explicit feedback in terms of ratings. Even though their ranking MMMF is not intended for implicit feedback datasets, one could apply it in our scenario by giving all non-observed items the `rating' 0 and the observed ones a 1 (see Figure 1). With these modifications their optimization criterion to be minimized would be quite similar to BPR applied for matrix factorization: [math]\displaystyle{ \Sigma_{(u,i,j) \in D_S} \max(0, 1 - \langle w_u, h_i - h_j \rangle) + \lambda_w \|W\|^2_f + \lambda_h \|H\|^2_f }[/math]
One difference is that the error functions differ – our hinge loss is smooth and motivated by the MLE. Additionally, our BPR-Opt criterion is generic and can be applied to several models, whereas their method is specific for MF.
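The difference between the two error functions can be made explicit as functions of the score difference [math]\displaystyle{ x = \langle w_u, h_i - h_j \rangle }[/math]: MMMF minimizes a hinge loss, which has a kink at [math]\displaystyle{ x = 1 }[/math], while BPR minimizes the everywhere-differentiable negative log-sigmoid. An illustrative sketch (function names are assumptions):

```python
import numpy as np

def mmmf_hinge(x):
    """MMMF-style hinge loss on the score difference x: zero beyond the
    margin (x >= 1), linear below it, non-differentiable at x = 1."""
    return np.maximum(0.0, 1.0 - x)

def bpr_smooth(x):
    """BPR's loss -ln sigma(x): a smooth surrogate motivated by the MLE,
    strictly decreasing and differentiable everywhere."""
    return np.log1p(np.exp(-x))
```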
Besides this, their learning method for MMMF differs from our generic approach LearnBPR. Their learning method is designed to work with sparse explicit data, i.e. they assume that there are many missing values and thus they have far fewer pairs than in an implicit setting. But when their learning method is applied to implicit feedback datasets, the data has to be densified as described above and the number of training pairs [math]\displaystyle{ D_S }[/math] is in [math]\displaystyle{ O(|S| \cdot |I|) }[/math]. Our method LearnBPR can handle this situation by bootstrapping from [math]\displaystyle{ D_S }[/math] (see Section 4.2).
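Bootstrapping from [math]\displaystyle{ D_S }[/math] amounts to drawing triples with replacement rather than enumerating all [math]\displaystyle{ O(|S| \cdot |I|) }[/math] pairs. A hypothetical sketch of one such draw (the data-structure choices are assumptions):

```python
import random

def sample_triple(user_items, items, rng=random):
    """Draw one training triple (u, i, j) from D_S with replacement:
    u uniformly over users with feedback, i from u's observed items,
    j uniformly from the items u has no feedback for.
    Assumes every user has at least one non-observed item."""
    u = rng.choice(sorted(user_items))          # user with some feedback
    i = rng.choice(sorted(user_items[u]))       # positive (observed) item
    while True:                                 # rejection-sample a negative
        j = rng.choice(items)
        if j not in user_items[u]:
            return u, i, j
```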
…
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
---|---|---|---|---|---|---|---|---|---|
2009 BPRBayesianPersonalizedRankingf | Steffen Rendle; Christoph Freudenthaler; Zeno Gantner; Lars Schmidt-Thieme | | BPR: Bayesian Personalized Ranking from Implicit Feedback | | | | | | 2009 |