2012 OnlineLearningtoDiversifyfromImplicitFeedback


Subject Headings:

Notes

Cited By

Quotes

Author Keywords

Abstract

In order to minimize redundancy and optimize coverage of multiple user interests, search engines and recommender systems aim to diversify their set of results. To date, these diversification mechanisms have largely been hand-coded or have relied on expensive training data provided by experts. To overcome this problem, we propose an online learning model and algorithms for learning diversified recommendations and retrieval functions from implicit feedback. In our model, the learning algorithm presents a ranking to the user at each step and uses as feedback the set of documents from the presented ranking that the user reads. Even for imperfect and noisy feedback, we show that the algorithms admit theoretical guarantees for maximizing any submodular utility measure under approximately rational user behavior. In addition to the theoretical results, we find that the algorithm learns quickly, accurately, and robustly in empirical evaluations on two datasets.
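The abstract describes an interaction protocol: at each step the learner presents a ranking built to maximize a submodular utility under its current model, observes which of the presented documents the user reads, and updates the model from that implicit feedback. The Python sketch below illustrates one way such a loop could look, using a position-discounted topic-coverage feature map, greedy ranking construction, and a perceptron-style update toward a feedback ranking in which the read documents are promoted. The feature map, the discount constant GAMMA, the simulated user, and all function names are illustrative assumptions, not the paper's exact algorithm or notation.

import numpy as np

GAMMA = 0.7  # position discount: earlier ranks matter more (assumed value)

def marginal_coverage_features(docs, ranking):
    # phi(ranking): position-discounted marginal topic coverage. At rank r the
    # contribution is GAMMA**r times the topics the document adds beyond what is
    # already covered above it, so w . phi is submodular in the shown set.
    covered = np.zeros(docs.shape[1])
    phi = np.zeros(docs.shape[1])
    for r, i in enumerate(ranking):
        gain = np.minimum(covered + docs[i], 1.0) - covered
        phi += (GAMMA ** r) * gain
        covered += gain
    return phi

def greedy_ranking(docs, weights, k):
    # Greedily build a length-k ranking, the standard approximation for
    # maximizing a monotone submodular utility.
    ranking = []
    covered = np.zeros(docs.shape[1])
    candidates = set(range(len(docs)))
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in candidates:
            gain = weights @ (np.minimum(covered + docs[i], 1.0) - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        ranking.append(best)
        covered = np.minimum(covered + docs[best], 1.0)
        candidates.remove(best)
    return ranking

def feedback_ranking(presented, read):
    # Implicit-feedback ranking: documents the user read move to the top,
    # the rest keep their presented order.
    read_set = set(read)
    promoted = [i for i in presented if i in read_set]
    rest = [i for i in presented if i not in read_set]
    return promoted + rest

# Simulated interaction loop (all data here is synthetic).
rng = np.random.default_rng(0)
n_docs, n_topics, k = 100, 12, 5
docs = (rng.random((n_docs, n_topics)) < 0.25).astype(float)  # topic indicators
true_w = rng.random(n_topics)   # hidden user interest profile
w = np.ones(n_topics)           # learner's current estimate

for t in range(200):
    presented = greedy_ranking(docs, w, k)
    # Noisy stand-in for implicit feedback: the user tends to read presented
    # documents that cover topics she values.
    scores = docs @ true_w
    read = [i for i in presented if scores[i] > np.median(scores)]
    if not read or len(read) == len(presented):
        continue
    ybar = feedback_ranking(presented, read)
    # Perceptron-style update toward the feedback ranking.
    w += marginal_coverage_features(docs, ybar) - marginal_coverage_features(docs, presented)
    w = np.maximum(w, 0.0)      # keep weights nonnegative so the utility stays monotone

Under this kind of update, topics covered by the documents the user actually reads gain weight, so subsequent greedy rankings tend to cover the user's interests earlier in the list.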

References

Karthik Raman, Pannaga Shivaswamy, and Thorsten Joachims. (2012). "Online Learning to Diversify from Implicit Feedback." In: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2012). doi:10.1145/2339530.2339642