2009 ConditionalModelsForNonSmo

From GM-RKB

Subject Headings:

Notes

Cited By

Quotes

Abstract

Learning to rank is an important area at the interface of machine learning, information retrieval and Web search. The central challenge in optimizing various measures of ranking loss is that the objectives tend to be non-convex and discontinuous. To make such functions amenable to gradient-based optimization procedures one needs to design clever bounds. In recent years, boosting, neural networks, support vector machines, and many other techniques have been applied. However, there is little work on directly modeling a conditional probability Pr(y | x_q), where y is a permutation of the documents to be ranked and x_q represents their feature vectors with respect to a query q. A major reason is that the space of y is huge: n! if n documents must be ranked. We first propose an intuitive and appealing expected loss minimization objective, and give an efficient shortcut to evaluate it despite the huge space of permutations y. Unfortunately, the optimization is non-convex, so we propose a convex approximation. We give a new, efficient Monte Carlo sampling method to compute the objective and gradient of this approximation, which can then be used in a quasi-Newton optimizer like LBFGS. Extensive experiments with the widely used LETOR dataset show large ranking accuracy improvements beyond recent and competitive algorithms.
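The sketch below is only an illustration of the general recipe described in the abstract, not the paper's own algorithm: it assumes a Plackett-Luce-style conditional model Pr(y | x_q) with document scores s = Xw, estimates the expected NDCG loss E_y[loss(y)] and a score-function (REINFORCE-style) gradient by Monte Carlo sampling, and passes both to an L-BFGS optimizer. The function and variable names (`sample_perm`, `ndcg_loss`, `mc_objective_and_grad`) are illustrative assumptions; the paper's shortcut evaluation and convex approximation are not reproduced here.

<pre>
# Hedged sketch: Monte Carlo estimate of an expected ranking loss and its
# gradient under an assumed Plackett-Luce conditional model, fed to L-BFGS.
import numpy as np
from scipy.optimize import minimize

def sample_perm(scores, rng):
    """Draw a permutation from a Plackett-Luce model via the Gumbel-max trick."""
    return np.argsort(-(scores + rng.gumbel(size=scores.shape)))

def ndcg_loss(perm, rel):
    """1 - NDCG of ranking `perm` against graded relevance labels `rel`."""
    disc = np.log2(np.arange(2, len(perm) + 2))
    dcg = ((2.0 ** rel[perm] - 1.0) / disc).sum()
    idcg = ((2.0 ** np.sort(rel)[::-1] - 1.0) / disc).sum()
    return 1.0 - dcg / max(idcg, 1e-12)

def grad_log_pl(perm, X, scores):
    """Gradient of log Pr(perm | scores) w.r.t. the weight vector w (scores = X w)."""
    Xp, sp = X[perm], scores[perm]
    g = np.zeros(X.shape[1])
    for i in range(len(perm)):
        p = np.exp(sp[i:] - sp[i:].max())
        p /= p.sum()
        g += Xp[i] - p @ Xp[i:]
    return g

def mc_objective_and_grad(w, X, rel, n_samples=100):
    """Monte Carlo estimate of E_y[loss(y)] and its score-function gradient.
    A fixed seed (common random numbers) keeps the estimate deterministic in w,
    which a quasi-Newton optimizer such as L-BFGS expects."""
    rng = np.random.default_rng(0)
    scores = X @ w
    obj, grad = 0.0, np.zeros_like(w)
    for _ in range(n_samples):
        perm = sample_perm(scores, rng)
        loss = ndcg_loss(perm, rel)
        obj += loss
        grad += loss * grad_log_pl(perm, X, scores)
    return obj / n_samples, grad / n_samples

# Toy query: 8 documents, 5 features, graded relevance labels in {0, 1, 2}.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 5))
rel = rng.integers(0, 3, size=8).astype(float)
res = minimize(mc_objective_and_grad, x0=np.zeros(5), args=(X, rel),
               jac=True, method="L-BFGS-B", options={"maxiter": 25})
print("learned weight vector:", np.round(res.x, 3))
</pre>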



Soumen Chakrabarti, Avinava Dubey, Jinesh Machchhar, and Chiranjib Bhattacharyya. (2009). "Conditional Models for Non-smooth Ranking Loss Functions." In: ICDM 2009 Proceedings. doi:10.1109/ICDM.2009.49. http://www.cse.iitb.ac.in/~soumen/doc/icdm2009/LogRank.pdf