2016 ImagesDontLieTransferringDeepVi
- (Lynch et al., 2016) ⇒ Corey Lynch, Kamelia Aryafar, and Josh Attenberg. (2016). “Images Don't Lie: Transferring Deep Visual Semantic Features to Large-Scale Multimodal Learning to Rank.” In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ISBN: 978-1-4503-4232-2. doi:10.1145/2939672.2939728
Subject Headings: Pairwise IR Algorithm.
Notes
Cited By
- http://scholar.google.com/scholar?q=%222016%22+Images+Don%27t+Lie%3A+Transferring+Deep+Visual+Semantic+Features+to+Large-Scale+Multimodal+Learning+to+Rank
- http://dl.acm.org/citation.cfm?id=2939672.2939728&preflayout=flat#citedby
Quotes
Abstract
Search is at the heart of modern e-commerce. As a result, the task of ranking search results automatically (learning to rank) is a multibillion dollar machine learning problem. Traditional models optimize over a few hand-constructed features based on the item's text. In this paper, we introduce a multimodal learning to rank model that combines these traditional features with visual semantic features transferred from a deep convolutional neural network. In a large scale experiment using data from the online marketplace Etsy, we verify that moving to a multimodal representation significantly improves ranking quality. We show how image features can capture fine-grained style information not available in a text-only representation. In addition, we show concrete examples of how image information can successfully disentangle pairs of highly different items that are ranked similarly by a text-only model.
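The abstract describes a pairwise learning-to-rank model over a multimodal item representation: hand-constructed text features concatenated with visual features transferred from a deep CNN. A minimal sketch of such a pairwise logistic ranker follows; the random toy vectors stand in for both modalities, and all function names, dimensions, and hyperparameters are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def pairwise_loss_grad(w, x_pos, x_neg):
    """Logistic pairwise ranking loss and its gradient for one
    (more-relevant, less-relevant) item pair."""
    diff = x_pos - x_neg
    margin = w @ diff
    # loss = log(1 + exp(-margin)); d(loss)/dw = -diff / (1 + exp(margin))
    return np.log1p(np.exp(-margin)), -diff / (1.0 + np.exp(margin))

def train(pairs, dim, lr=0.1, epochs=50):
    """SGD over all preference pairs; returns a linear scoring weight vector."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for x_pos, x_neg in pairs:
            _, g = pairwise_loss_grad(w, x_pos, x_neg)
            w -= lr * g
    return w

def multimodal(text_feat, image_feat):
    """Multimodal representation: text features concatenated with
    (here, simulated) CNN image embeddings."""
    return np.concatenate([text_feat, image_feat])

# Toy data: relevant items score higher on the first text feature.
rng = np.random.default_rng(0)
pairs = []
for _ in range(200):
    t_pos, t_neg = rng.normal(size=3), rng.normal(size=3)
    t_pos[0] += 2.0  # informative text feature
    i_pos, i_neg = rng.normal(size=4), rng.normal(size=4)
    pairs.append((multimodal(t_pos, i_pos), multimodal(t_neg, i_neg)))

w = train(pairs, dim=7)
```

After training, the learned weight on the informative feature is positive, so the linear scorer ranks the more relevant item of each pair higher. In the paper's setting the image block of the vector would come from a pretrained convolutional network rather than random noise, which is what lets the ranker pick up fine-grained style signals absent from text.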
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Corey Lynch, Kamelia Aryafar, Josh Attenberg | | 2016 | Images Don't Lie: Transferring Deep Visual Semantic Features to Large-Scale Multimodal Learning to Rank | | | | 10.1145/2939672.2939728 | | 2016