2014 LearningEverythingAboutAnything
- (Divvala et al., 2014) ⇒ Santosh K. Divvala, Ali Farhadi, and Carlos Guestrin. (2014). “Learning Everything About Anything: Webly-Supervised Visual Concept Learning.” In: CVPR-2014.
Subject Headings: Visual Concept Learning
Notes
Cited By
Quotes
Abstract
Intra-class appearance variation has been regarded as one of the main nuisances in recognition. Recent works have proposed several interesting cues to reduce the visual complexity of a class, ranging from the use of simple annotations such as viewpoint or aspect-ratio to those requiring expert knowledge, e.g., visual phrases, poselets, attributes, etc. However, exploring intra-class variance still remains open. In this paper, we introduce an approach to discover an exhaustive concept-specific vocabulary of visual variance, that is biased towards what the human race has ever cared about. We present a fully automated method that learns models of actions, interactions, attributes and beyond for any concept including scenes, actions, objects, emotions, places, celebrities, professions, and even abstract concepts. Using our framework, we have already trained models for over 10000 variations within 100 concepts and automatically annotated over 2 million images. We show a list of potential applications that our model enables across vision and NLP. We invite the interested reader to use our (doubly anonymous) system at http://goo.gl/O99uZ2 to train a detector for a concept of their choice.
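The abstract describes a webly-supervised pipeline: expand a concept into variation-specific queries, gather weakly-labeled web images per variation, and train a separate detector for each variation. The following is a minimal illustrative sketch of that idea only; the function names, toy features, and trivial "detector" are assumptions for illustration and are not the authors' implementation.

```python
# Illustrative sketch of a webly-supervised concept-learning pipeline:
# (1) expand a concept into variation queries, (2) collect weakly-labeled
# "web" examples per variation, (3) train one detector per variation.
# All names and data below are hypothetical stand-ins, not the paper's code.

from collections import Counter

def expand_concept(concept, modifiers):
    """Enumerate variation queries for a concept (e.g. 'jumping horse')."""
    return [f"{m} {concept}" for m in modifiers] + [concept]

def train_variation_models(labeled_examples):
    """Train one trivial 'detector' per variation: memorize the most common
    feature token seen for that variation (a stand-in for a real detector
    trained on web images)."""
    models = {}
    for variation, features in labeled_examples.items():
        models[variation] = Counter(features).most_common(1)[0][0]
    return models

def detect(models, feature):
    """Return the variations whose detector fires on this feature token."""
    return sorted(v for v, tok in models.items() if tok == feature)

# Toy "web" data: each variation query maps to weakly-labeled feature tokens.
web_data = {
    "jumping horse": ["legs-tucked", "legs-tucked", "standing"],
    "grazing horse": ["head-down", "head-down", "legs-tucked"],
    "horse": ["standing", "standing", "head-down"],
}

queries = expand_concept("horse", ["jumping", "grazing"])
models = train_variation_models({q: web_data[q] for q in queries})
print(detect(models, "head-down"))  # → ['grazing horse']
```

The per-variation split is the point: a single "horse" model would have to absorb all poses, whereas each variation-specific detector sees a visually coherent subset.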
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Carlos Guestrin, Santosh K Divvala, Ali Farhadi | | 2014 | Learning Everything About Anything: Webly-Supervised Visual Concept Learning | | | | | | 2014