ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) Dataset
An ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) Dataset is a visual recognition dataset associated with the ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) Task.
- …
- Example(s):
- See: CIFAR10 Data, STL-10 Data.
References
2018
- http://www.image-net.org/challenges/LSVRC/2012/
- QUOTE: The validation and test data for this competition will consist of 150,000 photographs, collected from flickr and other search engines, hand labeled with the presence or absence of 1000 object categories. The 1000 object categories contain both internal nodes and leaf nodes of ImageNet, but do not overlap with each other. A random subset of 50,000 of the images with labels will be released as validation data included in the development kit along with a list of the 1000 categories [1]. The remaining images will be used for evaluation and will be released without labels at test time.
The training data, the subset of ImageNet containing the 1000 categories and 1.2 million images, will be packaged for easy downloading. The validation and test data for this competition are not contained in the ImageNet training data (we will remove any duplicates).
Browse the training images of the 1000 categories here [2].
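The split sizes described in the quote above can be summarized programmatically. The following is a minimal sketch using only the figures stated in the task description (the per-class average is an assumption of a roughly even split, not something the source guarantees):

```python
# ILSVRC2012 split sizes as stated in the 2012 task description.
NUM_CLASSES = 1000
VAL_PLUS_TEST = 150_000              # validation + test photographs
VAL_SIZE = 50_000                    # released with labels in the devkit
TEST_SIZE = VAL_PLUS_TEST - VAL_SIZE # released without labels at test time
TRAIN_SIZE = 1_200_000               # "1.2 million images" (approximate)

def images_per_class(total, num_classes=NUM_CLASSES):
    """Average images per category, assuming a roughly even split."""
    return total / num_classes

print(TEST_SIZE)                  # 100000
print(images_per_class(VAL_SIZE)) # 50.0
```

In practice, the packaged training and validation data are typically consumed through standard loaders (e.g., torchvision's `datasets.ImageNet`, which expects a manually downloaded copy), since the dataset itself is distributed under an access agreement rather than bundled with libraries.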
2017
- (Samek et al., 2017) ⇒ Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. (2017). “Evaluating the Visualization of What a Deep Neural Network Has Learned.” In: IEEE Transactions on Neural Networks and Learning Systems, 28(11).
- QUOTE: … In this paper, we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets …