He-Zhang-Ren-Sun Deep Residual Network
A He-Zhang-Ren-Sun Deep Residual Network is a Deep Residual Neural Network that contains up to 1,000 layers and that was developed by He et al. for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2015.
- AKA: ResNet-1k-Layers, Deep Residual Networks with 1K Layers.
- Context:
- It won first place in the ILSVRC 2015 classification task.
- Source code and examples are available online at: https://github.com/KaimingHe/resnet-1k-layers
- Example(s):
- Counter-Example(s):
- See: Residual Neural Network, Convolutional Neural Network, Machine Learning, Deep Learning, Machine Vision.
References
2016a
- (He et al., 2016a) ⇒ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. (2016). “Deep Residual Learning for Image Recognition.” In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016). DOI:10.1109/CVPR.2016.90.
- QUOTE: Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.
Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.
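The quoted passage maps directly onto code. Below is a minimal sketch in PyTorch (an assumption for illustration; the authors' released implementation is in Torch/Lua, and the class name BasicBlock is illustrative, not from the paper) of a residual block with an identity shortcut and the two dimension-matching options, (A) parameter-free zero-padding and (B) a 1×1 projection:

```python
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """y = F(x) + shortcut(x): identity (Eqn.(1)) or projection (Eqn.(2))."""

    def __init__(self, in_planes, planes, stride=1, option="B"):
        super().__init__()
        # Two 3x3 parameter layers form the residual function F(x).
        self.conv1 = nn.Conv2d(in_planes, planes, 3, stride=stride,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, 3, stride=1,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)

        if stride == 1 and in_planes == planes:
            # Input and output dimensions match: plain identity shortcut.
            self.shortcut = nn.Identity()
        elif option == "A":
            # Option A: identity with stride-2 spatial subsampling and zero
            # entries padded onto the new channels; no extra parameters.
            pad = planes - in_planes
            self.shortcut = lambda x: F.pad(
                x[:, :, ::stride, ::stride], (0, 0, 0, 0, 0, pad))
        else:
            # Option B: 1x1 projection convolution to match dimensions,
            # performed with stride 2 when crossing feature-map sizes.
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, planes, 1, stride=stride, bias=False),
                nn.BatchNorm2d(planes))

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))  # add the shortcut, then ReLU
```

For example, BasicBlock(64, 128, stride=2, option="A") crosses feature-map sizes with a subsampled, zero-padded identity shortcut that adds no parameters, while option "B" spends extra parameters on the projection.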
2016b
- (He et al., 2016b) ⇒ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. (2016). “Identity Mappings in Deep Residual Networks.” In: Proceedings of the 14th European Conference on Computer Vision (ECCV 2016) Part IV. DOI:10.1007/978-3-319-46493-0_38.
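He et al. (2016b) show that keeping the shortcut path a pure identity mapping, with batch normalization and ReLU moved before each convolution and no activation applied after the addition, eases signal propagation and enables training networks with over 1,000 layers. The following is a minimal sketch of this "full pre-activation" unit, in the same assumed PyTorch setting as above (equal input and output dimensions for brevity):

```python
import torch.nn as nn
import torch.nn.functional as F

class PreActBlock(nn.Module):
    """Full pre-activation residual unit: BN -> ReLU -> conv, twice."""

    def __init__(self, planes):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv1 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.conv2(F.relu(self.bn2(out)))
        # The shortcut path stays a pure identity: nothing is applied
        # after the addition, which is what permits very deep stacks.
        return out + x
```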