SqueezeNet
A SqueezeNet is a Compressed Deep Convolutional Neural Network that is built from Fire Modules, developed by researchers at DeepScale (with UC Berkeley and Stanford University) and presented in an ICLR 2017 submission (Iandola et al., 2016).
- Context:
- It usually consists of the following Neural Network Layers (per Iandola et al., 2016):
- an input layer;
- 2 convolutional layers (conv1 and conv10);
- 8 Fire Module layers (fire2 through fire9);
- 3 max-pooling layers;
- 1 global average pooling layer;
- an output layer with a softmax activation function.
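The Fire Module named above can be sketched as a short NumPy forward pass: a 1x1 squeeze convolution feeding two parallel expand convolutions (1x1 and 3x3) whose outputs are concatenated along the channel axis. The layer sizes s1x1 = 3, e1x1 = 4, e3x3 = 4 follow Figure 1 of Iandola et al. (2016); the input size and random weights are illustrative assumptions only.

```python
import numpy as np

def conv1x1(x, w):
    """Pointwise convolution: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))  # -> (C_out, H, W)

def conv3x3(x, w):
    """3x3 convolution, zero padding 1: x is (C_in, H, W), w is (C_out, C_in, 3, 3)."""
    c_in, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for i in range(3):
        for j in range(3):
            # accumulate each tap of the 3x3 kernel over the shifted input
            out += np.tensordot(w[:, :, i, j], xp[:, i:i + h, j:j + wd], axes=([1], [0]))
    return out

def relu(x):
    return np.maximum(x, 0.0)

def fire_module(x, w_squeeze, w_e1, w_e3):
    """Squeeze (1x1) -> expand (1x1 and 3x3 in parallel) -> channel concat."""
    s = relu(conv1x1(x, w_squeeze))
    e1 = relu(conv1x1(s, w_e1))
    e3 = relu(conv3x3(s, w_e3))
    return np.concatenate([e1, e3], axis=0)

# Figure 1 example sizes: s1x1 = 3, e1x1 = 4, e3x3 = 4 (input shape is an assumption).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))      # 8 input channels, 16x16 feature map
w_s = rng.standard_normal((3, 8))         # squeeze to 3 channels
w_e1 = rng.standard_normal((4, 3))        # expand: 4 filters of 1x1
w_e3 = rng.standard_normal((4, 3, 3, 3))  # expand: 4 filters of 3x3
y = fire_module(x, w_s, w_e1, w_e3)
print(y.shape)  # (8, 16, 16): e1x1 + e3x3 = 8 output channels, spatial size preserved
```

Note that the expand stage never mixes its two branches; the channel concatenation is what lets the 3x3 filters be partially replaced by cheaper 1x1 filters.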
- Example(s):
- a SqueezeNet implementation in the MXNet Framework:
- a SqueezeNet implementation in the Chainer Framework:
- a SqueezeNet implementation in the Keras Framework:
- a SqueezeNet implementation in the TensorFlow Framework:
- a SqueezeNet implementation in the PyTorch Framework:
- https://github.com/pytorch/vision/blob/master/torchvision/models/squeezenet.py
- torchvision.models.squeezenet1_0(pretrained=False, **kwargs)
- torchvision.models.squeezenet1_1(pretrained=False, **kwargs)
- a SqueezeNet implementation in the CoreML Framework:
- Counter-Example(s):
- Compressed Deep Neural Networks such as:
- a SqueezeNext Network,
- a SqueezeDet Network,
- an ENet,
- a SegNet,
- a MobileNet,
- Deep Convolutional Neural Networks such as: an AlexNet.
- See: Convolutional Neural Network, Deep Compression, Machine Learning, Deep Learning, Machine Vision.
References
2018a
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/SqueezeNet Retrieved:2018-10-7.
- SqueezeNet is the name of a deep neural network that was released in 2016. SqueezeNet was developed by researchers at DeepScale, University of California, Berkeley, and Stanford University. In designing SqueezeNet, the authors' goal was to create a smaller neural network with fewer parameters that can more easily fit into computer memory and can more easily be transmitted over a computer network.
2018b
- (DT42, 2018) ⇒ SqueezeNet Keras Implementation: https://github.com/DT42/squeezenet_demo Retrieved:2018-12-16.
- QUOTE: This repository contains only the Keras implementation of the model, for other parameters used, please see the demo script, squeezenet_demo.py in the simdat package.
The training process uses a total of 2,600 images with 1,300 images per class (so, total two classes only). There are a total 130 images used for validation. After 20 epochs, the model achieves the following:
loss: 0.6563 - acc: 0.7065 - val_loss: 0.6247 - val_acc: 0.8750
Model Visualization
https://github.com/DT42/squeezenet_demo/blob/master/model.png
2016
- (Iandola et al., 2016) ⇒ Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer (2016). "SqueezeNet: AlexNet-level Accuracy with 50x Fewer Parameters and <0.5MB Model Size". arXiv preprint [https://arxiv.org/abs/1602.07360 arXiv:1602.07360].
- QUOTE: We define the Fire module as follows. A Fire module is comprised of: a squeeze convolution layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters; we illustrate this in Figure 1.
Figure 1: Microarchitectural view: Organization of convolution filters in the Fire module. In this example, s1x1 = 3, e1x1 = 4, and e3x3 = 4. We illustrate the convolution filters but not the activations.
(...)We illustrate in Figure 2 that SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv10). We gradually increase the number of filters per fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10;
Figure 2: Macroarchitectural view of our SqueezeNet architecture. Left: SqueezeNet (Section 3.3); Middle: SqueezeNet with simple bypass (Section 6); Right: SqueezeNet with complex bypass (Section 6).(...). Now, with SqueezeNet, we achieve a 50X reduction in model size compared to AlexNet, while meeting or exceeding the top-1 and top-5 accuracy of AlexNet.
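The parameter savings behind the quoted "50X reduction" claim can be illustrated with a back-of-the-envelope count of one Fire module's weights. This is a sketch only: the s1x1 = 3, e1x1 = 4, e3x3 = 4 values come from the Figure 1 caption above, while the input channel count of 8 is an assumed example.

```python
def fire_params(c_in, s1x1, e1x1, e3x3, bias=True):
    """Weight count of one Fire module: squeeze 1x1 -> expand 1x1 + expand 3x3."""
    squeeze = s1x1 * c_in * 1 * 1 + (s1x1 if bias else 0)
    expand1 = e1x1 * s1x1 * 1 * 1 + (e1x1 if bias else 0)
    expand3 = e3x3 * s1x1 * 3 * 3 + (e3x3 if bias else 0)
    return squeeze + expand1 + expand3

def plain_conv3x3_params(c_in, c_out, bias=True):
    """Weight count of an ordinary 3x3 convolution, for comparison."""
    return c_out * c_in * 3 * 3 + (c_out if bias else 0)

# Figure 1 sizes (s1x1=3, e1x1=4, e3x3=4); 8 input channels is an assumption.
fire = fire_params(c_in=8, s1x1=3, e1x1=4, e3x3=4)
plain = plain_conv3x3_params(c_in=8, c_out=8)  # same 8 output channels
print(fire, plain)  # 155 584
```

Even at these toy sizes, squeezing to few channels before the expensive 3x3 filters cuts the weight count by more than 3x relative to a plain 3x3 layer with the same input and output widths.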