Novel Pruning Techniques in Convolutional Neural Networks
Rahul, Hritik Dahiya1, Divyansh Singh2, Ishan Chawla3

1Rahul, Hritik Dahiya*, Assistant Professor in the Department of Computer Science and Engineering in Northern India Engineering College affiliated to GGSIPU, Delhi.
2Divyansh Singh, Pursuing B.Tech, Department of Computer Engineering at Delhi Technological University, India.
3Ishan Chawla, Pursuing B.Tech, Department of Computer Engineering at Delhi Technological University, India.

Manuscript received on March 30, 2020. | Revised Manuscript received on April 05, 2020. | Manuscript published on April 30, 2020. | PP: 1541-1548 | Volume-9 Issue-4, April 2020. | Retrieval Number: D8397049420/2020©BEIESP | DOI: 10.35940/ijeat.D8397.049420
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Deep learning allows us to build powerful models for problems such as image classification, time series prediction, and natural language processing. This power comes at the cost of large storage and processing requirements, which machines with limited resources often cannot meet. In this paper, we compare different methods that tackle this problem through network pruning. A selection of pruning methodologies from the deep learning literature was implemented to demonstrate their results. Modern neural architectures combine different kinds of layers, such as convolutional, pooling, and dense layers. We compare pruning techniques for dense layers (unit/neuron pruning and weight pruning) as well as for convolutional layers (using the L1 norm, a Taylor expansion of the loss to determine the importance of convolutional filters, and Variable Importance in Projection computed with Partial Least Squares) on the image classification task. This study aims to ease the overhead of optimizing deep neural network models for academic as well as commercial use.
Keywords: Deep learning, Neural networks, Pruning deep networks, Convolutional neural networks.
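
To make two of the criteria named in the abstract concrete, the sketch below illustrates magnitude-based weight pruning for a dense layer and L1-norm ranking of convolutional filters in PyTorch. It is a minimal sketch under assumed layer shapes and sparsity levels, not the implementation evaluated in this paper; the toy layers and the helper names weight_prune and rank_filters_l1 are illustrative assumptions.

    import torch
    import torch.nn as nn

    def weight_prune(layer: nn.Linear, sparsity: float) -> None:
        # Magnitude-based weight pruning: zero out the smallest-magnitude
        # weights of a dense layer until the requested sparsity is reached.
        with torch.no_grad():
            w = layer.weight
            k = int(sparsity * w.numel())            # number of weights to drop
            if k == 0:
                return
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).float())    # keep only the larger weights

    def rank_filters_l1(layer: nn.Conv2d) -> torch.Tensor:
        # L1-norm filter ranking: one score per output filter; the filters with
        # the smallest L1 norm are the first candidates for removal.
        scores = layer.weight.detach().abs().sum(dim=(1, 2, 3))
        return torch.argsort(scores)

    # Illustrative usage on toy layers (shapes and sparsity are assumptions).
    conv = nn.Conv2d(3, 16, kernel_size=3)
    fc = nn.Linear(128, 10)
    weight_prune(fc, sparsity=0.5)                   # remove 50% of dense weights
    least_important = rank_filters_l1(conv)[:4]      # 4 lowest-ranked filters

Both routines only score and mask parameters; in practice the network would be fine-tuned after pruning, and filters flagged by the L1 ranking would be physically removed along with the matching channels of the next layer.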