Design of a 2-Bit Neural Network Quantizer for Laplacian Source
Date
2021-07-22
Authors
Perić, Zoran
Savić, Milan
Simić, Nikola
Denić, Bojan
Despotović, Vladimir
Abstract
Achieving real-time inference is one of the major issues in contemporary neural network applications, as complex algorithms are frequently deployed to mobile devices with constrained storage and computing power. Moving from a full-precision neural network model to a lower-precision representation through quantization is a popular approach to mitigating this issue. Here, we analyze in detail and design a 2-bit uniform quantization model for the Laplacian source, chosen for its implementation simplicity, which leads to shorter processing time and faster inference. The results show that the proposed model achieves high classification accuracy (more than 96% for MLP and more than 98% for CNN), competitive with the performance of other quantization solutions of near-optimal precision.
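As an illustration of the kind of quantizer described above, the following is a minimal Python sketch of a symmetric 2-bit (four-level) uniform quantizer applied to samples from a unit-variance Laplacian source. The support width x_max and the resulting step size used here are illustrative assumptions, not the optimized parameters derived in the paper.

import numpy as np

def uniform_quantize_2bit(x, x_max):
    # Symmetric 2-bit (4-level) mid-rise uniform quantizer over [-x_max, x_max].
    n_levels = 4                       # 2 bits -> 4 representation levels
    step = 2.0 * x_max / n_levels      # uniform step size
    # Clip to the granular region, then map each sample to its cell midpoint.
    x_clipped = np.clip(x, -x_max, x_max - 1e-12)
    idx = np.floor((x_clipped + x_max) / step)
    return -x_max + (idx + 0.5) * step

rng = np.random.default_rng(0)
# Unit-variance Laplacian samples (scale b = 1/sqrt(2) gives variance 1),
# standing in for, e.g., normalized neural network weights.
x = rng.laplace(loc=0.0, scale=1.0 / np.sqrt(2), size=100_000)

y = uniform_quantize_2bit(x, x_max=2.0)   # x_max is an illustrative choice
sqnr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - y)**2))
print(f"SQNR: {sqnr_db:.2f} dB")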
M category
M22
openAccess