View of item
  • ПЛАТОН
  • Природно-математички факултет
  • Главна колекција / Main Collection

Quantization of Weights of Neural Networks with Negligible Decreasing of Prediction Accuracy

File
28468-Article Text-102355-1-10-20210924 (3).pdf (71.43Kb)
Date deposited
2021-09-24
Authors
Perić, Zoran
Denić, Bojan
Savić, Milan
Dinčić, Milan
Mihajlov, Darko
Metadata
Abstract
Quantization and compression of neural network parameters using the uniform scalar quantizer is carried out in this paper. The attractiveness of the uniform scalar quantizer lies in its low complexity and relatively good performance, which make it the most popular quantization model. We present a design approach for the memoryless Laplacian source with zero mean and unit variance, which is based on an iterative rule and uses the minimal mean-squared error distortion as a performance criterion. In addition, we derive closed-form expressions for SQNR (Signal to Quantization Noise Ratio) over a wide dynamic range of the input-data variance. To demonstrate effectiveness on real data, the proposed quantizer is used to compress the weights of neural networks at bit rates from 9 to 16 bps (bits/sample) instead of the standard 32 bps full-precision bit rate. The impact of weight compression on NN (neural network) performance is analyzed, showing good agreement with the theoretical results and a negligible decrease of the prediction accuracy of the NN, even under a high variance mismatch between the variance of the NN weights and the variance used in the quantizer design, provided the bit rate is chosen according to the rule proposed in the paper. The proposed method could be applied in edge-computing frameworks, as simple uniform quantization models contribute to faster inference and data transmission.
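The core procedure the abstract describes can be illustrated with a minimal sketch: a symmetric uniform (mid-rise) scalar quantizer applied to zero-mean, unit-variance Laplacian data, with the resulting SQNR measured empirically. This is not the authors' exact iterative design; the support value (8.0) and the `uniform_quantize` helper are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def uniform_quantize(weights, bits, support):
    # Symmetric mid-rise uniform quantizer on [-support, support]
    # with 2**bits levels; values outside the support are clipped.
    levels = 2 ** bits
    step = 2.0 * support / levels
    clipped = np.clip(weights, -support, support - 1e-12)
    indices = np.floor((clipped + support) / step)
    # Reconstruct at the midpoint of each quantization cell.
    return -support + (indices + 0.5) * step

# Laplacian "weights" with zero mean and unit variance
# (Laplace variance is 2*scale**2, so scale = 1/sqrt(2)).
rng = np.random.default_rng(0)
w = rng.laplace(loc=0.0, scale=1.0 / np.sqrt(2.0), size=100_000)

for bits in (9, 12, 16):
    q = uniform_quantize(w, bits, support=8.0)
    sqnr_db = 10.0 * np.log10(np.mean(w**2) / np.mean((w - q) ** 2))
    print(f"{bits} bps: SQNR = {sqnr_db:.1f} dB")
```

Each extra bit roughly adds 6 dB of SQNR in the granular region, which is why the 9-16 bps range studied in the paper can stay close to the 32-bit baseline accuracy.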
URI
https://platon.pr.ac.rs/handle/123456789/1187
DOI
http://dx.doi.org/10.5755/j01.itc.50.3.28468
M category
M23
Access
openAccess
Collections
  • Главна колекција / Main Collection

DSpace software copyright © 2002-2016  DuraSpace
About the PLATON repository | Send feedback
Theme by 
Atmire NV
 

 
