Aussie AI
Logarithmic Approximate Multiplication
-
Book Excerpt from "Generative AI in C++"
-
by David Spuler, Ph.D.
Logarithmic Approximate Multiplication
The most common method of approximating multiplication is to add the logarithms of the two numbers, an approach that generalizes beyond simple bitshifting. It is closely related to logarithmic quantization (power-of-two quantization).
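The idea dates back to Mitchell's 1962 algorithm (cited below): write each operand as a = 2^k * (1 + f) with f in [0,1), approximate log2(a) by k + f, add the two approximate logarithms, and convert back with the matching approximate antilogarithm. Below is a minimal C++ sketch of this method for unsigned integers; the function names are our own, and it illustrates the arithmetic rather than an optimized implementation.

#include <bit>       // std::bit_width (C++20)
#include <cstdint>
#include <cstdio>

// Mitchell's approximate multiplication (Mitchell, 1962).
// Each operand is split as a = 2^ka * (1 + fa), fa in [0,1),
// so that log2(a) is approximated piecewise-linearly by ka + fa.
static inline int ilog2(uint32_t x) {
    return std::bit_width(x) - 1;   // index of the highest set bit
}

uint64_t mitchell_mul(uint32_t a, uint32_t b) {
    if (a == 0 || b == 0) return 0;
    const int ka = ilog2(a), kb = ilog2(b);
    // Fractional parts fa, fb as fixed-point values scaled by 2^32
    const uint64_t fa = ((uint64_t)(a - (1u << ka)) << 32) >> ka;
    const uint64_t fb = ((uint64_t)(b - (1u << kb)) << 32) >> kb;
    int k = ka + kb;                // add the integer parts of the logs
    uint64_t f = fa + fb;           // add the fractional parts
    if (f >> 32) {                  // fraction sum reached 1.0:
        f -= (uint64_t)1 << 32;     // carry over into the integer part
        ++k;
    }
    // Approximate antilogarithm: 2^k * (1 + f)
    const uint64_t frac = (k >= 32) ? (f << (k - 32)) : (f >> (32 - k));
    return ((uint64_t)1 << k) + frac;
}

int main() {
    // Prints approx=55808 exact=56088 (about 0.5% low)
    printf("approx=%llu exact=%u\n",
           (unsigned long long)mitchell_mul(123, 456), 123u * 456u);
    return 0;
}

Mitchell's approximation never overestimates the true product, and its worst-case relative error is roughly 11%. The research papers below all use logarithmic approximation methods of this kind.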
- P. Gysel, J. Pimentel et al., 2018, Ristretto: A framework for empirical study of resource-efficient inference in convolutional neural networks, IEEE Trans. Neural Netw. Learn. Syst., 2018, https://ieeexplore.ieee.org/abstract/document/8318896
- Min Soo Kim, Alberto A. Del Barrio, Leonardo Tavares Oliveira, Román Hermida, Nader Bagherzadeh, 2018, Efficient Mitchell’s Approximate Log Multipliers for Convolutional Neural Networks, IEEE Transactions on Computers, Volume 68, Issue 5, pp. 660-675, November 2018, https://ieeexplore.ieee.org/abstract/document/8532287
- H. Tann, S. Hashemi, R. I. Bahar, and S. Reda, 2017, Hardware-software codesign of accurate, multiplier-free deep neural networks, in Proc. 54th Annu. Design Autom. Conf. (DAC), 2017, pp. 1–6, https://arxiv.org/abs/1705.04288
- M. S. Ansari, B. F. Cockburn, and J. Han, 2020, An improved logarithmic multiplier for energy-efficient neural computing, IEEE Transactions on Computers, 2020, https://ieeexplore.ieee.org/document/9086744
- J. N. Mitchell, 1962, Computer multiplication and division using binary logarithms, IEEE Trans. Electron. Comput., vol. EC-11, no. 4, pp. 512–517, Aug. 1962, https://ieeexplore.ieee.org/document/5219391
- Z. Babic, A. Avramovic, and P. Bulic, 2011, An iterative logarithmic multiplier, Microprocess. Microsyst., vol. 35, no. 1, pp. 23–33, Feb. 2011, https://dl.acm.org/doi/10.1016/j.micpro.2010.07.001
- U. Lotric and P. Bulic, 2012, Applicability of approximate multipliers in hardware neural networks, Neurocomput., vol. 96, pp. 57–65, Nov. 2012, https://dl.acm.org/doi/10.1016/j.neucom.2011.09.039
- Z. Du, K. Palem, A. Lingamneni, O. Temam, Y. Chen, and C. Wu, 2014, Leveraging the error resilience of machine-learning applications for designing highly energy efficient accelerators, in Proc. 19th Asia South Pacific Des. Autom. Conf., 2014, pp. 201–206, https://pages.saclay.inria.fr/olivier.temam/files/eval/DLCPTW2014.pdf
- M. S. Kim, A. A. D. Barrio, R. Hermida, and N. Bagherzadeh, 2018, Low-power implementation of Mitchell’s approximate logarithmic multiplication for convolutional neural networks, in Proc. 23rd Asia South Pacific Des. Autom. Conf., 2018, pp. 617–622, https://ieeexplore.ieee.org/document/8297391 (Approximate logarithm approach using the Logarithmic Number System.)
- S. S. Sarwar, S. Venkataramani, A. Raghunathan, and K. Roy, 2016, Multiplier-less artificial neurons exploiting error resiliency for energy-efficient neural computing, in Proc. Des. Autom. Test Eur. Conf. Exhib., 2016, pp. 145–150, https://arxiv.org/abs/1602.08557
- S. Rezaei, R. Omidi, and A. Azarpeyvand, 2022, Logarithm-approximate floating-point multiplier, Microelectronics Journal, Volume 127, September 2022, 105521, https://doi.org/10.1016/j.mejo.2022.105521
- M. Skrbek, 1999, Fast neural network implementation, Neural Network World 5 (1999), 375–391, https://www.researchgate.net/publication/265303033_Fast_neural_network_implementation (Uses shift-add methods.)
- T. Mogami, 2020, Deep neural network training without multiplications, In Beyond BackPropagation workshop at 34th Conference on Neural Information Processing Systems, 2020, https://arxiv.org/abs/2012.03458 (Multiplying floating-point numbers via integer addition of their bit patterns is effectively Mitchell's approximate multiplication; see the sketch after this list.)
- G. Alsuhli, V. Sakellariou, H. Saleh, M. Al-Qutayri, 2023, Number Systems for Deep Neural Network Architectures: A Survey, arXiv preprint, 2023, https://arxiv.org/abs/2307.05035
- Y Wu, C Chen, W Xiao, X Wang, C Wen, J Han, 2023, A Survey on Approximate Multiplier Designs for Energy Efficiency: From Algorithms to Circuits, arXiv preprint, 2023, https://arxiv.org/abs/2301.12181
- Durgesh Nandan, Jitendra Kanungo, Anurag Mahajan, 2017, An efficient VLSI architecture for iterative logarithmic multiplier, 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN), February 2017, https://ieeexplore.ieee.org/document/8049986 (Uses LNS and Mitchell's approximate multiplication algorithm.)
- Uroš Lotrič, Ratko Pilipović, Patricio Bulić, 2021, A Hybrid Radix-4 and Approximate Logarithmic Multiplier for Energy Efficient Image Processing, Electronics, vol.10, no.10, pp.1175, 2021. https://doi.org/10.3390/electronics10101175
- J Cai, 2022, Log-or-Trig: Towards efficient learning in deep neural networks, Thesis, Graduate School of Engineering, Tokyo University of Agriculture and Technology, https://tuat.repo.nii.ac.jp/?action=repository_action_common_download&item_id=1994&item_no=1&attribute_id=16&file_no=3, PDF: https://tuat.repo.nii.ac.jp/index.php?action=pages_view_main&active_action=repository_action_common_download&item_id=1994&item_no=1&attribute_id=16&file_no=1&page_id=13&block_id=39 (Examines multiplication in the Logarithmic Number System (LNS) and also trigonometric methods.)
- Mark Arnold, 2023, Machine Learning using Logarithmic Arithmetic with Preconditioned Input to Mitchell's Method, 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), https://ieeexplore.ieee.org/abstract/document/10168554/
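As a concrete illustration of the Mogami (2020) entry above: the bit pattern of an IEEE 754 float already stores a piecewise-linear approximation of log2, with the exponent field as the integer part and the mantissa field as the fraction. Adding two bit patterns as integers, and subtracting the bit pattern of 1.0f so the exponent bias is only counted once, therefore performs a Mitchell-style approximate multiplication. The sketch below assumes normal, finite, non-zero inputs, and omits the small offset adjustment that Mogami's paper uses to reduce the average error.

#include <cstdint>
#include <cstdio>
#include <cstring>

// Approximate float multiplication via integer addition of IEEE 754
// bit patterns, in the style of Mogami (2020); effectively Mitchell's
// method. Sketch only: no handling of zeros, infinities, NaNs, or
// denormals, and no error-reducing offset adjustment.
float approx_fmul(float a, float b) {
    uint32_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);    // portable bit-level access
    std::memcpy(&ub, &b, sizeof ub);
    const uint32_t sign = (ua ^ ub) & 0x80000000u;  // sign of the product
    // Add the exponent+mantissa fields, then subtract 0x3F800000
    // (the bit pattern of 1.0f) so the exponent bias is counted once.
    const uint32_t ur =
        sign | ((ua & 0x7FFFFFFFu) + (ub & 0x7FFFFFFFu) - 0x3F800000u);
    float r;
    std::memcpy(&r, &ur, sizeof r);
    return r;
}

int main() {
    // Prints approx=3.5 exact=3.75
    printf("approx=%g exact=%g\n", approx_fmul(1.5f, 2.5f), 1.5f * 2.5f);
    return 0;
}

Note that a carry out of the mantissa addition spills naturally into the exponent field, which is exactly the carry step of Mitchell's antilogarithm.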
For more research papers on logarithmic approximate multiplication, see https://www.aussieai.com/research/multiplication#logarithmic-multiplication.