Aussie AI
Approximate Multiplication
-
Book Excerpt from "Generative AI in C++"
-
by David Spuler, Ph.D.
Approximate multiplication algorithms replace a full multiplication with a cheaper, less precise operation, such as additions and bitshifts on the operands' logarithms. There is an extensive research literature on approximating multiplication. See Chapter 53 for more information on approximate multiplication arithmetic. Here are some of the research papers on approximate multiplication used in neural networks:
- Min Soo Kim, Alberto Antonio Del Barrio Garcia, Hyunjin Kim, and Nader Bagherzadeh, July 2020, The effects of approximate multiplication on convolutional neural networks, IEEE Transactions on Emerging Topics in Computing, https://arxiv.org/abs/2007.10500
- M. S. Kim, A. A. Del Barrio, L. T. Oliveira, R. Hermida, and N. Bagherzadeh, 2018, Efficient Mitchell’s approximate log multipliers for convolutional neural networks, IEEE Transactions on Computers, vol. 68, no. 5, pp. 660–675, 2018, https://ieeexplore.ieee.org/document/8532287
- M. S. Ansari, V. Mrazek, B. F. Cockburn, L. Sekanina, Z. Vasicek, and J. Han, 2019, Improving the accuracy and hardware efficiency of neural networks using approximate multipliers, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 2, pp. 317–328, 2019, https://ieeexplore.ieee.org/document/8863138
- V. Mrazek, Z. Vasicek, L. Sekanina, M. A. Hanif, and M. Shafique, 2019, Alwann: Automatic layer-wise approximation of deep neural network accelerators without retraining, in 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). IEEE, 2019, pp. 1–8, https://arxiv.org/abs/1907.07229, Code: https://github.com/ehw-fit/tf-approximate
- V. Mrazek, S. S. Sarwar, L. Sekanina, Z. Vasicek, and K. Roy, 2016, Design of power-efficient approximate multipliers for approximate artificial neural networks, in 2016 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2016, pp. 1–7, https://ieeexplore.ieee.org/document/7827658
- S. S. Sarwar, S. Venkataramani, A. Raghunathan, and K. Roy, 2016, Multiplier-less artificial neurons exploiting error resiliency for energy-efficient neural computing, In 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 145–150. IEEE, 2016, https://arxiv.org/abs/1602.08557 (Uses an approximate multiplier.)
- R. Yin, Y. Li, A. Moitra, P. Panda, Sep 2023, MINT: Multiplier-less Integer Quantization for Spiking Neural Networks, https://arxiv.org/abs/2305.09850