Aussie AI

Addition Optimizations

  • Book Excerpt from "Generative AI in C++"
  • by David Spuler, Ph.D.

Addition is not the main bottleneck in neural network inference compared to multiplication, but there are various ways to optimize addition itself, or to use addition as a cheaper substitute for multiplication in neural networks.

Addition has a role in optimization techniques such as:

  • Adder networks (and other types of multiplication-free networks)
  • Add-as-integer approximate multiplication (see the C++ sketch after this list)
  • Logarithmic models (because logarithms convert multiplications to additions)
  • Binary quantization or ternary quantization (requires only additions and/or subtractions, or neither if bitwise operators are used; see the bitwise sketch after this list)
  • Approximate addition algorithms
  • Max-Plus networks (using addition and maximum operations)
  • Log-sum-exp (LSE) networks
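
As a concrete illustration of two of these techniques, here are two minimal C++ sketches. They are illustrative only: the function names (add_as_int_multiply, binary_dot32) and the edge-case handling are assumptions for this example, not code from any particular library.

The first sketch shows add-as-integer approximate multiplication: the IEEE 754 bit patterns of two floats are added as integers and one exponent bias (the bit pattern of 1.0f, 0x3F800000) is subtracted, giving a rough approximation of the true product without any multiplication.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Approximate multiplication of two positive, normal floats using one
    // integer addition on their IEEE 754 bit patterns (add-as-integer).
    // Subtracting the bit pattern of 1.0f (0x3F800000) re-centers the
    // exponent. The result is roughly a * b; signs, zeros, infinities and
    // denormals are not handled in this sketch.
    float add_as_int_multiply(float a, float b)
    {
        uint32_t ia, ib;
        std::memcpy(&ia, &a, sizeof(ia));   // reinterpret bits without undefined behavior
        std::memcpy(&ib, &b, sizeof(ib));
        uint32_t ic = ia + ib - 0x3F800000u;
        float c;
        std::memcpy(&c, &ic, sizeof(c));
        return c;
    }

    int main()
    {
        printf("approx: %f, exact: %f\n",
               add_as_int_multiply(1.5f, 2.25f), 1.5f * 2.25f);
        return 0;
    }

The second sketch shows the bitwise path for binary quantization: with weights and activations restricted to +1 and -1, packed one value per bit, a 32-element dot product reduces to an XOR and a population count, with no multiplications and no per-element additions.

    #include <cstdint>
    #include <cstdio>

    // Binarized (+1/-1) dot product using bitwise operators.
    // Encoding: bit 1 means +1, bit 0 means -1.
    // For 32 packed elements: dot = matches - mismatches
    //                             = 32 - 2 * popcount(x XOR w).
    static int popcount32(uint32_t v)        // portable popcount
    {
        int count = 0;
        while (v) { v &= v - 1; ++count; }   // clear lowest set bit
        return count;
    }

    int binary_dot32(uint32_t x_bits, uint32_t w_bits)
    {
        return 32 - 2 * popcount32(x_bits ^ w_bits);
    }

    int main()
    {
        printf("%d\n", binary_dot32(0xFFFFFFFFu, 0xFFFFFFFFu));   // all match: +32
        printf("%d\n", binary_dot32(0xFFFFFFFFu, 0x00000000u));   // all differ: -32
        return 0;
    }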

Research papers on addition optimizations such as approximate addition:

  1. V. Gupta, D. Mohapatra, S.P. Park, A. Raghunathan, 2011, IMPACT: IMPrecise adders for low-power approximate computing, International Symposium on Low Power Electronics and Design (ISLPED), pp. 409–414, 2011, https://dl.acm.org/doi/10.5555/2016802.2016898
  2. V. Gupta, D. Mohapatra, A. Raghunathan, K. Roy, 2013, Low-Power Digital Signal Processing Using Approximate Adders, IEEE Transactions on CAD of Integrated Circuits and Systems 32(1): 124-137, 2013, https://dl.acm.org/doi/10.1109/TCAD.2012.2217962
  3. M. Shafique, W. Ahmad, R. Hafiz, J. Henkel, 2015, A Low Latency Generic Accuracy Configurable Adder, IEEE/ACM Design Automation Conference (DAC), 2015, https://ieeexplore.ieee.org/abstract/document/7167270
  4. R. Ye, T. Wang, F. Yuan, R. Kumar, Q. Xu, 2013, On reconfiguration-oriented approximate adder design and its application, International Conference on Computer-Aided Design (ICCAD), pp.48-54, 2013, PDF: https://www.cse.cuhk.edu.hk/~qxu/ye-iccad13.pdf
  5. J. Miao, K. He, A. Gerstlauer, M. Orshansky, 2012, Modeling and synthesis of quality-energy optimal approximate adders, International Conference on Computer Aided Design (ICCAD), pp. 728-735, 2012, https://ieeexplore.ieee.org/document/6386754
  6. A. B. Kahng, S. Kang, 2012, Accuracy-configurable adder for approximate arithmetic designs, IEEE/ACM Design Automation Conference (DAC), pp.820-825, 2012, https://ieeexplore.ieee.org/document/6241600
  7. S. Mazahir, O. Hasan, R. Hafiz, M. Shafique, J. Henkel, 2016, An Area-Efficient Consolidated Configurable Error Correction for Approximate Hardware Accelerators, ACM/EDAC/IEEE 53rd Design Automation Conference (DAC), 2016, https://ieeexplore.ieee.org/document/7544339
  8. N. Zhu, W.-L. Goh, K.-S. Yeo, 2009, An enhanced low-power high-speed Adder for Error-Tolerant application, 12th International Symposium on Integrated Circuits (ISIC), 2009, https://ieeexplore.ieee.org/document/5403865
  9. S. Ryu, H. Kim, W. Yi, J.-J. Kim, 2019, BitBlade: Area and energy-efficient precision-scalable neural network accelerator with bitwise summation, IEEE/ACM Design Automation Conference (DAC), pp. 1-6, 2019, https://ieeexplore.ieee.org/document/8807054
  10. Ao Ren, Ji Li, Zhe Li, Caiwen Ding, Xuehai Qian, Qinru Qiu, Bo Yuan, Yanzhi Wang, 2017, SC-DCNN: Highly-scalable deep convolutional neural network using stochastic computing, ACM SIGPLAN Notices, vol. 52, no. 4, pp. 405-418, 2017. https://arxiv.org/abs/1611.05939 (Stochastic method with multiplication and addition approximations via AND gates and multiplexers.)

For more research papers on optimization of addition arithmetic, see https://www.aussieai.com/research/addition.

 
