Aussie AI
LNS Applications
Book Excerpt from "Generative AI in C++"
by David Spuler, Ph.D.

LNS Applications
Various research has explored the use of LNS arithmetic within AI/ML applications, rather than as a complete end-to-end LNS model. A brief C++ sketch of the approximate log-domain multiplication that many of these designs build on appears after the list. Some of the research papers include:
- M. Arnold, J. Cowles, T. Bailey, and J. Cupal, 1991, Implementing back propagation neural nets with logarithmic arithmetic, International AMSE Conference on Neural Nets, San Diego, 1991.
- M. G. Arnold, T. A. Bailey, J. J. Cupal, and M. D. Winkel, 1997, On the cost effectiveness of logarithmic arithmetic for backpropagation training on SIMD processors, in Proceedings of International Conference on Neural Networks (ICNN’97), vol. 2. IEEE, 1997, pp. 933–936. https://ieeexplore.ieee.org/document/616150 (One of the earliest papers considering LNS as applied to AI models.)
- Min Soo Kim; Alberto A. Del Barrio; Román Hermida; Nader Bagherzadeh, 2018, Low-power implementation of Mitchell’s approximate logarithmic multiplication for convolutional neural networks, in Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE, 2018, pp. 617–622. https://ieeexplore.ieee.org/document/8297391 (Use of Mitchell's approximate multiplier in CNNs.)
- Giuseppe C. Calafiore, Stephane Gaubert, Corrado Possieri, 2020, A Universal Approximation Result for Difference of log-sum-exp Neural Networks, https://arxiv.org/abs/1905.08503 (Use of a logarithmic activation function.)
- Giuseppe C. Calafiore, Stephane Gaubert, Corrado Possieri, Log-sum-exp neural networks and posynomial models for convex and log-log-convex data, IEEE Transactions on Neural Networks and Learning Systems, 2019, https://arxiv.org/abs/1806.07850
- U. Lotric and P. Bulic, 2011, Logarithmic multiplier in hardware implementation of neural networks, in International Conference on Adaptive and Natural Computing Algorithms. Springer, April 2011, pp. 158–168. https://dl.acm.org/doi/10.5555/1997052.1997071
- HyunJin Kim; Min Soo Kim; Alberto A. Del Barrio; Nader Bagherzadeh, 2019, A cost-efficient iterative truncated logarithmic multiplication for convolutional neural networks, 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH), June 2019, pp. 108–111, https://ieeexplore.ieee.org/abstract/document/8877474 (Uses a truncated logarithmic multiplication algorithm.)
- Gao M, Qu G, 2018, Estimate and recompute: a novel paradigm for approximate computing on data flow graphs, IEEE Trans Comput Aided Des Integr Circuits Syst 39(2):335–345. https://doi.org/10.1109/TCAD.2018.2889662, https://ieeexplore.ieee.org/document/8588387 (Uses LNS as the representation to do approximate arithmetic.)
- Arnold, M.G., 2002, Reduced power consumption for MPEG decoding with LNS, Proceedings of the IEEE International Conference on Application-Specific Systems, Architectures and Processors (ASAP 2002), IEEE Computer Society Press, Los Alamitos (2002) https://ieeexplore.ieee.org/document/1030705 (MPEG signal processing and LNS.)
- E. E. Swartzlander, D. V. S. Chandra, H. T. Nagle and S. A. Starks, 1983, Sign/logarithm architecture for FFT implementation, IEEE Trans. Comput., vol. C-32, June 1983. https://ieeexplore.ieee.org/document/1676274 (FFT applications of LNS.)
- M. S. Ansari, V. Mrazek, B. F. Cockburn, L. Sekanina, Z. Vasicek, and J. Han, 2019, Improving the accuracy and hardware efficiency of neural networks using approximate multipliers, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 2, pp. 317–328, Oct 2019, https://ieeexplore.ieee.org/document/8863138
- Basetas C., Kouretas I., Paliouras V., 2007, Low-power digital filtering based on the logarithmic number system, International Workshop on Power and Timing Modeling, Optimization and Simulation. Springer, pp 546–555. https://doi.org/10.1007/978-3-540-74442-9_53, https://link.springer.com/chapter/10.1007/978-3-540-74442-9_53 (LNS in signal processing algorithms.)
- Biyanu Zerom, Mohammed Tolba, Huruy Tesfai, Hani Saleh, Mahmoud Al-Qutayri, Thanos Stouraitis, Baker Mohammad, Ghada Alsuhli, 2022, Approximate Logarithmic Multiplier For Convolutional Neural Network Inference With Computational Reuse, 2022 29th IEEE International Conference on Electronics, Circuits and Systems (ICECS), 24-26 October 2022, https://doi.org/10.1109/ICECS202256217.2022.9970861, https://ieeexplore.ieee.org/abstract/document/9970861/
- M. S. Ansari, B. F. Cockburn, and J. Han, 2020, An improved logarithmic multiplier for energy-efficient neural computing, IEEE Transactions on Computers, vol. 70, no. 4, pp. 614–625, May 2020. https://ieeexplore.ieee.org/document/9086744
- Tso-Bing Juang; Cong-Yi Lin; Guan-Zhong Lin, 2018, Area-delay product efficient design for convolutional neural network circuits using logarithmic number systems, in International SoC Design Conference (ISOCC). IEEE, 2018, pp. 170–171, https://ieeexplore.ieee.org/abstract/document/8649961
- M Arnold, 2023, Machine Learning using Logarithmic Arithmetic with Preconditioned Input to Mitchell's Method, 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), https://ieeexplore.ieee.org/document/10168554
- J. Bernstein, J. Zhao, M. Meister, M. Liu, A. Anandkumar, and Y. Yue, 2020, Learning compositional functions via multiplicative weight updates, in Proc. Adv. Neural Inf. Process. Syst. 33: Annu. Conf. Neural Inf. Process. Syst., 2020. https://proceedings.neurips.cc/paper/2020/hash/9a32ef65c42085537062753ec435750f-Abstract.html
- Mark Arnold; Ed Chester; Corey Johnson, 2020, Training neural nets using only an approximate tableless LNS ALU, 2020 IEEE 31st International Conference on Application-specific Systems, Architectures and Processors (ASAP), DOI: 10.1109/ASAP49362.2020.00020, https://ieeexplore.ieee.org/document/9153225
- J Cai, 2022, Log-or-Trig: Towards efficient learning in deep neural networks, Thesis, Graduate School of Engineering, Tokyo University of Agriculture and Technology, https://tuat.repo.nii.ac.jp/?action=repository_action_common_download&item_id=1994&item_no=1&attribute_id=16&file_no=3, PDF: https://tuat.repo.nii.ac.jp/index.php?action=pages_view_main&active_action=repository_action_common_download&item_id=1994&item_no=1&attribute_id=16&file_no=1&page_id=13&block_id=39
- Yu-Hsiang Huang; Gen-Wei Zhang; Shao-I Chu; Bing-Hong Liu; Chih-Yuan Lien; Su-Wen Huang, 2023, Design of Logarithmic Number System for LSTM, 2023 9th International Conference on Applied System Innovation (ICASI) https://ieeexplore.ieee.org/abstract/document/10179504/
- TY Cheng, Y Masuda, J Chen, J Yu, M Hashimoto, 2020, Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training, Integration, Volume 74, September 2020, Pages 19-31, https://www.sciencedirect.com/science/article/abs/pii/S0167926019305826 (Has some theory of log-domain operations for LNS; uses bitwidth scaling and logarithmic approximate multiplication.)
- TaiYu Cheng, Jaehoon Yu, M. Hashimoto, July 2019, Minimizing power for neural network training with logarithm-approximate floating-point multiplier, 2019 29th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS), DOI:10.1109/PATMOS.2019.8862162, https://www.semanticscholar.org/paper/Minimizing-Power-for-Neural-Network-Training-with-Cheng-Yu/ab190dd47e4c16949276f98052847d1314d76543
- Mingze Gao; Gang Qu, 2017, Energy efficient runtime approximate computing on data flow graphs, 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), Nov. 2017, pp. 444–449, https://ieeexplore.ieee.org/document/8203811
- J Xu, Y Huan, LR Zheng, Z Zou, 2018, A low-power arithmetic element for multi-base logarithmic computation on deep neural networks, 2018 31st IEEE International System-on-Chip Conference (SOCC), https://ieeexplore.ieee.org/document/8618560
- MA Qureshi, A Munir, 2020, NeuroMAX: a high throughput, multi-threaded, log-based accelerator for convolutional neural networks, 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD), https://ieeexplore.ieee.org/document/9256558, PDF: https://dl.acm.org/doi/pdf/10.1145/3400302.3415638
- Min Soo Kim, 2020, Cost-Efficient Approximate Log Multipliers for Convolutional Neural Networks, Ph.D. thesis, Electrical and Computer Engineering, University of California, Irvine, https://search.proquest.com/openview/46b6f28a9f1e4013a01f128c36753d83/1?pq-origsite=gscholar&cbl=18750&diss=y, PDF: https://escholarship.org/content/qt3w4980x3/qt3w4980x3.pdf (Examines multiple approximate log multipliers and their effect on model accuracy.)
- G. Anusha, K. C. Sekhar, B. S. Sridevi, Nukella Venkatesh, 2023, The Journey of Logarithm Multiplier: Approach, Development and Future Scope, Recent Developments in Electronics and Communication Systems, https://www.researchgate.net/publication/367067187_The_Journey_of_Logarithm_Multiplier_Approach_Development_and_Future_Scope
For more research papers on applications of LNS models, see https://www.aussieai.com/research/logarithmic#applications.
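Many of the papers above build on Mitchell's approximate logarithmic multiplication, which replaces a multiplication with leading-one detection, a short shift, and an addition of the two approximate log values. As a rough illustration only, here is a minimal C++ sketch of the bit-level idea; it is not code from any of the cited designs, the function name and the Q32 fixed-point scaling are choices made for this example, and the cited hardware designs use truncated and error-corrected variants.

    #include <cstdint>
    #include <cstdio>

    // Index of the most significant set bit, i.e. floor(log2(x)), for x > 0.
    static int msb_index(uint32_t x) {
        int k = 0;
        while (x >>= 1) ++k;
        return k;
    }

    // Mitchell-style approximate multiplication of two unsigned integers.
    // Each operand x = 2^k * (1 + f) is approximated in the log domain as
    // log2(x) ~= k + f (with 0 <= f < 1); the two approximate logs are added,
    // and the antilog 2^(k+f) ~= 2^k * (1 + f) converts the sum back.
    uint64_t mitchell_mul(uint32_t a, uint32_t b) {
        if (a == 0 || b == 0) return 0;
        int ka = msb_index(a);
        int kb = msb_index(b);
        // Fraction parts as Q32 fixed-point values in [0, 1).
        uint64_t fa = ((uint64_t)a - (1ULL << ka)) << (32 - ka);
        uint64_t fb = ((uint64_t)b - (1ULL << kb)) << (32 - kb);
        uint64_t fsum = fa + fb;          // log-domain addition of fractions, range [0, 2)
        int k = ka + kb;
        uint64_t mant;                    // Q32 mantissa in [1, 2)
        if (fsum < (1ULL << 32)) {
            mant = (1ULL << 32) + fsum;   // no carry: result is 2^k * (1 + fa + fb)
        } else {
            k += 1;                       // carry: fa + fb >= 1, result is 2^(k+1) * (fa + fb)
            mant = fsum;
        }
        // Scale the Q32 mantissa by 2^k to produce the approximate product.
        return (k >= 32) ? (mant << (k - 32)) : (mant >> (32 - k));
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)mitchell_mul(3, 3));    // 8  (true product 9)
        printf("%llu\n", (unsigned long long)mitchell_mul(5, 5));    // 24 (true product 25)
        printf("%llu\n", (unsigned long long)mitchell_mul(16, 16));  // 256 (exact for powers of two)
        return 0;
    }

Mitchell's approximation always slightly underestimates the true product (worst-case relative error around 11%), which is why several of the cited papers focus on truncation and error-compensation schemes that trade accuracy against area and power.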