Aussie AI
Logarithmic LLMs
-
Last Updated 3 November, 2024
-
by David Spuler, Ph.D.
Logarithms use high-school mathematics to change multiplications into additions. This is an interesting idea, given all the shade that's been thrown at the expensive multiplication operations in neural networks, even with all the hardware acceleration going on.
Plenty of people have thought of this already, and the literature is extensive: much theory in the 1970s and 1980s, some real hardware implementations in the 1990s and 2000s, and a lot of more recent research in the 2010s and 2020s.
Why Logarithms?
The basic idea is that logarithms have this property:
log(x * y) = log(x) + log(y)
Unfortunately, the same is not true for addition:
log(x + y) != log(x) + log(y)
Types of AI Logarithmic Algorithms
Some of the ways in which the use of logarithms has been researched in relation to neural networks include:
- Logarithmic bitshift quantization using powers-of-two (see quantization)
- Dyadic number quantization (see dyadic quantization)
- Multiplication approximations using logs (see advanced math)
- Logarithmic number system-based models (see LNS section below)
- Logarithmic arbitrary-base quantization (see quantization)
Logarithmic Number System (LNS)
The LNS is a numeric representation that uses logarithms, but it isn't just standard mathematical logarithmic arithmetic. It has been considered for use with neural networks since the early 1990s. In the LNS, multiplication becomes a fast additive operation, but addition itself becomes expensive. Thus, the vector dot product and multiply-accumulate operations at the heart of machine learning are problematic, and various theoretical attempts to overcome the difficulty with the addition operator have been researched in the literature.
LNS is not the only unusual theoretical number system available. In addition to the simple floating-point and fixed-point representations, LNS should be compared to other complicated number systems considered for machine learning, including the Residue Number System (RNS), Posit number system, and Dyadic numbers (see advanced mathematics).
Logarithmic Models
A pure logarithmic model is one that performs all of its calculations in the Logarithmic Number System. Alsuhli et al. (2023) refer to this approach as an "end-to-end" LNS model, meaning that all calculations are performed in the "log-domain" (i.e. working on logarithms of values, rather than the original values). The idea is basically to change multiplication by a weight into an addition, and any division into a subtraction. Instead of each weight, the logarithm of the weight is stored and used throughout the layers. Intermediate computations, such as embeddings or probabilities, should also be stored as logarithmic values, so that both sides of a MatMul are logarithmic, allowing addition to be used instead of arithmetic multiplication. This requires adjustments to other Transformer architectural components, such as normalization and Softmax. Theoretically, it should be workable once everything is changed to the log-domain. In practice, problems arise because MatMul and vector dot product also require addition operations (after the multiplications), and LNS addition is slow: log-domain addition isn't normal addition, and cannot easily be hardware-accelerated.
Logarithmic weight arithmetic differs from normal weight multiplication. For weights greater than 1, the logarithm is positive and an addition occurs; for weights between 0 and 1, which are effectively a division, the logarithm is negative and a subtraction is used (or, equivalently, a negative value is added). If the weight is exactly 1, the logarithm is exactly 0, and adding 0 is as harmless as multiplying by 1. The logarithm itself could potentially be represented as an integer or a floating-point number.
Several problems need to be overcome to use LNS for models, including the cost of addition and the handling of zero and negative numbers. Addition and subtraction are slow and problematic in LNS-based systems, so they must be approximated or accelerated in various ways. It seems ironic to need to accelerate addition, since the whole point of using LNS is to accelerate multiplication by changing it into addition! But these are two different types of addition: the original linear-domain multiplication changes to normal fast addition, but the original addition needs to change to log-domain addition, which is hard.
Zero weights must be handled separately, since the logarithm of zero is undefined (it diverges to negative infinity). This requires a test for zero as part of the logic, or an algorithmic method to avoid zero values (e.g. an extra bit flag to represent zero-ness). Alternatively, a hardware version of LNS would need to handle zero sensibly.
Negative numbers are also problematic in the LNS, and models usually have both positive and negative weights. Since the logarithm of a negative number is undefined, the logarithm of the absolute value of the weight must be used, with an alternative method (e.g. a sign bit) to track the sign separately, so that the engine knows to negate the result of the log-domain arithmetic for a negative weight. Alternatively, weights might be scaled so they are all positive, avoiding the log-of-negatives problem.
Does it work? Logarithmic numbers haven't become widely used in AI models, possibly because vector dot product and matrix multiplication require not just multiplication, but the addition of the products, and addition is difficult in LNS (usually approximate). Because LNS arithmetic is approximate, both training and inference need to be performed in LNS. Conversion back-and-forth between LNS and floating-point weights and probabilities also adds some overhead (in both training and inference), and possibly more inaccuracy at inference. These issues might limit the model's accuracy compared to non-logarithmic floating point.
Furthermore, an LNS model stores the logarithms of weights as floating point numbers, and thus requires floating point addition rather than integer addition. The gain from changing floating point multiplication to floating point addition is nowhere near as large as changing it to integer arithmetic operations (e.g. as used in logarithmic quantization or integer-only quantization methods). Indeed, paradoxically, there are even circumstances where floating point addition is worse than floating point multiplication, because addition requires sequential non-parallelizable sub-operations, but this depends on the hardware acceleration and the exact representation of floating point numbers used.
Another concern is that some papers report that model inference is memory-bound rather than CPU-bound. In such cases, converting the arithmetic from multiplication to addition does not address the main bottleneck, and the LNS may have reduced benefit. The LNS also does not allow the use of smaller data sizes, since it stores logarithms of weights and internal computations as floating-point, whereas quantization can use integers or smaller bit widths.
Some of the problematic issues with additions involving weights and activation functions, and in relation to training with LNS weights, are described in Alsuhli et al. (2023). These concerns limit the use of LNS numbers in an end-to-end method, and suggest the use of alternatives such as approximate logarithmic multiplication or logarithm-antilogarithm multiplications (Alsuhli et al., 2023). Nevertheless, there are several attempts in the literature to use LNS for model training and inference in various ways, starting with Arnold et al. (1991), using theory dating back to the 1980s.
End-to-End Logarithmic Model Research
Papers on the "end-to-end" use of logarithmic weights are below. Some papers revert to linear-domain addition to resolve the problem of slow accumulation, whereas others use various approximations.
- D. Miyashita, E. H. Lee, and B. Murmann, “Convolutional neural networks using logarithmic data representation,” arXiv preprint arXiv:1603.01025, 2016. https://arxiv.org/abs/1603.01025 (A major paper on using log-domain weights and activations, using addition of log-domain values instead of multiplication, which also covers the difficulties with accumulation.)
- G. Alsuhli, V. Sakellariou, H. Saleh, M. Al-Qutayri, Number Systems for Deep Neural Network Architectures: A Survey, 2023, https://arxiv.org/abs/2307.05035 (Extensive survey paper with a deep dive into the theory of LNS and other systems such as Residue Number System and Posit numbers, with application to neural networks. Also covers LNS usage with activation functions and Softmax.)
- Saeedeh Jahanshahi, Amir Sabbagh Molahosseini & Azadeh Alsadat Emrani Zarandi, uLog: a software-based approximate logarithmic number system for computations on SIMD processors, 2023, Journal of Supercomputing 79, pages 1750–1783 (2023), https://link.springer.com/article/10.1007/s11227-022-04713-y (Paper licensed under CC-BY-4.0, unchanged: http://creativecommons.org/licenses/by/4.0/)
- A. Sanyal, P. A. Beerel, and K. M. Chugg, 2020, “Neural network training with approximate logarithmic computations,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 3122–3126. https://arxiv.org/abs/1910.09876 (End-to-end LNS model for both training and inference. Converts "leaky-ReLU" activation function and Softmax to log-domain.)
- J Zhao, S Dai, R Venkatesan, B Zimmer, 2022, LNS-Madam: Low-precision training in logarithmic number system using multiplicative weight update, IEEE Transactions on Computers, Vol. 71, No. 12, Dec 2022, https://ieeexplore.ieee.org/abstract/document/9900267/, PDF: https://ieeexplore.ieee.org/iel7/12/4358213/09900267.pdf (LNS in training of models. Uses different logarithm bases, including fractional powers of two, and LNS addition via table lookups.)
- E. H. Lee, D. Miyashita, E. Chai, B. Murmann, and S. S. Wong, “Lognet: Energy-efficient neural networks using logarithmic computation,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2017, pp. 5900–5904. https://ieeexplore.ieee.org/document/7953288 (Uses LNS multiplication in the log-domain, but still does accumulate/addition in the linear-domain.)
- Maxime Christ, Florent de Dinechin, Frédéric Pétrot, 2022, Low-precision logarithmic arithmetic for neural network accelerators, 33rd IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP 2022), IEEE, Jul 2022, Gothenburg, Sweden. DOI: 10.1109/ASAP54787.2022.00021, hal-03684585, https://ieeexplore.ieee.org/abstract/document/9912091/, PDF: https://inria.hal.science/hal-03684585/document (Use of LNS in model inference, with coverage of dropping the sign bit and handling of zeros.)
- J. Johnson, “Rethinking floating point for deep learning,” arXiv preprint arXiv:1811.01721, 2018, https://arxiv.org/abs/1811.01721 (Uses an end-to-end LNS version called "exact log-linear multiply-add (ELMA)" which is a "hybrid log multiply/linear add" method. Uses a Kulisch accumulator for addition.)
- David Spuler, March 2024, Chapter 52. Logarithmic Models, Generative AI in C++: Coding Transformers and LLMs, https://www.amazon.com/dp/B0CXJKCWX9
LNS Addition and Subtraction Theory
Log-domain addition and subtraction are problematic and require Gaussian logarithm functions to compute. Various papers cover different approximations or methods, such as Look-Up Tables (LUTs), Taylor series, interpolations, co-transformations, and other techniques.
There are several other areas of theory that are relevant to LNS addition. Because LNS addition is computing exponentials of log-domain values (i.e. antilogarithms), adding them, and then re-converting them to log-domain, this is a "log of a sum of exponentials" calculation, which is the same as "log-sum-exp networks". Also, the "sum of exponentials" is the same calculation required for part of Softmax calculations (the denominator), so the theory of Softmax approximation is relevant. Finally, since the use of the maximum function is one way to approximate LNS addition, the theory of "max-plus networks" based on "tropical algebra" is relevant to optimizing LNS addition.
Papers on LNS addition and subtraction:
- Wikipedia, Gaussian logarithm, https://en.wikipedia.org/wiki/Gaussian_logarithm
- Kouretas I, Basetas C, Paliouras V, 2012, Low-power logarithmic number system addition/subtraction and their impact on digital filters. IEEE Trans Comput 62(11):2196–2209. https://doi.org/10.1109/TC.2012.111, https://ieeexplore.ieee.org/document/6212439 (Coverage of LNS theory, including improvements to addition/subtraction, in relation to digital signal processing.)
- I. Orginos, V. Paliouras, and T. Stouraitis, “A novel algorithm for multi-operand Logarithmic Number System addition and subtraction using polynomial approximation”, in Proceedings of the 1995 IEEE International Symposium on Circuits and Systems (ISCAS’95), pp. III.1992–III.1995, 1995. https://ieeexplore.ieee.org/document/523812
- S. A. Alam, J. Garland, and D. Gregg, “Low-precision logarithmic number systems: Beyond base-2,” ACM Transactions on Architecture and Code Optimization (TACO), vol. 18, no. 4, pp. 1–25, 2021. https://arxiv.org/abs/2102.06681 (Covers LNS arithmetic in different bases, with coverage of LNS addition improvements.)
- A. Sanyal, P. A. Beerel, and K. M. Chugg, 2020, “Neural network training with approximate logarithmic computations,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 3122–3126. https://arxiv.org/abs/1910.09876 (End-to-end LNS paper that also covers addition approximations.)
- M. Arnold, J. Cowles T. Bailey, and J. Cupal, “Implementing back propagation neural nets with logarithmic arithmetic,” International AMSE conference on Neural Nets, San Diego, 1991. (Use of LUTs for LNS addition.)
- M. G. Arnold, T. A. Bailey, J. J. Cupal, and M. D. Winkel, 1997, “On the cost effectiveness of logarithmic arithmetic for backpropagation training on SIMD processors,” in Proceedings of International Conference on Neural Networks (ICNN’97), vol. 2. IEEE, 1997, pp. 933–936. https://ieeexplore.ieee.org/document/616150 (Uses LUTs for LNS addition.)
- J. Johnson, “Efficient, arbitrarily high precision hardware logarithmic arithmetic for linear algebra,” in Proc. IEEE 27th Symp. Comput. Arithmetic, 2020, pp. 25–32, https://arxiv.org/abs/2004.09313 (Dual-base LNS algorithm, attempting to solve the LNS addition bottleneck.)
- P. D. Vouzis, S. Collange and M. G. Arnold, "A Novel Cotransformation for LNS Subtraction", J. Signal Process. Syst., vol. 58, no. 1, pp. 29-40, Oct. 2008. https://doi.org/10.1007/s11265-008-0282-7, https://link.springer.com/article/10.1007/s11265-008-0282-7 (Improving LNS subtraction algorithms.)
- D. M. Lewis, 1990, “An architecture for addition and subtraction of long word length numbers in the logarithmic number system,” IEEE Trans. Comput., vol. 39, pp. 1325-1336, Nov. 1990. https://ieeexplore.ieee.org/document/61042
- H. Henkel, 1989, "Improved Addition for the Logarithmic Number System", IEEE Trans. Acoustics Speech and Signal Processing, no. 2, pp. 301-303, Feb. 1989. https://ieeexplore.ieee.org/document/21694 (Improved lookup tables for LNS addition.)
- E. E. Swartzlander and A. G. Alexopoulos, 1975, "The sign/logarithm number system", IEEE Transactions on Computers, vol. C-24, no. 12, pp. 1238-1242, Dec. 1975. https://ieeexplore.ieee.org/document/1672765 (Handles negative numbers in LNS with sign bits.)
- I. Kouretas, C. Basetas and V. Paliouras, 2008, "Low-Power Logarithmic Number System Addition/Subtraction and their Impact on Digital Filters", Proc. IEEE Int'l Symp. Circuits and Systems (ISCAS '08), pp. 692-695, 2008. https://ieeexplore.ieee.org/document/4541512 (Improvements for slow addition/subtraction of LNS numbers.)
- M. Arnold and S. Collange, "A Real/Complex Logarithmic Number System ALU", IEEE Trans. Computers, vol. 60, no. 2, pp. 202-213, Feb. 2011. https://ieeexplore.ieee.org/document/5492676 (Hardware FPGAs and improvements to LNS addition.)
- R.C. Ismail and J.N. Coleman, "ROM-less LNS", Proc. IEEE Symp. Computer Arithmetic, pp. 43-51, 2011. https://ieeexplore.ieee.org/document/5992107 (Improvements to LNS addition and lookup tables.)
- R. Muscedere, V. Dimitrov, G. Jullien and W. Miller, "Efficient Techniques for Binary-to-Multidigit Multidimensional Logarithmic Number System Conversion using Range-Addressable Look-Up Tables", IEEE Trans. Computers, vol. 54, no. 3, pp. 257-271, Mar. 2005. https://ieeexplore.ieee.org/document/1388191 (Presents multidimensional logarithmic number system (MDLNS) and improvements to LNS addition lookup tables.)
- D Primeaux, 2005, Programming with Gaussian logarithms to compute the approximate addition and subtraction of very small (or very large) positive numbers, Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing and First ACIS International Workshop on Self-Assembling Wireless Network, https://ieeexplore.ieee.org/document/1434878/
- CH Cotter, The Journal of Navigation, 1971 - cambridge.org, Gaussian logarithms and navigation, https://www.cambridge.org/core/journals/journal-of-navigation/article/gaussian-logarithms-and-navigation/411E21946EDD70EE4208912BE743C5FB PDF: http://www.siranah.de/sources/Gaussian_Logarithms_and_Navigation.pdf
- Paliouras, V., and Stouraitis, T., 1996. A novel algorithm for accurate logarithmic number system subtraction. Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS 96). Atlanta, USA, pp. 268-271. https://ieeexplore.ieee.org/document/542021
- MG Arnold, 2004, LPVIP: A low-power ROM-less ALU for low-precision LNS, International Workshop on Power and Timing Modeling, https://link.springer.com/chapter/10.1007/978-3-540-30205-6_69, PDF: https://www.researchgate.net/profile/Mark-Arnold-13/publication/220799326_LPVIP_A_low-power_ROM-less_ALU_for_low-precision_LNS/links/54172bc30cf2218008bed8c8/LPVIP-A-low-power-ROM-less-ALU-for-low-precision-LNS.pdf
- MG Arnold, J Cowles, T Bailey, 1988, Improved accuracy for logarithmic addition in DSP applications, ICASSP-88, https://ieeexplore.ieee.org/document/196947, PDF: https://www.computer.org/csdl/proceedings-article/icassp/1988/00196947/12OmNvlPkAA
- J. N. Coleman, R.C. Ismail, LNS with Co-Transformation Competes with Floating-Point, January 2015, IEEE Transactions on Computers 65(1):1-1, DOI:10.1109/TC.2015.2409059, https://ieeexplore.ieee.org/document/7061396, PDF: https://www.researchgate.net/publication/273914447_LNS_with_Co-Transformation_Competes_with_Floating-Point
- Siti Zarina Md Naziri, Rizalafande Che Ismail, Ali Yeon Md Shakaff, December 2014, The Design Revolution of Logarithmic Number System Architecture, DOI: 10.13140/RG.2.1.3494.4166, 2014 2nd International Conference on Electrical, Electronic and Systems Engineering (ICEESE 2014), Berjaya Times Square, Kuala Lumpur, Malaysia, https://ieeexplore.ieee.org/document/7154603 (Good survey of LNS addition methods up to 2014.)
- Siti Zarina Md Naziri, Rizalafande Che Ismail, Ali Yeon Md Shakaff, Implementation of LNS addition and subtraction function with co-transformation in positive and negative region: A comparative analysis, Aug 2016, https://www.researchgate.net/publication/312159669_Implementation_of_LNS_addition_and_subtraction_function_with_co-transformation_in_positive_and_negative_region_A_comparative_analysis, PDF: https://www.researchgate.net/publication/287533531_Arithmetic_Addition_and_Subtraction_Function_of_Logarithmic_Number_System_in_Positive_Region_An_Investigation/link/56777a6208ae125516ec1034/download
- Siti Zarina Md Naziri; Rizalafande Che Ismail; Ali Yeon Md Shakaff, Dec 2015, Arithmetic Addition and Subtraction Function of Logarithmic Number System in Positive Region: An Investigation, 2015 IEEE Student Conference on Research and Development (SCOReD), https://ieeexplore.ieee.org/document/7449376
- G Tsiaras, V Paliouras, 2017, Multi-operand logarithmic addition/subtraction based on Fractional Normalization, 2017 6th International Conference on Modern Circuits and Systems Technologies (MOCAST), https://ieeexplore.ieee.org/abstract/document/7937686/
- G. Tsiaras and V. Paliouras, “Logarithmic Number System addition-subtraction using Fractional Normalization,” in IEEE International Symposium on Circuits and Systems (ISCAS), 2017. https://ieeexplore.ieee.org/document/8050569
- B Parhami, 2020, Computing with logarithmic number system arithmetic: Implementation methods and performance benefits, Computers & Electrical Engineering, Elsevier, PDF: https://web.ece.ucsb.edu/~parhami/pubs_folder/parh20-cee-comp-w-lns-arithmetic-final.pdf (Overview of LNS including LNS addition and hardware implementations.)
- B. Parhami, “Computing with Logarithmic Number System Arithmetic (Extended Online Version with More Reference Citations),” August 2020. https://web.ece.ucsb.edu/~parhami/pubs_folder/parh20-caee-comput-w-lns-arith.pdf
- R.C Ismail; R. Hussin; S.A.Z Murad, 2012, Interpolator algorithms for approximating the LNS addition and subtraction: Design and analysis, 2012 IEEE International Conference on Circuits and Systems (ICCAS), https://ieeexplore.ieee.org/document/6408336, PDF: https://www.researchgate.net/profile/Sohiful-Anuar-Zainol-Murad/publication/259921136_Interpolator_Algorithms_for_Approximating_the_LNS_Addition_and_Subtraction_Design_and_Analysis/links/02e7e52e8a7bef3d52000000/Interpolator-Algorithms-for-Approximating-the-LNS-Addition-and-Subtraction-Design-and-Analysis.pdf
- C Chen, 2009, Error analysis of LNS addition/subtraction with direct-computation implementation, IET Computers & Digital Techniques, Volume 3, Issue 4, https://digital-library.theiet.org/content/journals/10.1049/iet-cdt.2008.0098
- Chichyang Chen; Rui-Lin Chen; Chih-Huan Yang, 2000, Pipelined computation of very large word-length LNS addition/subtraction with polynomial hardware cost, IEEE Transactions on Computers (Volume 49, Issue 7, July 2000), https://ieeexplore.ieee.org/document/863041
- Siti Zarina Md Naziri; Rizalafande Che Ismail; Ali Yeon Md Shakaff, 2016, An Analysis of Interpolation Implementation for LNS Addition and Subtraction Function in Positive Region, 2016 International Conference on Computer and Communication Engineering (ICCCE) https://ieeexplore.ieee.org/abstract/document/7808368/
- I Osinin, 2019, Optimization of the hardware costs of interpolation converters for calculations in the logarithmic number system International Conference on Information Technologies, ICIT 2019: Recent Research in Control Engineering and Decision Making, pp. 91–102, https://link.springer.com/chapter/10.1007/978-3-030-12072-6_9
- Wenhui Zhang, Xinkuang Geng, Qin Wang, Jie Han, Honglan Jiang, 12 June 2024, A Low-Power and High-Accuracy Approximate Adder for Logarithmic Number System, GLSVLSI '24: Proceedings of the Great Lakes Symposium on VLSI 2024, June 2024, Pages 125–131, https://doi.org/10.1145/3649476.3658706 https://dl.acm.org/doi/abs/10.1145/3649476.3658706
- David Spuler, March 2024, Chapter 52. Logarithmic Models, Generative AI in C++: Coding Transformers and LLMs, https://www.amazon.com/dp/B0CXJKCWX9
- William James Dally, Rangharajan VENKATESAN, Brucek Kurdo Khailany, Stephen G. Tell, 2020, Asynchronous accumulator using logarithmic-based arithmetic, Nvidia Corp, US12033060B2, https://patents.google.com/patent/US12033060B2/en
- Vincenzo Liguori, 9 Jun 2024, Procrastination Is All You Need: Exponent Indexed Accumulators for Floating Point, Posits and Logarithmic Numbers, https://arxiv.org/abs/2406.05866
LNS in AI Models (and other Applications)
Other papers on the use of LNS in machine learning applications include:
- M. Arnold, J. Cowles T. Bailey, and J. Cupal, “Implementing back propagation neural nets with logarithmic arithmetic,” International AMSE conference on Neural Nets, San Diego, 1991.
- M. G. Arnold, T. A. Bailey, J. J. Cupal, and M. D. Winkel, 1997, “On the cost effectiveness of logarithmic arithmetic for backpropagation training on SIMD processors,” in Proceedings of International Conference on Neural Networks (ICNN’97), vol. 2. IEEE, 1997, pp. 933–936. https://ieeexplore.ieee.org/document/616150 (Possibly the earliest paper with consideration of LNS as applied to AI models.)
- Min Soo Kim; Alberto A. Del Barrio; Román Hermida; Nader Bagherzadeh, 2018, “Low-power implementation of Mitchell’s approximate logarithmic multiplication for convolutional neural networks,” in Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE, 2018, pp. 617–622. https://ieeexplore.ieee.org/document/8297391 (Use of Mitchell's approximate multiplier in CNNs.)
- Giuseppe C. Calafiore, Stephane Gaubert, Member, Corrado Possieri, 2020, A Universal Approximation Result for Difference of log-sum-exp Neural Networks, https://arxiv.org/abs/1905.08503 (Use of a logarithmic activation function.)
- Giuseppe C. Calafiore, Stephane Gaubert, Corrado Possieri, Log-sum-exp neural networks and posynomial models for convex and log-log-convex data, IEEE Transactions on Neural Networks and Learning Systems, 2019, https://arxiv.org/abs/1806.07850
- U. Lotric and P. Bulic, 2011, “Logarithmic multiplier in hardware implementation of neural networks,” in International Conference on Adaptive and Natural Computing Algorithms. Springer, April 2011, pp. 158–168. https://dl.acm.org/doi/10.5555/1997052.1997071
- HyunJin Kim; Min Soo Kim; Alberto A. Del Barrio; Nader Bagherzadeh, A cost-efficient iterative truncated logarithmic multiplication for convolutional neural networks, 2019, IEEE 26th Symposium on Computer Arithmetic (ARITH), https://ieeexplore.ieee.org/abstract/document/8877474 (Uses logarithmic multiplication algorithm.)
- Gao M, Qu G, 2018, Estimate and recompute: a novel paradigm for approximate computing on data flow graphs. IEEE Trans Comput Aided Des Integr Circuits Syst 39(2):335–345. https://doi.org/10.1109/TCAD.2018.2889662, https://ieeexplore.ieee.org/document/8588387 (Uses LNS as the representation to do approximate arithmetic.)
- Arnold, M.G., 2002, Reduced power consumption for MPEG decoding with LNS, Proceedings of the IEEE International Conference on Application-Specific Systems, Architectures and Processors (ASAP 2002), IEEE Computer Society Press, Los Alamitos (2002) https://ieeexplore.ieee.org/document/1030705 (MPEG signal processing and LNS.)
- E. E. Swartzlander, D. V. S. Chandra, H. T. Nagle and S. A. Starks, 1983, "Sign/logarithm architecture for FFT implementation", IEEE Trans. Comput., vol. C-32, June 1983. https://ieeexplore.ieee.org/document/1676274 (FFT applications of LNS.)
- M. S. Ansari, V. Mrazek, B. F. Cockburn, L. Sekanina, Z. Vasicek, and J. Han, 2019, “Improving the accuracy and hardware efficiency of neural networks using approximate multipliers,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 2, pp. 317–328, Oct 2019, https://ieeexplore.ieee.org/document/8863138
- Basetas C., Kouretas I., Paliouras V., 2007, Low-power digital filtering based on the logarithmic number system. International Workshop on Power and Timing Modeling, Optimization and Simulation. Springer, pp 546–555. https://doi.org/10.1007/978-3-540-74442-9_53, https://link.springer.com/chapter/10.1007/978-3-540-74442-9_53 (LNS in signal processing algorithms.)
- Biyanu Zerom, Mohammed Tolba, Huruy Tesfai, Hani Saleh, Mahmoud Al-Qutayri, Thanos Stouraitis, Baker Mohammad, Ghada Alsuhli, 2022, Approximate Logarithmic Multiplier For Convolutional Neural Network Inference With Computational Reuse, 2022 29th IEEE International Conference on Electronics, Circuits and Systems (ICECS), 24-26 October 2022, https://doi.org/10.1109/ICECS202256217.2022.9970861, https://ieeexplore.ieee.org/abstract/document/9970861/
- M. S. Ansari, B. F. Cockburn, and J. Han, 2020, “An improved logarithmic multiplier for energy-efficient neural computing,” IEEE Transactions on Computers, vol. 70, no. 4, pp. 614–625, May 2020. https://ieeexplore.ieee.org/document/9086744
- Tso-Bing Juang; Cong-Yi Lin; Guan-Zhong Lin, 2018, “Area-delay product efficient design for convolutional neural network circuits using logarithmic number systems,” in International SoC Design Conference (ISOCC). IEEE, 2018, pp. 170–171, https://ieeexplore.ieee.org/abstract/document/8649961
- M Arnold, 2023, Machine Learning using Logarithmic Arithmetic with Preconditioned Input to Mitchell's Method, 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), https://ieeexplore.ieee.org/document/10168554
- J. Bernstein, J. Zhao, M. Meister, M. Liu, A. Anandkumar, and Y. Yue, 2020, “Learning compositional functions via multiplicative weight updates,” in Proc. Adv. Neural Inf. Process. Syst. 33: Annu. Conf. Neural Inf. Process. Syst., 2020. https://proceedings.neurips.cc/paper/2020/hash/9a32ef65c42085537062753ec435750f-Abstract.html
- Mark Arnold; Ed Chester; Corey Johnson, 2020, Training neural nets using only an approximate tableless LNS ALU, 2020 IEEE 31st International Conference on Application-specific Systems, Architectures and Processors (ASAP), DOI: 10.1109/ASAP49362.2020.00020, https://ieeexplore.ieee.org/document/9153225
- J Cai, 2022, Log-or-Trig: Towards efficient learning in deep neural networks, Thesis, Graduate School of Engineering, Tokyo University of Agriculture and Technology, https://tuat.repo.nii.ac.jp/?action=repository_action_common_download&item_id=1994&item_no=1&attribute_id=16&file_no=3, PDF: https://tuat.repo.nii.ac.jp/index.php?action=pages_view_main&active_action=repository_action_common_download&item_id=1994&item_no=1&attribute_id=16&file_no=1&page_id=13&block_id=39
- Yu-Hsiang Huang; Gen-Wei Zhang; Shao-I Chu; Bing-Hong Liu; Chih-Yuan Lien; Su-Wen Huang, 2023, Design of Logarithmic Number System for LSTM, 2023 9th International Conference on Applied System Innovation (ICASI) https://ieeexplore.ieee.org/abstract/document/10179504/
- TY Cheng, Y Masuda, J Chen, J Yu, M Hashimoto, 2020, Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training, Integration, Volume 74, September 2020, Pages 19-31, https://www.sciencedirect.com/science/article/abs/pii/S0167926019305826 (Has some theory of log-domain operations for LNS; uses bitwidth scaling and logarithmic approximate multiplication.)
- TaiYu Cheng, Jaehoon Yu, M. Hashimoto, July 2019, Minimizing power for neural network training with logarithm-approximate floating-point multiplier, 2019 29th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS), https://www.semanticscholar.org/paper/Minimizing-Power-for-Neural-Network-Training-with-Cheng-Yu/ab190dd47e4c16949276f98052847d1314d76543
- Mingze Gao; Gang Qu, 2017, Energy efficient runtime approximate computing on data flow graphs, 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), Nov. 2017, pp. 444–449, https://ieeexplore.ieee.org/document/8203811
- J Xu, Y Huan, LR Zheng, Z Zou, 2018, A low-power arithmetic element for multi-base logarithmic computation on deep neural networks, 2018 31st IEEE International System-on-Chip Conference (SOCC), https://ieeexplore.ieee.org/document/8618560
- MA Qureshi, A Munir, 2020, NeuroMAX: a high throughput, multi-threaded, log-based accelerator for convolutional neural networks, 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD), https://ieeexplore.ieee.org/document/9256558, PDF: https://dl.acm.org/doi/pdf/10.1145/3400302.3415638
- Min Soo Kim, 2020, Cost-Efficient Approximate Log Multipliers for Convolutional Neural Networks, Ph.D. thesis, Electrical and Computer Engineering, University of California, Irvine, https://search.proquest.com/openview/46b6f28a9f1e4013a01f128c36753d83/1?pq-origsite=gscholar&cbl=18750&diss=y, PDF: https://escholarship.org/content/qt3w4980x3/qt3w4980x3.pdf (Examines multiple approximate log multipliers and their effect on model accuracy.)
- G. Anusha, K. C. Sekhar, B. S. Sridevi, Nukella Venkatesh, 2023, The Journey of Logarithm Multiplier: Approach, Development and Future Scope, Recent Developments in Electronics and Communication Systems, https://www.researchgate.net/publication/367067187_The_Journey_of_Logarithm_Multiplier_Approach_Development_and_Future_Scope
- M. Christ, F. De Dinechin and F. Pétrot, 2022, Low-precision logarithmic arithmetic for neural network accelerators, 2022 IEEE 33rd International Conference on Application-specific Systems, Architectures and Processors (ASAP), Gothenburg, Sweden, 2022, pp. 72-79, doi: 10.1109/ASAP54787.2022.00021, https://ieeexplore.ieee.org/abstract/document/9912091 (Using quantized logarithmic number system values.)
- M. Arnold, 2023, Machine Learning using Logarithmic Arithmetic with Preconditioned Input to Mitchell's Method, 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, 2023, pp. 1-5, doi: 10.1109/AICAS57966.2023.10168554, https://ieeexplore.ieee.org/abstract/document/10168554
- Daisuke Miyashita, Edward H. Lee, Boris Murmann, 17 Mar 2016 (v2), Convolutional Neural Networks using Logarithmic Data Representation, https://arxiv.org/abs/1603.01025
- Magombe Yasin, Mehmet Sarıgül, Mutlu Avci, 2024, Logarithmic Learning Differential Convolutional Neural Network, Neural Networks, Volume 172, 106114, ISSN 0893-6080, https://doi.org/10.1016/j.neunet.2024.106114 https://www.sciencedirect.com/science/article/abs/pii/S0893608024000285
- E. H. Lee, D. Miyashita, E. Chai, B. Murmann and S. S. Wong, 2017, LogNet: Energy-efficient neural networks using logarithmic computation, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 2017, pp. 5900-5904, doi: 10.1109/ICASSP.2017.7953288, https://ieeexplore.ieee.org/document/7953288
- William James Dally, Rangharajan VENKATESAN, Brucek Kurdo Khailany, 2019, Neural network accelerator using logarithmic-based arithmetic, Nvidia Corp, US11886980B2, https://patents.google.com/patent/US11886980B2/en
- Wang, Z., Xu, Z., He, D. et al, 2021, Deep logarithmic neural network for Internet intrusion detection. Soft Comput 25, 10129–10152 (2021). https://doi.org/10.1007/s00500-021-05987-9 https://link.springer.com/article/10.1007/s00500-021-05987-9
- Wenhui Zhang, Xinkuang Geng, Qin Wang, Jie Han, Honglan Jiang, 12 June 2024, A Low-Power and High-Accuracy Approximate Adder for Logarithmic Number System, GLSVLSI '24: Proceedings of the Great Lakes Symposium on VLSI 2024, June 2024, Pages 125–131, https://doi.org/10.1145/3649476.3658706 https://dl.acm.org/doi/abs/10.1145/3649476.3658706
- David Spuler, March 2024, Chapter 52. Logarithmic Models, Generative AI in C++: Coding Transformers and LLMs, https://www.amazon.com/dp/B0CXJKCWX9
- L. Sommer, L. Weber, M. Kumm and A. Koch, 2020, Comparison of Arithmetic Number Formats for Inference in Sum-Product Networks on FPGAs, 2020 IEEE 28th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), Fayetteville, AR, USA, 2020, pp. 75-83, doi: 10.1109/FCCM48280.2020.00020, https://ieeexplore.ieee.org/document/9114810
- William James Dally, Rangharajan VENKATESAN, Brucek Kurdo Khailany, Dec 2023, Nvidia Corp, US20240112007A1, Neural network accelerator using logarithmic-based arithmetic, https://patents.google.com/patent/US20240112007A1/en
LNS Hardware Acceleration
Papers on the use of the LNS in hardware-accelerated implementations include:
- Manik Chugh; Behrooz Parhami, 2013, Logarithmic Arithmetic as an Alternative to Floating-Point: a Review, Proc. 47th Asilomar Conf. Signals, Systems, and Computers (November 2013), https://ieeexplore.ieee.org/document/6810472, PDF: https://web.ece.ucsb.edu/~parhami/pubs_folder/parh13-asilo-log-arith-as-alt-to-flp.pdf (A survey paper covering the use of LNS in custom accelerated hardware implementations.)
- F.J. Taylor, 1983, An Extended Precision Logarithmic Number System, IEEE Trans. Acoustics, Speech, and Signal Processing (1983), https://ieeexplore.ieee.org/document/910929
- Parhami B., 2020, Computing with logarithmic number system arithmetic: Implementation methods and performance benefits. Comput Electr Eng 87:106800. https://doi.org/10.1016/j.compeleceng.2020.106800, https://www.sciencedirect.com/science/article/abs/pii/S0045790620306534
- Gautschi M, Schaffner M, Gürkaynak FK, Benini L, 2016. "4.6 A 65nm CMOS 6.4-to-29.2 pJ/FLOP@ 0.8 V shared logarithmic floating point unit for acceleration of nonlinear function kernels in a tightly coupled processor cluster". 2016 IEEE International Solid-State Circuits Conference (ISSCC), 2016. IEEE, pp 82–83. https://doi.org/10.1109/ISSCC.2016.7417917, https://ieeexplore.ieee.org/document/7417917
- Coleman JN, Softley CI, Kadlec J, Matousek R, Tichy M, Pohl Z, Hermanek A, Benschop NF, 2008, The European logarithmic microprocessor. IEEE Trans Comput 57(4):532–546. https://doi.org/10.1109/TC.2007.70791 https://ieeexplore.ieee.org/document/4358243 (A European project for LNS in hardware called the European logarithmic microprocessor or ELM.)
- Coleman JN, Chester E, Softley CI, Kadlec J, 2000, Arithmetic on the European logarithmic microprocessor. IEEE Trans Comput 49(7):702–715. https://doi.org/10.1109/12.863040, https://ieeexplore.ieee.org/document/863040 (More about the European project for LNS in hardware.)
- S. Huang, L.-G. Chen and T.-H. Chen, 1994, "The chip design of a 32-b Logarithmic Number System", Proc. of ISCAS94, May 1994, https://ieeexplore.ieee.org/document/409224, PDF: http://ntur.lib.ntu.edu.tw/bitstream/246246/2007041910032469/1/00409224.pdf (Theory of a chip design for 32-bits LNS.)
- D. Lewis and L. Yu, 1989, "Algorithm design for a 30 bit integrated logarithmic processor", Proc. of 9th Symp. on Computer Arithmetic, pp. 192-199, 1989. https://ieeexplore.ieee.org/document/72826 (30-bit LNS hardware.)
- T. Stouraitis and F. Taylor, "Analysis of Logarithmic Number System processors", 1988, IEEE Transactions on Circuits and Systems, vol. 35, pp. 519-527, May 1988. https://ieeexplore.ieee.org/document/1779
- T. Stouraitis, S. Natarajan and F. Taylor, 1985, "A reconfiguration systolic primitive processor for signal processing", IEEE Int. Conf. on ASSP, March 1985, https://ieeexplore.ieee.org/document/1168508
- Krishnendu Mukhopadhyaya, 1995, Implementation of Four Common Functions on an LNS CoProcessor, IEEE Transactions on Computers, https://ieeexplore.ieee.org/document/367997, PDF: https://www.isical.ac.in/~krishnendu/LNS-IEEE-TC.pdf
- Durgesh Nandan; Jitendra Kanungo; Anurag Mahajan, 2017, An efficient VLSI architecture for iterative logarithmic multiplier, 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN), February 2017, https://ieeexplore.ieee.org/document/8049986 (Uses LNS and Mitchell's approximate multiplication algorithm.)
- Durgesh Nandan, Jitendra Kanungo, Anurag Mahajan, 2017, An Efficient VLSI Architecture Design for Logarithmic Multiplication by Using the Improved Operand Decomposition, In: Integration, Volume 58, June 2017, Pages 134-141, https://doi.org/10.1016/j.vlsi.2017.02.003, https://www.sciencedirect.com/science/article/abs/pii/S0167926017300895 (Uses LNS and Mitchell's approximate multiplication algorithm.)
- Siti Zarina Md Naziri; Rizalafande Che Ismail; Ali Yeon Md Shakaff, 2014, The Design Revolution of Logarithmic Number System Architecture, 2014 2nd International Conference on Electrical, Electronics and System Engineering (ICEESE), DOI: 10.1109/ICEESE.2014.7154603, https://doi.org/10.1109/ICEESE.2014.7154603, https://ieeexplore.ieee.org/document/7154603
- J. N. Coleman and E. I. Chester, 1999, “A 32-Bit Logarithmic Arithmetic Unit and Its Performance Compared to Floating-Point,” Proc. 14th IEEE Symp. Computer Arithmetic, 1999, pp. 142-151, https://ieeexplore.ieee.org/document/762839 (32-bit arithmetic in an early European project for LNS in hardware.)
- F. J. Taylor, R. Gill, J. Joseph, and J. Radke, 1988, “A 20 Bit Logarithmic Number System Processor,” IEEE Trans. Computers, Vol. 37, pp. 190-200, 1988. https://ieeexplore.ieee.org/document/2148 (A 1988 hardware 20-bit version of logarithmic numbers.)
- J. N. Coleman, C. I. Softley, J. Kadlec, R. Matousek, M. Licko, Z. Pohl, and A. Hermanek, “Performance of the European Logarithmic Microprocessor,” Proc. SPIE Annual Meeting, 2003, pp. 607-617. https://www.semanticscholar.org/paper/Performance-of-the-European-logarithmic-Coleman-Softley/7a324cd01bd1f4a25d70dfe6875474c9b92a3d9c
- Haohuan Fu; Oskar Mencer; Wayne Luk, 2006, Comparing Floating-Point and Logarithmic Number Representations for Reconfigurable Acceleration, 2006 IEEE International Conference on Field Programmable Technology, https://ieeexplore.ieee.org/document/4042464 (Evaluates LNS vs floating-point for FPGAs.)
- J.N. Coleman; C.I. Softley; J. Kadlec; R. Matousek; M. Licko; Z. Pohl; A. Hermanek, 2001, The European Logarithmic Microprocessor - a QR RLS application, Engineering, Computer Science Conference Record of Thirty-Fifth Asilomar… 2001 https://ieeexplore.ieee.org/document/986897
- H. Kim; B.-G. Nam; J.-H. Sohn; J.-H. Woo; H.-J. Yoo, 2006, A 231-MHz, 2.18-mW 32-bit Logarithmic Arithmetic Unit for Fixed-Point 3-D Graphics System, IEEE J. Solid-State Circuits (Volume 41, Issue 11, November 2006), https://ieeexplore.ieee.org/document/1717660
- T. Stouraitis, "A hybrid floating-point/logarithmic number system digital signal processor", Int. Conf. Acoust. Speech Signal Process., 1989. https://ieeexplore.ieee.org/document/266619
- I. Kouretas and V. Paliouras, “Logarithmic number system for deep learning,” in International Conference on Modern Circuits and Systems Technologies (MOCAST). IEEE, 2018, pp. 1–4, https://ieeexplore.ieee.org/abstract/document/8376572
- J. H. Lang, C. A. Zukowski, R. O. LaMaire, and C. H. An, 1985, “Integrated-Circuit Logarithmic Units,” IEEE Trans. Computers, Vol. 34, pp. 475-483, 1985. https://ieeexplore.ieee.org/document/1676588 (Hardware version of logarithmic numbers from 1985.)
- D. Yu and D. M. Lewis, 1991, “A 30-b Integrated Logarithmic Number System Processor,” IEEE J. Solid-State Circuits, Vol. 26, pp. 1433-1440, 1991. https://www.scribd.com/document/41667733/A-30-b-Integrated-Logarithmic-Number-System-Processor-91 (An early 1991 hardware version of LNS with 30-bits.)
- V. Paliouras, J. Karagiannis, G. Aggouras, and T. Stouraitis, 1998, “A Very-Long Instruction Word Digital Signal Processor Based on the Logarithmic Number System,” Proc. 5th IEEE Int’l Conf. Electronics, Circuits and Systems, Vol. 3, pp. 59-62, 1998. https://ieeexplore.ieee.org/document/813936 (A hardware version of LNS from 1998.)
- M. G. Arnold, 2003, “A VLIW Architecture for Logarithmic Arithmetic,” Proc. Euromicro Symp. Digital System Design, 2003, pp. 294-302. https://ieeexplore.ieee.org/document/1231957?arnumber=1231957 (A hardware version of LNS in 2003 using Very Long Instruction Word (VLIW).)
- Rizalafande Che Ismail, Sep 2012, Fast, area-efficient 32-bit LNS for computer arithmetic operations, Ph.D. Thesis, Newcastle University, https://theses.ncl.ac.uk/jspui/handle/10443/1702, PDF: https://theses.ncl.ac.uk/jspui/bitstream/10443/1702/1/Che%20Ismail%2012.pdf
- M.G. Arnold, T.A. Bailey, J.R. Cowles and J.J. Cupal, Redundant Logarithmic Arithmetic, IEEE Trans. Computers, vol. 39, No. 8, pp. 1077-1086, 1990 https://ieeexplore.ieee.org/abstract/document/57046
- Joshua Yung Lih Low; Ching Chuen Jong, 2017, Range Mapping—A Fresh Approach to High Accuracy Mitchell-Based Logarithmic Conversion Circuit Design, IEEE Transactions on Circuits and Systems I: Regular Papers ( Volume 65, Issue 1, January 2018) https://ieeexplore.ieee.org/abstract/document/7968344/
- D.M. Lewis, "114 MFLOPS Logarithmic Number System Arithmetic Unit for DSP Applications", IEEE J. Solid-State Circuits, vol. 30, pp 1547-1553,1995 https://ieeexplore.ieee.org/document/482205
- D. M. Lewis, Interleaved memory function interpolators with application to an accurate LNS arithmetic unit, IEEE Trans. Computers, Vol. 43, No. 8, pp.974-982, 1994. https://ieeexplore.ieee.org/document/295859
- P. Lee, E. Costa, S. McBader, 2007, LogTOTEM: A logarithmic neural processor and its implementation on an FPGA fabric, 2007 International Joint Conference on Neural Networks, https://ieeexplore.ieee.org/abstract/document/4371396/, PDF: https://www.academia.edu/download/46203834/LogTOTEM_A_Logarithmic_Neural_Processor_20160603-13176-1fohbpz.pdf
- Peter Lee, 2007, A VLSI implementation of a digital hybrid-LNS neuron, 2007 International Symposium on Integrated Circuits, https://ieeexplore.ieee.org/document/4441783, PDF: https://www.researchgate.net/profile/Peter-Lee-48/publication/4315642_A_VLSI_implementation_of_a_digital_hybrid-LNS_neuron/links/0c96051dd594f5a004000000/A-VLSI-implementation-of-a-digital-hybrid-LNS-neuron.pdf
- Pramod Kumar Meher and Thanos Stouraitis (editors), 15 September 2017. Arithmetic Circuits for DSP Applications, https://www.amazon.com/Arithmetic-Circuits-Applications-Pramod-Kumar/dp/1119206774/
- R.C Ismail; M.K Zakaria; S.A.Z Murad, 2013, “Hybrid logarithmic number system arithmetic unit: A review,” in IEEE ICCAS, Sept 2013, pp. 55–58. https://ieeexplore.ieee.org/document/6671617
- Haohuan Fu; Oskar Mencer; Wayne Luk, 2007, Optimizing logarithmic arithmetic on FPGAs, 15th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM 2007), https://ieeexplore.ieee.org/abstract/document/4297253/, PDF: https://spiral.imperial.ac.uk/bitstream/10044/1/5934/1/optlns.pdf
- Barry Lee & Neil Burgess, 2003, A dual-path logarithmic number system addition/subtraction scheme for FPGA, Springer, International Conference on Field Programmable Logic and Applications, FPL 2003: Field Programmable Logic and Application, pp. 808–817, https://link.springer.com/chapter/10.1007/978-3-540-45234-8_78
- G. Anusha, K. C. Sekhar, B. S. Sridevi, 2023, The Journey of Logarithm Multiplier: Approach, Development and Future Scope, In: Recent Developments in Electronics and Communication Systems, KVS Ramachandra Murthy et al. (Eds.) IOS Press, https://ebooks.iospress.nl/pdf/doi/10.3233/ATDE221243, https://www.researchgate.net/publication/367067187_The_Journey_of_Logarithm_Multiplier_Approach_Development_and_Future_Scope
- B Zerom, M Tolba, H Tesfai, H Saleh, 2022, Approximate Logarithmic Multiplier For Convolutional Neural Network Inference With Computational Reuse, 2022 29th IEEE International Conference on Electronics, Circuits and Systems (ICECS), https://ieeexplore.ieee.org/document/9970861 (Combines the Logarithmic Number System, Mitchell's approximate multiplication algorithm, and data reuse strategies to speed up MAC operations.)
LNS Mathematical and Algorithmic Theory
Papers on the mathematical basis of the Logarithmic Number System (LNS) and its applied algorithms in theory include:
- Behrooz Parhami, Computer Arithmetic: Algorithms and Hardware Designs, 2010, Oxford University Press, New York, NY, https://web.ece.ucsb.edu/~parhami/text_comp_arit.htm, https://books.google.com.au/books/about/Computer_Arithmetic.html?id=tEo_AQAAIAAJ&redir_esc=y
- Molahosseini AS, De Sousa LS, Chang C-H, 2017, Embedded systems design with special arithmetic and number systems. Springer. Book on Amazon: https://www.amazon.com/Embedded-Systems-Design-Special-Arithmetic-ebook/dp/B06XRVG3YF/, https://doi.org/10.1007/978-3-319-49742-6, https://link.springer.com/book/10.1007/978-3-319-49742-6 (A text that contains multiple papers on LNS and RNS.)
- B. Parhami, “Computing with logarithmic number system arithmetic: Implementation methods and performance benefits,” Computers & Electrical Engineering, vol. 87, p. 106800, 2020. https://www.sciencedirect.com/science/article/abs/pii/S0045790620306534
- Arnold, M.G., Bailey, T.A., Cowles, J.R., Winkel, M.D.: Applying features of the IEEE 754 to sign/logarithm arithmetic. IEEE Transactions on Computers 41, 1040–1050 (1992) https://ieeexplore.ieee.org/document/156547
- Paliouras, V., Stouraitis, T., 2001, Low-power properties of the Logarithmic Number System. Proceedings of 15th Symposium on Computer Arithmetic (ARITH15), Vail, CO, June 2001, pp. 229–236 (2001) https://ieeexplore.ieee.org/document/930124
- Paliouras, V., Stouraitis, T., 2000, Logarithmic number system for low-power arithmetic. In: Soudris, D.J., Pirsch, P., Barke, E. (eds.) PATMOS 2000. LNCS, vol. 1918, pp. 285–294. Springer, Heidelberg (2000), https://link.springer.com/chapter/10.1007/3-540-45373-3_30
- T. Stouraitis, Logarithmic Number System: Theory analysis and design, 1986, University of Florida, Ph.D. dissertation, University of Florida ProQuest Dissertations Publishing, 1986. 8704221 https://www.proquest.com/openview/0f48dddc19ec62058062ae1b32ee981d/1, https://openlibrary.org/books/OL25923701M/Logarithmic_number_system_theory_analysis_and_design
- F. J. Taylor, "A hybrid floating-point logarithmic number system processor", IEEE Trans. Circuits Syst., vol. CAS-32, pp. 92-95, Jan. 1985. https://ieeexplore.ieee.org/abstract/document/1085588
- M. L. Frey and F. J. Taylor, "A table reduction technique for logarithmically architected digital filters", IEEE Trans. Acoust Speech Signal Processing, vol. ASSP-33, pp. 718-719, June 1985. https://ieeexplore.ieee.org/document/1164597
- E. E. Swartzlander, D. V. S. Chandra, H. T. Nagle and S. A. Starks, 1983, "Sign/logarithm arithmetic for FFT implementation", IEEE Trans. Comput., vol. C-32, pp. 526-534, June 1983. https://ieeexplore.ieee.org/document/1676274
- G. L. Sicuranza, 1983, "On efficient implementations of 2-D digital filters using logarithmic number systems", IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-31, pp. 877-885, Aug. 1983. https://ieeexplore.ieee.org/document/1164149 (Algorithms for LNS arithmetic.)
- M. L. Frey and F. J. Taylor, 1985, "A table reduction technique for logarithmically architected digital filters", IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-33, pp. 719-719, June 1985. https://ieeexplore.ieee.org/document/1164597 (Reducing lookup table sizes for LNS.)
- H. Fu, O. Mencer and W. Luk, "FPGA Designs with Optimized Logarithmic Arithmetic", IEEE Trans. Computers, vol. 59, no. 7, pp. 1000-1006, July 2010. https://ieeexplore.ieee.org/document/5416693 (LNS on FPGAs.)
- Chih-Wei Liu; Shih-Hao Ou; Kuo-Chiang Chang; Tzung-Ching Lin; Shin-Kai Chen, 2016, A Low-Error, Cost-Efficient Design Procedure for Evaluating Logarithms to Be Used in a Logarithmic Arithmetic Processor, IEEE Trans. Computers (April 2016), https://ieeexplore.ieee.org/document/7118135 (Algorithms for the initial logarithmic conversion from a floating point into an LNS representation.)
- H. L. Garner, “Number Systems and Arithmetic,” in Advances in Computers, Vol. 6, F. L. Alt and M. Rubinoff (eds.), Academic Press, 1965. https://www.sciencedirect.com/science/article/abs/pii/S0065245808604209
- N. G. Kingsbury and P. J. W. Rayner, “Digital Filtering Using Logarithmic Arithmetic,” Electronics Letters, Vol. 7, pp. 56-58, 1971. https://digital-library.theiet.org/content/journals/10.1049/el_19710039 (Early paper on logarithmic numbers.)
- Tso-Bing Juang, Pramod Kumar Meher and Kai-Shiang Jan, 2011, “High-Performance Logarithmic Converters Using Novel Two-Region Bit-Level Manipulation Schemes,” Proc. of VLSI-DAT (VLSI Symposium on Design, Automation, and Testing), pp. 390-393, April 2011. https://ieeexplore.ieee.org/document/5783555
- Tso-Bing Juang, Han-Lung Kuo and Kai-Shiang Jan, 2016, “Lower-Error and Area-Efficient Antilogarithmic Converters with Bit-Correction Schemes,” Journal of the Chinese Institute of Engineers, Vol. 39, No. 1, pp. 57-63, Jan. 2016. https://www.tandfonline.com/doi/abs/10.1080/02533839.2015.1070692?journalCode=tcie20
- Ying Wu, Chuangtao Chen, Weihua Xiao, Xuan Wang, Chenyi Wen, Jie Han, Xunzhao Yin, Weikang Qian, Cheng Zhuo, "A Survey on Approximate Multiplier Designs for Energy Efficiency: From Algorithms to Circuits", ACM Transactions on Design Automation of Electronic Systems, 2023. https://doi.org/10.1145/3610291, https://arxiv.org/abs/2301.12181 (Extensive survey of many approximate multiplication algorithms.)
- Patrick Robertson, Emmanuelle Villebrun, Peter Hoeher, et al., “A comparison of optimal and sub-optimal map decoding algorithms operating in the log domain,” in IEEE International Conference on Communications, 1995. https://ieeexplore.ieee.org/document/524253
- Mark G. Arnold, LNS References, 2014, XLNS Research, http://www.xlnsresearch.com/home.htm (An exhaustive list of LNS research articles up to around 2014.)
- F. Albu; J. Kadlec; N. Coleman; A. Fagan, 2002, The Gauss-Seidel fast affine projection algorithm, IEEE Workshop on Signal Processing Systems, https://ieeexplore.ieee.org/abstract/document/1049694/, PDF: https://www.academia.edu/download/32934948/sips2002.pdf (Simplistic coverage of LNS addition with just exponentiation.)
- Thanh Son Nguyen, Alexey Solovyev, Ganesh Gopalakrishnan, 30 Jan 2024, Rigorous Error Analysis for Logarithmic Number Systems, https://arxiv.org/abs/2401.17184
- Vincenzo Liguori, 9 Jun 2024, Procrastination Is All You Need: Exponent Indexed Accumulators for Floating Point, Posits and Logarithmic Numbers, https://arxiv.org/abs/2406.05866
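As background to the algorithmic papers above, the core LNS tradeoff is easy to sketch: multiplying two log-domain values is just addition, while adding them requires the Gaussian-logarithm function s(d) = log2(1 + 2^d), which hardware typically approximates with lookup tables or interpolation. A minimal illustration in Python, assuming positive values only (the function names here are illustrative, not from any particular paper):

```python
import math

# Minimal sketch of Logarithmic Number System (LNS) arithmetic, assuming
# positive values encoded as base-2 logarithms (sign and zero handling omitted).

def to_lns(x):
    """Encode a positive real as its base-2 logarithm."""
    return math.log2(x)

def from_lns(lx):
    """Decode an LNS value back to the linear domain."""
    return 2.0 ** lx

def lns_mul(lx, ly):
    """Multiplication is a cheap addition in the log domain."""
    return lx + ly

def lns_add(lx, ly):
    """Addition is the expensive operation: it needs the Gaussian logarithm
    s(d) = log2(1 + 2^d), usually implemented via lookup tables in hardware."""
    hi, lo = max(lx, ly), min(lx, ly)
    d = lo - hi                       # d <= 0, so 2^d is in (0, 1]
    return hi + math.log2(1.0 + 2.0 ** d)

a, b = to_lns(6.0), to_lns(7.0)
print(round(from_lns(lns_mul(a, b)), 6))   # 42.0  (6 * 7)
print(round(from_lns(lns_add(a, b)), 6))   # 13.0  (6 + 7)
```

Real implementations must also track a sign bit and a zero flag, since the logarithm is undefined for non-positive values; that bookkeeping is omitted here.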
Logarithmic Algebra
Papers looking at the mathematical theory of logarithms.
- JK Lee, L Mukhanov, AS Molahosseini, 2023, Resource-Efficient Convolutional Networks: A Survey on Model-, Arithmetic-, and Implementation-Level Techniques, https://dl.acm.org/doi/abs/10.1145/3587095, PDF: https://dl.acm.org/doi/pdf/10.1145/3587095
- Math StackExchange, What's the formula to solve summation of logarithms?, https://math.stackexchange.com/questions/589027/whats-the-formula-to-solve-summation-of-logarithms
- Chris Smith, The Logarithm of a Sum, Mar 9, 2021, https://cdsmithus.medium.com/the-logarithm-of-a-sum-69dd76199790
- Daniel E Loeb, The Iterated Logarithmic Algebra, Advances in Mathematics, Volume 86, Issue 2, April 1991, Pages 155-234, https://doi.org/10.1016/0001-8708(91)90041-5
- YZ Huang, J Lepowsky, L Zhang, A logarithmic generalization of tensor product theory for modules for a vertex operator algebra, International Journal of Mathematics, Vol. 17, No. 08, pp. 975-1012 (2006), https://doi.org/10.1142/S0129167X06003758, https://www.worldscientific.com/doi/abs/10.1142/S0129167X06003758
- Daniel E Loeb, The Iterated Logarithmic Algebra. II. Sheffer sequences, Journal of Mathematical Analysis and Applications, Volume 156, Issue 1, 15 March 1991, Pages 172-183, https://doi.org/10.1016/0022-247X(91)90389-H
- Logarithmic tensor product theory for generalized modules for a conformal vertex algebra, Yi-Zhi Huang, James Lepowsky, Lin Zhang, Oct 2007, https://arxiv.org/abs/0710.2687
- Wikipedia, List of logarithmic identities, https://en.wikipedia.org/wiki/List_of_logarithmic_identities
- Wikipedia, Logarithmic number system https://en.wikipedia.org/wiki/Logarithmic_number_system
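Several of the entries above concern the "logarithm of a sum," which has no closed form in terms of log(x) and log(y) alone. In practice it is computed stably from log-domain inputs with the classic max-subtraction (log-sum-exp) trick; `log_sum` below is a hypothetical helper name for illustration:

```python
import math

# Numerically stable "log of a sum" from log-domain inputs: factoring out the
# largest term keeps every exponentiated value in (0, 1], avoiding overflow.

def log_sum(logs):
    """Given [log(x1), ..., log(xn)] (natural logs), return log(x1 + ... + xn)."""
    m = max(logs)                     # factor out the largest term
    return m + math.log(sum(math.exp(l - m) for l in logs))

logs = [math.log(v) for v in (3.0, 4.0, 5.0)]
print(math.isclose(log_sum(logs), math.log(12.0)))   # True
```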
LNS Extensions
If you scare easily, you might want to look away... but there's an extension of the LNS called the "Multi-Dimensional Logarithmic Number System" (MDLNS). Its theory is based on the "Multiple-Base Number System" (MBNS). Both MDLNS and MBNS have found some applications in digital signal processing. Some papers include:
- Vassil Dimitrov, Graham Jullien, Roberto Muscedere, Multiple-Base Number System: Theory and Applications (Circuits and Electrical Engineering Book 2), Part of: Circuits and Electrical Engineering (2 books), Jan 24, 2012 https://www.amazon.com/Multiple-Base-Number-System-Applications-Engineering-ebook/dp/B00847CSAG/ (General book with a section on MDLNS.)
- V. S. Dimitrov, J. Eskritt, L. Imbert, G. A. Jullien, and W. C. Miller, 2001, “The use of the multi-dimensional logarithmic number system in DSP applications,” in Proc. 15th IEEE Symp. Comput. Arith., Vail, CO, USA, Jun. 2001, pp. 247–254, https://ieeexplore.ieee.org/document/930126
- Vassil S. Dimitrov, Graham A. Jullien, Konrad Walus, 2002, Digital filtering using the multidimensional logarithmic number system, Proceedings Volume 4791, Advanced Signal Processing Algorithms, Architectures, and Implementations XII; (2002) https://doi.org/10.1117/12.452047
- H. Li; G.A. Jullien; V.S. Dimitrov; M. Ahmadi; W. Miller, 2002, A 2-digit multidimensional logarithmic number system filterbank for a digital hearing aid architecture, 2002 IEEE International Symposium on Circuits and Systems. Proceedings (Cat. No.02CH37353), https://ieeexplore.ieee.org/abstract/document/1011464
- R. Muscedere, V. S. Dimitrov, G. A. Jullien, and W. C. Miller. Efficient conversion from binary to multi-digit multi-dimensional logarithmic number systems using arrays of range addressable look-up tables. Proc. 21st IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP), pages 130-138, 2002. https://ieeexplore.ieee.org/document/1030711
- Leila Sepahi, 2012, Improved MDLNS Number System Addition and Subtraction by Use of the Novel Co-Transformation, Masters Thesis, University of Windsor, https://scholar.uwindsor.ca/cgi/viewcontent.cgi?article=1139&context=etd
- J.-M. Muller, A. Scherbyna and A. Tisserand, 1998, Semi-Logarithmic Number Systems, IEEE Trans. Computers, vol. 47, No. 2, pp. 145-151, 1998, https://ieeexplore.ieee.org/document/663760 PDF: https://perso.ens-lyon.fr/jean-michel.muller/IEEETC-Fev98.pdf
- R Muscedere, 2003, Difficult operations in the multi-dimensional logarithmic number system. Ph.D. Thesis, Electrical and Computer Engineering, University of Windsor, https://scholar.uwindsor.ca/cgi/viewcontent.cgi?article=2741&context=etd
- J. Eskritt, R. Muscedere, G. A. Jullien, V. S. Dimitrov and W. C. Miller, A 2-digit DBNS filter architecture, IEEE Workshop on Signal Processing, Louisiana, Oct. 2000, https://ieeexplore.ieee.org/document/886743
- V.S. Dimitrov, G.A. Jullien and W.C. Miller, Theory and applications of the double-base number system, IEEE Trans. on Computers, vol. 48, No. 10, pp. 1098-1106, Oct. 1999, https://ieeexplore.ieee.org/document/805158
- V.S. Dimitrov, S. Sadeghi-Emamchaie, G.A. Jullien and W.C. Miller, A near canonic double-base number system with applications in DSP, SPIE Conference on Signal Processing Algorithms, vol. 2846, pp. 14-25, 1996, https://doi.org/10.1117/12.255433
- G. A. Jullien, V. S. Dimitrov, B. Li, W. C. Miller, A. Lee, and M. Ahmadi, 1999, A Hybrid DBNS Processor for DSP Computation, Proc. Int. IEEE Symp. Circuits and Systems, Orlando, https://www.researchgate.net/publication/221381797_A_hybrid_DBNS_processor_for_DSP_computation
- Ewe, Chun Te, 2009, A new number representation for hardware implementation of DSP algorithms, Ph.D. thesis, Imperial College London, https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.501468 (Dual fixed-point number system; has some LNS content.)
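To make the MDLNS/MBNS idea concrete: with the base pair (2, 3), a value is approximated as 2^a * 3^b for integer exponents (a, b), and multiplication reduces to component-wise addition of exponent pairs. The brute-force encoder below is a toy sketch with an arbitrary search range, not an algorithm from the papers above:

```python
# Toy sketch of a double-base (two-dimensional logarithmic) representation
# with base pair (2, 3): multiplication is component-wise exponent addition.

def encode_dbns(x, rng=16):
    """Find integer exponents (a, b) minimizing |x - 2^a * 3^b| by brute force."""
    best = None
    for a in range(-rng, rng + 1):
        for b in range(-rng, rng + 1):
            err = abs(x - (2.0 ** a) * (3.0 ** b))
            if best is None or err < best[0]:
                best = (err, a, b)
    return best[1], best[2]

def dbns_mul(p, q):
    """Multiply two encoded values by adding their exponent pairs."""
    return (p[0] + q[0], p[1] + q[1])

def decode_dbns(p):
    """Map an exponent pair (a, b) back to the real value 2^a * 3^b."""
    return (2.0 ** p[0]) * (3.0 ** p[1])

p, q = encode_dbns(12.0), encode_dbns(18.0)   # 12 = 2^2 * 3, 18 = 2 * 3^2
print(decode_dbns(dbns_mul(p, q)))            # 216.0 = 12 * 18
```

In practice the hard operations are addition, subtraction, and conversion (see the Muscedere theses above); single-digit representations like this one are also usually extended to sums of several such terms for accuracy.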
Some Other Weird Non-Multiplication Alternatives
The use of logarithms is not the only way that researchers have considered to get rid of all those multiplication computations. The main attempts involve either addition, bitwise shifting, or both, but there are obscure attempts using max/min and even trigonometric functions.
Here are some other non-multiplication research areas:
- Zero-multiplication models
- Addernets
- Add-as-integer networks
- Low-bit quantization (binary, ternary or 2-bit quant)
- Max-Plus Networks
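As a concrete example of the bitwise-shifting approach in the list above, here is a hedged sketch of power-of-two (logarithmic) weight quantization: each weight is rounded to the nearest signed power of two, so multiplying an integer activation by a weight becomes a bit shift plus a sign flip. The function names are illustrative, and this shows only the general idea, not any specific paper's scheme:

```python
import math

# Power-of-two (bitshift) weight quantization sketch: replace each real weight
# w with sign * 2^exponent, then "multiply" integer activations with shifts.

def quantize_pow2(w):
    """Return (sign, exponent) with w approximated as sign * 2^exponent."""
    if w == 0.0:
        return 0, 0
    sign = 1 if w > 0 else -1
    exponent = round(math.log2(abs(w)))   # nearest power of two
    return sign, exponent

def shift_mul(x, sign, exponent):
    """Multiply integer x by sign * 2^exponent using shifts only."""
    if sign == 0:
        return 0
    y = x << exponent if exponent >= 0 else x >> -exponent
    return sign * y

sign, exp = quantize_pow2(3.7)      # nearest power of two to 3.7 is 2^2 = 4
print(sign, exp)                    # 1 2
print(shift_mul(10, sign, exp))     # 40  (approximates 10 * 3.7 = 37)
```

The accuracy cost comes from the coarse rounding of weights to powers of two; variations in the literature use multiple shift terms or arbitrary log bases to reduce that error.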
More AI Research
Read more about:
- Advanced AI Mathematics
- Zero-Multiplication Models
- Matrix Algebra
- Approximate Computing
- Inference Optimizations