Aussie AI

52. Logarithmic Models

  • Book Excerpt from "Generative AI in C++"
  • by David Spuler, Ph.D.

“You’re gonna work harder than you’ve ever worked before.”

Stand and Deliver, 1988.

 

 

Logarithms use high-school mathematics to change multiplications into additions. And this is an interesting idea, given all the shade that's been thrown on the expensive multiplication operators in neural networks, even with all this hardware acceleration going on.

A few people have thought of this already, and the amount of literature is beyond extensive. There was much theory in the 1970s and 1980s, and some real hardware implementations in the 1990s and 2000s. There's also been a lot of more recent research on this area in the 2010s and 2020s.

Logarithmic Number System

The way that logarithmic models work in the log-domain is to use the Logarithmic Number System (LNS). The LNS is a numeric representation that uses logarithms, but it is not simply standard mathematical logarithmic arithmetic. It has been considered for use with neural networks as far back as 1997. In the LNS, multiplication changes to a fast additive operation, but addition itself becomes expensive. Thus, computation of vector dot products or multiply-add operations in machine learning is problematic, and various theoretical attempts to overcome the difficulties with addition operators have been researched in the literature.

LNS is not the only unusual theoretical number system available. In addition to the simple floating-point and fixed-point representations, LNS should be compared to other complicated number systems considered for machine learning, including the Residue Number System (RNS), Posit number system, and Dyadic numbers (see Chapter 55 on advanced mathematics).

What is a Logarithmic Model?

The basic idea is that we change everything to the “log-domain” rather than normal numbers. Instead of a probability X, we store log-of-X and use that in every calculation. If we can do faster arithmetic through the whole model using log-X instead of X, then we can un-log them at the end, back into the normal number domain, called the “linear domain.” Conversion from log-domain back to linear domain is just the expf exponential function, just as the initial conversion from linear to log domain is just the logf function.
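
For concreteness, here is a minimal C++ sketch of the two conversions, assuming every value is strictly positive (zero and negative values need special handling, as discussed later in this chapter). The function names are illustrative only:

    #include <cmath>
    #include <vector>

    // Convert positive linear-domain values into the log-domain.
    std::vector<float> to_log_domain(const std::vector<float>& v) {
        std::vector<float> out;
        out.reserve(v.size());
        for (float x : v) out.push_back(logf(x));   // linear domain -> log domain
        return out;
    }

    // Convert log-domain values back into the linear domain.
    std::vector<float> to_linear_domain(const std::vector<float>& v) {
        std::vector<float> out;
        out.reserve(v.size());
        for (float x : v) out.push_back(expf(x));   // log domain -> linear domain
        return out;
    }

In a real engine these conversions would be done once: at model-load time for the weights, and at the output layer for the results.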

The basic mathematical reason why this idea of staying in the log-domain might work well is that logarithms have this property:

        log (x * y) = log(x) + log(y)

So, if we are doing a vector dot product, and we have log-X and log-Y available (already stored in our log-domain numbers), then we can “multiply” the two numbers together in the log-domain with just addition. Adding log-X and log-Y gives us “log-X*Y” in the log-domain. Our expensive multiplication has become a cheap addition.

Unfortunately, the same is not true for addition of log-X and log-Y in the log-domain, since:

        log (x + y) != log(x) + log(y)

Instead, the addition operation in the log domain is a little slower:

        log (x + y) = log(exp(log(x)) + exp(log(y)))

Summing exponentials is not super-fast. This is the issue that creates problems for the logarithmic model idea. To do a vector dot product computation we first multiply, but then we have to add. Ironically, addition becomes the bottleneck problem in the log-domain.
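
To make the bottleneck concrete, here is a hedged C++ sketch of a dot product where both input vectors are already stored in the log-domain (all values assumed positive). The multiplications become cheap additions, but the accumulation drags us back into the linear domain via expensive exponentials:

    #include <cmath>
    #include <cstddef>

    // Dot product of two log-domain vectors; returns the result in the linear domain.
    // Each "multiply" is an add, but every product must be exponentiated to accumulate it.
    float log_domain_dot_product(const float* logx, const float* logy, size_t n) {
        float sum = 0.0f;                        // accumulator lives in the linear domain
        for (size_t i = 0; i < n; i++) {
            float logprod = logx[i] + logy[i];   // log(x*y) = log(x) + log(y)
            sum += expf(logprod);                // the bottleneck: one expf per element
        }
        return sum;                              // or logf(sum) to stay in the log-domain
    }

This hybrid “log-multiply, linear-accumulate” structure is essentially what the Lognet paper in the reference list below is noted as doing; a pure end-to-end LNS engine instead needs a fast log-domain addition, which is examined later in this chapter.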

End-to-End Logarithmic Models

A pure logarithmic model is one that maintains its calculations using the Logarithmic Number System. Alsuhli et al. (2023) refers to this approach as an “end-to-end” LNS model, which means performing all calculations in the “log-domain” (i.e. working on logarithms of values, rather than the original values).

The idea is basically to change every multiplication by a weight into an addition, and any division into a subtraction. Instead of weights, the logarithm of a weight is stored and used throughout the layers. A full implementation of this end-to-end idea requires not just arithmetic changes, but also changes to the various Transformer components such as normalization, Softmax, and so on.

Note that there are several other ways to use logarithms in AI engines and the LNS is not the same as:

  • Logarithmic bitshift quantization
  • Approximate multiplication arithmetic with logarithms
  • Advanced number systems: Dyadic numbers, multi-base numbers, etc.

LNS models are not an approximation. The idea of logarithmic numbers is exact computation, not approximation. The calculations occur in the “log-domain” but are intended to represent the full original calculations in the original linear domain. The aim is to convert to logarithms at the start and then convert back to the original numbers at the end, with the same results, but faster. In practice, the precision may be somewhat lower because the log-domain is much more contracted than the linear domain, so some low-order fractional digits may be lost. Hence, the method may be somewhat approximate in that sense, although its goal is exactness.

Intermediate computations, such as embeddings or probabilities, should also be stored as logarithmic values, so that both sides of a MatMul are logarithmic, allowing addition to be used instead of arithmetic multiplication operations. This requires adjustments to other Transformer architectural components, such as normalization and Softmax.

Theoretically, it should be workable once everything is changed to log-domain. However, practical problems arise because MatMul and vector dot product also require addition operations (after the multiplications), and LNS addition is slow because log-domain addition isn't normal addition, and cannot be easily hardware-accelerated.

Logarithmic weight arithmetic differs from normal weight multiplication. For weights greater than 1, the logarithm is positive and an addition occurs; for positive fractional weights between 0 and 1, which are effectively a division, the logarithm is negative and a subtraction is used (or, equivalently, the addition of a negative value). If the weight is exactly 1, the logarithm is exactly 0, and adding 0 is as harmless as multiplying by 1. The logarithm itself could potentially be represented using either integers or floating-point numbers.
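
As a small C++ sketch of that weight arithmetic (function names illustrative; positive weights and activations assumed), the weight's logarithm is precomputed once, and each “multiplication” at inference time is a single addition whose sign effect matches the cases above:

    #include <cmath>
    #include <cassert>

    // Precompute the log-domain weight once (e.g., at model load time).
    inline float log_weight(float w) {
        assert(w > 0.0f);   // zero/negative weights need the separate handling described below
        return logf(w);     // positive if w > 1, negative if 0 < w < 1, exactly 0 if w == 1
    }

    // Log-domain "multiply" of an activation by a weight: just an addition.
    inline float log_domain_mul(float log_activation, float log_w) {
        return log_activation + log_w;
    }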

Literature review. Research papers on end-to-end LNS models:

  1. D. Miyashita, E. H. Lee, and B. Murmann, 2016, Convolutional neural networks using logarithmic data representation, arXiv preprint arXiv:1603.01025, 2016. https://arxiv.org/abs/1603.01025 (A major paper on using log-domain weights and activations, using addition of log-domain values instead of multiplication, which also covers the difficulties with accumulation.)
  2. G. Alsuhli, V. Sakellariou, H. Saleh, M. Al-Qutayri, 2023, Number Systems for Deep Neural Network Architectures: A Survey, https://arxiv.org/abs/2307.05035 (Extensive survey paper with a deep dive into the theory of LNS and other systems such as Residue Number System and Posit numbers, with application to neural networks. Also covers LNS usage with activation functions and Softmax.)
  3. Saeedeh Jahanshahi, Amir Sabbagh Molahosseini & Azadeh Alsadat Emrani Zarandi, 2023, uLog: a software-based approximate logarithmic number system for computations on SIMD processors, Journal of Supercomputing 79, pages 1750–1783 (2023), https://link.springer.com/article/10.1007/s11227-022-04713-y (Paper licensed under CC-BY-4.0, unchanged: http://creativecommons.org/licenses/by/4.0/)
  4. A. Sanyal, P. A. Beerel, and K. M. Chugg, 2020, Neural network training with approximate logarithmic computations, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 3122–3126. https://arxiv.org/abs/1910.09876 (End-to-end LNS model for both training and inference. Converts “leaky-ReLU” activation function and Softmax to log-domain.)
  5. J Zhao, S Dai, R Venkatesan, B Zimmer, 2022, LNS-Madam: Low-precision training in logarithmic number system using multiplicative weight update, IEEE Transactions on Computers, Vol. 71, No. 12, Dec 2022, https://ieeexplore.ieee.org/abstract/document/9900267/, PDF: https://ieeexplore.ieee.org/iel7/12/4358213/09900267.pdf (LNS in training of models. Uses different logarithm bases, including fractional powers of two, and LNS addition via table lookups.)
  6. E. H. Lee, D. Miyashita, E. Chai, B. Murmann, and S. S. Wong, 2017, Lognet: Energy-efficient neural networks using logarithmic computation, in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2017, pp. 5900–5904. https://ieeexplore.ieee.org/document/7953288 (Uses LNS multiplication in the log-domain, but still does accumulate/addition in the linear-domain.)
  7. Maxime Christ, Florent de Dinechin, Frédéric Pétrot, 2022, Low-precision logarithmic arithmetic for neural network accelerators, 33rd IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP 2022), IEEE, Jul 2022, Gothenburg, Sweden, DOI: 10.1109/ASAP54787.2022.00021, hal-03684585, https://ieeexplore.ieee.org/abstract/document/9912091/, PDF: https://inria.hal.science/hal-03684585/document (Use of LNS in model inference, with coverage of dropping the sign bit and handling of zeros.)
  8. J. Johnson, 2018, Rethinking floating-point for deep learning, arXiv preprint arXiv:1811.01721, 2018, https://arxiv.org/abs/1811.01721 (Uses an end-to-end LNS version called “exact log-linear multiply-add (ELMA)” which is a “hybrid log multiply/linear add” method. Uses a Kulisch accumulator for addition.)

For more research papers on end-to-end LNS models, see https://www.aussieai.com/research/logarithmic#end2end.

Obstacles to Stardom

Several problems need to be overcome to use end-to-end LNS for models, including:

  • LNS addition is expensive
  • Hardware acceleration
  • Zero numbers
  • Negative numbers
  • Floating-point addition
  • LNS Transformer components

Addition problems. Addition and subtraction are slow and problematic in LNS-based systems, so must be approximated or accelerated in various ways. It seems ironic to need to accelerate addition, since the whole point of the use of LNS is to accelerate multiplication by changing it into addition! But it's two different types of addition: the original linear-domain multiplication changes to normal fast addition, but then the original addition needs to change to log-domain addition, which is hard.

Hardware acceleration is problematic. In the linear domain, the simple vector dot product can be accelerated via “fused multiply-add” (FMA) vectorized operations. The status of LNS vectorization is less complete. Obviously, all CPUs and GPUs have accelerated vectorized addition, and there are several hardware accelerations of LNS addition in the research literature. But what we really need is a fused version of normal addition (the log-domain multiply) followed by LNS addition (the log-domain accumulate), which would be the log-domain equivalent of FMA.

Zero problems. Zero weights must be handled separately, since the logarithm of zero is infinite. This requires a test for zero as part of the logic, or an algorithmic method to avoid zero values (e.g. using an extra bit flag to represent zero-ness). Alternatively, a hardware version of LNS would need to handle a zero reasonably.

Negative number problems. Negative numbers are also problematic in the LNS, and models usually have both positive and negative weights. Since logarithms cannot be used on a negative number, the logarithm of the absolute value of the weight must be used, with an alternative method (e.g. sign bit) used to handle negative weights differently, so that the engine knows to subtract the weight's logarithm, rather than add in the LNS arithmetic. Alternatively, weights might be scaled so they are all positive, to avoid the log-of-negatives problem.
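
A common way to cover both the zero and negative cases is a sign/logarithm representation: store the logarithm of the absolute value, plus a sign bit and an explicit zero flag. Here is a hedged C++ sketch of that idea (the struct and function names are illustrative, not from any particular paper):

    #include <cmath>

    // Sign/logarithm representation: log of the absolute value, plus flags.
    struct LnsValue {
        float logmag;    // log(|x|), only meaningful if not zero
        bool  negative;  // sign bit
        bool  zero;      // explicit zero flag, since log(0) is -infinity
    };

    LnsValue lns_encode(float x) {
        LnsValue v;
        v.zero = (x == 0.0f);
        v.negative = (x < 0.0f);
        v.logmag = v.zero ? 0.0f : logf(fabsf(x));
        return v;
    }

    // LNS multiplication: add the log-magnitudes, XOR the signs, propagate zero.
    LnsValue lns_multiply(const LnsValue& a, const LnsValue& b) {
        LnsValue r;
        r.zero = a.zero || b.zero;              // anything times zero is zero
        r.negative = (a.negative != b.negative);
        r.logmag = r.zero ? 0.0f : (a.logmag + b.logmag);
        return r;
    }

    float lns_decode(const LnsValue& v) {
        if (v.zero) return 0.0f;
        float mag = expf(v.logmag);
        return v.negative ? -mag : mag;
    }

Hardware implementations typically pack the sign and zero flags into spare bits of the representation rather than carrying a full struct around.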

Does it work? Logarithmic numbers haven't become widely used in AI models, possibly because vector dot products and matrix multiplications require not just multiplication, but also addition of the products, and addition is difficult in LNS (usually approximate). Both training and inference need to be performed in LNS. Conversion back-and-forth between LNS and floating-point weights and probabilities also adds some overhead (in both training and inference), and possibly some extra inaccuracy for inference. These issues might limit the model's accuracy compared to non-logarithmic floating-point.

Floating-point addition. Furthermore, an LNS model stores the logarithms of weights as floating-point numbers, and thus requires floating-point addition rather than integer addition. The gain from changing floating-point multiplication to floating-point addition is nowhere near as large as changing it to integer arithmetic operations (e.g. as used in logarithmic quantization or integer-only quantization methods). Indeed, paradoxically, there are even circumstances where floating-point addition is worse than floating-point multiplication, because addition requires sequential non-parallelizable sub-operations, but this depends on the hardware acceleration and the exact representation of floating-point numbers used.

No memory benefit. Another concern is that research papers report that AI model inference is usually memory-bound rather than CPU-bound, with the GPU waiting to receive data because reading it from RAM is slower. In memory-bound cases, the conversion of arithmetic from multiplication to addition does not address the main bottleneck, and the LNS may have reduced benefit. The LNS does not allow the use of smaller data sizes, since it stores logarithms of weights and internal computations as floating-point, whereas quantization can use integers or smaller bit widths.

Research-only. The use of end-to-end LNS models has not gone mainstream. Some of the problematic issues with additions involving weights and activation functions, and in relation to training with LNS weights, are described in Alsuhli et al. (2023). These concerns limit the use of LNS numbers in an end-to-end method, and suggest the use of alternatives such as approximate logarithmic multiplication or logarithm-antilogarithm multiplications (Alsuhli et al., 2023). Nevertheless, there are several attempts in the literature to use LNS for model training and inference in various ways, starting with Arnold et al. (1991), using theory dating back to the 1980s.

One final thought. Here's the funny thing about doing end-to-end LNS models: an AI model is already doing logarithms, so we're trying to do logarithms-of-logarithms. Remember that the logits output from a model are in the log-domain, and Softmax has to convert them by exponentiation, so they're in the linear-domain. Maybe there's a way to back it up a level, and use LNS for the log-domain computations in the model itself? My brain shuts down and screams for ice-cream whenever I try to think about this idea.

LNS Applications

Various research has been done on using the LNS in AI/ML applications, without building an end-to-end LNS model. Some of the research papers include:

  1. M. Arnold, J. Cowles, T. Bailey, and J. Cupal, 1991, Implementing back propagation neural nets with logarithmic arithmetic, International AMSE conference on Neural Nets, San Diego, 1991.
  2. M. G. Arnold, T. A. Bailey, J. J. Cupal, and M. D. Winkel, 1997, On the cost effectiveness of logarithmic arithmetic for backpropagation training on SIMD processors, in Proceedings of International Conference on Neural Networks (ICNN’97), vol. 2. IEEE, 1997, pp. 933–936. https://ieeexplore.ieee.org/document/616150 (Possibly the earliest paper with consideration of LNS as applied to AI models.)
  3. Min Soo Kim; Alberto A. Del Barrio; Román Hermida; Nader Bagherzadeh, 2018, Low-power implementation of Mitchell’s approximate logarithmic multiplication for convolutional neural networks, in Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE, 2018, pp. 617–622. https://ieeexplore.ieee.org/document/8297391 (Use of Mitchell's approximate multiplier in CNNs.)
  4. Giuseppe C. Calafiore, Stephane Gaubert, Member, Corrado Possieri, 2020, A Universal Approximation Result for Difference of log-sum-exp Neural Networks, https://arxiv.org/abs/1905.08503 (Use of a logarithmic activation function.)
  5. Giuseppe C. Calafiore, Stephane Gaubert, Corrado Possieri, Log-sum-exp neural networks and posynomial models for convex and log-log-convex data, IEEE Transactions on Neural Networks and Learning Systems, 2019, https://arxiv.org/abs/1806.07850
  6. U. Lotric and P. Bulic, 2011, Logarithmic multiplier in hardware implementation of neural networks, in International Conference on Adaptive and Natural Computing Algorithms. Springer, April 2011, pp. 158–168. https://dl.acm.org/doi/10.5555/1997052.1997071
  7. HyunJin Kim; Min Soo Kim; Alberto A. Del Barrio; Nader Bagherzadeh, 2019, A cost-efficient iterative truncated logarithmic multiplication for convolutional neural networks, IEEE 26th Symposium on Computer Arithmetic (ARITH), https://ieeexplore.ieee.org/abstract/document/8877474 (Uses logarithmic multiplication algorithm.)
  8. Gao M, Qu G, 2018, Estimate and recompute: a novel paradigm for approximate computing on data flow graphs, IEEE Trans Comput Aided Des Integr Circuits Syst 39(2):335–345. https://doi.org/10.1109/TCAD.2018.2889662, https://ieeexplore.ieee.org/document/8588387 (Uses LNS as the representation to do approximate arithmetic.)
  9. H. Kim, M. S. Kim, A. A. Del Barrio, and N. Bagherzadeh, 2019, A cost-efficient iterative truncated logarithmic multiplication for convolutional neural networks, in 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH). IEEE, June 2019, pp. 108–111, https://ieeexplore.ieee.org/document/8877474
  10. Arnold, M.G., 2002, Reduced power consumption for MPEG decoding with LNS, Proceedings of the IEEE International Conference on Application-Specific Systems, Architectures and Processors (ASAP 2002), IEEE Computer Society Press, Los Alamitos (2002) https://ieeexplore.ieee.org/document/1030705 (MPEG signal processing and LNS.)
  11. E. E. Swartzlander, D. V. S. Chandra, H. T. Nagle and S. A. Starks, 1983, Sign/logarithm architecture for FFT implementation, IEEE Trans. Comput., vol. C-32, June 1983. https://ieeexplore.ieee.org/document/1676274 (FFT applications of LNS.)
  12. M. S. Ansari, V. Mrazek, B. F. Cockburn, L. Sekanina, Z. Vasicek, and J. Han, 2019, Improving the accuracy and hardware efficiency of neural networks using approximate multipliers, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 2, pp. 317–328, Oct 2019, https://ieeexplore.ieee.org/document/8863138
  13. Basetas C., Kouretas I., Paliouras V., 2007, Low-power digital filtering based on the logarithmic number system, International Workshop on Power and Timing Modeling, Optimization and Simulation. Springer, pp 546–555. https://doi.org/10.1007/978-3-540-74442-9_53, https://link.springer.com/chapter/10.1007/978-3-540-74442-9_53 (LNS in signal processing algorithms.)
  14. Biyanu Zerom, Mohammed Tolba, Huruy Tesfai, Hani Saleh, Mahmoud Al-Qutayri, Thanos Stouraitis, Baker Mohammad, Ghada Alsuhli, 2022, Approximate Logarithmic Multiplier For Convolutional Neural Network Inference With Computational Reuse, 2022 29th IEEE International Conference on Electronics, Circuits and Systems (ICECS), 24-26 October 2022, https://doi.org/10.1109/ICECS202256217.2022.9970861, https://ieeexplore.ieee.org/abstract/document/9970861/
  15. M. S. Ansari, B. F. Cockburn, and J. Han, 2020, An improved logarithmic multiplier for energy-efficient neural computing, IEEE Transactions on Computers, vol. 70, no. 4, pp. 614–625, May 2020. https://ieeexplore.ieee.org/document/9086744
  16. Tso-Bing Juang; Cong-Yi Lin; Guan-Zhong Lin, 2018, Area-delay product efficient design for convolutional neural network circuits using logarithmic number systems, in International SoC Design Conference (ISOCC). IEEE, 2018, pp. 170–171, https://ieeexplore.ieee.org/abstract/document/8649961
  17. M Arnold, 2023, Machine Learning using Logarithmic Arithmetic with Preconditioned Input to Mitchell's Method, 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), https://ieeexplore.ieee.org/document/10168554
  18. J. Bernstein, J. Zhao, M. Meister, M. Liu, A. Anandkumar, and Y. Yue, 2020, Learning compositional functions via multiplicative weight updates, in Proc. Adv. Neural Inf. Process. Syst. 33: Annu. Conf. Neural Inf. Process. Syst., 2020. https://proceedings.neurips.cc/paper/2020/hash/9a32ef65c42085537062753ec435750f-Abstract.html
  19. Mark Arnold; Ed Chester; Corey Johnson, 2020, Training neural nets using only an approximate tableless LNS ALU, 2020 IEEE 31st International Conference on Application-specific Systems, Architectures and Processors (ASAP), DOI: 10.1109/ASAP49362.2020.00020, https://ieeexplore.ieee.org/document/9153225
  20. J Cai, 2022, Log-or-Trig: Towards efficient learning in deep neural networks, Thesis, Graduate School of Engineering, Tokyo University of Agriculture and Technology, https://tuat.repo.nii.ac.jp/?action=repository_action_common_download&item_id=1994&item_no=1&attribute_id=16&file_no=3, PDF: https://tuat.repo.nii.ac.jp/index.php?action=pages_view_main&active_action=repository_action_common_download&item_id=1994&item_no=1&attribute_id=16&file_no=1&page_id=13&block_id=39
  21. Yu-Hsiang Huang; Gen-Wei Zhang; Shao-I Chu; Bing-Hong Liu; Chih-Yuan Lien; Su-Wen Huang, 2023, Design of Logarithmic Number System for LSTM, 2023 9th International Conference on Applied System Innovation (ICASI) https://ieeexplore.ieee.org/abstract/document/10179504/
  22. TY Cheng, Y Masuda, J Chen, J Yu, M Hashimoto, 2020, Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training, Integration, Volume 74, September 2020, Pages 19-31, https://www.sciencedirect.com/science/article/abs/pii/S0167926019305826 (Has some theory of log-domain operations for LNS; uses bitwidth scaling and logarithmic approximate multiplication.)
  23. TaiYu Cheng, Jaehoon Yu, M. Hashimoto, July 2019, Minimizing power for neural network training with logarithm-approximate floating-point multiplier, 2019 29th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS), https://www.semanticscholar.org/paper/Minimizing-Power-for-Neural-Network-Training-with-Cheng-Yu/ab190dd47e4c16949276f98052847d1314d76543
  24. Mingze Gao; Gang Qu, 2017, Energy efficient runtime approximate computing on data flow graphs, 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), Nov. 2017, pp. 444–449, https://ieeexplore.ieee.org/document/8203811
  25. T. Cheng, et al., July 2019, Minimizing power for neural network training with logarithm-approximate floating-point multiplier, 2019 29th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS), DOI:10.1109/PATMOS.2019.8862162, https://www.researchgate.net/publication/336439575_Minimizing_Power_for_Neural_Network_Training_with_Logarithm-Approximate_Floating-Point_Multiplier
  26. J Xu, Y Huan, LR Zheng, Z Zou, 2018, A low-power arithmetic element for multi-base logarithmic computation on deep neural networks, 2018 31st IEEE International System-on-Chip Conference (SOCC), https://ieeexplore.ieee.org/document/8618560
  27. MA Qureshi, A Munir, 2020, NeuroMAX: a high throughput, multi-threaded, log-based accelerator for convolutional neural networks, 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD), https://ieeexplore.ieee.org/document/9256558, PDF: https://dl.acm.org/doi/pdf/10.1145/3400302.3415638
  28. Min Soo Kim, 2020, Cost-Efficient Approximate Log Multipliers for Convolutional Neural Networks, Ph.D. thesis, Electrical and Computer Engineering, University of California, Irvine, https://search.proquest.com/openview/46b6f28a9f1e4013a01f128c36753d83/1?pq-origsite=gscholar&cbl=18750&diss=y, PDF: https://escholarship.org/content/qt3w4980x3/qt3w4980x3.pdf (Examines multiple approximate log multipliers and their effect on model accuracy.)
  29. G. Anusha, K. C. Sekhar, B. S. Sridevi, Nukella Venkatesh, 2023, The Journey of Logarithm Multiplier: Approach, Development and Future Scope, Recent Developments in Electronics and Communication Systems, https://www.researchgate.net/publication/367067187_The_Journey_of_Logarithm_Multiplier_Approach_Development_and_Future_Scope

For more research papers on applications of LNS models, see https://www.aussieai.com/research/logarithmic#applications.

LNS Addition

The biggest area of research is speeding up LNS addition. Log-domain addition and subtraction are problematic and require Gaussian logarithm functions to compute. Various research papers cover different approximations and acceleration methods, such as Look-Up Tables (LUTs), Taylor series, interpolation, and co-transformations.

Other related research. There are several other areas of AI theory that are relevant to LNS addition. Because LNS addition involves computing exponentials of log-domain values (i.e. antilogarithms), adding them, and then re-converting to the log-domain, it is a “log of a sum of exponentials” calculation, which is the same calculation that underlies “log-sum-exp networks”. Also, the “sum of exponentials” is the same calculation required for part of Softmax (the denominator), so both hardware acceleration of Softmax (see Chapter 25) and the research theory of Softmax approximation are relevant. Finally, since the maximum function is one way to approximate LNS addition, the theory of “max-plus networks” based on “tropical algebra” is also relevant to optimizing LNS addition.
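
As a hedged C++ sketch of the underlying arithmetic, exact log-domain addition of two positive values uses the Gaussian-logarithm-style identity log(x + y) = a + log(1 + exp(b - a)), where a is the larger and b the smaller of log(x) and log(y); dropping the correction term gives the crude max-plus approximation mentioned above:

    #include <cmath>
    #include <algorithm>

    // Exact log-domain addition: given log(x) and log(y), return log(x + y).
    // Numerically stable form: a + log(1 + exp(b - a)) with a >= b.
    float lns_add_exact(float logx, float logy) {
        float a = std::max(logx, logy);
        float b = std::min(logx, logy);
        return a + log1pf(expf(b - a));   // the Gaussian-logarithm correction term
    }

    // Max-plus approximation: log(x + y) ~= max(log x, log y).
    // Reasonable when one operand dominates; worst-case error is log(2) when they are equal.
    float lns_add_max_approx(float logx, float logy) {
        return std::max(logx, logy);
    }

Practical LNS implementations usually replace the expf/log1pf pair with a small lookup table or an interpolation over the correction function, which is what many of the papers below optimize.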

LNS addition research papers. A lot of work has been done on optimizing LNS addition. Research papers include:

  1. Wikipedia, 2023, Gaussian logarithm, https://en.wikipedia.org/wiki/Gaussian_logarithm
  2. Kouretas I, Basetas C, Paliouras V, 2012, Low-power logarithmic number system addition/subtraction and their impact on digital filters, IEEE Trans Comput 62(11):2196–2209. https://doi.org/10.1109/TC.2012.111, https://ieeexplore.ieee.org/document/6212439 (Coverage of LNS theory, including improvements to addition/subtraction, in relation to digital signal processing.)
  3. I. Orginos, V. Paliouras, and T. Stouraitis, 1995, A novel algorithm for multi-operand Logarithmic Number System addition and subtraction using polynomial approximation, in Proceedings of the 1995 IEEE International Symposium on Circuits and Systems (ISCAS’95), pp. III.1992–III.1995, 1995. https://ieeexplore.ieee.org/document/523812
  4. S. A. Alam, J. Garland, and D. Gregg, 2021, Low-precision logarithmic number systems: Beyond base-2, ACM Transactions on Architecture and Code Optimization (TACO), vol. 18, no. 4, pp. 1–25, 2021. https://arxiv.org/abs/2102.06681 (Covers LNS arithmetic in different bases, with coverage of LNS addition improvements.)
  5. A. Sanyal, P. A. Beerel, and K. M. Chugg, 2020, Neural network training with approximate logarithmic computations, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 3122–3126. https://arxiv.org/abs/1910.09876 (End-to-end LNS paper that also covers addition approximations.)
  6. M. Arnold, J. Cowles, T. Bailey, and J. Cupal, 1991, Implementing back propagation neural nets with logarithmic arithmetic, International AMSE conference on Neural Nets, San Diego, 1991. (Use of LUTs for LNS addition.)
  7. M. G. Arnold, T. A. Bailey, J. J. Cupal, and M. D. Winkel, 1997, On the cost effectiveness of logarithmic arithmetic for backpropagation training on SIMD processors, in Proceedings of International Conference on Neural Networks (ICNN’97), vol. 2. IEEE, 1997, pp. 933–936. https://ieeexplore.ieee.org/document/616150 (Uses LUTs for LNS addition.)
  8. P. D. Vouzis, S. Collange and M. G. Arnold, 2008, A Novel Cotransformation for LNS Subtraction, J. Signal Process. Syst., vol. 58, no. 1, pp. 29-40, Oct. 2008. https://doi.org/10.1007/s11265-008-0282-7, https://link.springer.com/article/10.1007/s11265-008-0282-7 (Improving LNS subtraction algorithms.)
  9. D. M. Lewis, 1990, An architecture for addition and subtraction of long word length numbers in the logarithmic number system, IEEE Trans. Comput., vol. 39, pp. 1325-1336, Nov. 1990. https://ieeexplore.ieee.org/document/61042
  10. H. Henkel, 1989, Improved Addition for the Logarithmic Number System, IEEE Trans. Acoustics Speech and Signal Processing, no. 2, pp. 301-303, Feb. 1989. https://ieeexplore.ieee.org/document/21694 (Improved lookup tables for LNS addition.)
  11. E. E. Swartzlander and A. G. Alexopoulos, 1975, The sign/logarithm number system, IEEE Transactions on Computers, vol. C-24, no. 12, pp. 1238-1242, Dec. 1975. https://ieeexplore.ieee.org/document/1672765 (Handles negative numbers in LNS with sign bits.)
  12. I. Kouretas, C. Basetas and V. Paliouras, 2008, Low-Power Logarithmic Number System Addition/Subtraction and their Impact on Digital Filters, Proc. IEEE Int'l Symp. Circuits and Systems (ISCAS '08), pp. 692-695, 2008. https://ieeexplore.ieee.org/document/4541512 (Improvements for slow addition/subtraction of LNS numbers.)
  13. M. Arnold and S. Collange, 2011, A Real/Complex Logarithmic Number System ALU, IEEE Trans. Computers, vol. 60, no. 2, pp. 202-213, Feb. 2011. https://ieeexplore.ieee.org/document/5492676 (Hardware FPGAs and improvements to LNS addition.)
  14. R.C. Ismail and J.N. Coleman, 2011, ROM-less LNS, Proc. IEEE Symp. Computer Arithmetic, pp. 43-51, 2011. https://ieeexplore.ieee.org/document/5992107 (Improvements to LNS addition and lookup tables.)
  15. R. Muscedere, V. Dimitrov, G. Jullien and W. Miller, 2005, Efficient Techniques for Binary-to-Multidigit Multidimensional Logarithmic Number System Conversion using Range-Addressable Look-Up Tables, IEEE Trans. Computers, vol. 54, no. 3, pp. 257-271, Mar. 2005. https://ieeexplore.ieee.org/document/1388191 (Presents multidimensional logarithmic number system (MDLNS) and improvements to LNS addition lookup tables.)
  16. D Primeaux, 2005, Programming with Gaussian logarithms to compute the approximate addition and subtraction of very small (or very large) positive numbers, Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing and First ACIS International Workshop on Self-Assembling Wireless Network, https://ieeexplore.ieee.org/document/1434878/
  17. CH Cotter, 1971, Gaussian logarithms and navigation, The Journal of Navigation, cambridge.org, https://www.cambridge.org/core/journals/journal-of-navigation/article/gaussian-logarithms-and-navigation/411E21946EDD70EE4208912BE743C5FB PDF: http://www.siranah.de/sources/Gaussian_Logarithms_and_Navigation.pdf
  18. Paliouras, V., and Stouraitis, T., 1996, A novel algorithm for accurate logarithmic number system subtraction, Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS 96), Atlanta, USA, pp. 268-271. https://ieeexplore.ieee.org/document/542021
  19. MG Arnold, 2004, LPVIP: A low-power ROM-less ALU for low-precision LNS, International Workshop on Power and Timing Modeling, https://link.springer.com/chapter/10.1007/978-3-540-30205-6_69, PDF: https://www.researchgate.net/profile/Mark-Arnold-13/publication/220799326_LPVIP_A_low-power_ROM-less_ALU_for_low-precision_LNS/links/54172bc30cf2218008bed8c8/LPVIP-A-low-power-ROM-less-ALU-for-low-precision-LNS.pdf
  20. MG Arnold, J Cowles, T Bailey, 1988, Improved accuracy for logarithmic addition in DSP applications, ICASSP-88, https://ieeexplore.ieee.org/document/196947, PDF: https://www.computer.org/csdl/proceedings-article/icassp/1988/00196947/12OmNvlPkAA
  21. J. N. Coleman, R.C. Ismail, 2015, LNS with Co-Transformation Competes with Floating-Point, January 2015, IEEE Transactions on Computers 65(1):1-1, DOI:10.1109/TC.2015.2409059, https://ieeexplore.ieee.org/document/7061396, PDF: https://www.researchgate.net/publication/273914447_LNS_with_Co-Transformation_Competes_with_Floating-Point
  22. Siti Zarina Md Naziri; Rizalafande Che Ismail; Ali Yeon Md Shakaff, December 2014, The Design Revolution of Logarithmic Number System Architecture, 2014 2nd International Conference on Electrical, Electronic and Systems Engineering (ICEESE 2014), Berjaya Times Square, Kuala Lumpur, Malaysia, DOI: 10.13140/RG.2.1.3494.4166, https://ieeexplore.ieee.org/document/7154603 (Good survey of LNS addition methods up to 2014.)
  23. Siti Zarina Md Naziri, Rizalafande Che Ismail, Ali Yeon Md Shakaff, 2016, Implementation of LNS addition and subtraction function with co-transformation in positive and negative region: A comparative analysis, Aug 2016, https://www.researchgate.net/publication/312159669_Implementation_of_LNS_addition_and_subtraction_function_with_co-transformation_in_positive_and_negative_region_A_comparative_analysis, PDF: https://www.researchgate.net/publication/287533531_Arithmetic_Addition_and_Subtraction_Function_of_Logarithmic_Number_System_in_Positive_Region_An_Investigation/link/56777a6208ae125516ec1034/download
  24. Siti Zarina Md Naziri; Rizalafande Che Ismail; Ali Yeon Md Shakaff, Dec 2015, Arithmetic Addition and Subtraction Function of Logarithmic Number System in Positive Region: An Investigation, 2015 IEEE Student Conference on Research and Development (SCOReD), https://ieeexplore.ieee.org/document/7449376
  25. G Tsiaras, V Paliouras, 2017, Multi-operand logarithmic addition/subtraction based on Fractional Normalization, 2017 6th International Conference on Modern Circuits and Systems Technologies (MOCAST), https://ieeexplore.ieee.org/abstract/document/7937686/
  26. G. Tsiaras and V. Paliouras, 2017, Logarithmic Number System addition-subtraction using Fractional Normalization, in IEEE International Symposium on Circuits and Systems (ISCAS), 2017. https://ieeexplore.ieee.org/document/8050569
  27. B Parhami, 2020, Computing with logarithmic number system arithmetic: Implementation methods and performance benefits, Computers & Electrical Engineering, Elsevier, PDF: https://web.ece.ucsb.edu/~parhami/pubs_folder/parh20-cee-comp-w-lns-arithmetic-final.pdf (Overview of LNS including LNS addition and hardware implementations.)
  28. B. Parhami, “Computing with Logarithmic Number System Arithmetic (Extended Online Version with More Reference Citations),” August 2020. https://web.ece.ucsb.edu/~parhami/pubs_folder/parh20-caee-comput-w-lns-arith.pdf
  29. R.C Ismail; R. Hussin; S.A.Z Murad, 2012, Interpolator algorithms for approximating the LNS addition and subtraction: Design and analysis, 2012 IEEE International Conference on Circuits and Systems (ICCAS), https://ieeexplore.ieee.org/document/6408336, PDF: https://www.researchgate.net/profile/Sohiful-Anuar-Zainol-Murad/publication/259921136_Interpolator_Algorithms_for_Approximating_the_LNS_Addition_and_Subtraction_Design_and_Analysis/links/02e7e52e8a7bef3d52000000/Interpolator-Algorithms-for-Approximating-the-LNS-Addition-and-Subtraction-Design-and-Analysis.pdf
  30. C Chen, 2009, Error analysis of LNS addition/subtraction with direct-computation implementation, IET Computers & Digital Techniques, Volume 3, Issue 4, https://digital-library.theiet.org/content/journals/10.1049/iet-cdt.2008.0098
  31. Chichyang Chen; Rui-Lin Chen; Chih-Huan Yang, 2000, Pipelined computation of very large word-length LNS addition/subtraction with polynomial hardware cost, IEEE Transactions on Computers (Volume 49, Issue 7, July 2000), https://ieeexplore.ieee.org/document/863041
  32. Siti Zarina Md Naziri; Rizalafande Che Ismail; Ali Yeon Md Shakaff, 2016, An Analysis of Interpolation Implementation for LNS Addition and Subtraction Function in Positive Region, 2016 International Conference on Computer and Communication Engineering (ICCCE) https://ieeexplore.ieee.org/abstract/document/7808368/
  33. I Osinin, 2019, Optimization of the hardware costs of interpolation converters for calculations in the logarithmic number system, International Conference on Information Technologies, ICIT 2019: Recent Research in Control Engineering and Decision Making, pp. 91–102, https://link.springer.com/chapter/10.1007/978-3-030-12072-6_9

For more research papers on addition and subtraction issues for LNS models, see https://www.aussieai.com/research/logarithmic#addition.

LNS Hardware Acceleration

Much research has gone into accelerating LNS operations, particularly LNS addition, with hardware algorithms. Papers on the use of the LNS in hardware-accelerated implementations include:

  1. Manik Chugh; Behrooz Parhami, 2013, Logarithmic Arithmetic as an Alternative to Floating-Point: a Review, Proc. 47th Asilomar Conf. Signals, Systems, and Computers (November 2013), https://ieeexplore.ieee.org/document/6810472, PDF: https://web.ece.ucsb.edu/~parhami/pubs_folder/parh13-asilo-log-arith-as-alt-to-flp.pdf (A survey paper covering the use of LNS in custom accelerated hardware implementations.)
  2. F.J. Taylor, 1983, An Extended Precision Logarithmic Number System, IEEE Trans. Acoustics, Speech, and Signal Processing (1983), https://ieeexplore.ieee.org/document/910929
  3. Parhami B., 2020, Computing with logarithmic number system arithmetic: Implementation methods and performance benefits, Comput Electr Eng 87:106800. https://doi.org/10.1016/j.compeleceng.2020.106800, https://www.sciencedirect.com/science/article/abs/pii/S0045790620306534
  4. Gautschi M, Schaffner M, Gürkaynak FK, Benini L, 2016. 4.6 A 65nm CMOS 6.4-to-29.2 pJ/FLOP@ 0.8 V shared logarithmic floating-point unit for acceleration of nonlinear function kernels in a tightly coupled processor cluster, 2016 IEEE International Solid-State Circuits Conference (ISSCC), 2016. IEEE, pp 82–83. https://doi.org/10.1109/ISSCC.2016.7417917, https://ieeexplore.ieee.org/document/7417917
  5. Coleman JN, Softley CI, Kadlec J, Matousek R, Tichy M, Pohl Z, Hermanek A, Benschop NF, 2008, The European logarithmic microprocessor, IEEE Trans Comput 57(4):532–546. https://doi.org/10.1109/TC.2007.70791 https://ieeexplore.ieee.org/document/4358243 (A European project for LNS in hardware called the European logarithmic microprocessor or ELM.)
  6. Coleman JN, Chester E, Softley CI, Kadlec J, 2000, Arithmetic on the European logarithmic microprocessor, IEEE Trans Comput 49(7):702–715. https://doi.org/10.1109/12.863040, https://ieeexplore.ieee.org/document/863040 (More about the European project for LNS in hardware.)
  7. S. Huang, L.-G. Chen and T.-H. Chen, 1994, The chip design of a 32-b Logarithmic Number System, Proc. of ISCAS94, May 1994, https://ieeexplore.ieee.org/document/409224, PDF: http://ntur.lib.ntu.edu.tw/bitstream/246246/2007041910032469/1/00409224.pdf (Theory of a chip design for 32-bits LNS.)
  8. D. Lewis and L. Yu, 1989, Algorithm design for a 30 bit integrated logarithmic processor, Proc. of 9th Symp. on Computer Arithmetic, pp. 192-199, 1989. https://ieeexplore.ieee.org/document/72826 (30-bit LNS hardware.)
  9. T. Stouraitis and F. Taylor, 1988, Analysis of Logarithmic Number System processors, IEEE Transactions on Circuits and Systems, vol. 35, pp. 519-527, May 1988. https://ieeexplore.ieee.org/document/1779
  10. T. Stouraitis, S. Natarajan and F. Taylor, 1985, A reconfiguration systolic primitive processor for signal processing, IEEE Int. Conf. on ASSP, March 1985, https://ieeexplore.ieee.org/document/1168508
  11. Krishnendu Mukhopadhyaya, 1995, Implementation of Four Common Functions on an LNS CoProcessor, IEEE Transactions on Computers, https://ieeexplore.ieee.org/document/367997, PDF: https://www.isical.ac.in/~krishnendu/LNS-IEEE-TC.pdf
  12. Durgesh Nandan; Jitendra Kanungo; Anurag Mahajan, 2017, An efficient VLSI architecture for iterative logarithmic multiplier, 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN), February 2017, https://ieeexplore.ieee.org/document/8049986 (Uses LNS and Mitchell's approximate multiplication algorithm.)
  13. Durgesh Nandan, Jitendra Kanungo, Anurag Mahajan, 2017, An Efficient VLSI Architecture Design for Logarithmic Multiplication by Using the Improved Operand Decomposition, In: Integration, Volume 58, June 2017, Pages 134-141, https://doi.org/10.1016/j.vlsi.2017.02.003, https://www.sciencedirect.com/science/article/abs/pii/S0167926017300895 (Uses LNS and Mitchell's approximate multiplication algorithm.)
  14. Siti Zarina Md Naziri; Rizalafande Che Ismail; Ali Yeon Md Shakaff, 2014, The Design Revolution of Logarithmic Number System Architecture, 2014 2nd International Conference on Electrical, Electronics and System Engineering (ICEESE), DOI: 10.1109/ICEESE.2014.7154603, https://doi.org/10.1109/ICEESE.2014.7154603, https://ieeexplore.ieee.org/document/7154603
  15. J. N. Coleman and E. I. Chester, 1999, A 32-Bit Logarithmic Arithmetic Unit and Its Performance Compared to Floating-Point, Proc. 14th IEEE Symp. Computer Arithmetic, 1999, pp. 142-151, https://ieeexplore.ieee.org/document/762839 (32-bit arithmetic in an early European project for LNS in hardware.)
  16. F. J. Taylor, R. Gill, J. Joseph, and J. Radke, 1988, A 20 Bit Logarithmic Number System Processor, IEEE Trans. Computers, Vol. 37, pp. 190-200, 1988. https://ieeexplore.ieee.org/document/2148 (A 1988 hardware 20-bit version of logarithmic numbers.)
  17. J. N. Coleman, C. I. Softley, J. Kadlec, R. Matousek, M. Licko, Z. Pohl, and A. Hermanek, 2003, Performance of the European Logarithmic Microprocessor, Proc. SPIE Annual Meeting, 2003, pp. 607-617. https://www.semanticscholar.org/paper/Performance-of-the-European-logarithmic-Coleman-Softley/7a324cd01bd1f4a25d70dfe6875474c9b92a3d9c
  18. Haohuan Fu; Oskar Mencer; Wayne Luk, 2006, Comparing Floating-Point and Logarithmic Number Representations for Reconfigurable Acceleration, 2006 IEEE International Conference on Field Programmable Technology, https://ieeexplore.ieee.org/document/4042464 (Evaluates LNS vs floating-point for FPGAs.)
  19. J.N. Coleman; C.I. Softley; J. Kadlec; R. Matousek; M. Licko; Z. Pohl; A. Hermanek, 2001, The European Logarithmic Microprocessor - a QR RLS application, Engineering, Computer Science Conference Record of Thirty-Fifth Asilomar… 2001 https://ieeexplore.ieee.org/document/986897
  20. H. Kim; B.-G. Nam; J.-H. Sohn; J.-H. Woo; H.-J. Yoo, 2006, A 231-MHz, 2.18-mW 32-bit Logarithmic Arithmetic Unit for Fixed-Point 3-D Graphics System, IEEE J. Solid-State Circuits (Volume 41, Issue 11, November 2006) https://ieeexplore.ieee.org/document/1717660
  21. T. Stouraitis, 1989, A hybrid floating-point/logarithmic number system digital signal processor, Int. Conf. Acoust. Speech Signal Process., 1989. https://ieeexplore.ieee.org/document/266619
  22. I. Kouretas and V. Paliouras, 2018, Logarithmic number system for deep learning, in International Conference on Modern Circuits and Systems Technologies (MOCAST). IEEE, 2018, pp. 1–4, https://ieeexplore.ieee.org/abstract/document/8376572
  23. J. H. Lang, C. A. Zukowski, R. O. LaMaire, and C. H. An, 1985, Integrated-Circuit Logarithmic Units, IEEE Trans. Computers, Vol. 34, pp. 475-483, 1985. https://ieeexplore.ieee.org/document/1676588 (Hardware version of logarithmic numbers from 1985.)
  24. D. Yu and D. M. Lewis, 1991, A 30-b Integrated Logarithmic Number System Processor, IEEE J. Solid-State Circuits, Vol. 26, pp. 1433-1440, 1991. https://www.scribd.com/document/41667733/A-30-b-Integrated-Logarithmic-Number-System-Processor-91 (An early 1991 hardware version of LNS with 30-bits.)
  25. V. Paliouras, J. Karagiannis, G. Aggouras, and T. Stouraitis, 1998, A Very-Long Instruction Word Digital Signal Processor Based on the Logarithmic Number System, Proc. 5th IEEE Int’l Conf. Electronics, Circuits and Systems, Vol. 3, pp. 59-62, 1998. https://ieeexplore.ieee.org/document/813936 (A hardware version of LNS from 1998.)
  26. M. G. Arnold, 2003, A VLIW Architecture for Logarithmic Arithmetic, Proc. Euromicro Symp. Digital System Design, 2003, pp. 294-302. https://ieeexplore.ieee.org/document/1231957?arnumber=1231957 (A hardware version of LNS in 2003 using Very Long Instruction Word (VLIW).)
  27. Rizalafande Che Ismail, Sep 2012, Fast, area-efficient 32-bit LNS for computer arithmetic operations, Ph.D. Thesis, Newcastle University, https://theses.ncl.ac.uk/jspui/handle/10443/1702, PDF: https://theses.ncl.ac.uk/jspui/bitstream/10443/1702/1/Che%20Ismail%2012.pdf
  28. M.G. Arnold, T.A. Bailey, J.R. Cowles and JJ. Cupal, 1990, Redundant Logarithmic Arithmetic, IEEE Trans Computers, vol. 39, No. 8, pp. 1077-1086, 1990 https://ieeexplore.ieee.org/abstract/document/57046
  29. Joshua Yung Lih Low; Ching Chuen Jong, 2017, Range Mapping—A Fresh Approach to High Accuracy Mitchell-Based Logarithmic Conversion Circuit Design, IEEE Transactions on Circuits and Systems I: Regular Papers ( Volume 65, Issue 1, January 2018) https://ieeexplore.ieee.org/abstract/document/7968344/
  30. D.M. Lewis, 1995, 114 MFLOPS Logarithmic Number System Arithmetic Unit for DSP Applications, IEEE J. Solid-State Circuits, vol. 30, pp 1547-1553,1995 https://ieeexplore.ieee.org/document/482205
  31. D. M. Lewis, 1994, Interleaved memory function interpolators with application to an accurate LNS arithmetic unit, IEEE Trans. Computers, Vol. 43, No. 8, pp.974-982, 1994. https://ieeexplore.ieee.org/document/295859
  32. P Lee, E Costa, S McBader, 2007, LogTOTEM: A logarithmic neural processor and its implementation on an FPGA fabric, 2007 International Joint Conference on Neural Networks, https://ieeexplore.ieee.org/abstract/document/4371396/, PDF: https://www.academia.edu/download/46203834/LogTOTEM_A_Logarithmic_Neural_Processor_20160603-13176-1fohbpz.pdf
  33. Peter Lee, 2007, A VLSI implementation of a digital hybrid-LNS neuron, 2007 International Symposium on Integrated Circuits, https://ieeexplore.ieee.org/document/4441783, PDF: https://www.researchgate.net/profile/Peter-Lee-48/publication/4315642_A_VLSI_implementation_of_a_digital_hybrid-LNS_neuron/links/0c96051dd594f5a004000000/A-VLSI-implementation-of-a-digital-hybrid-LNS-neuron.pdf
  34. Pramod Kumar Meher and Thanos Stouraitis (editors), 15 September 2017. Arithmetic Circuits for DSP Applications, https://www.amazon.com/Arithmetic-Circuits-Applications-Pramod-Kumar/dp/1119206774/
  35. R.C Ismail; M.K Zakaria; S.A.Z Murad, 2013, Hybrid logarithmic number system arithmetic unit: A review, in IEEE ICCAS, Sept 2013, pp. 55–58. https://ieeexplore.ieee.org/document/6671617
  36. Haohuan Fu; Oskar Mencer; Wayne Luk, 2007, Optimizing logarithmic arithmetic on FPGAs, 15th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM 2007), https://ieeexplore.ieee.org/abstract/document/4297253/, PDF: https://spiral.imperial.ac.uk/bitstream/10044/1/5934/1/optlns.pdf
  37. Barry Lee & Neil Burgess, 2003, A dual-path logarithmic number system addition/subtraction scheme for FPGA, Springer, International Conference on Field Programmable Logic and Applications, FPL 2003: Field Programmable Logic and Application, pp. 808–817, https://link.springer.com/chapter/10.1007/978-3-540-45234-8_78
  38. G Anusha, KC Sekhar, BS Sridevi, 2023, The Journey of Logarithm Multiplier: Approach, Development and Future Scope, In: Recent Developments in Electronics and Communication Systems, KVS Ramachandra Murthy et al. (Eds.) IOS Press, https://ebooks.iospress.nl/pdf/doi/10.3233/ATDE221243, https://www.researchgate.net/publication/367067187_The_Journey_of_Logarithm_Multiplier_Approach_Development_and_Future_Scope
  39. B Zerom, M Tolba, H Tesfai, H Saleh, 2022, Approximate Logarithmic Multiplier For Convolutional Neural Network Inference With Computational Reuse, 2022 29th IEEE International Conference on Electronics, Circuits and Systems (ICECS), https://ieeexplore.ieee.org/document/9970861 (Combines the Logarithmic Number System, Mitchell's approximate multiplication algorithm, and data reuse strategies to speed up MAC operations.)

For more research papers on hardware acceleration issues for LNS models, see https://www.aussieai.com/research/logarithmic#hardware.

LNS Mathematical and Algorithmic Theory

Papers on the mathematical basis of the Logarithmic Number System (LNS) and its algorithmic theory include:

  1. Behrooz Parhami, 2010, Computer Arithmetic: Algorithms and Hardware Designs, 2010, Oxford University Press, New York, NY, https://web.ece.ucsb.edu/~parhami/text_comp_arit.htm, https://books.google.com.au/books/about/Computer_Arithmetic.html?id=tEo_AQAAIAAJ&redir_esc=y
  2. Molahosseini AS, De Sousa LS, Chang C-H, 2017, Embedded systems design with special arithmetic and number systems, Springer. Book on Amazon: https://www.amazon.com/Embedded-Systems-Design-Special-Arithmetic-ebook/dp/B06XRVG3YF/, https://doi.org/10.1007/978-3-319-49742-6, https://link.springer.com/book/10.1007/978-3-319-49742-6 (A text that contains multiple papers on LNS and RNS.)
  3. B. Parhami, 2020, Computing with logarithmic number system arithmetic: Implementation methods and performance benefits, Computers & Electrical Engineering, vol. 87, p. 106800, 2020. https://www.sciencedirect.com/science/article/abs/pii/S0045790620306534
  4. Arnold, M.G., Bailey, T.A., Cowles, J.R., Winkel, M.D., 1992, Applying features of the IEEE 754 to sign/logarithm arithmetic, IEEE Transactions on Computers 41, 1040–1050 (1992) https://ieeexplore.ieee.org/document/156547
  5. Paliouras, V., Stouraitis, T., 2001, Low-power properties of the Logarithmic Number System, Proceedings of 15th Symposium on Computer Arithmetic (ARITH15), Vail, CO, June 2001, pp. 229–236 (2001) https://ieeexplore.ieee.org/document/930124
  6. Paliouras, V., Stouraitis, T., 2000, Logarithmic number system for low-power arithmetic, In: Soudris, D.J., Pirsch, P., Barke, E. (eds.) PATMOS 2000. LNCS, vol. 1918, pp. 285–294. Springer, Heidelberg (2000), https://link.springer.com/chapter/10.1007/3-540-45373-3_30
  7. T. Stouraitis, 1986, Logarithmic Number System: Theory analysis and design, University of Florida, Ph.D. dissertation, University of Florida ProQuest Dissertations Publishing,  1986. 8704221 https://www.proquest.com/openview/0f48dddc19ec62058062ae1b32ee981d/1, https://openlibrary.org/books/OL25923701M/Logarithmic_number_system_theory_analysis_and_design
  8. F. J. Taylor, 1985, A hybrid floating-point logarithmic number system processor, IEEE Trans. Circuits Syst., vol. CAS-32, pp. 92-95, Jan. 1985. https://ieeexplore.ieee.org/abstract/document/1085588
  9. M. L. Frey and F. J. Taylor, 1985, A table reduction technique for logarithmically architected digital filters, IEEE Trans. Acoust Speech Signal Processing, vol. ASSP-33, pp. 718-719, June 1985. https://ieeexplore.ieee.org/document/1164597
  10. E. E. Swartzlander, D. V. S. Chandra, H. T. Nagle and S. A. Starks, 1983, Sign/logarithm arithmetic for FFT implementation, IEEE Trans. Comput., vol. C-32, pp. 526-534, June 1983. https://ieeexplore.ieee.org/document/1676274
  11. G. L. Sicuranza, 1983, On efficient implementations of 2-D digital filters using logarithmic number systems, IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-31, pp. 877-885, Aug. 1983. https://ieeexplore.ieee.org/document/1164149 (Algorithms for LNS arithmetic.)
  12. M. L. Frey and F. J. Taylor, 1985, A table reduction technique for logarithmically architected digital filters, IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-33, pp. 718-719, June 1985. https://ieeexplore.ieee.org/document/1164597 (Reducing lookup table sizes for LNS.)
  13. H. Fu, O. Mencer and W. Luk, 2010, FPGA Designs with Optimized Logarithmic Arithmetic, IEEE Trans. Computers, vol. 59, no. 7, pp. 1000-1006, July 2010. https://ieeexplore.ieee.org/document/5416693 (LNS on FPGAs.)
  14. Chih-Wei Liu; Shih-Hao Ou; Kuo-Chiang Chang; Tzung-Ching Lin; Shin-Kai Chen, 2016, A Low-Error, Cost-Efficient Design Procedure for Evaluating Logarithms to Be Used in a Logarithmic Arithmetic Processor, IEEE Trans. Computers (April 2016) https://ieeexplore.ieee.org/document/7118135 (Algorithms for the initial logarithmic conversion from a floating-point into an LNS representation.)
  15. H. L. Garner, 1965, Number Systems and Arithmetic, in Advances in Computers, Vol. 6, F. L. Alt and M. Rubinoff (eds.), Academic Press, 1965. https://www.sciencedirect.com/science/article/abs/pii/S0065245808604209
  16. N. G. Kingsbury and P. J. W. Rayner, 1971, Digital Filtering Using Logarithmic Arithmetic, Electronics Letters, Vol. 7, pp. 56-58, 1971. https://digital-library.theiet.org/content/journals/10.1049/el_19710039 (Early paper on logarithmic numbers.)
  17. Tso-Bing Juang, Pramod Kumar Meher and Kai-Shiang Jan, 2011, High-Performance Logarithmic Converters Using Novel Two-Region Bit-Level Manipulation Schemes, Proc. of VLSI-DAT (VLSI Symposium on Design, Automation, and Testing), pp. 390-393, April 2011. https://ieeexplore.ieee.org/document/5783555
  18. Tso-Bing Juang, Han-Lung Kuo and Kai-Shiang Jan, 2016, Lower-Error and Area-Efficient Antilogarithmic Converters with Bit-Correction Schemes, Journal of the Chinese Institute of Engineers, Vol. 39, No. 1, pp. 57-63, Jan. 2016. https://www.tandfonline.com/doi/abs/10.1080/02533839.2015.1070692?journalCode=tcie20
  19. Ying Wu, Chuangtao Chen, Weihua Xiao, Xuan Wang, Chenyi Wen, Jie Han, Xunzhao Yin, Weikang Qian, Cheng Zhuo, 2023, A Survey on Approximate Multiplier Designs for Energy Efficiency: From Algorithms to Circuits, ACM Transactions on Design Automation of Electronic Systems, 2023. https://doi.org/10.1145/3610291, https://arxiv.org/abs/2301.12181 (Extensive survey of many approximate multiplication algorithms.)
  20. Patrick Robertson, Emmanuelle Villebrun, Peter Hoeher, et al., 1995, A comparison of optimal and sub-optimal map decoding algorithms operating in the log domain, in IEEE International Conference on Communications, 1995. https://ieeexplore.ieee.org/document/524253
  21. Mark G. Arnold, 2014, LNS References, XLNS Research, http://www.xlnsresearch.com/home.htm (An exhaustive list of LNS research articles up to around 2014.)
  22. N. G. Kingsbury and P. J. W. Rayner, 1971, Digital Filtering Using Logarithmic Arithmetic, Electronics Letters, 7, pp 56-58, 1971, https://www.infona.pl/resource/bwmeta1.element.ieee-art-000004235144
  23. F. Albu; J. Kadlec; N. Coleman; A. Fagan, 2002, The Gauss-Seidel fast affine projection algorithm, IEEE Workshop on Signal Processing Systems, https://ieeexplore.ieee.org/abstract/document/1049694/, PDF: https://www.academia.edu/download/32934948/sips2002.pdf (Simplistic coverage of LNS addition with just exponentiation.)

For more research papers on computational theory for LNS models, see https://www.aussieai.com/research/logarithmic#theory.

Logarithmic Algebra

Papers looking at the mathematical theory of logarithms.

  1. JK Lee, L Mukhanov, AS Molahosseini, 2023, Resource-Efficient Convolutional Networks: A Survey on Model-, Arithmetic-, and Implementation-Level Techniques, https://dl.acm.org/doi/abs/10.1145/3587095, PDF: https://dl.acm.org/doi/pdf/10.1145/3587095
  2. Math StackExchange, 2023, What's the formula to solve summation of logarithms?, https://math.stackexchange.com/questions/589027/whats-the-formula-to-solve-summation-of-logarithms
  3. Chris Smith, 2021, The Logarithm of a Sum, Mar 9, 2021, https://cdsmithus.medium.com/the-logarithm-of-a-sum-69dd76199790
  4. Daniel E Loeb, 1991, The Iterated Logarithmic Algebra, Advances in Mathematics, Volume 86, Issue 2, April 1991, Pages 155-234, https://doi.org/10.1016/0001-8708(91)90041-5
  5. YZ Huang, J Lepowsky, L Zhang, 2006, A logarithmic generalization of tensor product theory for modules for a vertex operator algebra, International Journal of Mathematics, Vol. 17, No. 08, pp. 975-1012 (2006), https://doi.org/10.1142/S0129167X06003758, https://www.worldscientific.com/doi/abs/10.1142/S0129167X06003758
  6. Daniel E Loeb, 1991, The Iterated Logarithmic Algebra. II. Sheffer sequences, Journal of Mathematical Analysis and Applications, Volume 156, Issue 1, 15 March 1991, Pages 172-183, https://doi.org/10.1016/0022-247X(91)90389-H
  7. Yi-Zhi Huang, James Lepowsky, Lin Zhang, Oct 2007, Logarithmic tensor product theory for generalized modules for a conformal vertex algebra, https://arxiv.org/abs/0710.2687
  8. Wikipedia, 2023, List of logarithmic identities, https://en.wikipedia.org/wiki/List_of_logarithmic_identities
  9. Wikipedia, 2023, Logarithmic number system, https://en.wikipedia.org/wiki/Logarithmic_number_system

For more research papers on algebra for LNS models, see https://www.aussieai.com/research/logarithmic#algebra.

LNS Extensions

If you scare easily, you might want to look away... but there's an extension of the LNS called the “Multi-Dimensional Logarithmic Number System” (MDLNS). Its theory is based on the “Multiple-Base Number System” (MBNS). MDLNS and MBNS have both found some applications in digital signal processing.

Research papers on LNS extensions and other advanced LNS issues:

  1. Vassil Dimitrov, Graham Jullien, Roberto Muscedere, 2012, Multiple-Base Number System: Theory and Applications, (Circuits and Electrical Engineering Book 2), Part of: Circuits and Electrical Engineering (2 books), Jan 24, 2012 https://www.amazon.com/Multiple-Base-Number-System-Applications-Engineering-ebook/dp/B00847CSAG/ (General book with a section on MDLNS.)
  2. V. S. Dimitrov, J. Eskritt, L. Imbert, G. A. Jullien, and W. C. Miller, 2001, The use of the multi-dimensional logarithmic number system in DSP applications, in Proc. 15th IEEE Symp. Comput. Arith., Vail, CO, USA, Jun. 2001, pp. 247–254, https://ieeexplore.ieee.org/document/930126
  3. Vassil S. Dimitrov, Graham A. Jullien, Konrad Walus, 2002, Digital filtering using the multidimensional logarithmic number system, Proceedings Volume 4791, Advanced Signal Processing Algorithms, Architectures, and Implementations XII; (2002) https://doi.org/10.1117/12.452047
  4. H. Li; G.A. Jullien; V.S. Dimitrov; M. Ahmadi; W. Miller, 2002, A 2-digit multidimensional logarithmic number system filterbank for a digital hearing aid architecture, 2002 IEEE International Symposium on Circuits and Systems. Proceedings (Cat. No.02CH37353), https://ieeexplore.ieee.org/abstract/document/1011464
  5. R. Muscedere, V. S. Dimitrov, G. A. Jullien, and W. C. Miller. 2002, Efficient conversion from binary to multi-digit multi-dimensional logarithmic number systems using arrays of range addressable look-up tables, Proc. 21st IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP), pages 130-138, 2002. https://ieeexplore.ieee.org/document/1030711
  6. Leila Sepahi, 2012, Improved MDLNS Number System Addition and Subtraction by Use of the Novel Co-Transformation, Masters Thesis, University of Windsor, https://scholar.uwindsor.ca/cgi/viewcontent.cgi?article=1139&context=etd
  7. J.-M. Muller, A. Scherbyna and A. Tisserand, 1998, Semi-Logarithmic Number Systems, IEEE Trans. Computers, vol. 47, No. 2, pp. 145-151, 1998, https://ieeexplore.ieee.org/document/663760 PDF: https://perso.ens-lyon.fr/jean-michel.muller/IEEETC-Fev98.pdf
  8. R Muscedere, 2003, Difficult operations in the multi-dimensional logarithmic number system, Ph.D. Thesis, Electrical and Computer Engineering, University of Windsor, https://scholar.uwindsor.ca/cgi/viewcontent.cgi?article=2741&context=etd
  9. J. Eskritt, R. Muscedere, G. A. Jullien, V. S. Dimitrov and W. C. Miller, 2000, A 2-digit DBNS filter architecture, IEEE Workshop on Signal Processing, Louisiana, Oct. 2000, https://ieeexplore.ieee.org/document/886743
  10. V.S. Dimitrov, G.A. Jullien and W.C. Miller, 1999, Theory and applications of the double-base number system, IEEE Trans. on Computers, vol. 48, No. 10, pp. 1098-1106, Oct. 1999, https://ieeexplore.ieee.org/document/805158
  11. V.S. Dimitrov, S. Sadeghi-Emamchaie, G.A. Jullien and W.C. Miller, 1996, A near canonic double-base number system with applications in DSP, SPIE Conference on Signal Processing Algorithms, vol. 2846, pp. 14-25, 1996, https://doi.org/10.1117/12.255433
  12. G. A. Jullien, V. S. Dimitrov, B. Li, W. C. Miller, A. Lee, and M. Ahmadi, 1999, A Hybrid DBNS Processor for DSP Computation, Proc. Int. IEEE Symp. Circuits and Systems, Orlando, https://www.researchgate.net/publication/221381797_A_hybrid_DBNS_processor_for_DSP_computation
  13. Ewe, Chun Te, 2009, A new number representation for hardware implementation of DSP algorithms, Ph.D. thesis, Imperial College London, https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.501468 (Dual fixed-point number system; has some LNS content.)
  14. Vassil S. Dimitrov, Graham A. Jullien, Konrad Walus, 2002, Digital filtering using the multidimensional logarithmic number system, Proceedings Volume 4791, Advanced Signal Processing Algorithms, Architectures, and Implementations XII; (2002), International Symposium on Optical Science and Technology, 2002, Seattle, WA, United States, https://doi.org/10.1117/12.452047

For more research papers on LNS extensions, see https://www.aussieai.com/research/logarithmic#extend.

 
