Advanced Numeric Bit Representations
-
Book Excerpt from "Generative AI in C++"
-
by David Spuler, Ph.D.
Although much research focuses on 16-bit floating-point or integer representations of weights, there are some novel and interesting alternatives. Programmers are accustomed to the standard ways that computers store numbers in bits, but these formats are really just conventions, arising from trade-offs that may no longer apply given modern increases in computing power. One alternative bit representation that may become important is Posit numbers (see below). None of these newer bit arrangements is in widespread use yet, but given the success of bfloat16, it is possible that a significant breakthrough still lies in this area of research.
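To make the idea of an alternative bit layout concrete, here is a minimal C++ sketch of the bfloat16 format, which keeps only the top 16 bits of a standard 32-bit IEEE 754 float (1 sign bit, 8 exponent bits, 7 mantissa bits), preserving the full float32 exponent range at the cost of mantissa precision. The function names are illustrative only, and the round-to-nearest-even conversion omits special handling of NaN values for brevity.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Convert a 32-bit IEEE 754 float to bfloat16 by keeping the top 16 bits
    // (1 sign, 8 exponent, 7 mantissa), with round-to-nearest-even.
    // NaN inputs are not specially handled in this simplified sketch.
    uint16_t float_to_bfloat16(float f) {
        uint32_t bits = 0;
        std::memcpy(&bits, &f, sizeof(bits));  // bit-copy avoids aliasing issues
        uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u);
        return (uint16_t)((bits + rounding) >> 16);
    }

    // Widen a bfloat16 back to a 32-bit float by zero-filling the low 16 bits.
    float bfloat16_to_float(uint16_t b) {
        uint32_t bits = ((uint32_t)b) << 16;
        float f = 0.0f;
        std::memcpy(&f, &bits, sizeof(f));
        return f;
    }

    int main() {
        float x = 3.14159265f;
        uint16_t b = float_to_bfloat16(x);
        printf("float32 %.8f -> bfloat16 0x%04X -> float32 %.8f\n",
               x, b, bfloat16_to_float(b));
        return 0;
    }

The same bit-level approach generalizes to other formats: changing how many bits are assigned to the sign, exponent, and mantissa (or replacing them entirely, as posits do) is exactly the design space these research papers explore.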
Some of the research papers on floating-point alternative representations include:
- Peter Lindstrom, Scott Lloyd, Jeffrey Hittinger, March 28, 2018, "Universal Coding of the Reals: Alternatives to IEEE Floating-Point", CoNGA, https://dl.acm.org/doi/10.1145/3190339.3190344
- Jeff Johnson, November 2018, "Making floating-point math highly efficient for AI hardware", Meta (Facebook Research), https://engineering.fb.com/2018/11/08/ai-research/floating-point-math/
- G. Alsuhli, V. Sakellariou, H. Saleh, M. Al-Qutayri, 2023, "Number Systems for Deep Neural Network Architectures: A Survey", https://arxiv.org/abs/2307.05035
For more research papers on bit representations for floating-point (or integer) numbers, see also https://www.aussieai.com/research/advanced-ai-mathematics#bits.