Bitwise AI Applications
Book Excerpt from "Generative AI in C++"
by David Spuler, Ph.D.
Bitwise operations are a well-known coding trick that has been applied to neural network optimization. Bitwise shifts are equivalent to multiplication and division by powers of two, but faster. Other bitwise operators can also be used in various ways in inference algorithms. Some of the common uses of bitwise operators in AI engines include:
- Arithmetic computation speedups: Bit tricks are used to optimize multiplication operations via bitshifts, and also in faster approximate arithmetic methods (see the first sketch after this list).
- Sign bit manipulation: Various optimizations are possible via direct bitwise operations on the sign bit of integers or floating-point numbers. For example, the RELU activation function changes negative values to zero and leaves positive values unchanged, which can be implemented efficiently as a sign bit test (see the sign bit sketch after this list).
- Floating-point bit operations: The bits of the numeric representations of IEEE 754 floating-point numbers, or the Google bfloat16 type, include a sign bit, an exponent, and a mantissa. Normal bitwise arithmetic operators cannot be applied to floating-point numbers, because the C++ bitwise and bitshift operators only work on integer types. However, floating-point numbers are really just integers underneath, so there are various tricky ways that bitwise operators can be used on the underlying IEEE standard bit representations of floating-point numbers. This is discussed in the next chapter (a bit-field sketch appears after this list).
- Look-up Tables: Algorithms that use table lookups for speed improvement typically involve bitwise shifts in computing the table offset (see the table lookup sketch after this list).
- Data structures: Some bit-based data structures used in neural network optimization include hashing and Bloom filters (a toy Bloom filter sketch appears after this list).
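
As a minimal illustration of the arithmetic speedups in the first item above, here is a sketch of replacing multiplication and division by powers of two with shifts (the helper names mul_pow2 and div_pow2 are illustrative, not from the book's code):

    #include <cassert>
    #include <cstdint>

    // Multiply x by 2^k with a left shift (assumes non-negative x and no overflow).
    inline int32_t mul_pow2(int32_t x, int k) {
        return x << k;   // e.g. x << 3 computes x * 8
    }

    // Divide x by 2^k with a right shift (exact truncating division for non-negative x).
    inline int32_t div_pow2(int32_t x, int k) {
        return x >> k;   // e.g. x >> 2 computes x / 4
    }

    int main() {
        assert(mul_pow2(5, 3) == 40);   // 5 * 8
        assert(div_pow2(40, 2) == 10);  // 40 / 4
        return 0;
    }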
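The sign bit test for RELU can be sketched as follows, assuming IEEE 754 32-bit floats and C++20's std::bit_cast (a simplified illustration, not the book's exact implementation):

    #include <bit>
    #include <cstdint>

    // RELU via a sign bit test on the underlying IEEE 754 bits:
    // if the top bit is set, the value is negative (or -0.0), so return zero.
    inline float relu_signbit(float x) {
        uint32_t bits = std::bit_cast<uint32_t>(x);
        return (bits & 0x80000000u) ? 0.0f : x;
    }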
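To make the floating-point bit layout concrete, here is a hedged sketch that extracts the sign, exponent, and mantissa fields of a 32-bit IEEE 754 float (1, 8, and 23 bits respectively), again using C++20's std::bit_cast:

    #include <bit>
    #include <cstdint>
    #include <cstdio>

    int main() {
        float f = -6.25f;
        uint32_t bits = std::bit_cast<uint32_t>(f);
        uint32_t sign     = bits >> 31;             // 1 bit
        uint32_t exponent = (bits >> 23) & 0xFFu;   // 8 bits (biased by 127)
        uint32_t mantissa = bits & 0x7FFFFFu;       // 23 bits
        printf("sign=%u exponent=%u mantissa=0x%06X\n", sign, exponent, mantissa);
        return 0;
    }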
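For the table lookup item, one common pattern is to index a precomputed activation table by the top bits of the float, where a bitwise shift computes the table offset. A hedged sketch, using a hypothetical 16-bit-indexed GELU table (the names and table size are illustrative, not from the book):

    #include <bit>
    #include <cmath>
    #include <cstdint>

    // Precomputed GELU table indexed by the top 16 bits of a float
    // (a bfloat16-style truncation used as the table offset).
    static float g_gelu_table[1u << 16];

    void init_gelu_table() {
        for (uint32_t i = 0; i < (1u << 16); ++i) {
            float x = std::bit_cast<float>(i << 16);  // representative float for this index
            g_gelu_table[i] = 0.5f * x * (1.0f + std::erf(x * 0.70710678f));  // exact GELU
        }
    }

    inline float gelu_lut(float x) {
        uint32_t idx = std::bit_cast<uint32_t>(x) >> 16;  // bitwise shift computes the offset
        return g_gelu_table[idx];
    }

Call init_gelu_table() once at startup; each later call is then just a shift and one array load, at the price of 16-bit precision in the input.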
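And as a tiny example of a bit-based data structure, here is a toy Bloom-filter-style sketch using a single 64-bit word as the bit vector (the structure name and hash functions are illustrative only; real Bloom filters use larger bit arrays and better hashes):

    #include <cstdint>

    // Toy Bloom filter: one 64-bit word, two simple hash positions per key.
    struct TinyBloom {
        uint64_t bits = 0;
        static unsigned h1(uint32_t key) { return (key * 2654435761u) & 63u; }
        static unsigned h2(uint32_t key) { return (key ^ (key >> 16)) & 63u; }
        void insert(uint32_t key) {
            bits |= (1ull << h1(key)) | (1ull << h2(key));     // set both hash bits
        }
        bool maybe_contains(uint32_t key) const {
            uint64_t mask = (1ull << h1(key)) | (1ull << h2(key));
            return (bits & mask) == mask;   // false positives possible, never false negatives
        }
    };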
Bits of AI Research
Some of the advanced areas where bitwise optimizations have been used in neural network research include:
- Power-of-two quantization (bitshift quantization): By quantizing weights to the nearest integer power-of-two, bitwise shifts can replace multiplication (see the shift-add sketch after this list).
- Bitserial Operations: Bitserial operations are bitwise operations on all of the bits of an integer or bit vector. For example, the “popcount” operation counts how many 1s are set in the bits of an unsigned integer. Bitserial operations can be useful in neural network inference for computing the vector dot products in binary quantization or 2-bit quantization (see the binary dot product sketch after this list).
- Advanced number system division: See dyadic numbers and dyadic quantization for an obscure number system involving power-of-two division, which can be implemented as bitwise right-shifting (a dyadic rescaling sketch appears after this list).
- Low-bit integer quantization: When weights are quantized to only a few bits, inference can use bitwise arithmetic and bitserial operations to replace multiply-accumulate. The main examples are binary quantization and ternary quantization, both of which avoid multiplications in favor of bitwise operations (or addition) and sign bit handling.
- Shift-add networks: Multiply-and-add (or “multiply-accumulate”) operations can be replaced with bitshift-and-add (as in the shift-add sketch after this list).
- Bit arithmetic neural networks: These are neural networks whose neurons are implemented as bitwise operations. For example, see Weightless Neural Networks (WNNs).
- XNOR Networks: XNOR neural networks are similar to binarized networks, and their internal operations rely on the bitwise XNOR operation. The idea is that XNOR is effectively an implementation of multiplication on binary values. XNOR is an uncommonly used bitwise operation, and there is no builtin C++ operator for binary XNOR; however, it is easily computed as a negated XOR, which costs only a couple of machine instructions on modern CPUs (see the binary dot product sketch after this list).
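
To illustrate power-of-two quantization and the shift-add idea from the list above, here is a hedged sketch of a dot product where each weight is stored as a sign and an exponent, so that each multiply-accumulate becomes a shift followed by an add or subtract (the data layout and function name are illustrative, not the book's code):

    #include <cstddef>
    #include <cstdint>

    // Weights quantized to powers of two: weight value = sign * 2^exponent.
    // Assumes non-negative activations x[i] and exponents in [0, 62].
    int64_t shift_add_dot(const int32_t* x, const int8_t* exponents,
                          const int8_t* signs, size_t n) {
        int64_t sum = 0;
        for (size_t i = 0; i < n; ++i) {
            int64_t term = (int64_t)x[i] << exponents[i];   // x[i] * 2^exponent, no multiply
            sum += (signs[i] >= 0) ? term : -term;
        }
        return sum;
    }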
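The bitserial popcount trick and the XNOR-as-multiplication idea both appear in the binary dot product used by binarized and XNOR networks. A minimal sketch, assuming the vectors are packed 64 values per word with +1 encoded as a 1 bit and -1 as a 0 bit, and using C++20's std::popcount:

    #include <bit>
    #include <cstddef>
    #include <cstdint>

    // Binary dot product of two bit-packed vectors.
    // XNOR acts as the multiply; popcount counts the +1 products in each 64-bit word.
    int binary_dot(const uint64_t* a, const uint64_t* b, size_t nwords) {
        int matches = 0;
        const int nbits = (int)(64 * nwords);
        for (size_t i = 0; i < nwords; ++i) {
            uint64_t xnor = ~(a[i] ^ b[i]);   // no builtin XNOR operator, so negate an XOR
            matches += std::popcount(xnor);   // matching bits are +1*+1 or -1*-1 products
        }
        return 2 * matches - nbits;           // dot product = matches - mismatches
    }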
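The dyadic-number idea boils down to scale factors of the form m / 2^k, so a rescaling step can be done as an integer multiply followed by a bitwise right shift. A hedged sketch (the rounding and bit widths are illustrative; assumes 1 <= k <= 62 and the arithmetic right shift behavior that C++20 guarantees):

    #include <cstdint>

    // Rescale x by the dyadic factor m / 2^k using an integer multiply and a right shift.
    // Adding half of 2^k before shifting gives round-to-nearest for non-negative products.
    inline int32_t dyadic_rescale(int32_t x, int32_t m, int k) {
        int64_t prod = (int64_t)x * m;
        return (int32_t)((prod + (1ll << (k - 1))) >> k);
    }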