Aussie AI
Bit Vectors
Book Excerpt from "Generative AI in C++"
by David Spuler, Ph.D.
Bit vectors are conceptually an array of N bits with 0 or 1 values. The term “bit set” is almost synonymous, but has a slightly different meaning. A bit vector maps a number at the index position to its binary bit value, whereas a bit set specifies whether a number is in a set of numbers. Both interpretations are valid, depending mostly on the application, and the underlying implementation of the data structure is almost identical.
In AI applications, a bit vector may represent a set of weights with 0 or 1 values, such as with binary quantization or XNOR neural networks. The vector dot product of two such bit vectors can be computed entirely with bitwise arithmetic.
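As a concrete illustration, suppose the weights and inputs are binary-quantized to +1 and -1, encoded as 1 and 0 bits packed into a single 64-bit word. A 64-element dot product then reduces to one XOR and one popcount. This is only a sketch under that encoding assumption (the function name is illustrative), and it requires C++20 for std::popcount:

#include <cstdint>
#include <bit>      // std::popcount (C++20)

// Bitwise dot product of two binary-quantized vectors.
// Assumption: each vector has 64 elements with values +1 or -1,
// encoded as 1 or 0 bits packed into a single 64-bit word.
int binary_dot_product_64(uint64_t a, uint64_t b)
{
    uint64_t diff = a ^ b;                  // 1 bits mark sign mismatches
    int mismatches = std::popcount(diff);
    int matches = 64 - mismatches;
    // Each matching pair contributes +1, each mismatch contributes -1.
    return matches - mismatches;            // equivalently: 64 - 2 * mismatches
}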
Sparsity optimizations are another application of bit vectors. Pruning can often create "sparse" weight matrices, with lots of zeros and very few non-zero weights. A bit vector can then efficiently record which weights are non-zero, so that computations on zero values can be skipped entirely. An alternative to bit vectors for sparsity is to use permutation arrays of indices, as discussed further below.
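One way to use such a bit vector is to iterate directly over its set bits, so that zero weights are never touched. The sketch below assumes a hypothetical layout where the weight and input arrays are full length and a packed 64-bit mask marks the non-zero weights (any padding bits in the final mask word are assumed to be zero); it requires C++20 for std::countr_zero:

#include <cstdint>
#include <bit>      // std::countr_zero (C++20)

// Sparse vector dot product that skips zero weights.
// Assumption: "weights" and "inputs" are full-length arrays of n floats,
// and "mask" has one bit per element, packed into 64-bit words, with a
// 1 bit wherever the weight is non-zero.
float sparse_dot_product(const float weights[], const float inputs[],
                         const uint64_t mask[], int n)
{
    float sum = 0.0f;
    int nwords = (n + 63) / 64;
    for (int w = 0; w < nwords; w++) {
        uint64_t bits = mask[w];
        while (bits != 0) {
            int bitpos = std::countr_zero(bits);  // index of lowest set bit
            int i = w * 64 + bitpos;
            sum += weights[i] * inputs[i];        // only non-zero weights reached
            bits &= bits - 1;                     // clear the lowest set bit
        }
    }
    return sum;
}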
Another application of bit vectors occurs in Bloom filter data structures, which are a probabilistic hybrid of hash tables and bit vectors. In this usage, a bit set represents whether an input number is found in the set of already-mapped numbers.
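A minimal Bloom filter can be built directly on top of a packed bit set. The class below is an illustrative sketch, not a production design: it uses two simple integer mixing functions as stand-in hash functions, and reports only "definitely absent" or "possibly present":

#include <cstdint>
#include <cstddef>
#include <vector>

// Minimal Bloom filter sketch: a probabilistic bit set.
// Each key sets two bit positions; a lookup reports "possibly present"
// only if both bits are set (false positives possible, no false negatives).
class BloomFilter {
public:
    explicit BloomFilter(size_t nbits)
        : m_nbits(nbits), m_words((nbits + 63) / 64, 0) {}

    void add(uint64_t key) {
        set_bit(hash1(key) % m_nbits);
        set_bit(hash2(key) % m_nbits);
    }

    // Returns false if definitely absent, true if possibly present.
    bool possibly_contains(uint64_t key) const {
        return test_bit(hash1(key) % m_nbits)
            && test_bit(hash2(key) % m_nbits);
    }

private:
    // Illustrative integer mixers standing in for real hash functions.
    static uint64_t hash1(uint64_t x) {
        x ^= x >> 33; x *= 0xff51afd7ed558ccdULL; x ^= x >> 33;
        return x;
    }
    static uint64_t hash2(uint64_t x) {
        x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL; x ^= x >> 27;
        return x;
    }
    void set_bit(size_t i) { m_words[i / 64] |= (1ULL << (i % 64)); }
    bool test_bit(size_t i) const { return (m_words[i / 64] >> (i % 64)) & 1; }

    size_t m_nbits;
    std::vector<uint64_t> m_words;
};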
In practice, bit vectors or bit sets are often implemented as arrays of unsigned integers, with the bits packed into each integer. If the underlying unsigned type is 32 or 64 bits wide, then many bitwise operations on bit vectors can be performed 32 or 64 bits at a time, achieving significant parallelism without any hardware acceleration beyond basic CPU instructions. AVX SIMD instructions can then further vectorize many operations without a GPU. And it absolutely flies if you use a GPU with bit vectors or bit sets, because that's two levels of parallelization.
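For example, counting how many positions are set in both of two packed bit vectors takes just one AND and one popcount per 64-bit word. The function below is a small sketch assuming both vectors are stored as equal-length arrays of 64-bit words (C++20 for std::popcount):

#include <cstdint>
#include <cstddef>
#include <bit>      // std::popcount (C++20)

// Size of the intersection of two bit sets, packed into 64-bit words.
// Each loop iteration compares 64 bit positions at once.
size_t intersection_count(const uint64_t a[], const uint64_t b[], size_t nwords)
{
    size_t count = 0;
    for (size_t w = 0; w < nwords; w++) {
        count += std::popcount(a[w] & b[w]);
    }
    return count;
}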
There are several pre-built C++ bit set classes that can be considered:
• std::bitset<N> (in <bitset>)
• std::vector<bool>
• boost::dynamic_bitset<>
If the maximum size of the bit vector is known at compile-time, which is often the case with AI models, then std::bitset is a good choice. If not, then std::vector<bool> or boost::dynamic_bitset<> are good choices for dynamic-sized bit vectors. Alternatively, you can build your own bit vectors, if there is a particular need to hand-code them or if you just want some fun.
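Here is a short usage sketch of the two standard-library options; the sizes and bit positions are arbitrary examples:

#include <bitset>
#include <vector>
#include <cstddef>
#include <iostream>

int main()
{
    // Fixed-size bit vector: size known at compile time.
    std::bitset<1024> fixed;
    fixed.set(5);        // mark bit 5
    fixed.set(100);      // mark bit 100
    std::cout << fixed.count() << " bits set\n";          // prints "2 bits set"

    // Dynamic-size bit vector: size chosen at run time.
    size_t n = 1024;
    std::vector<bool> dynamic(n, false);
    dynamic[5] = true;
    std::cout << (dynamic[5] ? "set" : "clear") << "\n";  // prints "set"
    return 0;
}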