Aussie AI

Algorithm Optimization Techniques

  • Book Excerpt from "Generative AI in C++"
  • by David Spuler, Ph.D.

AI engines run a single, computationally heavy algorithm over a monolithic data structure. This chapter presents some of the theory behind general techniques for optimizing algorithms, and subsequent chapters show many ways to use them in your engine.

Changing a program's underlying algorithms and data structures is often the only real way to gain a large speed increase. Is there a better way to do what your program does? Is it doing unnecessary calculation? Although much depends on the programmer's ingenuity, there are some common techniques for improving the performance of algorithms:

  • Parallelization and vectorization
  • Precomputation (save time by using space)
  • Recomputation (save space by using time)
  • Caching and computation reuse
  • Greedy algorithms (immediate computation)
  • Skipping algorithms
  • Arithmetic strength reduction
  • Integer arithmetic
  • Change recursion to loops
  • Incremental algorithms
  • Choose a better data structure

The idea of “skipping” computations also has various sub-methods:

  • Lazy algorithms (delay computation until needed)
  • Common case first
  • Simple case first
  • Approximate tests first
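To show the "approximate tests first" idea concretely, here is a minimal sketch (the function and its geometry example are hypothetical, not from the book): a cheap squared-distance comparison rejects most candidates before any expensive sqrt call is made.

```cpp
#include <cmath>

// Approximate test first: compare squared distances, which needs only
// multiplications, and fall through to the exact sqrt-based test only
// for points that pass the cheap check.
bool within_radius(float x1, float y1, float x2, float y2, float r) {
    float dx = x2 - x1, dy = y2 - y1;
    float dist2 = dx * dx + dy * dy;
    if (dist2 > r * r) {
        return false;  // cheap rejection path: no sqrt needed
    }
    // Exact computation runs only for the few candidates remaining.
    return std::sqrt(dist2) <= r;
}
```

In this particular case the squared test is already exact for non-negative r, so the sqrt could be dropped entirely; in real engines the approximate test is usually a genuinely lossy bound (e.g. a bounding box) that filters out the common case cheaply.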
