Aussie AI
Reducing Build Time
-
Book Excerpt from "Generative AI in C++"
-
by David Spuler, Ph.D.
The build phase of a large piece of software like an AI engine is a significant time cost, and can become a bottleneck to the development process. If you're using CI/CD then a new build kicks off every time you commit code. If there are many team members, there are regular commits and many daily builds. So, the build time becomes an important productivity measure.
In fact, the builds can get too long and leave programmers waiting on the automated acceptance testing results after their commits. Builds can also start queueing up if you're not careful, either because each build takes too long or because the team is so large that there's an endless stream of commits. You might want to instigate a process whereby there are small builds for automated approvals and immediate failure feedback on commits, but a much bigger “nightly build” which runs all the biggest test suites, compiles on multiple platforms, gathers compiler warnings and static analysis results, reports on test coverage computations, and all of the other time-intensive automatic testing.
Reducing Compile Time. Reducing compile time is a simple way to improve a programmer's use of their time. It can be more important than reducing overall build time, because coders are usually doing incremental compiles within their area of focus, rather than a full-blown build, and they need to re-compile over and over all day long whenever they're debugging.
Modern C++ compilers are incredible and can crank through huge amounts of source code. Although the speed of compilation largely depends on the ingenuity of the implementor of your compiler, there are a few techniques that can make your programs compile more quickly.
- Turn down the optimization settings.
- Use precompiled headers.
- Block re-included header files (i.e. “#ifdef” guard macros or “#pragma once”).
Some compilers support an option called “precompiled headers” whereby the compiler stores the state of its internal tables, such as the symbol table, in a data file. Instead of processing the text in the header files again, the compiler simply loads the data file and ignores the header files. This saves the compile time spent processing the declarations in header files.
Modularity for Incremental Builds. The best method of reducing compile-time during the testing-debugging phase of program development is to break the program into a large number of small C++ source files, or smaller modularized libraries. In this way, only the files that need to be recompiled into object files are processed in an incremental rebuild, although all object files are still linked in creating the final executable. And the use of multiple files and libraries is also good programming style, which is a bonus.
The method of achieving this automatic incremental rebuilding of object files depends on the environment. Personally, I am addicted to the “make” utility on Linux (e.g. with “makedepend”), whereas MSVS has incremental builds largely automated in the C++ IDE on Windows. You might also prefer a more sophisticated build tool like CMake, Jenkins, or Gradle.
On the other hand, how much time have I wasted debugging a bug fix that didn't seem to work, only to find the changed file hadn't actually been rebuilt? Nothing beats a full rebuild.