Aussie AI

Model Compression

  • Last Updated 3 December 2024
  • by David Spuler, Ph.D.

Model compression is the general class of AI optimizations that reduce the size of the model. The goal is two-fold: (a) size reduction: a smaller model that needs less memory and storage, and (b) latency optimization: faster inference on the more compact model.

Model compression techniques have been highly successful and are widely used, second only to hardware acceleration in their impact on the AI industry. The main model compression techniques are:

  • Quantization
  • Pruning
  • Knowledge distillation
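
As a simple illustration of the size-reduction idea, here is a minimal C++ sketch of symmetric 8-bit quantization of a block of weights; the scale-factor scheme is a common textbook choice, and the names are illustrative rather than any particular library's API:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct QuantBlock {
        float scale;               // dequantization scale factor
        std::vector<int8_t> data;  // 1 byte per weight, versus 4 bytes for FP32
    };

    // Symmetric INT8 quantization: map weights in [-maxabs, +maxabs]
    // onto the integer range [-127, +127].
    QuantBlock quantize_int8(const std::vector<float>& w) {
        float maxabs = 0.0f;
        for (float x : w) maxabs = std::max(maxabs, std::fabs(x));
        QuantBlock q;
        q.scale = (maxabs > 0.0f) ? (maxabs / 127.0f) : 1.0f;
        q.data.reserve(w.size());
        for (float x : w)
            q.data.push_back(static_cast<int8_t>(std::lround(x / q.scale)));
        return q;
    }

    // Recover an approximation of the original weight.
    float dequantize_int8(const QuantBlock& q, std::size_t i) {
        return q.data[i] * q.scale;
    }

Each weight shrinks from 4 bytes (FP32) to 1 byte, roughly a 4x size reduction, at the cost of some rounding error in the recovered weights.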

There are also various lesser-known model compression techniques:

Survey Papers on Model Compression

General surveys that cover model compression include:

Research on Model Compression (Generally)

Research papers on model compression:

KV Caching and Model Compression

Several model compression techniques have analogous optimizations that apply to KV cache data. Read more about these KV cache research areas:

Data Compression

Data compression here means applying existing bitstream compression algorithms to make LLM files smaller. Common methods include:

  • Huffman coding
  • Run-length encoding
  • LZW compression
  • Zip file formats

One case where data compression is especially relevant is sparse models: for example, run-length encoding can store a count of the zeros between non-zero weights, rather than storing every zero explicitly.
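
As a minimal sketch of that idea, here is an illustrative run-length encoding of a sparse weight vector in C++; the (zero-run, value) pair layout is a simplified assumption, not a standard sparse file format:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One pair per non-zero weight: the count of zeros preceding it,
    // followed by the weight value itself.
    struct RlePair {
        std::uint32_t zero_run;
        float value;
    };

    // Encode a sparse weight vector, storing only the non-zero weights.
    std::vector<RlePair> rle_encode(const std::vector<float>& w) {
        std::vector<RlePair> out;
        std::uint32_t zeros = 0;
        for (float x : w) {
            if (x == 0.0f) { ++zeros; continue; }
            out.push_back({zeros, x});
            zeros = 0;
        }
        return out;  // trailing zeros are implied by the total length n
    }

    // Decode back to a dense vector of length n (n must be known).
    std::vector<float> rle_decode(const std::vector<RlePair>& enc, std::size_t n) {
        std::vector<float> w(n, 0.0f);  // start all-zero
        std::size_t i = 0;
        for (const RlePair& p : enc) {
            i += p.zero_run;  // skip the run of zeros
            w[i++] = p.value;
        }
        return w;
    }

For a highly sparse model (say 90% zeros), this stores roughly one 8-byte pair per non-zero weight instead of 4 bytes for every weight, zero or not.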

Research papers on data compression with LLMs:

More Model Compression Research

Read more about: