Small Language Models

  • Last Updated 19 March, 2025
  • by David Spuler, Ph.D.

What are Small Language Models?

Small Language Models (SLMs) are like LLMs, but smaller in their total number of parameters (weights), usually 1B or 2B, and sometimes up to around 7B. Small models can be full precision or quantized, but their "smallness" refers to their parameter count, not their memory size after model compression. Fewer parameters make them less expensive to compute, so they run faster and with lower latency.
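
To make the distinction between parameter count and memory size concrete, here is a minimal sketch in Python. The byte widths per precision are standard assumptions, and the totals cover weights only (no activations, KV cache, or runtime overhead):

```python
# Back-of-the-envelope weight memory for small models at common precisions.
# The parameter counts (1B, 2B, 7B) and byte widths are illustrative
# assumptions, not measurements of any particular model.

BYTES_PER_PARAM = {
    "FP32": 4.0,   # full precision
    "FP16": 2.0,   # half precision
    "INT8": 1.0,   # 8-bit quantization
    "INT4": 0.5,   # 4-bit quantization
}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Approximate storage for the weights alone, in gigabytes."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

for billions in (1, 2, 7):
    row = ", ".join(
        f"{prec}: {weight_memory_gb(billions * 1e9, prec):.1f} GB"
        for prec in BYTES_PER_PARAM
    )
    print(f"{billions}B parameters -> {row}")
```

The same 1B-parameter model needs about 4 GB of weight memory at full precision but only about 0.5 GB at 4-bit quantization, yet it is still a "1B model" either way.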

Great progress has been made in training these smaller models so that, despite having fewer weights, they still offer a great deal of intelligence, albeit artificial. Small models are particularly useful for on-device inference, such as on AI phones and AI PCs, as in the sketch below.
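
As one illustration of on-device inference, here is a minimal sketch using the Hugging Face Transformers library; the TinyLlama checkpoint named below is just one example of a roughly 1B-parameter SLM, and any small local checkpoint could be substituted:

```python
# A minimal on-device inference sketch, assuming the Hugging Face
# "transformers" package (and PyTorch) are installed. The checkpoint
# below is one example of a ~1.1B-parameter SLM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example small model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Small language models are useful because"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of a short continuation; a ~1B model is small enough
# to generate at interactive speed on a laptop CPU.
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```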

Research on SLMs

Research papers on small language models:

More AI Research

Read more about: