Aussie AI

Small Reasoning Models

  • Last Updated 7 March, 2025
  • by David Spuler, Ph.D.

What are Small Reasoning Models?

Small reasoning models combine reasoning techniques with small language models. Large reasoning models are very expensive to run, and the goal is to reduce that cost by using a smaller model, accepting some loss of accuracy. Small models can be used with two types of reasoning methods: single-step reasoning or multi-step inference-based reasoning.

There are two basic approaches to create a Small Reasoning Model (SRM):

  • Start with a Large Reasoning Model (LRM) and reduce its size, or
  • Start with a small model and increase its reasoning capabilities.

Cutting down a Large Reasoning Model to a smaller one may involve:

  • Model compression (e.g., quantization).
  • Distillation focused on reasoning knowledge.
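As a concrete illustration of model compression, post-training quantization maps floating-point weights to low-bit integers. Below is a minimal sketch of symmetric per-tensor int8 quantization; real frameworks apply this per-layer with calibration, so the function names and details here are illustrative only:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.max(np.abs(weights)) / 127.0  # assumes weights are not all zero
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.002, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Per-weight reconstruction error is bounded by half the quantization step (scale / 2).
```

The storage saving is 4x versus float32 (1 byte per weight instead of 4), at the cost of the rounding error shown above.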

In the case of open-source Large Reasoning Models (e.g., DeepSeek R1), smaller versions have already been released, especially quantized ones.
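Distillation focused on reasoning knowledge typically trains the small model on the outputs (or output distributions) of the large reasoning model. A minimal sketch of the classic soft-label distillation loss, a temperature-scaled KL divergence between teacher and student token distributions, with toy logits (names and shapes are illustrative assumptions, not any particular framework's API):

```python
import numpy as np

def softmax(logits: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Temperature-scaled softmax (numerically stabilized)."""
    z = logits / T
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      T: float = 2.0) -> float:
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)  # teacher = large reasoning model
    q = softmax(student_logits, T)  # student = small model
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

teacher = np.array([2.0, 0.5, -1.0])   # toy next-token logits
student = np.array([1.8, 0.6, -0.9])
loss = distillation_loss(student, teacher)
```

The loss is zero when the student exactly matches the teacher's distribution, and training minimizes it over reasoning traces generated by the teacher.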

Adding reasoning capabilities to a small model is of particular interest to the open-source model community. There are many very capable small models of various sizes, but few are specifically focused on reasoning. Ways to do this include:

  • Multi-step CoT algorithms wrapped around a smaller base model.
  • Improved training and fine-tuning of single-step reasoning techniques to enhance a small model.
  • A combination of both approaches.
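The first approach, a multi-step CoT algorithm wrapped around a smaller base model, can be sketched as a simple prompting loop. Here `ask_model` is a hypothetical placeholder for any text-generation call to the small model (it is stubbed below so the sketch is self-contained); the loop structure, not the stub, is the point:

```python
def ask_model(prompt: str) -> str:
    # Placeholder: a real implementation would call the small model here
    # (e.g., a locally served open-source model). Stubbed for illustration.
    if "Step" in prompt:
        return "intermediate reasoning step"
    return "final answer"

def multi_step_reason(question: str, max_steps: int = 3) -> str:
    """Iteratively prompt the model for reasoning steps, then ask for a final answer."""
    steps = []
    for i in range(max_steps):
        prompt = (f"Question: {question}\n"
                  f"Step {i + 1}: continue the reasoning.\n" + "\n".join(steps))
        steps.append(ask_model(prompt))
    final_prompt = (f"Question: {question}\nReasoning:\n"
                    + "\n".join(steps) + "\nAnswer:")
    return ask_model(final_prompt)
```

Note the efficiency trade-off: each reasoning step is a separate inference call, so a cheap small model is what makes this multi-call pattern affordable.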

Research on Small Reasoning Models

Research papers include:

Reasoning and CoT Efficiency Topics

Blog articles on reasoning efficiency:

More research information on general efficiency optimization techniques for reasoning models:

Efficiency optimizations to Chain-of-Thought include:

More AI Research

Read more about: