Aussie AI

Prompt Tuning

  • Last Updated 27 February, 2025
  • by David Spuler, Ph.D.

Prompt tuning is an LLM optimization that adds special trainable "prompt tokens" to the input sequence. The goals of prompt tuning are similar to those of fine-tuning, but with improved efficiency compared with adjusting all of the model's parameters. Thus it is analogous to Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA, but it works in a completely different way: the model's weights stay frozen, and only the extra prompt embeddings are learned.
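As a rough sketch of the mechanism, prompt tuning prepends a small set of trainable "soft prompt" vectors to the embedded input tokens before they reach the (frozen) model. The minimal illustration below uses invented sizes and plain NumPy in place of a real LLM, so it shows only the input-construction step, not the training loop:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, embed_dim = 100, 8   # toy sizes, not a real model
num_prompt_tokens = 4            # number of learned soft prompt vectors

# Frozen pretrained token embedding table (not updated during prompt tuning).
token_embeddings = rng.normal(size=(vocab_size, embed_dim))

# Trainable soft prompt: free vectors in embedding space,
# not tied to any token in the vocabulary.
soft_prompt = rng.normal(size=(num_prompt_tokens, embed_dim))

def build_input(token_ids):
    """Prepend the soft prompt embeddings to the embedded input tokens."""
    embedded = token_embeddings[token_ids]          # (seq_len, embed_dim)
    return np.concatenate([soft_prompt, embedded])  # (num_prompt_tokens + seq_len, embed_dim)

x = build_input(np.array([5, 17, 42]))
print(x.shape)  # -> (7, 8): 4 prompt vectors + 3 input tokens
```

During training, gradients would flow only into `soft_prompt`, which is why the method stores and updates far fewer parameters than full fine-tuning.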

Note that the term "prompt tuning" is sometimes used in a more general sense, meaning the tuning of prompts. In that usage, it refers to automatic prompt optimization, or "programmatic prompting."

Research on Prompt Tuning
