Aussie AI
Post-Optimization Fine-Tuning (POFT)
-
Last Updated 30 December, 2024
-
by David Spuler, Ph.D.
Post-Optimization Fine-Tuning (POFT) is model fine-tuning performed after certain model compression optimizations, such as quantization or pruning. The idea is that model compression somewhat reduces the model's accuracy by removing or approximating some weights, so extra fine-tuning is needed to recover the lost accuracy. However, there are now various model compression methods that don't need additional fine-tuning. Note that POFT should not be confused with Parameter-Efficient Fine-Tuning (PEFT).
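The idea can be illustrated with a minimal sketch, assuming a toy linear model rather than a real LLM: magnitude pruning zeros out the smallest weights, which hurts accuracy, and a short round of gradient-descent fine-tuning on the surviving weights recovers much of the loss. The data, model, and training loop here are all hypothetical stand-ins for illustration only.

```python
import numpy as np

# Hypothetical toy setup: a linear model y = X @ w, "trained" weights
# that approximate the true weights with some noise.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -0.1, 0.05, 3.0])
X = rng.normal(size=(200, 4))
y = X @ w_true
w = w_true + rng.normal(scale=0.3, size=4)  # imperfectly trained weights

# Magnitude pruning: zero out the 50% of weights with smallest magnitude.
k = len(w) // 2
threshold = np.sort(np.abs(w))[k - 1]
mask = np.abs(w) > threshold
w_pruned = w * mask

def mse(weights):
    return float(np.mean((X @ weights - y) ** 2))

loss_before = mse(w_pruned)  # accuracy lost due to pruning

# Post-optimization fine-tuning: gradient descent on surviving weights
# only; the mask keeps pruned weights at exactly zero.
lr = 0.05
for _ in range(200):
    grad = 2.0 * X.T @ (X @ w_pruned - y) / len(y)
    w_pruned -= lr * grad * mask

loss_after = mse(w_pruned)  # fine-tuning recovers much of the loss
```

In a real POFT pipeline the same pattern applies at scale: the pruning mask is fixed, and the optimizer updates only the unpruned parameters (or, for quantization, the quantized weights are fine-tuned via techniques such as quantization-aware training).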
Research on POFT
The need for fine-tuning after various model optimizations is so standard that AI research papers rarely examine it in detail as a standalone issue. Nevertheless, this use of fine-tuning has some specific characteristics, and there are some papers with further analysis of POFT:
- Miles Williams, George Chrysostomou, Nikolaos Aletras, 22 Oct 2024, Self-calibration for Language Model Quantization and Pruning, https://arxiv.org/abs/2410.17170
- Jiun-Man Chen, Yu-Hsuan Chao, Yu-Jie Wang, Ming-Der Shieh, Chih-Chung Hsu, Wei-Fen Lin, 11 Mar 2024, QuantTune: Optimizing Model Quantization with Adaptive Outlier-Driven Fine Tuning, https://arxiv.org/abs/2403.06497 (Outlier-correcting fine-tuning and quantization method.)
- Deus Ex Machina, Dec 2024, Overview of Post-training Quantization and Examples of Algorithms and Implementations, https://deus-ex-machina-ism.com/?p=62443
- Kyle Wiggers, December 23, 2024, A popular technique to make AI more efficient has drawbacks, https://techcrunch.com/2024/12/23/a-popular-technique-to-make-ai-more-efficient-has-drawbacks/
More AI Research
Read more about: