Aussie AI

Attention Head Pruning Research

  • Last Updated: 2 November 2024
  • by David Spuler, Ph.D.

Attention head pruning, often abbreviated to simply "head pruning", is a structured pruning technique that removes entire attention heads. It is a type of "width pruning" that makes the network "thinner". Multi-head attention was one of the main advances in the seminal 2017 Transformer paper, but research has since shown that the attention mechanism is computationally expensive and that there are various ways to optimize its efficiency, including removing redundant attention heads.
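
As a concrete illustration, here is a minimal sketch of what head pruning does inside a multi-head attention computation. This is hypothetical code, not taken from any particular paper or library: the dimensions, weights, and the choice of which heads to keep are arbitrary, and a mask is used to zero out the pruned heads purely for clarity.

    # Minimal head-pruning sketch (hypothetical; sizes and kept heads are arbitrary)
    import torch

    def multi_head_attention(x, w_q, w_k, w_v, w_o, num_heads, keep_heads):
        """Scaled dot-product attention keeping only the heads in keep_heads."""
        batch, seq_len, d_model = x.shape
        d_head = d_model // num_heads

        def split_heads(t):
            return t.view(batch, seq_len, num_heads, d_head).transpose(1, 2)

        q, k, v = split_heads(x @ w_q), split_heads(x @ w_k), split_heads(x @ w_v)
        scores = (q @ k.transpose(-2, -1)) / d_head ** 0.5
        heads = torch.softmax(scores, dim=-1) @ v      # (batch, heads, seq, d_head)

        # Structured "width" pruning: drop whole heads, not individual weights.
        mask = torch.zeros(num_heads)
        mask[list(keep_heads)] = 1.0
        heads = heads * mask.view(1, num_heads, 1, 1)

        out = heads.transpose(1, 2).reshape(batch, seq_len, d_model)
        return out @ w_o

    d_model, num_heads = 64, 8
    x = torch.randn(2, 10, d_model)

    def rand_w():
        return torch.randn(d_model, d_model) / d_model ** 0.5

    # Keep only half of the 8 heads (indices chosen arbitrarily for illustration).
    y = multi_head_attention(x, rand_w(), rand_w(), rand_w(), rand_w(),
                             num_heads, keep_heads={0, 2, 4, 6})
    print(y.shape)  # torch.Size([2, 10, 64])

Note that masking only illustrates the structure of the optimization; a real implementation removes the pruned heads' rows and columns from the Q/K/V and output projection matrices, so that the matrix multiplications actually shrink and inference gets faster.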

In addition to head pruning techniques that remove redundant or under-utilized attention heads, there is research into using simpler attention heads (see approximate attention heads) and into reducing the cost of attention over long sequences (see non-autoregression architectures). More generally, there is also research into optimized Transformer architectures.

Head pruning can be combined with various other optimization techniques, such as quantization. It is also orthogonal to "depth pruning" methods such as layer pruning and early exit, so combined depth/width pruning is possible.
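
As a rough illustration of why these optimizations compose, the following hypothetical sketch first shrinks a value-projection matrix by removing the columns of pruned heads (width pruning), then quantizes the surviving weights with a simple symmetric int8 scheme. The shapes, kept heads, and quantization scheme are illustrative only and not from any specific paper or library.

    # Hypothetical sketch of combining head pruning with weight quantization.
    import torch

    d_model, num_heads = 64, 8
    d_head = d_model // num_heads
    w_v = torch.randn(d_model, d_model)      # value projection; heads along columns

    # 1. Width pruning: physically remove the columns belonging to pruned heads,
    #    so the matrix (and the matmul) actually shrinks.
    keep_heads = [0, 2, 4, 6]
    keep_cols = torch.cat([torch.arange(h * d_head, (h + 1) * d_head) for h in keep_heads])
    w_v_pruned = w_v[:, keep_cols]           # (64, 32): half the columns remain

    # 2. Quantization: reduce the precision of the surviving weights.
    scale = w_v_pruned.abs().max() / 127.0
    w_v_int8 = torch.round(w_v_pruned / scale).to(torch.int8)
    print(w_v.numel() * 4, "bytes ->", w_v_int8.numel(), "bytes")  # fp32 -> int8

Pruning reduces the number of weights, while quantization reduces the size of each remaining weight, which is why the two techniques are largely orthogonal and can be stacked.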

Attention Head Pruning Research Papers

Research papers on head pruning:
