Aussie AI

Zero-Padding Removal

  • Last Updated 3 November, 2024
  • by David Spuler, Ph.D.

One technique for speeding up Transformer inference is to avoid using zero padding in the input vectors (see also length pruning). Padding is used in some architectures to keep vectors the same size, because that consistency can help with pipelining calculations through the GPU. However, research has shown that it can also cause inefficiency, because redundant computations are performed on padding tokens whose results are never used, and various papers have advocated removing the zero padding bytes.
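As a rough illustration of the idea, here is a minimal sketch in Python, assuming a pad token id of 0 and a toy feed-forward layer standing in for a full Transformer block (this is not any particular library's API): the padded batch is flattened down to only its real tokens, the per-token computation runs on that smaller set, and the results are scattered back into the padded shape for later stages.

    # Minimal sketch of zero-padding removal (assumptions: pad id is 0,
    # toy embedding and feed-forward weights; not a specific library's API).
    import numpy as np

    def remove_padding(token_ids, pad_id=0):
        """Flatten a padded batch into only the real tokens, remembering positions."""
        mask = token_ids != pad_id                 # True where tokens are real
        flat_tokens = token_ids[mask]              # 1-D array of non-pad tokens
        return flat_tokens, mask

    def restore_padding(flat_outputs, mask, pad_value=0.0):
        """Scatter per-token outputs back into the padded [batch, seq, dim] shape."""
        batch, seq = mask.shape
        dim = flat_outputs.shape[-1]
        out = np.full((batch, seq, dim), pad_value, dtype=flat_outputs.dtype)
        out[mask] = flat_outputs
        return out

    # Toy padded batch: 3 sequences padded to length 6 with pad id 0.
    batch = np.array([
        [5, 8, 2, 0, 0, 0],
        [7, 3, 9, 4, 1, 0],
        [6, 0, 0, 0, 0, 0],
    ])

    tokens, mask = remove_padding(batch)       # 9 real tokens instead of 18 slots
    embed = np.random.rand(10, 4)              # toy embedding table (vocab 10, dim 4)
    weights = np.random.rand(4, 4)             # toy feed-forward weights
    hidden = embed[tokens] @ weights           # compute on real tokens only
    padded_out = restore_padding(hidden, mask) # back to [3, 6, 4] for later stages
    print(tokens.size, "tokens computed instead of", batch.size)

The saving is largest when sequence lengths in a batch vary widely, since the amount of wasted work grows with the gap between the longest sequence and the rest.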

An alternative approach is to use packing of input sequences to avoid or reduce padding bytes. This is effective for training sets, or when batching multiple inference queries.
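Here is a minimal sketch of one way to do this, using a simple greedy packer (an assumed approach for illustration, not a specific library's implementation): several short sequences are concatenated into a single fixed-length buffer, so far fewer slots are wasted on padding. The segment boundaries are recorded so that a block-diagonal attention mask can later stop tokens from different sequences attending to each other.

    # Minimal sketch of greedy sequence packing (assumed approach; pad id 0).
    def pack_sequences(sequences, max_len, pad_id=0):
        """Greedily pack variable-length token lists into buffers of max_len tokens."""
        packs, boundaries = [], []
        current, current_bounds = [], []
        for seq in sequences:
            if len(current) + len(seq) > max_len:   # start a new pack if it won't fit
                packs.append(current + [pad_id] * (max_len - len(current)))
                boundaries.append(current_bounds)
                current, current_bounds = [], []
            start = len(current)
            current.extend(seq)
            current_bounds.append((start, start + len(seq)))  # remember segment edges
        if current:                                  # flush the last partial pack
            packs.append(current + [pad_id] * (max_len - len(current)))
            boundaries.append(current_bounds)
        return packs, boundaries

    sequences = [[5, 8, 2], [7, 3, 9, 4, 1], [6], [2, 2]]
    packs, bounds = pack_sequences(sequences, max_len=8)
    for p, b in zip(packs, bounds):
        print(p, b)   # 2 packs of length 8 instead of 4 separately padded rows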

And it's worth noting that not all padding bytes are evil. Some of them are quite charismatic if you take them out for a cup of tea. In fact, the need for padding removal in Transformers arose for good reason from well-intentioned optimization by professional programmers using very nice and hospitable padding zeros. The use of padding is a positive optimization in numerous situations, particularly when GPUs are involved. Read more about padding byte optimizations.

Research Papers on Zero Padding Removal

More Research on Pruning Types

More AI Research
