Aussie AI

Vocabulary Expansion

  • Last Updated 7 December, 2024
  • by David Spuler, Ph.D.

Vocabulary expansion, also called vocabulary extension, is the technique of increasing the size of the LLM's token vocabulary. With more distinct tokens available, each individual token can encode a more specific word, symbol, or output.
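Expanding the vocabulary also means growing the model's embedding matrix (and output head) by one row per new token. A minimal sketch in plain Python, assuming a toy embedding table and the common heuristic of initializing new rows near the mean of the existing embeddings (the function and token names here are illustrative, not from any particular library):

```python
import random

def expand_embeddings(embedding, new_tokens, dim):
    """Append one new embedding row per added token, initialized
    near the mean of the existing rows (a common heuristic)."""
    mean = [sum(row[d] for row in embedding) / len(embedding)
            for d in range(dim)]
    for _ in new_tokens:
        # Small random jitter around the mean so new rows are distinct.
        embedding.append([m + random.uniform(-0.01, 0.01) for m in mean])
    return embedding

# Toy embedding matrix: 4 tokens, 3 dimensions.
emb = [[0.1, 0.2, 0.3],
       [0.4, 0.5, 0.6],
       [0.7, 0.8, 0.9],
       [1.0, 1.1, 1.2]]
emb = expand_embeddings(emb, ["<new_word>", "<new_symbol>"], dim=3)
print(len(emb))  # 6 rows: the original 4 plus 2 new tokens
```

In a real framework the equivalent step is resizing the model's token embedding layer after adding tokens to the tokenizer; the new rows are then fine-tuned so the model learns useful representations for them.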

LLM vocabulary expansion can be performed for increased accuracy, such as for foreign languages with a much greater range of words and symbols (e.g., Unicode and double-byte character set (DBCS) languages). An expanded vocabulary can also improve efficiency, because input sequences can often be encoded in fewer tokens, which makes the technique similar to token merging.
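The efficiency effect can be shown with a toy greedy longest-match tokenizer: adding a longer entry to the vocabulary lets the same input be encoded in fewer tokens. This is a self-contained sketch, not any production tokenizer's algorithm, and the example vocabulary is invented for illustration:

```python
def tokenize(text, vocab):
    """Greedy longest-prefix-match tokenization over a set of
    vocabulary strings, with single-character fallback."""
    tokens, i = [], 0
    while i < len(text):
        # Try the longest vocabulary entry matching at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character fallback
            i += 1
    return tokens

base_vocab = {"un", "break", "able"}
print(tokenize("unbreakable", base_vocab))
# ['un', 'break', 'able'] -> 3 tokens

# Expanding the vocabulary with the whole word encodes it in 1 token.
expanded_vocab = base_vocab | {"unbreakable"}
print(tokenize("unbreakable", expanded_vocab))
# ['unbreakable'] -> 1 token
```

The shorter token sequence means fewer positions for the model to process per input, which is where the inference-efficiency gain comes from.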

Most of the research on vocabulary expansion relates to foreign language translation via the research area of Neural Machine Translation (NMT). This work predates much of the LLM research and often uses non-LLM types of AI models, so there is a need for more research on vocabulary extension specifically for LLMs.


Research on Vocabulary Expansion

Research papers on increasing the size of the LLM vocabulary in tokenization:
