arxiv:2602.06019

Multi-Token Prediction via Self-Distillation

Published on Feb 5

AI-generated summary

A novel online distillation approach converts autoregressive language models into faster multi-token prediction models while preserving the original implementation, achieving significant speedups with minimal accuracy loss.

Abstract

Existing techniques for accelerating language model inference, such as speculative decoding, require training auxiliary speculator models and building and deploying complex inference pipelines. We consider a new approach for converting a pretrained autoregressive language model from a slow single next-token prediction model into a fast standalone multi-token prediction model using a simple online distillation objective. The final model retains the exact same implementation as the pretrained initial checkpoint and is deployable without any auxiliary verifier or other specialized inference code. On GSM8K, our method produces models that decode more than 3× faster on average with less than a 5% drop in accuracy relative to single-token decoding.
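The core idea of the objective can be sketched as follows: the student predicts k future tokens in a single forward pass, and its per-position distributions are trained to match the distributions the original (teacher) model produces one token at a time. This is a minimal, hypothetical sketch with toy logit tensors; the function names, shapes, and the exact loss (a cross-entropy/KL-style match) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last (vocabulary) axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def online_distillation_loss(student_logits, teacher_logits):
    """Cross-entropy of the student's multi-token predictions against the
    teacher's one-step-at-a-time next-token distributions.

    student_logits: (k, vocab) -- k future positions predicted in one pass
    teacher_logits: (k, vocab) -- same positions, produced autoregressively
    Shapes and names are hypothetical; the paper's objective may differ.
    """
    p_teacher = softmax(teacher_logits)
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    # Mean soft cross-entropy over the k predicted positions.
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())

# Toy demonstration: a student that matches the teacher exactly attains a
# strictly lower loss than a random student (H(p) vs. H(p) + KL(p||q)).
rng = np.random.default_rng(0)
k, vocab = 4, 8
teacher = rng.normal(size=(k, vocab))
loss_mismatched = online_distillation_loss(rng.normal(size=(k, vocab)), teacher)
loss_matched = online_distillation_loss(teacher, teacher)
print(loss_matched < loss_mismatched)  # → True
```

Minimizing this loss pushes the student's single-pass k-token distributions toward the teacher's sequential ones, which is what lets the final model decode multiple tokens per forward pass without an auxiliary verifier.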

