Abstract
ConFu is a novel speculative decoding framework that enhances draft model performance by enabling future-oriented generation prediction through contemplate tokens and soft prompts, achieving improved token acceptance rates and faster inference speeds.
Speculative decoding has emerged as a powerful approach to accelerate large language model (LLM) inference by employing lightweight draft models to propose candidate tokens that are subsequently verified by the target model. The effectiveness of this paradigm critically depends on the quality of the draft model. While recent advances such as the EAGLE series achieve state-of-the-art speedups, existing draft models remain limited by error accumulation: they condition only on the current prefix, causing their predictions to drift from the target model across decoding steps. In this work, we propose ConFu (Contemplate the Future), a novel speculative decoding framework that enables draft models to anticipate the future direction of generation. ConFu introduces (i) contemplate tokens and soft prompts that allow the draft model to leverage future-oriented signals from the target model at negligible cost, (ii) a dynamic contemplate token mechanism with a mixture-of-experts (MoE) design that enables context-aware future prediction, and (iii) a training framework with anchor token sampling and future prediction replication that learns robust future prediction. Experiments demonstrate that ConFu improves token acceptance rates and generation speed over EAGLE-3 by 8--11% across various downstream tasks with Llama-3 3B and 8B models. We believe our work is the first to bridge speculative decoding with continuous reasoning tokens, offering a new direction for accelerating LLM inference.
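To make the draft-and-verify paradigm concrete, here is a minimal greedy sketch of speculative decoding in Python. This is a generic illustration of the loop the abstract describes, not ConFu's implementation: `target` and `draft` stand in for the two models as simple next-token callables, and the contemplate-token conditioning is omitted.

```python
def speculative_decode(target, draft, prefix, k=4, steps=8):
    """Greedy draft-and-verify loop (illustrative sketch only).

    `target` and `draft` are callables mapping a token sequence to the
    next token. Each step, the draft proposes k candidate tokens; the
    target accepts the longest matching prefix, then emits one token of
    its own (a correction on mismatch, a bonus token if all k match).
    """
    out = list(prefix)
    for _ in range(steps):
        # Draft model proposes k candidate tokens from the current prefix.
        proposal = []
        ctx = list(out)
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # Target verifies: accept candidates while they match its own
        # greedy choice; on the first mismatch, emit the corrected token.
        for t in proposal:
            if target(out) == t:
                out.append(t)
            else:
                out.append(target(out))
                break
        else:
            out.append(target(out))  # bonus token when all k are accepted
    return out
```

Note that the output always equals the target model's own greedy rollout; the draft quality only changes how many tokens are accepted per verification step, which is exactly the acceptance rate that ConFu's future-oriented conditioning aims to raise.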
Community
We propose a new speculative decoding method that improves EAGLE-3 performance by 8-11% by exploiting the future intent of the target model. This future intent (a thought, or "contemplate", signal) serves as extra conditioning for the draft model, letting it predict better-aligned draft tokens.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- DFlash: Block Diffusion for Flash Speculative Decoding (2026)
- DART: Diffusion-Inspired Speculative Decoding for Fast LLM Inference (2026)
- PRISM: Parametrically Refactoring Inference for Speculative Sampling Draft Models (2026)
- MARS: Unleashing the Power of Speculative Decoding via Margin-Aware Verification (2026)
- Make Every Draft Count: Hidden State based Speculative Decoding (2026)
- Training-free Dropout Sampling for Semantic Token Acceptance in Speculative Decoding (2026)
- HIPPO: Accelerating Video Large Language Models Inference via Holistic-aware Parallel Speculative Decoding (2026)