GRIFFIN: Effective Token Alignment for Faster Speculative Decoding

This repository contains the GRIFFIN draft model for Meta-Llama-3-70B-Instruct, as presented in the paper GRIFFIN: Effective Token Alignment for Faster Speculative Decoding.

For the full project, including code and training instructions, visit the GitHub repository: https://github.com/hsj576/GRIFFIN

Abstract

Speculative decoding accelerates inference in large language models (LLMs) by generating multiple draft tokens simultaneously. However, existing methods often struggle with token misalignment between the training and decoding phases, limiting their performance. To address this, we propose GRIFFIN, a novel framework that incorporates a token-alignable training strategy and a token-alignable draft model to mitigate misalignment. The training strategy employs a loss masking mechanism to exclude highly misaligned tokens during training, preventing them from negatively impacting the draft model's optimization. The token-alignable draft model introduces input tokens to correct inconsistencies in generated features. Experiments on LLaMA, Vicuna, Qwen and Mixtral models demonstrate that GRIFFIN achieves an average acceptance length improvement of over 8% and a speedup ratio exceeding 7%, outperforming current speculative decoding state-of-the-art methods.
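For readers unfamiliar with the draft-then-verify loop that speculative decoding relies on, here is a minimal greedy-verification sketch in plain Python. The `target_next_token` callable and the toy model are illustrative stand-ins, not the actual GRIFFIN API:

```python
def verify_draft(target_next_token, context, draft_tokens):
    """Greedy speculative verification: accept draft tokens one by one
    while they match what the target model would have produced."""
    accepted = []
    for tok in draft_tokens:
        expected = target_next_token(context + accepted)
        if tok == expected:
            accepted.append(tok)       # draft token matches the target: keep it
        else:
            accepted.append(expected)  # mismatch: take the target's token and stop
            return accepted
    # All draft tokens accepted; append one bonus token from the target.
    accepted.append(target_next_token(context + accepted))
    return accepted

# Toy "target model": always continues the sequence 0, 1, 2, ...
target = lambda ctx: ctx[-1] + 1 if ctx else 0

print(verify_draft(target, [0, 1], [2, 3, 9]))  # → [2, 3, 4]
```

The more draft tokens survive verification, the fewer expensive target-model forward passes are needed per generated token, which is exactly what GRIFFIN's alignment improvements target.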

Overview and Features

GRIFFIN is a novel framework designed to address token misalignment in speculative decoding. This repository provides the implementation of GRIFFIN, including its token-alignable training strategy and token-alignable draft model.
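The token-alignable training idea can be illustrated with a small sketch: tokens whose draft prediction diverges too far from the target are masked out of the loss. This is a simplified, hypothetical illustration in plain Python (the actual implementation operates on logits in PyTorch); the scores and threshold here are assumptions, not GRIFFIN's exact criterion:

```python
def masked_alignment_loss(per_token_losses, misalignment_scores, threshold=0.5):
    """Average the training loss only over tokens whose misalignment score
    is below the threshold; highly misaligned tokens are excluded so they
    do not distort the draft model's optimization."""
    kept = [loss for loss, score in zip(per_token_losses, misalignment_scores)
            if score < threshold]
    if not kept:  # every token masked: contribute no loss for this sequence
        return 0.0
    return sum(kept) / len(kept)

losses = [0.25, 1.5, 0.75, 2.0]
scores = [0.1, 0.9, 0.2, 0.8]  # high score = badly misaligned token
print(masked_alignment_loss(losses, scores))  # → 0.5
```

Only the two well-aligned tokens contribute to the averaged loss; the two misaligned ones are dropped rather than pulling the optimization toward targets the draft model cannot match at decoding time.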

GRIFFIN is:

  • 4.2x faster than vanilla decoding.
  • 1.3x faster than EAGLE-2.

Acceleration demo of GRIFFIN for Llama-3-8B on an RTX 4090 GPU.


Performance Benchmarks

Speedup ratios of GRIFFIN at temperature = 0.


Speedup ratios of GRIFFIN at temperature = 1.

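The acceptance length reported alongside these benchmarks is the average number of tokens committed per draft-verify round; a longer run of accepted tokens means fewer target-model forward passes per generated token. A minimal sketch of the metric, using made-up illustrative counts rather than measured values:

```python
def mean_acceptance_length(tokens_per_round):
    """Average number of tokens committed per draft-verify round;
    higher means fewer target-model passes per generated token."""
    return sum(tokens_per_round) / len(tokens_per_round)

rounds = [4, 3, 5, 2, 4]  # tokens accepted in each round (illustrative only)
print(mean_acceptance_length(rounds))  # → 3.6
```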

Inference with Code

You can use the eagenerate method of the EaModel class for accelerated generation, much like the generate method from Hugging Face Transformers.

import torch
from model.ea_model_griffin import EaModel
from fastchat.model import get_conversation_template

# Ensure base_model_path points to the original LLM and ea_model_path to the GRIFFIN draft model
base_model_path = "meta-llama/Meta-Llama-3-70B-Instruct"
EAGLE_model_path = "husj576/GRIFFIN-llama3-instruct-70B" # This model

# Load the GRIFFIN enhanced model
model = EaModel.from_pretrained(
    base_model_path=base_model_path,
    ea_model_path=EAGLE_model_path,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
    total_token=-1 # Automatically configure total_token
)
model.eval()

your_message="Hello, how are you today?"

# Use the chat template that matches the base model. The GitHub example uses
# "vicuna"; for Llama-3 Instruct models, FastChat's "llama-3" template applies.
# See `fastchat.model.get_conversation_template` for the available templates.
conv = get_conversation_template("llama-3")
conv.append_message(conv.roles[0], your_message)
conv.append_message(conv.roles[1], None) # Append an empty assistant message to prompt generation
prompt = conv.get_prompt()

# Tokenize the prompt
input_ids = model.tokenizer([prompt]).input_ids
input_ids = torch.as_tensor(input_ids).cuda()

# Generate output using eagenerate
output_ids = model.eagenerate(input_ids, temperature=0.5, max_new_tokens=512)

# Decode and print the generated text
output = model.tokenizer.decode(output_ids[0])
print(output)

Citation

If you find this repository helpful, please cite our paper:

@misc{hu2025griffineffectivetokenalignment,
      title={GRIFFIN: Effective Token Alignment for Faster Speculative Decoding}, 
      author={Shijing Hu and Jingyang Li and Xingyu Xie and Zhihui Lu and Kim-Chuan Toh and Pan Zhou},
      year={2025},
      eprint={2502.11018},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.11018}, 
}
Model details: Safetensors, 3B params, F32 tensors.
