---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---

# Jacobi Forcing: Fast and Accurate Causal Parallel Decoding

This repository contains the `JacobiForcing_Coder_7B_v1` model, presented in the paper [Fast and Accurate Causal Parallel Decoding using Jacobi Forcing](https://huggingface.co/papers/2512.14681).

Base Model: [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)

Training Data (Jacobi trajectories): [OpenCodeInstruct_training_data_n32](https://huggingface.co/datasets/JacobiForcing/OpenCodeInstruct_training_data_n32)

Jacobi Forcing is a novel training technique that converts Large Language Models (LLMs) into native causal parallel decoders. This approach maintains the causal autoregressive backbone and addresses the AR-to-diffusion mismatch by training the model to handle noisy future blocks along its own Jacobi decoding trajectories.
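
To make the mechanism concrete, the sketch below shows plain greedy Jacobi decoding with a generic Hugging Face causal LM: a future block of draft tokens is refined in parallel on each forward pass until it reaches the fixed point that greedy autoregressive decoding would produce. This is an illustrative sketch (the function and its `block_size`/`max_iters` parameters are hypothetical names), not the repository's optimized implementation.

```python
import torch

@torch.no_grad()
def jacobi_decode_block(model, input_ids, block_size=16, max_iters=32):
    """Greedy Jacobi decoding of one future block (illustrative sketch)."""
    # Initialize the draft block arbitrarily, e.g. by repeating the last
    # prompt token; the Jacobi iteration overwrites it.
    draft = input_ids[:, -1:].repeat(1, block_size)
    for _ in range(max_iters):
        seq = torch.cat([input_ids, draft], dim=1)
        logits = model(seq).logits
        # Position i's logits predict token i+1, so the whole draft block is
        # re-predicted in parallel, each position conditioned on the
        # (possibly still wrong) tokens currently to its left.
        new_draft = logits[:, input_ids.shape[1] - 1 : -1, :].argmax(dim=-1)
        if torch.equal(new_draft, draft):
            break  # fixed point: identical to greedy autoregressive output
        draft = new_draft
    return draft
```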

It achieves up to $4.5\times$ higher tokens-per-forward and $4\times$ wall-clock speedup on coding and math tasks, while retaining near-AR generation quality.
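
Here, tokens-per-forward counts how many output tokens are accepted per model forward pass (exactly 1.0 for plain autoregressive decoding). A hypothetical helper for the two metrics quoted above, assuming your harness exposes the raw counters and timings:

```python
def decoding_speedups(accepted_tokens: int, forward_passes: int,
                      ar_seconds: float, parallel_seconds: float):
    """Hypothetical metric helper; counters/timings come from your harness."""
    tokens_per_forward = accepted_tokens / forward_passes  # 1.0 for vanilla AR
    wall_clock_speedup = ar_seconds / parallel_seconds
    return tokens_per_forward, wall_clock_speedup
```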

You can find more details on the project blog: [Jacobi Forcing Blog](https://hao-ai-lab.github.io/blogs/jacobi-forcing/).

The official code repository is available here: [GitHub Repository](https://github.com/hao-ai-lab/JacobiForcing).

## Usage

You can try the chatbot demo locally or use the provided Python inference code.

### Local Chatbot Demo
```bash
# modify the script to use your local path
streamlit run applications/jacobi_model_chat.py
```

### Inference with Code
You can use the provided `eagenerate` function for accelerated generation, much like `generate` from Hugging Face Transformers. Here is an example:

```python
import torch

from eagle.model.ea_model import EaModel

base_model_path = "Qwen/Qwen2.5-Coder-7B-Instruct"
EAGLE_model_path = "JacobiForcing/JacobiForcing_Coder_7B_v1"  # or your local path to the weights

model = EaModel.from_pretrained(
    base_model_path=base_model_path,
    ea_model_path=EAGLE_model_path,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
    total_token=-1 # Automatically configure draft tokens
)
model.eval()

your_message = "Hello"
# Qwen2.5 models ship a chat template; build the prompt with it rather
# than borrowing a conversation template from another model family.
messages = [{"role": "user", "content": your_message}]
prompt = model.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

input_ids = model.tokenizer([prompt]).input_ids
input_ids = torch.as_tensor(input_ids).cuda()

output_ids = model.eagenerate(input_ids, temperature=0.5, max_new_tokens=512)
output = model.tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output)
```
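
To sanity-check the speedup on your own hardware, you can wrap the `eagenerate` call in a rough timer. This is an illustrative sketch: it assumes `eagenerate` returns the full prompt-plus-generation sequence (as the decoding above suggests), and real benchmarking should warm up the model and average several runs.

```python
import time

start = time.perf_counter()
output_ids = model.eagenerate(input_ids, temperature=0.5, max_new_tokens=512)
elapsed = time.perf_counter() - start

# Assumes output_ids includes the prompt, so new tokens = total - prompt length.
new_tokens = output_ids.shape[1] - input_ids.shape[1]
print(f"{new_tokens} new tokens in {elapsed:.2f}s "
      f"({new_tokens / elapsed:.1f} tok/s)")
```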