Files changed (1)
  1. README.md +19 -101
README.md CHANGED
@@ -1,110 +1,28 @@
  ---
  license: apache-2.0
- library_name: transformers
  ---
-
- # SDAR
-
- <div align="center">
- <img src="https://raw.githubusercontent.com/JetAstra/SDAR/main/assets/SDAR_doc_head.png">
-
-
- <div>&nbsp;</div>
-
- [Arxiv](https://arxiv.org/abs/2510.06303) • [💻Github Repo](https://github.com/JetAstra/SDAR) • [🤗Model Collections](https://huggingface.co/collections/JetLM/sdar-689b1b6d392a4eeb2664f8ff)
-
- </div>
-
- # Introduction
-
- **SDAR** (**S**ynergy of **D**iffusion and **A**uto**R**egression) is a large language model that integrates autoregressive (AR) and discrete diffusion modeling strategies. It combines the efficient training paradigm of AR models with the highly parallel inference capability of diffusion models, while delivering performance fully on par with SOTA open-source AR models. At the same time, SDAR sets a new benchmark as the most powerful diffusion language model to date. We highlight three major conclusions from our study:
-
  > [!IMPORTANT]
- > Take-home message
- >
- > - **Balanced Efficiency:** SDAR unifies the **efficient training** of AR models with the **parallel inference** of diffusion, achieving both fast training and inference.
- > - **Fair Comparisons:** In rigorously controlled experiments, SDAR achieves **on-par general task performance** with strong AR baselines, ensuring credibility and reproducibility.
- > - **Superior Learning Efficiency:** On complex scientific reasoning tasks (e.g., GPQA, ChemBench, Physics), SDAR shows **clear gains over AR models** of the same scale, approaching or even exceeding leading closed-source systems.
-
- # Inference
-
- ## Using the tailored inference engine [JetEngine](https://github.com/Labman42/JetEngine)

- JetEngine enables more efficient inference compared to the built-in implementation.
-
- ```bash
- git clone https://github.com/Labman42/JetEngine.git
- cd JetEngine
- pip install .
  ```
-
- The following example shows how to quickly load a model with JetEngine and run a prompt end-to-end.
-
- ```python
- import os
- from jetengine import LLM, SamplingParams
- from transformers import AutoTokenizer
-
- model_path = os.path.expanduser("/path/to/your/sdar-model")
- tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
- # Initialize the LLM
- llm = LLM(
-     model_path,
-     enforce_eager=True,
-     tensor_parallel_size=1,
-     mask_token_id=151669,  # Optional: only needed for masked/diffusion models
-     block_length=4
- )
-
- # Set sampling/generation parameters
- sampling_params = SamplingParams(
-     temperature=1.0,
-     topk=0,
-     topp=1.0,
-     max_tokens=256,
-     remasking_strategy="low_confidence_dynamic",
-     block_length=4,
-     denoising_steps=4,
-     dynamic_threshold=0.9
- )
-
- # Prepare a simple chat-style prompt
- prompt = tokenizer.apply_chat_template(
-     [{"role": "user", "content": "Explain what reinforcement learning is in simple terms."}],
-     tokenize=False,
-     add_generation_prompt=True
- )
-
- # Generate text
- outputs = llm.generate_streaming([prompt], sampling_params)
  ```

- # Performance
-
- ### SDAR vs. Qwen
-
- For **SDAR** models, inference hyperparameters are set to: `block_length = 4`, `denoising_steps = 4`, and greedy decoding.
-
- For **Qwen3-1.7B-AR-SFT** and **Qwen3-30B-AR-SFT**, we use *greedy decoding*; the scores for the base models **Qwen3-1.7B-Base** and **Qwen3-30B-Base** are taken from the [Qwen3 Technical Report](https://arxiv.org/abs/2505.09388).
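
For illustration, the JetEngine configuration from the example above could be restricted to this evaluation setting roughly as follows; treat it as a sketch, since mapping greedy decoding to `temperature=0.0` and the remasking values shown here are assumptions carried over from the earlier example, not settings confirmed for these runs.

```python
from jetengine import SamplingParams

# Sketch of the decoding setup described above (block_length = 4, denoising_steps = 4,
# greedy decoding). Temperature and remasking values are assumptions, not reported settings.
eval_params = SamplingParams(
    temperature=0.0,      # assumed mapping for greedy decoding
    topk=0,               # mirrors the example above
    topp=1.0,             # mirrors the example above
    max_tokens=256,       # mirrors the example above; adjust per benchmark
    block_length=4,       # as stated in the text
    denoising_steps=4,    # as stated in the text
    remasking_strategy="low_confidence_dynamic",  # assumption, carried over from the example
    dynamic_threshold=0.9,                        # assumption, carried over from the example
)
```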
-
- <p align="center">
- <img src="https://raw.githubusercontent.com/JetAstra/SDAR/main/assets/table1.png" style="max-width:100%; height:auto;">
- </p>
-
- ### SDAR-Sci vs. AR Baseline
-
- This table presents a **controlled comparison** between AR and SDAR under the same backbone and dataset settings.
- The results are averaged over 8 runs for GPQA, and over 32 runs each for AIME 2024, AIME 2025, and LiveMathBench.
-
- <p align="center">
- <img src="https://raw.githubusercontent.com/JetAstra/SDAR/main/assets/table2.png" style="max-width:100%; height:auto;">
- </p>
-
- ### SDAR-Sci vs. Other Models
-
- This table positions **SDAR-30B-A3B-Sci(sample)** against leading open-source and closed-source LLMs.
- Scores for external models are sourced from the [InternLM/Intern-S1](https://github.com/InternLM/Intern-S1) repository.
-
- <p align="center">
- <img src="https://raw.githubusercontent.com/JetAstra/SDAR/main/assets/table3.png" style="max-width:100%; height:auto;">
- </p>
 
  ---
+ name: SDAR-1.7B-Chat
+ base_model: JetLM/SDAR-1.7B-Chat
  license: apache-2.0
+ pipeline_tag: text-generation
+ tasks:
+ - text-generation
+ - text-to-text-generation
+ language: en
  ---
  > [!IMPORTANT]
+ > Original Model Link: [https://huggingface.co/JetLM/SDAR-1.7B-Chat](https://huggingface.co/JetLM/SDAR-1.7B-Chat)
+ >

  ```
+ name: SDAR-1.7B-Chat
+ base_model: JetLM/SDAR-1.7B-Chat
+ license: apache-2.0
+ pipeline_tag: text-generation
+ tasks:
+ - text-generation
+ - text-to-text-generation
+ language: en
  ```

+ # SDAR-1.7B-Chat
+ SDAR-1.7B-Chat is a diffusion language model. This fork adds compatibility support beyond FlashAttention and CUDA.
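
Since this fork targets environments without flash-attn or CUDA, the sketch below shows one way the checkpoint might be loaded with plain Transformers on a fallback attention backend. The use of `AutoModelForCausalLM` with `trust_remote_code`, the `attn_implementation="sdpa"` setting, and the device choice are assumptions rather than documented usage; text generation itself would still go through a diffusion-style decoding loop (e.g., the JetEngine example in the original README), not ordinary autoregressive decoding.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical loading sketch for a no-flash-attn / no-CUDA environment.
# Assumptions: the fork exposes a Transformers-compatible model class via
# trust_remote_code, and that class honors the attn_implementation argument.
model_id = "JetLM/SDAR-1.7B-Chat"  # or a local path to this fork

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float32,   # use bfloat16/float16 on accelerators
    attn_implementation="sdpa",  # PyTorch SDPA instead of flash-attn
)
model.to("cpu")  # or "cuda" / "mps" where available
```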