---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
datasets:
- Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b
- Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b-Logprob
---

# DASD-4B-Thinking

<img src="assets/dasd-logo.png" alt="Ali" style="vertical-align: middle;">


[![GitHub](https://img.shields.io/badge/GitHub-DASD--Thinking-181717?logo=github&logoColor=white)](https://github.com/D2I-ai/dasd-thinking)&#160;
<a href="https://arxiv.org/abs/2601.09088" target="_blank"><img src="https://img.shields.io/badge/Technical Report-b5212f.svg?logo=arxiv" height="21px"></a>



[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Checkpoint-DASD--4B--Thinking-yellow)](https://huggingface.co/Alibaba-Apsara/DASD-4B-Thinking)&#160;


[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Checkpoint-DASD--30B--A3B--Thinking--Preview-yellow)](https://huggingface.co/Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview)&#160;


[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-Superior--Reasoning--SFT--gpt--oss--120b-red)](https://huggingface.co/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b)&#160;

[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-Superior--Reasoning--SFT--gpt--oss--120b--Logprob-red)](https://huggingface.co/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b-Logprob)&#160;



## 🚀 Introduction

We release **DASD-4B-Thinking**, a compact yet capable 4B dense language model specialized in **long chain-of-thought (Long-CoT) reasoning** across mathematics, code generation, and scientific reasoning. DASD-4B-Thinking is post-trained from **Qwen3-4B-Instruct-2507** (a non-thinking student) and distilled from **gpt-oss-120b** (the teacher) via a **distribution-aligned sequence distillation** pipeline, achieving strong Long-CoT reasoning performance with substantially fewer training samples (**448K**) than many existing larger models.

<div style="text-align: center;">
  <img src="assets/size_4b-performance-img.jpg" alt="benchmark" style="width: 90%;">
</div>

## 📊 Performance

| Model                           |      Data   | AIME24 | AIME25 | LiveCodeBench v5 | LiveCodeBench v6 | GPQA-D |
|---------------------------------|---------|--------|--------|--------|--------|--------|
| Qwen3-4B-Thinking-2507         |❌          | -      | 81.3   | -      | 55.2   | 65.8   |
| Qwen3-14B                     |❌           | 79.3   | 70.4   | 63.5   | -      | 64.0   |
| Qwen3-32B                     |❌           | 81.4   | 72.9   | 65.7   | -      | 68.4   |
| DeepSeek-R1-0528-Qwen3-8B     |❌          | 86.0   | 76.3   | 60.5   | -      | 61.1   |
| GLM-Z1-32B-0414             |❌            | 80.8   | 63.6   | 59.1   | -      | 66.1   |
| GLM-Z1-9B-0414            |❌              | 76.4   | 56.6   | 51.8   | -      | 58.5   |
| Mistral3-3B                 |❌             | -      | 72.1   | 54.8   | -      | 53.4   |
| Mistral3-8B                  |❌            | -      | 78.7   | 61.6   | -      | 66.8   |
| AM-thinking-v1                  |✅         | 85.3   | 74.4   | 70.3   | -      | -      |
| POLARIS-4B-Preview                |✅       | 81.2   | 79.4   | -      | -      | -      |
| OpenThoughts3-7B                  |✅       | 69.0   | 53.3   | 51.7   | -      | 53.7   |
| Pai-DistillQwen-ThoughtY-4B       |✅       | 76.7   | -      | -      | -      | 56.1   |
| Pai-DistillQwen-ThoughtY-8B         |✅     | 76.7   | -      | -      | -      | 62.1   |
| NVIDIA-OpenReasoning-Nemotron-7B    |✅     | 84.7   | 78.2   | 63.9   | -      | 61.4   |
| NVIDIA-Nemotron-Ultra-253B         |✅     | 80.8   | 72.5   | 68.1   | -      | 76.0   |
| **DASD-4B-Thinking (Ours)**       |✅       | **88.5**   | **83.3**   | **69.3**   | **67.5**   | **68.4**   |


---

## 💡 Why DASD-4B-Thinking Matters

While the community rushes to build distilled reasoning models using massive datasets (often millions of samples), DASD-4B-Thinking proves that *distribution alignment matters more than data quantity*. It establishes a new baseline for **data-efficient distillation**, delivering flagship-level reasoning in a 4B model that can run on consumer hardware.

DASD-4B-Thinking democratizes the training recipe:

* **Open-Source Model**: It achieves state-of-the-art performance among open-source models of comparable scale and outperforms significantly larger models.


* **Extreme Data Efficiency**: Achieves these results using only **448K training samples**, an order of magnitude fewer than comparable efforts. 

* **Novel Pipeline**: It presents a systematic reexamination of sequence-level distillation and introduces **a novel distribution-aligned sequence distillation pipeline**.

* **Open-Source Data**: We release the [Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b](https://huggingface.co/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b) dataset, allowing the community to reproduce our off-policy **temperature-scheduled** pipeline (see the loading sketch after this list):

  * **105K** Low-Temperature responses for stability (Stage 1).

  * **330K** High-Temperature responses for diversity (Stage 2).

* **Proven Scalability**: The exact same data recipe generalizes effectively to larger architectures, as demonstrated by our **[DASD-30B-A3B-Thinking-Preview](https://huggingface.co/Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview)** (MoE), which achieves competitive performance without extra RL.
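
As a rough starting point, the released two-stage data can be inspected with the Hugging Face `datasets` library. This is a minimal sketch: it assumes a standard `train` split, so consult the dataset card for the actual splits and columns.

```python
from datasets import load_dataset

# Load the released SFT distillation data (a sketch; the "train" split name
# is an assumption -- check the dataset card for the actual splits/columns).
ds = load_dataset("Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b", split="train")

print(ds)  # features and number of rows
for key, value in ds[0].items():
    print(f"{key}: {str(value)[:200]}")  # preview the first example
```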



## ⚙️ Post-Training Pipeline

DASD-Thinking introduces a new paradigm of **Distribution-Aligned Sequence Distillation**: an enhanced sequence-level distillation pipeline that incorporates **Temperature-scheduled Learning**, **Divergence-aware Sampling**, and **Mixed-policy Distillation**, achieving efficient capability transfer with a minimal amount of data (**448K** samples). Please refer to our [report](https://arxiv.org/abs/2601.09088) for more details.

<div style="text-align: center;">
  <img src="assets/pipeline.jpg" alt="DASD-Thinking training pipeline" style="width: 90%;">
</div>
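
To build intuition for the temperature scheduling above (lower temperature for stable teacher traces in Stage 1, higher temperature for more diverse traces in Stage 2), the toy snippet below shows how temperature reshapes a softmax distribution over next-token logits. It is purely illustrative and is not part of the DASD training code.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax: low T sharpens, high T flattens the distribution."""
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = [4.0, 2.0, 1.0, 0.5]  # hypothetical next-token logits
for t in (0.6, 1.0, 1.5):      # illustrative temperatures only
    print(t, softmax_with_temperature(logits, t).round(3))
```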

## ⚡ Quick Start


```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Alibaba-Apsara/DASD-4B-Thinking"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
messages = [
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=81920,
)

output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print(content)
```


> Note: The example includes the system prompt because it was used during all training stages. For consistent output quality, we recommend using the same system prompt at inference time; omitting it may affect the model's responses.

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:

- SGLang:

```shell
python -m sglang.launch_server --model-path Alibaba-Apsara/DASD-4B-Thinking --context-length 262144
```

- vLLM:
```shell
vllm serve Alibaba-Apsara/DASD-4B-Thinking --max-model-len 262144
```
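
Either server exposes an OpenAI-compatible API, so it can be queried with the standard OpenAI Python client. The sketch below assumes vLLM's default port 8000 on localhost (SGLang uses a different default port) and a placeholder API key; adjust both to your setup.

```python
from openai import OpenAI

# Point the client at the locally served OpenAI-compatible endpoint.
# Port 8000 is vLLM's default; change it to match your server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Alibaba-Apsara/DASD-4B-Thinking",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"},
    ],
    temperature=1.0,
    top_p=1.0,
)
print(response.choices[0].message.content)
```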






## 💡Best Practices

To achieve optimal performance, we suggest using **Temperature=1.0, TopP=1.0**.
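
With the Transformers quick-start code above, these settings correspond to passing sampling arguments to `generate`. A minimal sketch, reusing `model` and `model_inputs` from the Quick Start section:

```python
# Recommended sampling settings (Temperature=1.0, TopP=1.0).
generated_ids = model.generate(
    **model_inputs,
    do_sample=True,   # enable sampling instead of greedy decoding
    temperature=1.0,
    top_p=1.0,
    max_new_tokens=81920,
)
```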


## 📜 License

The model weights are licensed under the Apache 2.0 License.


## ⚠️ Limitation

While DASD-4B-Thinking demonstrates remarkable performance across mathematical, scientific, and coding benchmarks, **it is currently limited by the absence of tool integration and function calling capabilities.** Operating strictly in the text space, the model cannot interact with external interfaces such as code executors or APIs, which constrains its utility in agent-based workflows. Future iterations aim to bridge this gap by integrating capabilities such as knowledge retrieval and tool invocation to support more complex, interactive reasoning tasks.

## 📚 Citation

DASD-Thinking is developed by Alibaba Cloud, as part of our mission to advance open, efficient, and trustworthy reasoning systems. If you find this work useful in your research or applications, please cite our technical report.

```bibtex
@article{yan2026dasd,
  title={Distribution-Aligned Sequence Distillation for Superior Long-CoT Reasoning},
  author={Yan, Shaotian and Liu, Kaiyuan and Shen, Chen and Wang, Bing and Fan, Sinan and Zhang, Jun and Wu, Yue and Wang, Zheng and Ye, Jieping},
  year={2026},
  journal={arXiv preprint arXiv:2601.09088},
  url={https://arxiv.org/abs/2601.09088}
} 
    
@article{liu2025where,
  title={Where Did This Sentence Come From? Tracing Provenance in LLM Reasoning Distillation},
  author={Liu, Kaiyuan and Yan, Shaotian and Miao, Rui and Wang, Bing and Shen, Chen and Zhang, Jun and Ye, Jieping},
  journal={arXiv preprint arXiv:2512.20908},
  year={2025}
}
```

We welcome collaboration, feedback, and community contributions to push the boundaries of what small models can reason about—transparently and responsibly.