Update README.md
update the url of the technical report

README.md CHANGED
@@ -12,24 +12,19 @@ datasets:
 
 
 [](https://github.com/D2I-ai/dasd-thinking)
-
+<a href="https://arxiv.org/abs/2601.09088" target="_blank"><img src="https://img.shields.io/badge/Technical Report-b5212f.svg?logo=arxiv" height="21px"></a>
 
 
 
 [](https://huggingface.co/Alibaba-Apsara/DASD-4B-Thinking)
-[](https://www.modelscope.cn/models/Alibaba-Apsara/DASD-4B-Thinking)
 
 
 [](https://huggingface.co/Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview)
-[](https://www.modelscope.cn/models/Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview)
-
 
 
 [](https://huggingface.co/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b)
-[](https://www.modelscope.cn/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b)
 
 [](https://huggingface.co/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b-Logprob)
-[](https://www.modelscope.cn/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b-Logprob)
 
 
 | Model | AIME25 | LiveCodeBench v6 | GPQA-D | Average |
@@ -46,7 +41,7 @@ We release DASD-30B-A3B-Thinking-Preview, a highly capable 30B Mixture-of-Expert
 
 > Note1: To demonstrate the scalability and efficiency of our data recipe, this preview model was trained only on the first-stage (Low-Temperature) dataset (~105K samples) derived from our 4B pipeline, without any re-curation or additional RL. Even with this lightweight recipe, it achieves excellent performance among open MoE models.
 
-> Note2: This model (DASD-30B-A3B-Thinking-Preview) is a preliminary research artifact trained only on the first stage (Low-Temperature Sampling) of our pipeline to demonstrate the scalability of our data recipe. For the fully trained model and complete methodology, please refer to [DASD-4B-Thinking](https://huggingface.co/Alibaba-Apsara/DASD-4B-Thinking) and our [Technical Report](https://
+> Note2: This model (DASD-30B-A3B-Thinking-Preview) is a preliminary research artifact trained only on the first stage (Low-Temperature Sampling) of our pipeline to demonstrate the scalability of our data recipe. For the fully trained model and complete methodology, please refer to [DASD-4B-Thinking](https://huggingface.co/Alibaba-Apsara/DASD-4B-Thinking) and our [Technical Report](https://arxiv.org/abs/2601.09088).
 
 
 
@@ -136,12 +131,13 @@ While DASD-30B-A3B-Thinking-Preview demonstrates remarkable performance across m
 DASD-Thinking is developed by Alibaba Cloud, as part of our mission to advance open, efficient, and trustworthy reasoning systems. If you find this work useful in your research or applications, please cite our technical report.
 
 ```bibtex
-@
+@article{yan2026dasd,
 title={Distribution-Aligned Sequence Distillation for Superior Long-CoT Reasoning},
 author={Yan, Shaotian and Liu, Kaiyuan and Shen, Chen and Wang, Bing and Fan, Sinan and Zhang, Jun and Wu, Yue and Wang, Zheng and Ye, Jieping},
 year={2026},
-
-}
+journal={arXiv preprint arXiv:2601.09088},
+url={https://arxiv.org/abs/2601.09088}
+}
 
 @article{liu2025where,
 title={Where Did This Sentence Come From? Tracing Provenance in LLM Reasoning Distillation},