DASD-30B-A3B-Thinking-Preview


| Model | AIME25 | LiveCodeBench v6 | GPQA-D | Average |
|---|---|---|---|---|
| gpt-oss-20b | 91.7 | 61.0 | 71.5 | 74.7 |
| Qwen3-30B-A3B-Thinking-2507 | 85.0 | 66.0 | 73.4 | 74.8 |
| NVIDIA-Nemotron-3-Nano-30B-A3B | 89.1 | 68.3 | 73.0 | 76.8 |
| DASD-30B-A3B-Thinking-Preview (Ours) | 86.7 | 72.8 | 72.3 | 77.3 |

🚀 Introduction

We release DASD-30B-A3B-Thinking-Preview, a highly capable 30B Mixture-of-Experts (MoE) language model specialized in long chain-of-thought (Long-CoT) reasoning across mathematics, code generation, and scientific reasoning. DASD-30B-A3B-Thinking-Preview is post-trained from Qwen3-30B-A3B-Instruct-2507 (non-thinking student) and distilled from gpt-oss-120b (teacher) via our Distribution-Aligned Sequence Distillation (DASD) pipeline.

Note 1: To demonstrate the scalability and efficiency of our data recipe, this preview model was trained only on the first-stage (low-temperature sampling) dataset (~105K samples) derived from our 4B pipeline, without any re-curation or additional RL. Even with this lightweight recipe, it achieves strong performance among open MoE models.

Note 2: DASD-30B-A3B-Thinking-Preview is therefore a preliminary research artifact. For the fully trained model and the complete methodology, please refer to DASD-4B-Thinking and our Technical Report.
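For intuition only, here is a minimal, hypothetical sketch of what first-stage data collection could look like: sample Long-CoT responses from the teacher at low temperature and keep the (prompt, response) pairs for supervised fine-tuning of the student. The prompt set, temperature value, generation budget, and filtering below are all placeholders, not the actual recipe; see the Technical Report for the real pipeline.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch only: every specific value below is a placeholder.
teacher_name = "openai/gpt-oss-120b"  # the teacher named in the introduction

tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(
    teacher_name,
    torch_dtype="auto",
    device_map="auto",
)

prompts = ["Prove that the sum of two odd integers is even."]  # placeholder prompt set

records = []
for prompt in prompts:
    text = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer([text], return_tensors="pt").to(teacher.device)
    out = teacher.generate(
        **inputs,
        max_new_tokens=8192,  # placeholder generation budget
        do_sample=True,
        temperature=0.3,      # "low temperature": assumed value, keeps samples near the teacher's mode
    )
    completion = tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
    records.append({"prompt": prompt, "response": completion})

# The collected pairs would then be filtered and used for supervised
# fine-tuning of the student (here, Qwen3-30B-A3B-Instruct-2507).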


⚡ Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview"

# Load the tokenizer and model; device_map="auto" places the weights on the
# available GPU(s), and torch_dtype="auto" uses the checkpoint's native dtype.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
# Keep the system prompt below: it was used during all training stages
# (see the note after this snippet).
messages = [
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": prompt}
]

# Render the conversation into the model's chat format and append the
# generation prompt so the model starts a fresh assistant turn.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Long-CoT responses can be very long, so allow a large generation budget.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=81920,
)

# Strip the prompt tokens and decode only the newly generated text.
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print(content)

Note: We include the system prompt because it was used during all training stages. To ensure consistent output quality, we recommend including the same system prompt at inference time; omitting or changing it may affect the model's responses.

For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.8.5 to create an OpenAI-compatible API endpoint (a sample client request is sketched after the commands):

  • SGLang:
python -m sglang.launch_server --model-path Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview --context-length 262144
  • vLLM:
vllm serve Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview --max-model-len 262144
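Once a server is running, requests can be sent with any OpenAI-compatible client. A minimal sketch, assuming the vLLM command above with its default port 8000 (SGLang uses a different default port, so adjust base_url accordingly):

from openai import OpenAI

# Assumes an OpenAI-compatible server started with one of the commands above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # same system prompt as in training
        {"role": "user", "content": "How many positive divisors does 360 have?"},
    ],
    temperature=1.0,  # recommended sampling settings, see Best Practices below
    top_p=1.0,
)
print(response.choices[0].message.content)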

💡 Best Practices

To achieve optimal performance, we suggest sampling with temperature = 1.0 and top_p = 1.0.
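With the Transformers quick-start above, these settings can be passed directly to generate(); note that do_sample=True is required for temperature and top_p to take effect:

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=81920,
    do_sample=True,    # sampling must be enabled for temperature/top_p to apply
    temperature=1.0,
    top_p=1.0,
)

For the server deployments, pass the same temperature and top_p in each request, as in the client sketch above.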

📜 License

The model weights are released under the Apache License 2.0.

⚠️ Limitation

While DASD-30B-A3B-Thinking-Preview demonstrates strong performance across mathematical, scientific, and coding benchmarks, it currently lacks tool integration and function-calling capabilities. Operating strictly in the text space, the model cannot interact with external interfaces such as code executors or APIs, which constrains its utility in agent-based workflows. Future iterations aim to bridge this gap by integrating capabilities such as knowledge retrieval and tool invocation to support more complex, interactive reasoning tasks.

📚 Citation

DASD-Thinking is developed by Alibaba Cloud as part of our mission to advance open, efficient, and trustworthy reasoning systems. If you find this work useful in your research or applications, please cite our technical report.

@article{yan2026dasd,
  title={Distribution-Aligned Sequence Distillation for Superior Long-CoT Reasoning},
  author={Yan, Shaotian and Liu, Kaiyuan and Shen, Chen and Wang, Bing and Fan, Sinan and Zhang, Jun and Wu, Yue and Wang, Zheng and Ye, Jieping},
  journal={arXiv preprint arXiv:2601.09088},
  year={2026},
  url={https://arxiv.org/abs/2601.09088}
}

@article{liu2025where,
  title={Where Did This Sentence Come From? Tracing Provenance in LLM Reasoning Distillation},
  author={Liu, Kaiyuan and Yan, Shaotian and Miao, Rui and Wang, Bing and Shen, Chen and Zhang, Jun and Ye, Jieping},
  journal={arXiv preprint arXiv:2512.20908},
  year={2025}
}

We welcome collaboration, feedback, and community contributions to push the boundaries of what small models can reason about—transparently and responsibly.
